[ { "msg_contents": "Hello hackers,\n\nWhile investigating the recent skink failure [1], I've reproduced this\nfailure under Valgrind on a slow machine and found that this happens due to\nthe last checkpoint recorded in the segment 2, that is removed in the test:\nThe failure log contains:\n2023-10-10 19:10:08.212 UTC [2144251][startup][:0] LOG:  invalid checkpoint record\n2023-10-10 19:10:08.214 UTC [2144251][startup][:0] PANIC:  could not locate a valid checkpoint record\n\nThe line above:\n[19:10:02.701](318.076s) ok 1 - 000000010000000000000001 differs from 000000010000000000000002\ntells us about the duration of previous operations (> 5 mins).\n\nsrc/test/recovery/tmp_check/log/026_overwrite_contrecord_primary.log:\n2023-10-10 19:04:50.149 UTC [1845798][postmaster][:0] LOG:  database system is ready to accept connections\n...\n2023-10-10 19:09:49.131 UTC [1847585][checkpointer][:0] LOG: checkpoint starting: time\n...\n2023-10-10 19:10:02.058 UTC [1847585][checkpointer][:0] LOG: checkpoint complete: ... lsn=0/*2093980*, redo lsn=0/1F62760\n\nAnd here is one more instance of this failure [2]:\n2022-11-08 02:35:25.826 UTC [1614205][][:0] PANIC:  could not locate a valid checkpoint record\n2022-11-08 02:35:26.164 UTC [1612967][][:0] LOG:  startup process (PID 1614205) was terminated by signal 6: Aborted\n\nsrc/test/recovery/tmp_check/log/026_overwrite_contrecord_primary.log:\n2022-11-08 02:29:57.961 UTC [1546469][][:0] LOG:  database system is ready to accept connections\n...\n2022-11-08 02:35:10.764 UTC [1611737][][2/10:0] LOG:  statement: SELECT pg_walfile_name(pg_current_wal_insert_lsn())\n2022-11-08 02:35:11.598 UTC [1546469][][:0] LOG:  received immediate shutdown request\n\nThe next successful run after the failure [1] shows the following duration:\n[21:34:48.556](180.150s) ok 1 - 000000010000000000000001 differs from 000000010000000000000002\nAnd the last successful run:\n[03:03:53.892](126.206s) ok 1 - 000000010000000000000001 differs from 000000010000000000000002\n\nSo to fail on the test, skink should perform at least twice slower than\nusual, and may be it's an extraordinary condition indeed, but on the other\nhand, may be increase checkpoint_timeout as already done in several tests\n(015_promotion_pages, 038_save_logical_slots_shutdown, 039_end_of_wal, ...).\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2023-10-10%2017%3A10%3A11\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-11-07%2020%3A27%3A11\n\nBest regards,\nAlexander\n\n\n\n\n\n Hello hackers,\n\n While investigating the recent skink failure [1], I've reproduced\n this\n failure under Valgrind on a slow machine and found that this happens\n due to\n the last checkpoint recorded in the segment 2, that is removed in\n the test:\n The failure log contains:\n 2023-10-10 19:10:08.212 UTC [2144251][startup][:0] LOG:  invalid\n checkpoint record\n 2023-10-10 19:10:08.214 UTC [2144251][startup][:0] PANIC:  could not\n locate a valid checkpoint record\n\n The line above:\n [19:10:02.701](318.076s) ok 1 - 000000010000000000000001 differs\n from 000000010000000000000002\n tells us about the duration of previous operations (> 5 mins).\n\nsrc/test/recovery/tmp_check/log/026_overwrite_contrecord_primary.log:\n 2023-10-10 19:04:50.149 UTC [1845798][postmaster][:0] LOG:  database\n system is ready to accept connections\n ...\n 2023-10-10 19:09:49.131 UTC [1847585][checkpointer][:0] LOG: \n checkpoint starting: time\n ...\n 2023-10-10 19:10:02.058 UTC [1847585][checkpointer][:0] LOG: 
\n checkpoint complete: ... lsn=0/2093980, redo lsn=0/1F62760\n\n And here is one more instance of this failure [2]:\n 2022-11-08 02:35:25.826 UTC [1614205][][:0] PANIC:  could not locate\n a valid checkpoint record\n 2022-11-08 02:35:26.164 UTC [1612967][][:0] LOG:  startup process\n (PID 1614205) was terminated by signal 6: Aborted\n\nsrc/test/recovery/tmp_check/log/026_overwrite_contrecord_primary.log:\n 2022-11-08 02:29:57.961 UTC [1546469][][:0] LOG:  database system is\n ready to accept connections\n ...\n 2022-11-08 02:35:10.764 UTC [1611737][][2/10:0] LOG:  statement:\n SELECT pg_walfile_name(pg_current_wal_insert_lsn())\n 2022-11-08 02:35:11.598 UTC [1546469][][:0] LOG:  received immediate\n shutdown request\n\n The next successful run after the failure [1] shows the following\n duration:\n [21:34:48.556](180.150s) ok 1 - 000000010000000000000001 differs\n from 000000010000000000000002\n And the last successful run:\n [03:03:53.892](126.206s) ok 1 - 000000010000000000000001 differs\n from 000000010000000000000002\n\n So to fail on the test, skink should perform at least twice slower\n than\n usual, and may be it's an extraordinary condition indeed, but on the\n other\n hand, may be increase checkpoint_timeout as already done in several\n tests\n (015_promotion_pages, 038_save_logical_slots_shutdown,\n 039_end_of_wal, ...).\n\n [1]\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2023-10-10%2017%3A10%3A11\n [2]\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-11-07%2020%3A27%3A11\n\n Best regards,\n Alexander", "msg_date": "Thu, 12 Oct 2023 14:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Test 026_overwrite_contrecord fails on very slow machines (under\n Valgrind)" }, { "msg_contents": "On Thu, Oct 12, 2023 at 02:00:00PM +0300, Alexander Lakhin wrote:\n> So to fail on the test, skink should perform at least twice slower than\n> usual, and may be it's an extraordinary condition indeed, but on the other\n> hand, may be increase checkpoint_timeout as already done in several tests\n> (015_promotion_pages, 038_save_logical_slots_shutdown, 039_end_of_wal, ...).\n> \n> [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2023-10-10%2017%3A10%3A11\n> [2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2022-11-07%2020%3A27%3A11\n\nThanks for the investigation. Increasing the checkpoint timeout is\nnot a perfect science but at least it would work until a machine is\nable to be slower than the current limit reached, so I would be OK\nwith your suggestion and raise the bar a bit more to prevent the race\ncreated by these extra checkpoints triggered because of the time.\n--\nMichael", "msg_date": "Fri, 13 Oct 2023 08:30:02 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 026_overwrite_contrecord fails on very slow machines (under\n Valgrind)" }, { "msg_contents": "Hi,\n\nOn 2023-10-12 14:00:00 +0300, Alexander Lakhin wrote:\n> So to fail on the test, skink should perform at least twice slower than\n> usual\n\nThe machine skink is hosted on runs numerous buildfarm animals (24 I think\nright now, about to be 28). 
While it has plenty of resources (16 cores/32\nthreads, 128GB RAM), test runtime is still pretty variable depending on what\nother tests are running at the same time...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 Oct 2023 16:46:02 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 026_overwrite_contrecord fails on very slow machines (under\n Valgrind)" }, { "msg_contents": "On Thu, Oct 12, 2023 at 04:46:02PM -0700, Andres Freund wrote:\n> The machine skink is hosted on runs numerous buildfarm animals (24 I think\n> right now, about to be 28). While it has plenty of resources (16 cores/32\n> threads, 128GB RAM), test runtime is still pretty variable depending on what\n> other tests are running at the same time...\n\nOkay.  It seems to me that just setting checkpoint_timeout to 1h\nshould leave plenty of room to make sure that the test is not unstable\ncompared to the default of 5 mins.  So, why not just do that and see\nwhat happens for a few days?\n--\nMichael", "msg_date": "Wed, 18 Oct 2023 15:26:40 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Test 026_overwrite_contrecord fails on very slow machines (under\n Valgrind)" } ]
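For readers wanting to apply the fix this thread converged on: it amounts to a single setting in the test node's configuration, mirroring what 015_promotion_pages, 038_save_logical_slots_shutdown and 039_end_of_wal already do. A minimal sketch, assuming the 1h value floated above (the exact value is a judgment call, not something the thread settled definitively):

    # hypothetical addition to the 026_overwrite_contrecord test node's
    # postgresql.conf, so that no time-triggered checkpoint can fire
    # while the test is still running
    checkpoint_timeout = 1h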
[ { "msg_contents": "In pg_upgrade, we reset WAL archives (remove WAL), transaction id,\netc. in copy_xact_xlog_xid() for the new cluster. Then, we create new\nobjects in the new cluster, and again towards the end of the upgrade\nwe invoke pg_resetwal with the -o option to reset the next OID. Now,\nalong with resetting the OID, pg_resetwal will again reset the WAL. I\nam not sure if that is intentional and it may not have any problem\ntoday except that it seems redundant to reset WAL again.\n\nHowever, this can be problematic for the ongoing work to upgrade the\nlogical replication slots [1]. We want to create/migrate logical slots\nin the new cluster before calling the final pg_resetwal (which resets\nthe next OID) to ensure that there is no new WAL inserted by\nbackground processes or otherwise between resetwal location and\ncreation of slots. So, we thought that we would compute the next WAL\nlocation by doing the computation similar to what pg_resetwal does to\nreset a new WAL location, create slots using that location, and pass\nthe same to pg_resetwal using the -l option. However, that doesn't\nwork because pg_resetwal uses the passed -l option only as a hint but\ncan reset the later WAL if present which can remove the WAL position\nwe have decided as restart_lsn (point to start reading WAL) for slots.\nSo, we came up with another idea that we will reset the WAL just\nbefore creating slots and use that location to create slots and then\ninvent a new option in pg_resetwal where it won't reset the WAL.\n\nNow, as mentioned in the first paragraph, it seems we anyway don't\nneed to reset the WAL at the end when setting the next OID for the new\ncluster with the -o option. If that is true, then I think even without\nslots work it will be helpful to have such an option in pg_resetwal.\n\nThoughts?\n\n[1] - https://commitfest.postgresql.org/45/4273/\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 12 Oct 2023 16:44:09 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "pg_upgrade's interaction with pg_resetwal seems confusing" }, { "msg_contents": "On Thu, Oct 12, 2023 at 7:17 AM Amit Kapila <[email protected]> wrote:\n> Now, as mentioned in the first paragraph, it seems we anyway don't\n> need to reset the WAL at the end when setting the next OID for the new\n> cluster with the -o option. If that is true, then I think even without\n> slots work it will be helpful to have such an option in pg_resetwal.\n>\n> Thoughts?\n\nI wonder if we should instead provide a way to reset the OID counter\nwith a function call inside the database, gated by IsBinaryUpgrade.\nHaving something like pg_resetwal --but-dont-actually-reset-the-wal\nseems both self-contradictory and vulnerable to abuse that we might be\nbetter off not inviting.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 12 Oct 2023 14:30:20 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade's interaction with pg_resetwal seems confusing" }, { "msg_contents": "On Fri, Oct 13, 2023 at 12:00 AM Robert Haas <[email protected]> wrote:\n>\n> On Thu, Oct 12, 2023 at 7:17 AM Amit Kapila <[email protected]> wrote:\n> > Now, as mentioned in the first paragraph, it seems we anyway don't\n> > need to reset the WAL at the end when setting the next OID for the new\n> > cluster with the -o option. 
If that is true, then I think even without\n> > slots work it will be helpful to have such an option in pg_resetwal.\n> >\n> > Thoughts?\n>\n> I wonder if we should instead provide a way to reset the OID counter\n> with a function call inside the database, gated by IsBinaryUpgrade.\n>\n\nI think the challenge in doing so would be that when the server is\nrunning, a concurrent checkpoint can also update the OID counter value\nin the control file. See below code:\n\nCreateCheckPoint()\n{\n...\nLWLockAcquire(OidGenLock, LW_SHARED);\ncheckPoint.nextOid = ShmemVariableCache->nextOid;\nif (!shutdown)\ncheckPoint.nextOid += ShmemVariableCache->oidCount;\nLWLockRelease(OidGenLock);\n...\nUpdateControlFile()\n...\n}\n\nNow, we can try to pass some startup options like checkpoint_timeout\nwith a large value to ensure that checkpoint won't interfere but not\nsure if that would be bulletproof. Instead, how about allowing\npg_upgrade to update the control file of the new cluster (with the\nrequired value of OID) following the same method as pg_resetwal does\nin RewriteControlFile()?\n\n> Having something like pg_resetwal --but-dont-actually-reset-the-wal\n> seems both self-contradictory and vulnerable to abuse that we might be\n> better off not inviting.\n>\n\nFair point.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 13 Oct 2023 09:29:25 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade's interaction with pg_resetwal seems confusing" }, { "msg_contents": "On Fri, Oct 13, 2023 at 9:29 AM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Oct 13, 2023 at 12:00 AM Robert Haas <[email protected]> wrote:\n> >\n> > On Thu, Oct 12, 2023 at 7:17 AM Amit Kapila <[email protected]> wrote:\n> > > Now, as mentioned in the first paragraph, it seems we anyway don't\n> > > need to reset the WAL at the end when setting the next OID for the new\n> > > cluster with the -o option. If that is true, then I think even without\n> > > slots work it will be helpful to have such an option in pg_resetwal.\n> > >\n> > > Thoughts?\n> >\n> > I wonder if we should instead provide a way to reset the OID counter\n> > with a function call inside the database, gated by IsBinaryUpgrade.\n> >\n>\n> I think the challenge in doing so would be that when the server is\n> running, a concurrent checkpoint can also update the OID counter value\n> in the control file. See below code:\n>\n> CreateCheckPoint()\n> {\n> ...\n> LWLockAcquire(OidGenLock, LW_SHARED);\n> checkPoint.nextOid = ShmemVariableCache->nextOid;\n> if (!shutdown)\n> checkPoint.nextOid += ShmemVariableCache->oidCount;\n> LWLockRelease(OidGenLock);\n> ...\n> UpdateControlFile()\n> ...\n> }\n>\n\nBut is this a problem? basically, we will set the\nShmemVariableCache->nextOid counter to the value that we want in the\nIsBinaryUpgrade-specific function. 
And then the shutdown checkpoint\nwill flush that value to the control file and that is what we want no?\n I mean instead of resetwal directly modifying the control file we\nwill modify that value in the server using the binary_upgrade function\nand then have that value flush to the disk by shutdown checkpoint.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 13 Oct 2023 10:37:20 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade's interaction with pg_resetwal seems confusing" }, { "msg_contents": "On Fri, Oct 13, 2023 at 10:37 AM Dilip Kumar <[email protected]> wrote:\n>\n> On Fri, Oct 13, 2023 at 9:29 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Fri, Oct 13, 2023 at 12:00 AM Robert Haas <[email protected]> wrote:\n> > >\n> > > On Thu, Oct 12, 2023 at 7:17 AM Amit Kapila <[email protected]> wrote:\n> > > > Now, as mentioned in the first paragraph, it seems we anyway don't\n> > > > need to reset the WAL at the end when setting the next OID for the new\n> > > > cluster with the -o option. If that is true, then I think even without\n> > > > slots work it will be helpful to have such an option in pg_resetwal.\n> > > >\n> > > > Thoughts?\n> > >\n> > > I wonder if we should instead provide a way to reset the OID counter\n> > > with a function call inside the database, gated by IsBinaryUpgrade.\n> > >\n> >\n> > I think the challenge in doing so would be that when the server is\n> > running, a concurrent checkpoint can also update the OID counter value\n> > in the control file. See below code:\n> >\n> > CreateCheckPoint()\n> > {\n> > ...\n> > LWLockAcquire(OidGenLock, LW_SHARED);\n> > checkPoint.nextOid = ShmemVariableCache->nextOid;\n> > if (!shutdown)\n> > checkPoint.nextOid += ShmemVariableCache->oidCount;\n> > LWLockRelease(OidGenLock);\n> > ...\n> > UpdateControlFile()\n> > ...\n> > }\n> >\n>\n> But is this a problem? basically, we will set the\n> ShmemVariableCache->nextOid counter to the value that we want in the\n> IsBinaryUpgrade-specific function. And then the shutdown checkpoint\n> will flush that value to the control file and that is what we want no?\n>\n\nI think that can work. Basically, we need to do something like what\nSetNextObjectId() does and then let the shutdown checkpoint update the\nactual value in the control file.\n\n> I mean instead of resetwal directly modifying the control file we\n> will modify that value in the server using the binary_upgrade function\n> and then have that value flush to the disk by shutdown checkpoint.\n>\n\nTrue, the alternative to consider is to let pg_upgrade update the\ncontrol file by itself with the required value of OID. The point I am\nslightly worried about doing via server-side function is that some\nonline and or shutdown checkpoint can update other values in the\ncontrol file as well whereas if we do via pg_upgrade, we can have\nbetter control over just updating the OID.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 13 Oct 2023 11:06:54 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade's interaction with pg_resetwal seems confusing" }, { "msg_contents": "On Fri, Oct 13, 2023 at 11:07 AM Amit Kapila <[email protected]> wrote:\n>\n> > But is this a problem? basically, we will set the\n> > ShmemVariableCache->nextOid counter to the value that we want in the\n> > IsBinaryUpgrade-specific function. 
And then the shutdown checkpoint\n> > will flush that value to the control file and that is what we want no?\n> >\n>\n> I think that can work. Basically, we need to do something like what\n> SetNextObjectId() does and then let the shutdown checkpoint update the\n> actual value in the control file.\n\nRight.\n\n> > I mean instead of resetwal directly modifying the control file we\n> > will modify that value in the server using the binary_upgrade function\n> > and then have that value flush to the disk by shutdown checkpoint.\n> >\n>\n> True, the alternative to consider is to let pg_upgrade update the\n> control file by itself with the required value of OID. The point I am\n> slightly worried about doing via server-side function is that some\n> online and or shutdown checkpoint can update other values in the\n> control file as well whereas if we do via pg_upgrade, we can have\n> better control over just updating the OID.\n\nYeah, that's a valid point.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 13 Oct 2023 14:03:23 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade's interaction with pg_resetwal seems confusing" }, { "msg_contents": "On Fri, Oct 13, 2023 at 2:03 PM Dilip Kumar <[email protected]> wrote:\n>\n> On Fri, Oct 13, 2023 at 11:07 AM Amit Kapila <[email protected]> wrote:\n> >\n> > > But is this a problem? basically, we will set the\n> > > ShmemVariableCache->nextOid counter to the value that we want in the\n> > > IsBinaryUpgrade-specific function. And then the shutdown checkpoint\n> > > will flush that value to the control file and that is what we want no?\n> > >\n> >\n> > I think that can work. Basically, we need to do something like what\n> > SetNextObjectId() does and then let the shutdown checkpoint update the\n> > actual value in the control file.\n>\n> Right.\n>\n> > > I mean instead of resetwal directly modifying the control file we\n> > > will modify that value in the server using the binary_upgrade function\n> > > and then have that value flush to the disk by shutdown checkpoint.\n> > >\n> >\n> > True, the alternative to consider is to let pg_upgrade update the\n> > control file by itself with the required value of OID. The point I am\n> > slightly worried about doing via server-side function is that some\n> > online and or shutdown checkpoint can update other values in the\n> > control file as well whereas if we do via pg_upgrade, we can have\n> > better control over just updating the OID.\n>\n> Yeah, that's a valid point.\n>\n\nBut OTOH, just updating the control file via pg_upgrade may not be\nsufficient because we should keep the shutdown checkpoint record also\nupdated with that value. See how pg_resetwal achieves it via\nRewriteControlFile() and WriteEmptyXLOG().
So, considering both\napproaches, it seems better to go with a server-side function, as Robert\nsuggested.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 13 Oct 2023 16:14:30 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade's interaction with pg_resetwal seems confusing" }, { "msg_contents": "Dear hackers,\r\n\r\n> >\r\n> > > > I mean instead of resetwal directly modifying the control file we\r\n> > > > will modify that value in the server using the binary_upgrade function\r\n> > > > and then have that value flush to the disk by shutdown checkpoint.\r\n> > > >\r\n> > >\r\n> > > True, the alternative to consider is to let pg_upgrade update the\r\n> > > control file by itself with the required value of OID. The point I am\r\n> > > slightly worried about doing via server-side function is that some\r\n> > > online and or shutdown checkpoint can update other values in the\r\n> > > control file as well whereas if we do via pg_upgrade, we can have\r\n> > > better control over just updating the OID.\r\n> >\r\n> > Yeah, that's a valid point.\r\n> >\r\n> \r\n> But OTOH, just updating the control file via pg_upgrade may not be\r\n> sufficient because we should keep the shutdown checkpoint record also\r\n> updated with that value. See how pg_resetwal achieves it via\r\n> RewriteControlFile() and WriteEmptyXLOG(). So, considering both\r\n> approaches, it seems better to go with a server-side function, as Robert\r\n> suggested.\r\n\r\nBased on this discussion, I implemented a server-side approach. An attached patch\r\nadds an upgrade function which sets ShmemVariableCache->nextOid. It is called at\r\nthe end of the upgrade.
The comments and the name of issue_warnings_and_set_wal_level()\r\nare also updated because they became outdated.", "msg_date": "Fri, 13 Oct 2023 11:43:02 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: pg_upgrade's interaction with pg_resetwal seems confusing" }, { "msg_contents": "On Fri, 13 Oct 2023 at 17:15, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear hackers,\n>\n> > >\n> > > > > I mean instead of resetwal directly modifying the control file we\n> > > > > will modify that value in the server using the binary_upgrade function\n> > > > > and then have that value flush to the disk by shutdown checkpoint.\n> > > > >\n> > > >\n> > > > True, the alternative to consider is to let pg_upgrade update the\n> > > > control file by itself with the required value of OID. The point I am\n> > > > slightly worried about doing via server-side function is that some\n> > > > online and or shutdown checkpoint can update other values in the\n> > > > control file as well whereas if we do via pg_upgrade, we can have\n> > > > better control over just updating the OID.\n> > >\n> > > Yeah, that's a valid point.\n> > >\n> >\n> > But OTOH, just updating the control file via pg_upgrade may not be\n> > sufficient because we should keep the shutdown checkpoint record also\n> > updated with that value. See how pg_resetwal achieves it via\n> > RewriteControlFile() and WriteEmptyXLOG(). So, considering both\n> > approaches, it seems better to go with a server-side function, as Robert\n> > suggested.\n>\n> Based on this discussion, I implemented a server-side approach. An attached patch\n> adds an upgrade function which sets ShmemVariableCache->nextOid. It is called at\n> the end of the upgrade. The comments and the name of issue_warnings_and_set_wal_level()\n> are also updated because they became outdated.\n\nFew comments:\n1) Most of the code in binary_upgrade_set_next_oid is similar to\nSetNextObjectId, but SetNextObjectId has the error handling to report\nan error if an invalid nextOid is specified:\nif (ShmemVariableCache->nextOid > nextOid)\nelog(ERROR, \"too late to advance OID counter to %u, it is now %u\",\nnextOid, ShmemVariableCache->nextOid);\n\nHas this check been left out from the binary_upgrade_set_next_oid function\nintentionally? Have you left it out because it could be dead\ncode? If so, should we have an assert for this here?\n\n2) How about changing issue_warnings_and_set_oid function name to\nissue_warnings_and_set_next_oid?\n void\n-issue_warnings_and_set_wal_level(void)\n+issue_warnings_and_set_oid(void)\n {\n\n3) We have removed these comments, is there any change to the rsync\ninstructions? If so we could update the comments accordingly.\n- * We unconditionally start/stop the new server because\npg_resetwal -o set\n- * wal_level to 'minimum'. If the user is upgrading standby\nservers using\n- * the rsync instructions, they will need pg_upgrade to write its final\n- * WAL record showing wal_level as 'replica'.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 15 Oct 2023 21:55:18 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade's interaction with pg_resetwal seems confusing" }, { "msg_contents": "Dear Vignesh,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> \r\n> Few comments:\r\n> 1) Most of the code in binary_upgrade_set_next_oid is similar to\r\n> SetNextObjectId, but SetNextObjectId has the error handling to report\r\n> an error if an invalid nextOid is specified:\r\n> if (ShmemVariableCache->nextOid > nextOid)\r\n> elog(ERROR, \"too late to advance OID counter to %u, it is now %u\",\r\n> nextOid, ShmemVariableCache->nextOid);\r\n> \r\n> Has this check been left out from the binary_upgrade_set_next_oid function\r\n> intentionally? Have you left it out because it could be dead\r\n> code? If so, should we have an assert for this here?\r\n\r\nYeah, they were removed intentionally, but I did rethink that they could be\r\ncombined. ereport() would be skipped during the upgrade mode. Thoughts?\r\n\r\nRegarding the first ereport(ERROR), it just requires that we are doing initdb.\r\n\r\nAs for the second ereport(ERROR), it requires that the next OID is not rolled back.\r\nThe restriction seems OK during the initialization, but it is not appropriate for\r\nupgrading: wraparound of the OID counter might have occurred on the old cluster but we try\r\nto restore the counter anyway.\r\n\r\n> 2) How about changing issue_warnings_and_set_oid function name to\r\n> issue_warnings_and_set_next_oid?\r\n> void\r\n> -issue_warnings_and_set_wal_level(void)\r\n> +issue_warnings_and_set_oid(void)\r\n> {\r\n\r\nFixed.\r\n\r\n> 3) We have removed these comments, is there any change to the rsync\r\n> instructions? If so we could update the comments accordingly.\r\n> - * We unconditionally start/stop the new server because\r\n> pg_resetwal -o set\r\n> - * wal_level to 'minimum'. If the user is upgrading standby\r\n> servers using\r\n> - * the rsync instructions, they will need pg_upgrade to write its final\r\n> - * WAL record showing wal_level as 'replica'.\r\n>\r\n\r\nHmm, I thought the comments for rsync seemed outdated, so I removed them. I still think\r\nthis is not needed.
Since controlfile->wal_level is not updated to 'minimal'\r\nanymore, the unconditional startup is not required for physical standby.\r\n\r\n\r\n[1] : https://www.postgresql.org/docs/devel/pgupgrade.html#:~:text=the%20next%20step.-,Run%20rsync,-When%20using%20link\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Mon, 16 Oct 2023 05:03:16 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: pg_upgrade's interaction with pg_resetwal seems confusing" }, { "msg_contents": "Note that this patch falsifies the comment in SetNextObjectId that\ntaking the lock is pro forma only -- it no longer is, since in upgrade\nmode there can be multiple subprocesses running -- so I think it should\nbe updated.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 18 Oct 2023 17:35:33 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade's interaction with pg_resetwal seems confusing" }, { "msg_contents": "On Mon, 16 Oct 2023 at 10:33, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Vignesh,\n>\n> Thank you for reviewing! PSA new version.\n>\n> >\n> > Few comments:\n> > 1) Most of the code in binary_upgrade_set_next_oid is similar to\n> > SetNextObjectId, but SetNextObjectId has the error handling to report\n> > an error if an invalid nextOid is specified:\n> > if (ShmemVariableCache->nextOid > nextOid)\n> > elog(ERROR, \"too late to advance OID counter to %u, it is now %u\",\n> > nextOid, ShmemVariableCache->nextOid);\n> >\n> > Has this check been left out from the binary_upgrade_set_next_oid function\n> > intentionally? Have you left it out because it could be dead\n> > code? If so, should we have an assert for this here?\n>\n> Yeah, they were removed intentionally, but I did rethink that they could be\n> combined. ereport() would be skipped during the upgrade mode. Thoughts?\n>\n> Regarding the first ereport(ERROR), it just requires that we are doing initdb.\n>\n> As for the second ereport(ERROR), it requires that the next OID is not rolled back.\n> The restriction seems OK during the initialization, but it is not appropriate for\n> upgrading: wraparound of the OID counter might have occurred on the old cluster but we try\n> to restore the counter anyway.\n>\n> > 2) How about changing issue_warnings_and_set_oid function name to\n> > issue_warnings_and_set_next_oid?\n> > void\n> > -issue_warnings_and_set_wal_level(void)\n> > +issue_warnings_and_set_oid(void)\n> > {\n>\n> Fixed.\n>\n> > 3) We have removed these comments, is there any change to the rsync\n> > instructions? If so we could update the comments accordingly.\n> > - * We unconditionally start/stop the new server because\n> pg_resetwal -o set\n> > - * wal_level to 'minimum'. If the user is upgrading standby\n> servers using\n> > - * the rsync instructions, they will need pg_upgrade to write its final\n> > - * WAL record showing wal_level as 'replica'.\n> >\n>\n> Hmm, I thought the comments for rsync seemed outdated, so I removed them. I still think\n> this is not needed.
Since controlfile->wal_level is not updated to 'minimal'\n> anymore, the unconditional startup is not required for physical standby.\n\nWe can update the commit message with these details; it will\nhelp readers understand that it is intentionally done.\n\nThere are a couple of typos with the new patch:\n1) \"uprade logical replication slot\" should be \"upgrade logical\nreplication slot\":\nPreviously, the OID counter is restored by invoking pg_resetwal with the -o\noption, at the end of upgrade. This is not problematic for now, but WAL removals\nare not happy if we want to uprade logical replication slot. Therefore, a new\nupgrade function is introduced to reset next OID.\n2) \"becasue the value\" should be \"because the value\":\nNote that we only update the on-memory data to avoid concurrent update of\ncontrol with the chekpointer. It is harmless becasue the value would be set at\nshutdown checkpoint.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 19 Oct 2023 08:48:30 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade's interaction with pg_resetwal seems confusing" }, { "msg_contents": "Dear Alvaro,\r\n\r\nThank you for updating! PSA new version.\r\n\r\n> Note that this patch falsifies the comment in SetNextObjectId that\r\n> taking the lock is pro forma only -- it no longer is, since in upgrade\r\n> mode there can be multiple subprocesses running -- so I think it should\r\n> be updated.\r\n\r\nIndeed, some comments were updated.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Mon, 23 Oct 2023 05:34:55 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: pg_upgrade's interaction with pg_resetwal seems confusing" }, { "msg_contents": "Dear Vignesh,\r\n\r\nThank you for reviewing! A new patch is available in [1].\r\n\r\n> We can update the commit message with these details; it will\r\n> help readers understand that it is intentionally done.\r\n\r\nBoth the comments and the commit message were updated accordingly.\r\n\r\n> There are a couple of typos with the new patch:\r\n> 1) \"uprade logical replication slot\" should be \"upgrade logical\r\n> replication slot\":\r\n> Previously, the OID counter is restored by invoking pg_resetwal with the -o\r\n> option, at the end of upgrade. This is not problematic for now, but WAL removals\r\n> are not happy if we want to uprade logical replication slot. Therefore, a new\r\n> upgrade function is introduced to reset next OID.\r\n\r\nFixed.\r\n\r\n> 2) \"becasue the value\" should be \"because the value\":\r\n> Note that we only update the on-memory data to avoid concurrent update of\r\n> control with the chekpointer. It is harmless becasue the value would be set at\r\n> shutdown checkpoint.\r\n\r\nFixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866DAFE000F8677C49ACD66F5D8A%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Mon, 23 Oct 2023 05:36:04 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: pg_upgrade's interaction with pg_resetwal seems confusing" } ]
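To make the approach the thread converged on concrete, here is a minimal C sketch of an IsBinaryUpgrade-gated function modeled on SetNextObjectId(), as discussed above. The function name and the error message are illustrative assumptions; the committed patch may differ in both.

    /*
     * Sketch of a binary-upgrade support function that restores the OID
     * counter.  Only the in-memory value is changed, so the shutdown
     * checkpoint is what persists it to the control file.
     */
    Datum
    binary_upgrade_set_next_pg_oid(PG_FUNCTION_ARGS)
    {
        Oid         nextOid = PG_GETARG_OID(0);

        if (!IsBinaryUpgrade)
            ereport(ERROR,
                    (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
                     errmsg("function can only be called when server is in binary upgrade mode")));

        /* Per Alvaro's note above, taking the lock here is not pro forma. */
        LWLockAcquire(OidGenLock, LW_EXCLUSIVE);
        ShmemVariableCache->nextOid = nextOid;
        ShmemVariableCache->oidCount = 0;
        LWLockRelease(OidGenLock);

        PG_RETURN_VOID();
    }

pg_upgrade would then invoke this once near the end of the upgrade, instead of running pg_resetwal -o against the new cluster, and rely on the shutdown checkpoint to flush the value.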
[ { "msg_contents": "Hi hackers!\n\nPlease advise on the idea of preserving pg_proc oids during pg_upgrade, in\na way like relfilenodes, type id and so on. What are possible downsides of\nsuch a solution?\n\nThanks!\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi hackers!Please advise on the idea of preserving pg_proc oids during pg_upgrade, in a way like relfilenodes, type id and so on. What are possible downsides of such a solution?Thanks!-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/", "msg_date": "Thu, 12 Oct 2023 17:24:54 +0300", "msg_from": "Nikita Malakhov <[email protected]>", "msg_from_op": true, "msg_subject": "Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "Nikita Malakhov <[email protected]> writes:\n> Please advise on the idea of preserving pg_proc oids during pg_upgrade, in\n> a way like relfilenodes, type id and so on. What are possible downsides of\n> such a solution?\n\nYou have the burden of proof backwards. That would add a great deal\nof new mechanism, and you haven't provided even one reason why it'd\nbe worth doing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Oct 2023 10:34:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "Hi!\n\nSay, we have data processed by some user function and we want to keep\nreference to this function\nin our data. In this case we have two ways - first - store string output of\nregprocedure, which is not\nvery convenient, and the second - store its OID, which requires slight\nmodification of pg_upgrade\n(pg_dump and func/procedure creation function).\n\nI've read previous threads about using regproc, and agree that this is not\na very good case anyway,\nbut I haven't found any serious obstacles that forbid modifying pg_upgrade\nthis way.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi!Say, we have data processed by some user function and we want to keep reference to this functionin our data. In this case we have two ways - first - store string output of regprocedure, which is notvery convenient, and the second - store its OID, which requires slight modification of pg_upgrade(pg_dump and func/procedure creation function).I've read previous threads about using regproc, and agree that this is not a very good case anyway,but I haven't found any serious obstacles that forbid modifying pg_upgrade this way.--Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/", "msg_date": "Thu, 12 Oct 2023 19:56:32 +0300", "msg_from": "Nikita Malakhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "On Thu, Oct 12, 2023 at 9:57 AM Nikita Malakhov <[email protected]> wrote:\n\n> Say, we have data processed by some user function and we want to keep\n> reference to this function\n> in our data.\n>\n\nThen you need to keep the user-visible identifier of said function\n(schema+name+input argument types - you'd probably want to incorporate\nversion into the name) in your user-space code. 
Exposing runtime generated\noids to user-space is not something I can imagine the system supporting.\nIt goes against the very definition of \"implementation detail\" that\nuser-space code is not supposed to depend upon.\n\nDavid J.\n", "msg_date": "Thu, 12 Oct 2023 10:24:17 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "On Thu, Oct 12, 2023 at 7:36 AM Tom Lane <[email protected]> wrote:\n\n> Nikita Malakhov <[email protected]> writes:\n> > Please advise on the idea of preserving pg_proc oids during pg_upgrade,\n> in\n> > a way like relfilenodes, type id and so on. What are possible downsides\n> of\n> > such a solution?\n>\n> You have the burden of proof backwards.  That would add a great deal\n> of new mechanism, and you haven't provided even one reason why it'd\n> be worth doing.\n>\n>\nI was curious about the comment regarding type oids being copied over and I\nfound the commentary in pg_upgrade.c that describes which oids are copied\nover and why, but the IMPLEMENTATION seems to be out-of-sync with the\nactual implementation.\n\n\"\"\"\nIt preserves the relfilenode numbers so TOAST and other references\nto relfilenodes in user data is preserved.  (See binary-upgrade usage\nin pg_dump). We choose to preserve tablespace and database OIDs as well.\n\"\"\"\n\nDavid J.\n", "msg_date": "Thu, 12 Oct 2023 11:00:39 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "On Thu, Oct 12, 2023 at 10:35 AM Tom Lane <[email protected]> wrote:\n> You have the burden of proof backwards. That would add a great deal\n> of new mechanism, and you haven't provided even one reason why it'd\n> be worth doing.\n\n\"A great deal of new mechanism\" seems like a slight exaggeration.
We\npreserve a bunch of kinds of OIDs already, and it wouldn't be any\nharder to preserve this one than the ones we preserve already, or so I\nthink. So it would be some additional mechanism, but maybe not a great\ndeal.\n\nAs to whether it's a good idea, it isn't necessary for the system to\noperate properly, so we didn't, but it's a judgement call whether it's\nbetter for other reasons, like being able to have regprocedure columns\nsurvive an upgrade, or making users less confused, or allowing\npeople supporting PostgreSQL to have an easier time debugging issues.\nPersonally, I've never been quite sure we made the right decision\nthere. I admit that I'm not particularly keen to try to add the amount\nof mechanism that would be required to preserve every single OID\neverywhere, but I also somehow feel like the fact that we don't is\npretty weird.\n\nThe pg_upgrade experience right now is a bit as if you woke up in the\nmorning and found that city officials came by during the night and\nrenumbered your house, thus changing your address. Then, they sent\nchange of address forms to everyone who ever mails you anything, plus\nupdated your address with your doctor's office and your children's\nschool. In a way, there's no problem: nothing has really changed for\nyou in any way that matters.
Yet, I think that would feel pretty\nuncomfortable if it actually happened to you, and I think the\npg_upgrade experience is uncomfortable in the same way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 12 Oct 2023 14:20:01 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "On Thu, Oct 12, 2023, 11:21 Robert Haas <[email protected]> wrote:\n\n>\n> The pg_upgrade experience right now is a bit as if you woke up in the\n> morning and found that city officials came by during the night and\n> renumbered your house, thus changing your address. Then, they sent\n> change of address forms to everyone who ever mails you anything, plus\n> updated your address with your doctor's office and your children's\n> school. In a way, there's no problem: nothing has really changed for\n> you in any way that matters. Yet, I think that would feel pretty\n> uncomfortable if it actually happened to you, and I think the\n> pg_upgrade experience is uncomfortable in the same way.\n>\n\nIt's more like a lot number or surveying tract than a postal address.\nUseful for a single party, the builder or the government, but not something\nyou give out to other people so they can find you.\n\nWhether or not we copy over oids should be done based upon our internal\nneeds, not end users. Which is why the few that do get copied exist,\nbecause we store them in internal files that we want to copy as part of the\nupgrade. It also isn't like pg_dump/restore is going to retain them and\nthe less divergence between that and pg_upgrade arguably the better.\n\nDavid J.\n", "msg_date": "Thu, 12 Oct 2023 11:38:11 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "On Thu, Oct 12, 2023 at 2:38 PM David G. Johnston\n<[email protected]> wrote:\n> It's more like a lot number or surveying tract than a postal address. Useful for a single party, the builder or the government, but not something you give out to other people so they can find you.\n>\n> Whether or not we copy over oids should be done based upon our internal needs, not end users. Which is why the few that do get copied exist, because we store them in internal files that we want to copy as part of the upgrade. It also isn't like pg_dump/restore is going to retain them and the less divergence between that and pg_upgrade arguably the better.\n\nWe build the product for the end users. Their desires and needs are\nrelevant. And if they're telling us we did it wrong, we need to listen\nto that. We don't have to do everything that everybody wants, but\ntreating developer needs as strictly more important than end-user\nneeds is self-defeating.\n\nI agree that there's a trade-off here. Preserving more OIDs requires\nmore code and makes pg_dump and other things more complicated, which\nis not great. But, at least to me, arguing that there are no downsides\nof not preserving these OIDs is simply not a believable argument.\n\nWell, maybe somebody believes it. But I don't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 12 Oct 2023 14:43:24 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "On Thu, Oct 12, 2023 at 11:43 AM Robert Haas <[email protected]> wrote:\n\n> On Thu, Oct 12, 2023 at 2:38 PM David G. Johnston\n> <[email protected]> wrote:\n> > It's more like a lot number or surveying tract than a postal address.  Useful for a single party, the builder or the government, but not something you give out to other people so they can find you.\n> >\n> > Whether or not we copy over oids should be done based upon our internal needs, not end users.  Which is why the few that do get copied exist, because we store them in internal files that we want to copy as part of the upgrade.  It also isn't like pg_dump/restore is going to retain them and the less divergence between that and pg_upgrade arguably the better.\n>\n> We build the product for the end users. Their desires and needs are\n> relevant. And if they're telling us we did it wrong, we need to listen\n> to that. We don't have to do everything that everybody wants, but\n> treating developer needs as strictly more important than end-user\n> needs is self-defeating.\n>\n\nEvery catalog has both a natural and a surrogate key.  Developers get to\nuse the surrogate key while end-users get to use the natural one (i.e., the\none they provided).  I see no reason to change that specification.  And I\ndo believe there are no compelling reasons for an end-user to need to use\nthe surrogate key instead of the natural one.  The example provided by the\nOP isn't one, IMO, the overall goal can be accomplished via the natural key\n(if it cannot, maybe we need to make retrieving the natural key for a\npg_proc record given an OID easier).  The fact that OIDs are not even\naccessible via SQL further reinforces this belief.  The only reason to\nneed OIDs as a DBA is to perform joins among the catalogs and all such\njoins are local to the database and even session executing them - the\nspecific values are immaterial.\n\nThe behavior of pg_upgrade only preserving OIDs that are necessary due to\nthe physical copying of data files from the old server to the new one seems\nsufficient both in terms of effort and the principle of doing the minimum\namount to solve the problem at hand.\n\nDavid J.\n", "msg_date": "Thu, 12 Oct 2023 12:36:33 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" },
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "On Thu, Oct 12, 2023 at 3:36 PM David G. Johnston\n<[email protected]> wrote:\n> Every catalog has both a natural and a surrogate key. Developers get to use the surrogate key while end-users get to use the natural one (i.e., the one they provided). I see no reason to change that specification.\n\nI agree with this.\n\n> And I do believe there are no compelling reasons for an end-user to need to use the surrogate key instead of the natural one.\n\nBut I disagree with this.\n\n> The example provided by the OP isn't one, IMO, the overall goal can be accomplished via the natural key (if it cannot, maybe we need to make retrieving the natural key for a pg_proc record given an OID easier). The fact that OIDs are not even accessible via SQL further reinforces this belief. The only reason to need OIDs as a DBA is to perform joins among the catalogs and all such joins are local to the database and even session executing them - the specific values are immaterial.\n\nThis just all seems very simplistic to me. In theory it's true, but in\npractice it isn't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 12 Oct 2023 16:16:40 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "Hi,\n\nI've already implemented preserving PG_PROC oids during pg_upgrade\nin a way like relfilenodes, etc, actually, it is quite simple, and on the\nfirst\nlook there are no any problems.\n\nAbout using surrogate key - this feature is more for data generated by\nthe DBMS itself, i.e. data processed by some extension and saved\nand re-processed automatically or by user's request, but without bothering\nuser with these internal keys.\n\nThe main question - maybe, are there pitfalls of which I am not aware of?\n\nThanks for your replies!\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi,I've already implemented preserving PG_PROC oids during pg_upgradein a way like relfilenodes, etc, actually, it is quite simple, and on the firstlook there are no any problems.About using surrogate key - this feature is more for data generated bythe DBMS itself, i.e. data processed by some extension and savedand re-processed automatically or by user's request, but without botheringuser with these internal keys.The main question - maybe, are there pitfalls of which I am not aware of?Thanks for your replies!-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/", "msg_date": "Thu, 12 Oct 2023 23:30:53 +0300", "msg_from": "Nikita Malakhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "On Thu, Oct 12, 2023 at 1:31 PM Nikita Malakhov <[email protected]> wrote:\n\n> About using surrogate key - this feature is more for data generated by\n> the DBMS itself, i.e. 
data processed by some extension and saved\nand re-processed automatically or by user's request, but without bothering\nuser with these internal keys.\n>\n\nThen what does it matter whether you spell it:\n\n12345\nor\nmy_ext.do_something(int)\n?\n\nWhy do you require us to redefine the scope for which pg_proc.oid is useful\nin order to implement this behavior?\n\nYour extension breaks if your user uses logical backups or we otherwise\nget into a position where pg_upgrade cannot be used to migrate in the\nfuture. Is avoiding the textual representation so necessary that you need\nto add another dependency to the system? That just seems unwise regardless\nof how easy it may be to accomplish.\n\nDavid J.\n", "msg_date": "Thu, 12 Oct 2023 14:46:41 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "Hi,\n\nTextual representation requires a long text field because it could contain\nthe schema and\narguments; it is difficult and inefficient to save as part of the\ndata, and it must\nbe parsed to retrieve the function oid. By using the oid directly (actually, a value\nof the regprocedure field) we avoid that, and the function can be retrieved by\nprimary key.\n\nWhy can't pg_upgrade be used?
OID preservation logic is already implementedfor several OIDs in catalog tables, like pg_class, type, relfilenode, enum...I've mentioned twice that this logic is already implemented and I haven't encounteredany problems with pg_upgrade.Actually, I've asked here because there are several references to PG_PROC oidsfrom other tables in the system catalog, so I was worried if this logic could breaksomething I do not know about.--Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/", "msg_date": "Fri, 13 Oct 2023 00:57:53 +0300", "msg_from": "Nikita Malakhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "On Thu, Oct 12, 2023 at 2:58 PM Nikita Malakhov <[email protected]> wrote:\n\n> Why pg_upgrade cannot be used?\n>\n\nWe document both a pg_dump/pg_restore migration and a pg_upgrade one (not\nto mention that logical backup and restore would cause the oids to\nchange). It seems odd to have a feature that requires pg_upgrade to be the\nchosen one. pg_upgrade is an option, not a requirement. Same goes for\npg_basebackup.\n\npg_upgrade itself warns that should the on-disk file format change then it\nwould be unusable - though I suspect that we'd end up with some kind of\nhybrid approach in that case.\n\n\n> OID preservation logic is already implemented\n> for several OIDs in catalog tables, like pg_class, type, relfilenode,\n> enum...\n>\n>\nWe are allowed to preserve oids if we wish but that doesn't mean we must,\nnor does doing so constitute a declaration that such oids are part of\nthe public API. And I don't see us making OIDs part of the public API\nunless we modify pg_dump to include them in its output.\n\n\n> Actually, I've asked here because there are several references to PG_PROC\n> oids\n> from other tables in the system catalog\n>\n\nOf course there are, e.g., views depending on functions would result is\nthose. But pg_upgrade et al. recomputes the views so the changing of oids\nisn't a problem.\n\nLong text fields are common in databases; and if there are concerns with\nparsing/interpretation we can add functions to make doing that simpler.\n\nDavid J.\n\nOn Thu, Oct 12, 2023 at 2:58 PM Nikita Malakhov <[email protected]> wrote:Why pg_upgrade cannot be used?We document both a pg_dump/pg_restore migration and a pg_upgrade one (not to mention that logical backup and restore would cause the oids to change).  It seems odd to have a feature that requires pg_upgrade to be the chosen one.  pg_upgrade is an option, not a requirement.  Same goes for pg_basebackup.pg_upgrade itself warns that should the on-disk file format change then it would be unusable - though I suspect that we'd end up with some kind of hybrid approach in that case.  OID preservation logic is already implementedfor several OIDs in catalog tables, like pg_class, type, relfilenode, enum...We are allowed to preserve oids if we wish but that doesn't mean we must, nor does doing so constitute a declaration that such oids are part of the public API.  And I don't see us making OIDs part of the public API unless we modify pg_dump to include them in its output.Actually, I've asked here because there are several references to PG_PROC oidsfrom other tables in the system catalogOf course there are, e.g., views depending on functions would result is those.  But pg_upgrade et al. 
recomputes the views so the changing of oids isn't a problem.Long text fields are common in databases; and if there are concerns with parsing/interpretation we can add functions to make doing that simpler.David J.", "msg_date": "Thu, 12 Oct 2023 15:28:51 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "On Thu, 2023-10-12 at 19:56 +0300, Nikita Malakhov wrote:\n> Say, we have data processed by some user function and we want to keep reference to this function\n> in our data. In this case we have two ways - first - store string output of regprocedure, which is not\n> very convenient, and the second - store its OID, which requires slight modification of pg_upgrade\n> (pg_dump and func/procedure creation function).\n\nSo far, we have lived quite well with the rule \"don't store any system OIDs in the database\nif you want to pg_upgrade\" (views on system objects, reg* data types, ...).\n\nWhat is inconvenient about storing the output of regprocedure?\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 13 Oct 2023 08:14:54 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "On 2023-Oct-13, Nikita Malakhov wrote:\n\n> Textual representation requires a long text field because it could\n> contain schema, arguments, it is difficult and not effective to be\n> saved as part of the data, and must be parsed to retrieve function\n> oid.\n\nIt is worse than that: the regproc textual representation depends on\nsearch_path. If you store the text now, the meaning could change later,\ndepending on the search_path that applies at read time.\n\nOf course, the storage for OID is much shorter and not subject to this\nproblem; but it is subject to the problem that it breaks if you drop and\nreplace the function, which could happen for instance in an extensions\nupgrade script.\n\nI think a better way to store a function's identity is to store the\n'identity' column from pg_identify_object(). It is fully qualified and\nyou can cast to regprocedure with no ambiguity (which gives you an OID,\nif you need one). And it should upgrade cleanly.\n\nIf you have a regproc column that you want to upgrade, maybe it would\nwork to do 'ALTER TABLE .. SET TYPE TEXT USING' and turn the value into\npg_identify_object().identity.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 16 Oct 2023 13:36:21 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" }, { "msg_contents": "Hi,\n\nThank you very much, I'll check it out. It looks like the\ngetObjectIdentity() used in\npg_identify_object() could do.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi,Thank you very much, I'll check it out. It looks like the getObjectIdentity() used inpg_identify_object() could do.--Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/", "msg_date": "Mon, 16 Oct 2023 15:10:16 +0300", "msg_from": "Nikita Malakhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pro et contra of preserving pg_proc oids during pg_upgrade" } ]
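To make the pg_identify_object() route above concrete, here is a minimal
sketch of the migration Alvaro describes. The table and column names
("my_table", "func_ref") are hypothetical, and this is an illustration
rather than a tested recipe:

    -- replace a stored regproc OID with the durable identity text
    ALTER TABLE my_table
      ALTER COLUMN func_ref TYPE text
      USING (pg_identify_object('pg_proc'::regclass, func_ref, 0)).identity;

    -- the identity text later casts back to an OID without ambiguity
    SELECT 'pg_catalog.lower(text)'::regprocedure::oid;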
[ { "msg_contents": "Greetings,\n\nI've been running into challenges building 32 bit windows version. I\nsuspect there are no build farms and nobody really builds this.\n\nThe reason I need these is to be able to build 32 bit dll's for ODBC. At\none time EDB used to provide binaries but that doesn't appear to be the\ncase.\n\nrunning build.bat in an x86 environment fails but that can be easily fixed\nby adding\n\n$ENV{CONFIG}=\"x86\"; in buld_env.pl\n\nbuild postgres then works as advertised, however\n\ninstall <dir> fails with\n\n\"Copying build output files...Could not copy release\\zic\\zic.exe to\npostgres\\bin\\zic.exe\"\n\nApparently 32 bit dlls are required. If there is an easier way to get\nlibpq.dll and the include files for building I'm all ears.\n\n\nDave Cramer\n\nGreetings,I've been running into challenges building 32 bit windows version. I suspect there are no build farms and nobody really builds this.The reason I need these is to be able to build 32 bit dll's for ODBC. At one time EDB used to provide binaries but that doesn't appear to be the case.running build.bat in an x86 environment fails but that can be easily fixed by adding$ENV{CONFIG}=\"x86\"; in buld_env.plbuild postgres then works as advertised, howeverinstall <dir> fails with\"Copying build output files...Could not copy release\\zic\\zic.exe to postgres\\bin\\zic.exe\"Apparently 32 bit dlls are required. If there is an easier way to get libpq.dll and the include files for building I'm all ears.Dave Cramer", "msg_date": "Thu, 12 Oct 2023 15:49:18 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "building 32bit windows version" } ]
[ { "msg_contents": "Hi,\n\nAshutosh Bapat reported me off-list a possible issue in how BRIN\nminmax-multi calculate distance for infinite timestamp/date values.\n\nThe current code does this:\n\n if (TIMESTAMP_NOT_FINITE(dt1) || TIMESTAMP_NOT_FINITE(dt2))\n PG_RETURN_FLOAT8(0);\n\nso means infinite values are \"very close\" to any other value, and thus\nlikely to be merged into a summary range. That's exactly the opposite of\nwhat we want to do, possibly resulting in inefficient indexes.\n\nConsider this example\n\n create table test (a timestamptz) with (fillfactor=50);\n\n insert into test\n select (now() + ((10000 * random())::int || ' seconds')::interval)\n from generate_series(1,1000000) s(i);\n\n update test set a = '-infinity'::timestamptz where random() < 0.01;\n update test set a = 'infinity'::timestamptz where random() < 0.01;\n\n explain (analyze, timing off, costs off)\n select * from test where a = '2024-01-01'::timestamptz;\n\n QUERY PLAN\n\n------------------------------------------------------------------------------\n Bitmap Heap Scan on test (actual rows=0 loops=1)\n Recheck Cond: (a = '2024-01-01 00:00:00+01'::timestamp with time zone)\n Rows Removed by Index Recheck: 680662\n Heap Blocks: lossy=6024\n -> Bitmap Index Scan on test_a_idx (actual rows=60240 loops=1)\n Index Cond: (a = '2024-01-01 00:00:00+01'::timestamp with time\nzone)\n Planning Time: 0.075 ms\n Execution Time: 106.871 ms\n(8 rows)\n\nClearly, large part of the table gets scanned - this happens because\nwhen building the index, we end up with ranges like this:\n\n\n [-infinity,a,b,c,...,x,y,z,infinity]\n\nand we conclude that distance for [-infinity,a] is 0, and we combine\nthese values into a range. And the same for [z,infinity]. But we should\ndo exactly the opposite thing - never merge those.\n\nAttached is a patch fixing this, with which the plan looks like this:\n\n QUERY PLAN\n\n------------------------------------------------------------------------------\n Bitmap Heap Scan on test (actual rows=0 loops=1)\n Recheck Cond: (a = '2024-01-01 00:00:00+01'::timestamp with time zone)\n -> Bitmap Index Scan on test_a_idx (actual rows=0 loops=1)\n Index Cond: (a = '2024-01-01 00:00:00+01'::timestamp with time\nzone)\n Planning Time: 0.289 ms\n Execution Time: 9.432 ms\n(6 rows)\n\nWhich seems much better.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 13 Oct 2023 00:38:40 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "BRIN minmax multi - incorrect distance for infinite timestamp/date" }, { "msg_contents": "On Thu, 12 Oct 2023 at 23:43, Tomas Vondra\n<[email protected]> wrote:\n>\n> Ashutosh Bapat reported me off-list a possible issue in how BRIN\n> minmax-multi calculate distance for infinite timestamp/date values.\n>\n> The current code does this:\n>\n> if (TIMESTAMP_NOT_FINITE(dt1) || TIMESTAMP_NOT_FINITE(dt2))\n> PG_RETURN_FLOAT8(0);\n>\n\nYes indeed, that looks wrong. I noticed the same thing while reviewing\nthe infinite interval patch.\n\n> so means infinite values are \"very close\" to any other value, and thus\n> likely to be merged into a summary range. That's exactly the opposite of\n> what we want to do, possibly resulting in inefficient indexes.\n>\n\nIs this only inefficient? 
Or can it also lead to wrong query results?\n\n> Attached is a patch fixing this\n>\n\nI wonder if it's actually necessary to give infinity any special\nhandling at all for dates and timestamps. For those types, \"infinity\"\nis actually just INT_MIN/MAX, which compares correctly with any finite\nvalue, and will be much larger/smaller than any common value, so it\nseems like it isn't necessary to give \"infinite\" values any special\ntreatment. That would be consistent with date_cmp() and\ntimestamp_cmp().\n\nSomething else that looks wrong about that BRIN code is that the\ninteger subtraction might lead to overflow -- it is subtracting two\ninteger values, then casting the result to float8. It should cast each\ninput before subtracting, more like brin_minmax_multi_distance_int8().\n\nIOW, I think brin_minmax_multi_distance_date/timestamp could be made\nbasically identical to brin_minmax_multi_distance_int4/8.\n\nRegards,\nDean\n\n\n", "msg_date": "Fri, 13 Oct 2023 10:21:58 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On 10/13/23 11:21, Dean Rasheed wrote:\n> On Thu, 12 Oct 2023 at 23:43, Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> Ashutosh Bapat reported me off-list a possible issue in how BRIN\n>> minmax-multi calculate distance for infinite timestamp/date values.\n>>\n>> The current code does this:\n>>\n>> if (TIMESTAMP_NOT_FINITE(dt1) || TIMESTAMP_NOT_FINITE(dt2))\n>> PG_RETURN_FLOAT8(0);\n>>\n> \n> Yes indeed, that looks wrong. I noticed the same thing while reviewing\n> the infinite interval patch.\n> \n>> so means infinite values are \"very close\" to any other value, and thus\n>> likely to be merged into a summary range. That's exactly the opposite of\n>> what we want to do, possibly resulting in inefficient indexes.\n>>\n> \n> Is this only inefficient? Or can it also lead to wrong query results?\n> \n\nI don't think it can produce incorrect results. It only affects which\nvalues we \"merge\" into an interval when building the summaries.\n\n>> Attached is a patch fixing this\n>>\n> \n> I wonder if it's actually necessary to give infinity any special\n> handling at all for dates and timestamps. For those types, \"infinity\"\n> is actually just INT_MIN/MAX, which compares correctly with any finite\n> value, and will be much larger/smaller than any common value, so it\n> seems like it isn't necessary to give \"infinite\" values any special\n> treatment. That would be consistent with date_cmp() and\n> timestamp_cmp().\n> \n\nRight, but ....\n\n> Something else that looks wrong about that BRIN code is that the\n> integer subtraction might lead to overflow -- it is subtracting two\n> integer values, then casting the result to float8. It should cast each\n> input before subtracting, more like brin_minmax_multi_distance_int8().\n> \n\n... it also needs to fix this, otherwise it overflows. Consider\n\n delta = dt2 - dt1;\n\nand assume dt1 is INT64_MIN, or that dt2 is INT64_MAX.\n\n> IOW, I think brin_minmax_multi_distance_date/timestamp could be made\n> basically identical to brin_minmax_multi_distance_int4/8.\n> \n\nRight. 
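Concretely, for timestamps the distance then boils down to something like\nthis (only a sketch - the date version would be analogous):\n\n    Timestamp   dt1 = PG_GETARG_TIMESTAMP(0);\n    Timestamp   dt2 = PG_GETARG_TIMESTAMP(1);\n\n    /* cast both operands before subtracting, so the difference of\n     * extreme int64 values cannot overflow */\n    float8      delta = (float8) dt2 - (float8) dt1;\n\n    Assert(delta >= 0);\n\n    PG_RETURN_FLOAT8(delta);\n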
Attached is a patch doing it this way.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 13 Oct 2023 12:44:38 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On Fri, 13 Oct 2023 at 11:44, Tomas Vondra\n<[email protected]> wrote:\n>\n> On 10/13/23 11:21, Dean Rasheed wrote:\n> >\n> > Is this only inefficient? Or can it also lead to wrong query results?\n>\n> I don't think it can produce incorrect results. It only affects which\n> values we \"merge\" into an interval when building the summaries.\n>\n\nAh, I get it now. These \"distance\" support functions are only used to\nsee how far apart 2 ranges are, for the purposes of the algorithm that\nmerges the 2 closest ranges. So if it gets it wrong, it only leads to\na poor choice of ranges to merge, making the query inefficient, but\nstill correct.\n\nPresumably, that also makes this kind of change safe to back-patch\n(not sure if you were planning to do that?), since it will only affect\nrange merging choices when inserting new values into existing indexes.\n\nRegards,\nDean\n\n\n", "msg_date": "Fri, 13 Oct 2023 13:04:02 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On 10/13/23 14:04, Dean Rasheed wrote:\n> On Fri, 13 Oct 2023 at 11:44, Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> On 10/13/23 11:21, Dean Rasheed wrote:\n>>>\n>>> Is this only inefficient? Or can it also lead to wrong query results?\n>>\n>> I don't think it can produce incorrect results. It only affects which\n>> values we \"merge\" into an interval when building the summaries.\n>>\n> \n> Ah, I get it now. These \"distance\" support functions are only used to\n> see how far apart 2 ranges are, for the purposes of the algorithm that\n> merges the 2 closest ranges. So if it gets it wrong, it only leads to\n> a poor choice of ranges to merge, making the query inefficient, but\n> still correct.\n> \n\nRight.\n\n> Presumably, that also makes this kind of change safe to back-patch\n> (not sure if you were planning to do that?), since it will only affect\n> range merging choices when inserting new values into existing indexes.\n> \n\nI do plan to backpatch this, yes. I don't think there are many people\naffected by this (few people are using infinite dates/timestamps, but\nmaybe the overflow could be more common).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 13 Oct 2023 14:17:30 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On Fri, 13 Oct 2023 at 13:17, Tomas Vondra\n<[email protected]> wrote:\n>\n> I do plan to backpatch this, yes. 
I don't think there are many people\n> affected by this (few people are using infinite dates/timestamps, but\n> maybe the overflow could be more common).\n>\n\nOK, though I doubt that such values are common in practice.\n\nThere's also an overflow problem in\nbrin_minmax_multi_distance_interval() though, and I think that's worse\nbecause overflows there throw \"interval out of range\" errors, which\ncan prevent index creation or inserts.\n\nThere's a patch (0009 in [1]) as part of the infinite interval work,\nbut that just uses interval_mi(), and so doesn't fix the\ninterval-out-of-range errors, except for infinite intervals, which are\ntreated as special cases, which I don't think is really necessary.\n\nI think this should be rewritten to compute delta from ia and ib\nwithout going via an intermediate Interval value. I.e., instead of\ncomputing \"result\", just do something like\n\n dayfraction = (ib->time % USECS_PER_DAY) - (ia->time % USECS_PER_DAY);\n days = (ib->time / USECS_PER_DAY) - (ia->time / USECS_PER_DAY);\n days += (int64) ib->day - (int64) ia->day;\n days += ((int64) ib->month - (int64) ia->month) * INT64CONST(30);\n\nthen convert to double precision as it does now:\n\n delta = (double) days + dayfraction / (double) USECS_PER_DAY;\n\nSo the first part is exact 64-bit integer arithmetic, with no chance\nof overflow, and it'll handle \"infinite\" intervals just fine, when\nthat feature gets added.\n\nRegards,\nDean\n\n[1] https://www.postgresql.org/message-id/CAExHW5u1JE7dxK=WLzqhCszNToxQzJdieRmhREpW6r8w6kcRGQ@mail.gmail.com\n\n\n", "msg_date": "Fri, 13 Oct 2023 16:28:14 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "Thanks Tomas for bringing this discussion to hackers.\n\n\nOn Fri, Oct 13, 2023 at 8:58 PM Dean Rasheed <[email protected]> wrote:\n>\n> On Fri, 13 Oct 2023 at 13:17, Tomas Vondra\n> <[email protected]> wrote:\n> >\n> > I do plan to backpatch this, yes. I don't think there are many people\n> > affected by this (few people are using infinite dates/timestamps, but\n> > maybe the overflow could be more common).\n> >\n\nThe example you gave is missing CREATE INDEX command. Is it \"create\nindex test_idx_a on test using brin(a);\"\n\nDo already create indexes have this issue? Do they need to rebuilt\nafter upgrading?\n\n>\n> OK, though I doubt that such values are common in practice.\n>\n> There's also an overflow problem in\n> brin_minmax_multi_distance_interval() though, and I think that's worse\n> because overflows there throw \"interval out of range\" errors, which\n> can prevent index creation or inserts.\n>\n> There's a patch (0009 in [1]) as part of the infinite interval work,\n> but that just uses interval_mi(), and so doesn't fix the\n> interval-out-of-range errors, except for infinite intervals, which are\n> treated as special cases, which I don't think is really necessary.\n>\n\nRight. I used interval_mi() to preserve the finite value behaviour as\nis. But ...\n\n> I think this should be rewritten to compute delta from ia and ib\n> without going via an intermediate Interval value. 
I.e., instead of\n> computing \"result\", just do something like\n>\n> dayfraction = (ib->time % USECS_PER_DAY) - (ia->time % USECS_PER_DAY);\n> days = (ib->time / USECS_PER_DAY) - (ia->time / USECS_PER_DAY);\n> days += (int64) ib->day - (int64) ia->day;\n> days += ((int64) ib->month - (int64) ia->month) * INT64CONST(30);\n>\n> then convert to double precision as it does now:\n>\n> delta = (double) days + dayfraction / (double) USECS_PER_DAY;\n>\n\nGiven Tomas's explanation of how these functions are supposed to work,\nI think your suggestions is better.\n\nI was worried that above calculations may not produce the same result\nas the current code when there is no error because modulo and integer\ndivision are not distributive over subtraction. But it looks like\ntogether they behave as normal division which is distributive over\nsubtraction. I couldn't find an example where that is not true.\n\nTomas, you may want to incorporate this in your patch. If not, I will\nincorporate it in my infinite interval patchset in [1].\n\n[1] https://www.postgresql.org/message-id/CAExHW5u1JE7dxK=WLzqhCszNToxQzJdieRmhREpW6r8w6kcRGQ@mail.gmail.com\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 16 Oct 2023 14:55:24 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On 10/16/23 11:25, Ashutosh Bapat wrote:\n> Thanks Tomas for bringing this discussion to hackers.\n> \n> \n> On Fri, Oct 13, 2023 at 8:58 PM Dean Rasheed <[email protected]> wrote:\n>>\n>> On Fri, 13 Oct 2023 at 13:17, Tomas Vondra\n>> <[email protected]> wrote:\n>>>\n>>> I do plan to backpatch this, yes. I don't think there are many people\n>>> affected by this (few people are using infinite dates/timestamps, but\n>>> maybe the overflow could be more common).\n>>>\n> \n> The example you gave is missing CREATE INDEX command. Is it \"create\n> index test_idx_a on test using brin(a);\"\n\nAh, you're right - apologies. FWIW when testing I usually use 1-page\nranges, to possibly hit more combinations / problems. More importantly,\nit needs to specify the opclass, otherwise it'll use the default minmax\nopclass which does not use distance at all:\n\ncreate index test_idx_a on test\n using brin (a timestamptz_minmax_multi_ops) with (pages_per_range=1);\n\n> \n> Do already create indexes have this issue? Do they need to rebuilt\n> after upgrading?\n> \n\nYes, existing indexes will have inefficient ranges. I'm not sure we want\nto push people to reindex everything, the issue seem somewhat unlikely\nin practice.\n\n>>\n>> OK, though I doubt that such values are common in practice.\n>>\n>> There's also an overflow problem in\n>> brin_minmax_multi_distance_interval() though, and I think that's worse\n>> because overflows there throw \"interval out of range\" errors, which\n>> can prevent index creation or inserts.\n>>\n>> There's a patch (0009 in [1]) as part of the infinite interval work,\n>> but that just uses interval_mi(), and so doesn't fix the\n>> interval-out-of-range errors, except for infinite intervals, which are\n>> treated as special cases, which I don't think is really necessary.\n>>\n> \n> Right. I used interval_mi() to preserve the finite value behaviour as\n> is. But ...\n> \n>> I think this should be rewritten to compute delta from ia and ib\n>> without going via an intermediate Interval value. 
I.e., instead of\n>> computing \"result\", just do something like\n>>\n>> dayfraction = (ib->time % USECS_PER_DAY) - (ia->time % USECS_PER_DAY);\n>> days = (ib->time / USECS_PER_DAY) - (ia->time / USECS_PER_DAY);\n>> days += (int64) ib->day - (int64) ia->day;\n>> days += ((int64) ib->month - (int64) ia->month) * INT64CONST(30);\n>>\n>> then convert to double precision as it does now:\n>>\n>> delta = (double) days + dayfraction / (double) USECS_PER_DAY;\n>>\n> \n> Given Tomas's explanation of how these functions are supposed to work,\n> I think your suggestions is better.\n> \n> I was worried that above calculations may not produce the same result\n> as the current code when there is no error because modulo and integer\n> division are not distributive over subtraction. But it looks like\n> together they behave as normal division which is distributive over\n> subtraction. I couldn't find an example where that is not true.\n> \n> Tomas, you may want to incorporate this in your patch. If not, I will\n> incorporate it in my infinite interval patchset in [1].\n> \n\nI'd rather keep it as separate patch, although maybe let's deal with it\nseparately from the larger patches. It's a bug, and having it in a patch\nset that adds a feature does not seem like a good idea (or maybe I don't\nunderstand what the other thread does, I haven't looked very closely).\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 16 Oct 2023 16:03:12 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On Mon, Oct 16, 2023 at 7:33 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 10/16/23 11:25, Ashutosh Bapat wrote:\n> > Thanks Tomas for bringing this discussion to hackers.\n> >\n> >\n> > On Fri, Oct 13, 2023 at 8:58 PM Dean Rasheed <[email protected]> wrote:\n> >>\n> >> On Fri, 13 Oct 2023 at 13:17, Tomas Vondra\n> >> <[email protected]> wrote:\n> >>>\n> >>> I do plan to backpatch this, yes. I don't think there are many people\n> >>> affected by this (few people are using infinite dates/timestamps, but\n> >>> maybe the overflow could be more common).\n> >>>\n> >\n> > The example you gave is missing CREATE INDEX command. Is it \"create\n> > index test_idx_a on test using brin(a);\"\n>\n> Ah, you're right - apologies. FWIW when testing I usually use 1-page\n> ranges, to possibly hit more combinations / problems. More importantly,\n> it needs to specify the opclass, otherwise it'll use the default minmax\n> opclass which does not use distance at all:\n>\n> create index test_idx_a on test\n> using brin (a timestamptz_minmax_multi_ops) with (pages_per_range=1);\n>\n\nThanks.\n\n> >\n> > Do already create indexes have this issue? Do they need to rebuilt\n> > after upgrading?\n> >\n>\n> Yes, existing indexes will have inefficient ranges. I'm not sure we want\n> to push people to reindex everything, the issue seem somewhat unlikely\n> in practice.\n>\n\nIf the column has infinity values only then they need to rebuild the\nindex. Such users may notice this bug fix in the release notes and\ndecide to rebuild the index themselves.\n\n> >> I think this should be rewritten to compute delta from ia and ib\n> >> without going via an intermediate Interval value. 
I.e., instead of\n> >> computing \"result\", just do something like\n> >>\n> >> dayfraction = (ib->time % USECS_PER_DAY) - (ia->time % USECS_PER_DAY);\n> >> days = (ib->time / USECS_PER_DAY) - (ia->time / USECS_PER_DAY);\n> >> days += (int64) ib->day - (int64) ia->day;\n> >> days += ((int64) ib->month - (int64) ia->month) * INT64CONST(30);\n> >>\n> >> then convert to double precision as it does now:\n> >>\n> >> delta = (double) days + dayfraction / (double) USECS_PER_DAY;\n> >>\n> >\n> > Given Tomas's explanation of how these functions are supposed to work,\n> > I think your suggestions is better.\n> >\n> > I was worried that above calculations may not produce the same result\n> > as the current code when there is no error because modulo and integer\n> > division are not distributive over subtraction. But it looks like\n> > together they behave as normal division which is distributive over\n> > subtraction. I couldn't find an example where that is not true.\n> >\n> > Tomas, you may want to incorporate this in your patch. If not, I will\n> > incorporate it in my infinite interval patchset in [1].\n> >\n>\n> I'd rather keep it as separate patch, although maybe let's deal with it\n> separately from the larger patches. It's a bug, and having it in a patch\n> set that adds a feature does not seem like a good idea (or maybe I don't\n> understand what the other thread does, I haven't looked very closely).\n>\n\nIf you incorporate these changes, I will need to remove 0009, which\nmostly rewrites that function, from my patchset. If you don't, my\npatch rewrites anyway. Either way is fine with me.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 16 Oct 2023 21:02:06 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "Hi,\n\nHere's a couple cleaned-up patches fixing the various discussed here.\nI've tried to always add a regression test demonstrating the issue\nfirst, and then fix it in the next patch.\n\nIn particular, this deals with these issues:\n\n1) overflows in distance calculation for large timestamp values (0002)\n\n2) incorrect subtraction in distance for date values (0003)\n\n3) incorrect distance for infinite date/timestamp values (0005)\n\n4) failing distance for extreme interval values (0007)\n\nAll the problems except \"2\" have been discussed earlier, but this seems\na bit more serious than the other issues, as it's easier to hit. It\nsubtracts the values in the opposite order (smaller - larger), so the\ndistances are negated. Which means we actually merge the values from the\nmost distant ones, and thus are \"guaranteed\" to build very a very\ninefficient summary. 
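In code terms the bug was a single flipped subtraction - roughly this\none-line change (a sketch, with the float8 casts from the related\noverflow fixes):\n\n-    PG_RETURN_FLOAT8(dateVal1 - dateVal2);\n+    PG_RETURN_FLOAT8((float8) dateVal2 - (float8) dateVal1);\n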
People with multi-minmax indexes on \"date\" columns\nprobably will need to reindex.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 17 Oct 2023 22:25:38 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On 10/17/23 22:25, Tomas Vondra wrote:\n> Hi,\n> \n> Here's a couple cleaned-up patches fixing the various discussed here.\n> I've tried to always add a regression test demonstrating the issue\n> first, and then fix it in the next patch.\n> \n> In particular, this deals with these issues:\n> \n> 1) overflows in distance calculation for large timestamp values (0002)\n> \n> 2) incorrect subtraction in distance for date values (0003)\n> \n> 3) incorrect distance for infinite date/timestamp values (0005)\n> \n> 4) failing distance for extreme interval values (0007)\n> \n> All the problems except \"2\" have been discussed earlier, but this seems\n> a bit more serious than the other issues, as it's easier to hit. It\n> subtracts the values in the opposite order (smaller - larger), so the\n> distances are negated. Which means we actually merge the values from the\n> most distant ones, and thus are \"guaranteed\" to build very a very\n> inefficient summary. People with multi-minmax indexes on \"date\" columns\n> probably will need to reindex.\n> \n\nBTW when adding the tests with extreme values, I noticed this:\n\n test=# select '5874897-01-01'::date;\n date\n ---------------\n 5874897-01-01\n (1 row)\n\n test=# select '5874897-01-01'::date + '1 second'::interval;\n ERROR: date out of range for timestamp\n\nIIUC this happens because the first thing date_pl_interval does is\ndate2timestamp, ignoring the fact that the ranges of those data types\nare different - dates allow values up to '5874897 AD' while timestamps\nonly allows values up to '294276 AD'.\n\nThis seems to be a long-standing behavior, added by a9e08392dd6f in\n2004. Not sure how serious it is, I just noticed when I tried to do\narithmetics on the extreme values in tests.\n\n\nregards\n\n--\nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 18 Oct 2023 10:16:21 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On Tue, 17 Oct 2023 at 21:25, Tomas Vondra\n<[email protected]> wrote:\n>\n> Here's a couple cleaned-up patches fixing the various discussed here.\n> I've tried to always add a regression test demonstrating the issue\n> first, and then fix it in the next patch.\n>\n\nThis looks good to me.\n\n> 2) incorrect subtraction in distance for date values (0003)\n>\n> All the problems except \"2\" have been discussed earlier, but this seems\n> a bit more serious than the other issues, as it's easier to hit. It\n> subtracts the values in the opposite order (smaller - larger), so the\n> distances are negated. Which means we actually merge the values from the\n> most distant ones, and thus are \"guaranteed\" to build very a very\n> inefficient summary.\n>\n\nYeah, that's not good. Amusingly this accidentally made infinite dates\nbehave correctly, since they were distance 0 away from anything else,\nwhich was larger than all the other negative distances! 
But yes, that\nneeded fixing properly.\n\nThanks for taking care of this.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 18 Oct 2023 11:13:48 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On Wed, Oct 18, 2023 at 1:55 AM Tomas Vondra\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Here's a couple cleaned-up patches fixing the various discussed here.\n> I've tried to always add a regression test demonstrating the issue\n> first, and then fix it in the next patch.\n\nIt will be good to commit the test changes as well.\n\n>\n> In particular, this deals with these issues:\n>\n> 1) overflows in distance calculation for large timestamp values (0002)\n\nI could reduce the SQL for timestamp overflow test to just\n-- test overflows during CREATE INDEX with extreme timestamp values\nCREATE TEMPORARY TABLE brin_timestamp_test(a TIMESTAMPTZ);\n\nSET datestyle TO iso;\n\nINSERT INTO brin_timestamp_test VALUES\n('4713-01-01 00:00:30 BC'),\n('294276-12-01 00:00:01');\n\nCREATE INDEX ON brin_timestamp_test USING brin (a\ntimestamptz_minmax_multi_ops) WITH (pages_per_range=1);\n\nI didn't understand the purpose of adding 60 odd values to the table.\nIt didn't tell which of those values triggers the overflow. Minimal\nset above is much easier to understand IMO. Using a temporary table\njust avoids DROP TABLE statement. But I am ok if you want to use\nnon-temporary table with DROP.\n\nCode changes in 0002 look fine. Do we want to add a comment \"cast to a\nwider datatype to avoid overflow\"? Or is that too explicit?\n\nThe code changes fix the timestamp issue but there's a diff in case of\n\n>\n> 2) incorrect subtraction in distance for date values (0003)\n\nThe test case for date brin index didn't crash though. Even after\napplying 0003 patch. The reason why date subtraction can't overflow is\na bit obscure. PostgreSQL doesn't allow dates beyond 4714-12-31 BC\nbecause of the code below\n#define IS_VALID_DATE(d) \\\n((DATETIME_MIN_JULIAN - POSTGRES_EPOCH_JDATE) <= (d) && \\\n(d) < (DATE_END_JULIAN - POSTGRES_EPOCH_JDATE))\nThis prevents the lower side to be well within the negative int32\noverflow threshold and we always subtract higher value from the lower\none. May be good to elaborate this? A later patch does use float 8\ncasting eliminating \"any\" possibility of overflow. So the comment may\nnot be necessary after squashing all the changes.\n\n>\n> 3) incorrect distance for infinite date/timestamp values (0005)\n\nThe tests could use a minimal set of rows here too.\n\nThe code changes look fine and fix the problem seen with the tests alone.\n\n>\n> 4) failing distance for extreme interval values (0007)\n\nI could reproduce the issue with a minimal set of values\n-- test handling of overflow for interval values\nCREATE TABLE brin_interval_test(a INTERVAL);\n\nINSERT INTO brin_interval_test VALUES\n('177999985 years'),\n('-178000000 years');\n\nCREATE INDEX ON brin_interval_test USING brin (a\ninterval_minmax_multi_ops) WITH (pages_per_range=1);\nDROP TABLE brin_interval_test;\n\nThe code looks fine and fixed the issue seen with the test.\n\nWe may want to combine various test cases though. Like the test adding\ninfinity and extreme values could be combined. 
Also the number of\nvalues it inserts in the table for the reasons stated above.\n\n>\n> All the problems except \"2\" have been discussed earlier, but this seems\n> a bit more serious than the other issues, as it's easier to hit. It\n> subtracts the values in the opposite order (smaller - larger), so the\n> distances are negated. Which means we actually merge the values from the\n> most distant ones, and thus are \"guaranteed\" to build very a very\n> inefficient summary. People with multi-minmax indexes on \"date\" columns\n> probably will need to reindex.\n>\n\nRight. Do we highlight that in the commit message so that the person\nwriting release notes picks it up from there?\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 18 Oct 2023 16:17:40 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On Wed, 18 Oct 2023 at 09:16, Tomas Vondra\n<[email protected]> wrote:\n>\n> BTW when adding the tests with extreme values, I noticed this:\n>\n> test=# select '5874897-01-01'::date;\n> date\n> ---------------\n> 5874897-01-01\n> (1 row)\n>\n> test=# select '5874897-01-01'::date + '1 second'::interval;\n> ERROR: date out of range for timestamp\n>\n\nThat's correct because date + interval returns timestamp, and the\nvalue is out of range for a timestamp. This is equivalent to:\n\nselect '5874897-01-01'::date::timestamp + '1 second'::interval;\nERROR: date out of range for timestamp\n\nand I think it's good that it gives a different error from this:\n\nselect '294276-01-01'::date::timestamp + '1 year'::interval;\nERROR: timestamp out of range\n\nso you can tell that the overflow in the first case happens before the addition.\n\nOf course a side effect of internally casting first is that you can't\ndo things like this:\n\nselect '5874897-01-01'::date - '5872897 years'::interval;\nERROR: date out of range for timestamp\n\nwhich arguably ought to return '2000-01-01 00:00:00'. In practice\nthough, I think it would be far more trouble than it's worth trying to\nchange that.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 18 Oct 2023 11:56:39 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "\n\nOn 10/18/23 12:13, Dean Rasheed wrote:\n> On Tue, 17 Oct 2023 at 21:25, Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> Here's a couple cleaned-up patches fixing the various discussed here.\n>> I've tried to always add a regression test demonstrating the issue\n>> first, and then fix it in the next patch.\n>>\n> \n> This looks good to me.\n> \n>> 2) incorrect subtraction in distance for date values (0003)\n>>\n>> All the problems except \"2\" have been discussed earlier, but this seems\n>> a bit more serious than the other issues, as it's easier to hit. It\n>> subtracts the values in the opposite order (smaller - larger), so the\n>> distances are negated. Which means we actually merge the values from the\n>> most distant ones, and thus are \"guaranteed\" to build very a very\n>> inefficient summary.\n>>\n> \n> Yeah, that's not good. Amusingly this accidentally made infinite dates\n> behave correctly, since they were distance 0 away from anything else,\n> which was larger than all the other negative distances! But yes, that\n> needed fixing properly.\n> \n\nRight. 
Apparently two wrongs can make a right, after all ;-)\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 18 Oct 2023 16:29:47 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On 10/18/23 12:47, Ashutosh Bapat wrote:\n> On Wed, Oct 18, 2023 at 1:55 AM Tomas Vondra\n> <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> Here's a couple cleaned-up patches fixing the various discussed here.\n>> I've tried to always add a regression test demonstrating the issue\n>> first, and then fix it in the next patch.\n> \n> It will be good to commit the test changes as well.\n> \n\nI do plan to commit them, ofc. I was just explaining why I'm adding the\ntests first, and then fixing the issue in a separate commit.\n\n>>\n>> In particular, this deals with these issues:\n>>\n>> 1) overflows in distance calculation for large timestamp values (0002)\n> \n> I could reduce the SQL for timestamp overflow test to just\n> -- test overflows during CREATE INDEX with extreme timestamp values\n> CREATE TEMPORARY TABLE brin_timestamp_test(a TIMESTAMPTZ);\n> \n> SET datestyle TO iso;\n> \n> INSERT INTO brin_timestamp_test VALUES\n> ('4713-01-01 00:00:30 BC'),\n> ('294276-12-01 00:00:01');\n> \n> CREATE INDEX ON brin_timestamp_test USING brin (a\n> timestamptz_minmax_multi_ops) WITH (pages_per_range=1);\n> \n> I didn't understand the purpose of adding 60 odd values to the table.\n> It didn't tell which of those values triggers the overflow. Minimal\n> set above is much easier to understand IMO. Using a temporary table\n> just avoids DROP TABLE statement. But I am ok if you want to use\n> non-temporary table with DROP.\n> \n> Code changes in 0002 look fine. Do we want to add a comment \"cast to a\n> wider datatype to avoid overflow\"? Or is that too explicit?\n> \n> The code changes fix the timestamp issue but there's a diff in case of\n> \n\nI did use that many values to actually force \"compaction\" and merging of\npoints into ranges. We only keep 32 values per page range, so with 2\nvalues we'll not build a range. You're right it may still trigger the\noverflow (we probably still calculate distances, I didn't realize that),\nbut without the compaction we can't check the query plans.\n\nHowever, I agree 60 values may be a bit too much. And I realized we can\nreduce the count quite a bit by using the values_per_range option. We\ncould set it to 8 (which is the minimum).\n\n>>\n>> 2) incorrect subtraction in distance for date values (0003)\n> \n> The test case for date brin index didn't crash though. Even after\n> applying 0003 patch. The reason why date subtraction can't overflow is\n> a bit obscure. PostgreSQL doesn't allow dates beyond 4714-12-31 BC\n> because of the code below\n> #define IS_VALID_DATE(d) \\\n> ((DATETIME_MIN_JULIAN - POSTGRES_EPOCH_JDATE) <= (d) && \\\n> (d) < (DATE_END_JULIAN - POSTGRES_EPOCH_JDATE))\n> This prevents the lower side to be well within the negative int32\n> overflow threshold and we always subtract higher value from the lower\n> one. May be good to elaborate this? A later patch does use float 8\n> casting eliminating \"any\" possibility of overflow. So the comment may\n> not be necessary after squashing all the changes.\n> \n\nNot sure what you mean by \"crash\". Yes, it doesn't hit an assert,\nbecause there's none when calculating distance for date. 
It however\nshould fail in the query plan check due to the incorrect order of\nsubtractions.\n\nAlso, the commit message does not claim to fix overflow. In fact, it\nsays it can't overflow ...\n\n>>\n>> 3) incorrect distance for infinite date/timestamp values (0005)\n> \n> The tests could use a minimal set of rows here too.\n> \n> The code changes look fine and fix the problem seen with the tests alone.\n> \n\nOK\n\n>>\n>> 4) failing distance for extreme interval values (0007)\n> \n> I could reproduce the issue with a minimal set of values\n> -- test handling of overflow for interval values\n> CREATE TABLE brin_interval_test(a INTERVAL);\n> \n> INSERT INTO brin_interval_test VALUES\n> ('177999985 years'),\n> ('-178000000 years');\n> \n> CREATE INDEX ON brin_interval_test USING brin (a\n> interval_minmax_multi_ops) WITH (pages_per_range=1);\n> DROP TABLE brin_interval_test;\n> \n> The code looks fine and fixed the issue seen with the test.\n> \n> We may want to combine various test cases though. Like the test adding\n> infinity and extreme values could be combined. Also the number of\n> values it inserts in the table for the reasons stated above.\n> \n\nI prefer not to do that. I find it more comprehensible to keep the tests\nseparate / testing different things. If the tests were expensive to\nsetup or something like that, that'd be a different situation.\n\n>>\n>> All the problems except \"2\" have been discussed earlier, but this seems\n>> a bit more serious than the other issues, as it's easier to hit. It\n>> subtracts the values in the opposite order (smaller - larger), so the\n>> distances are negated. Which means we actually merge the values from the\n>> most distant ones, and thus are \"guaranteed\" to build very a very\n>> inefficient summary. People with multi-minmax indexes on \"date\" columns\n>> probably will need to reindex.\n>>\n> \n> Right. Do we highlight that in the commit message so that the person\n> writing release notes picks it up from there?\n> \n\nYes, I think I'll mention what impact each issue can have on indexes.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 18 Oct 2023 16:53:05 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On Wed, Oct 18, 2023 at 8:23 PM Tomas Vondra\n<[email protected]> wrote:\n\n>\n> I did use that many values to actually force \"compaction\" and merging of\n> points into ranges. We only keep 32 values per page range, so with 2\n> values we'll not build a range. You're right it may still trigger the\n> overflow (we probably still calculate distances, I didn't realize that),\n> but without the compaction we can't check the query plans.\n>\n> However, I agree 60 values may be a bit too much. And I realized we can\n> reduce the count quite a bit by using the values_per_range option. We\n> could set it to 8 (which is the minimum).\n>\n\nI haven't read BRIN code, except the comments in the beginning of the\nfile. From what you describe it seems we will store first 32 values as\nis, but later as the number of values grow create ranges from those?\nPlease point me to the relevant source code/documentation. Anyway, if\nwe can reduce the number of rows we insert, that will be good.\n\n> >\n>\n> Not sure what you mean by \"crash\". Yes, it doesn't hit an assert,\n> because there's none when calculating distance for date. 
It however\n> should fail in the query plan check due to the incorrect order of\n> subtractions.\n>\n> Also, the commit message does not claim to fix overflow. In fact, it\n> says it can't overflow ...\n>\n\n\nReading the commit message\n\"Tests for overflows with dates and timestamps in BRIN ...\n\n...\n\nThe new regression tests check this for date and timestamp data types.\nIt adds tables with data close to the allowed min/max values, and builds\na minmax-multi index on it.\"\n\nI expected the CREATE INDEX statement to throw an error or fail the\n\"Assert(delta >= 0);\" in brin_minmax_multi_distance_date(). But a\nlater commit mentions that the overflow is not possible.\n\n> >\n> > We may want to combine various test cases though. Like the test adding\n> > infinity and extreme values could be combined. Also the number of\n> > values it inserts in the table for the reasons stated above.\n> >\n>\n> I prefer not to do that. I find it more comprehensible to keep the tests\n> separate / testing different things. If the tests were expensive to\n> setup or something like that, that'd be a different situation.\n\nFair enough.\n\n> >>\n> >\n> > Right. Do we highlight that in the commit message so that the person\n> > writing release notes picks it up from there?\n> >\n>\n> Yes, I think I'll mention what impact each issue can have on indexes.\n\nThanks.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 19 Oct 2023 10:02:38 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On Thu, 19 Oct 2023, 05:32 Ashutosh Bapat, <[email protected]>\nwrote:\n\n> On Wed, Oct 18, 2023 at 8:23 PM Tomas Vondra\n> <[email protected]> wrote:\n>\n> >\n> > I did use that many values to actually force \"compaction\" and merging of\n> > points into ranges. We only keep 32 values per page range, so with 2\n> > values we'll not build a range. You're right it may still trigger the\n> > overflow (we probably still calculate distances, I didn't realize that),\n> > but without the compaction we can't check the query plans.\n> >\n> > However, I agree 60 values may be a bit too much. And I realized we can\n> > reduce the count quite a bit by using the values_per_range option. We\n> > could set it to 8 (which is the minimum).\n> >\n>\n> I haven't read BRIN code, except the comments in the beginning of the\n> file. From what you describe it seems we will store first 32 values as\n> is, but later as the number of values grow create ranges from those?\n> Please point me to the relevant source code/documentation. Anyway, if\n> we can reduce the number of rows we insert, that will be good.\n>\n\nI don't think 60 values is excessive, but instead of listing them out by\nhand, perhaps use generate_series().\n\nRegards,\nDean\n\nOn Thu, 19 Oct 2023, 05:32 Ashutosh Bapat, <[email protected]> wrote:On Wed, Oct 18, 2023 at 8:23 PM Tomas Vondra\n<[email protected]> wrote:\n\n>\n> I did use that many values to actually force \"compaction\" and merging of\n> points into ranges. We only keep 32 values per page range, so with 2\n> values we'll not build a range. You're right it may still trigger the\n> overflow (we probably still calculate distances, I didn't realize that),\n> but without the compaction we can't check the query plans.\n>\n> However, I agree 60 values may be a bit too much. And I realized we can\n> reduce the count quite a bit by using the values_per_range option. 
We\n> could set it to 8 (which is the minimum).\n>\n\nI haven't read BRIN code, except the comments in the beginning of the\nfile. From what you describe it seems we will store first 32 values as\nis, but later as the number of values grow create ranges from those?\nPlease point me to the relevant source code/documentation. Anyway, if\nwe can reduce the number of rows we insert, that will be good.I don't think 60 values is excessive, but instead of listing them out by hand, perhaps use generate_series().Regards,Dean", "msg_date": "Thu, 19 Oct 2023 08:04:44 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "\n\nOn 10/19/23 06:32, Ashutosh Bapat wrote:\n> On Wed, Oct 18, 2023 at 8:23 PM Tomas Vondra\n> <[email protected]> wrote:\n> \n>>\n>> I did use that many values to actually force \"compaction\" and merging of\n>> points into ranges. We only keep 32 values per page range, so with 2\n>> values we'll not build a range. You're right it may still trigger the\n>> overflow (we probably still calculate distances, I didn't realize that),\n>> but without the compaction we can't check the query plans.\n>>\n>> However, I agree 60 values may be a bit too much. And I realized we can\n>> reduce the count quite a bit by using the values_per_range option. We\n>> could set it to 8 (which is the minimum).\n>>\n> \n> I haven't read BRIN code, except the comments in the beginning of the\n> file. From what you describe it seems we will store first 32 values as\n> is, but later as the number of values grow create ranges from those?\n> Please point me to the relevant source code/documentation. Anyway, if\n> we can reduce the number of rows we insert, that will be good.\n> \n\nI don't think we have documentation other than what's at the beginning\nof the file. What the comment tries to explain is that the summary has a\nmaximum size (32 values by default), and each value can be either a\npoint or a range. A point requires one value, range requires two. So we\naccumulate values one by one - until 32 values that's fine. Once we get\n33, we have to merge some of the points into ranges, and we do that in a\ngreedy way by distance.\n\nFor example, this may happen:\n\n33 values\n-> 31 values + 1 range [requires 33]\n-> 30 values + 1 range [requires 32]\n...\n\nThe exact steps depend on which values/ranges are picked for the merge,\nof course. In any case, there's no difference between the initial 32\nvalues and the values added later.\n\nDoes that explain the algorithm? I'm not against clarifying the comment,\nof course.\n\n>>>\n>>\n>> Not sure what you mean by \"crash\". Yes, it doesn't hit an assert,\n>> because there's none when calculating distance for date. It however\n>> should fail in the query plan check due to the incorrect order of\n>> subtractions.\n>>\n>> Also, the commit message does not claim to fix overflow. In fact, it\n>> says it can't overflow ...\n>>\n> \n> \n> Reading the commit message\n> \"Tests for overflows with dates and timestamps in BRIN ...\n> \n> ...\n> \n> The new regression tests check this for date and timestamp data types.\n> It adds tables with data close to the allowed min/max values, and builds\n> a minmax-multi index on it.\"\n> \n> I expected the CREATE INDEX statement to throw an error or fail the\n> \"Assert(delta >= 0);\" in brin_minmax_multi_distance_date(). 
But a\n> later commit mentions that the overflow is not possible.\n> \n\nHmmm, yeah. The comment should mention the date doesn't have issue with\noverflows, but other bugs.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 19 Oct 2023 11:01:44 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "\n\nOn 10/19/23 09:04, Dean Rasheed wrote:\n> On Thu, 19 Oct 2023, 05:32 Ashutosh Bapat, <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> On Wed, Oct 18, 2023 at 8:23 PM Tomas Vondra\n> <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> >\n> > I did use that many values to actually force \"compaction\" and\n> merging of\n> > points into ranges. We only keep 32 values per page range, so with 2\n> > values we'll not build a range. You're right it may still trigger the\n> > overflow (we probably still calculate distances, I didn't realize\n> that),\n> > but without the compaction we can't check the query plans.\n> >\n> > However, I agree 60 values may be a bit too much. And I realized\n> we can\n> > reduce the count quite a bit by using the values_per_range option. We\n> > could set it to 8 (which is the minimum).\n> >\n> \n> I haven't read BRIN code, except the comments in the beginning of the\n> file. From what you describe it seems we will store first 32 values as\n> is, but later as the number of values grow create ranges from those?\n> Please point me to the relevant source code/documentation. Anyway, if\n> we can reduce the number of rows we insert, that will be good.\n> \n> \n> I don't think 60 values is excessive, but instead of listing them out by\n> hand, perhaps use generate_series().\n> \n\nI tried to do that, but I ran into troubles with the \"date\" tests. I\nneeded to build values that close to the min/max values, so I did\nsomething like\n\nSELECT '4713-01-01 BC'::date + (i || ' days')::interval FROM\ngenerate_series(1,10) s(i);\n\nAnd then the same for the max date, but that fails because of the\ndate/timestamp conversion in date plus operator.\n\nHowever, maybe two simple generate_series() would work ...\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 19 Oct 2023 11:05:45 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On Thu, Oct 19, 2023 at 2:31 PM Tomas Vondra\n<[email protected]> wrote:\n\n>\n> Does that explain the algorithm? I'm not against clarifying the comment,\n> of course.\n\nThanks a lot for this explanation. It's clear now.\n\n> I tried to do that, but I ran into troubles with the \"date\" tests. I\n> needed to build values that close to the min/max values, so I did\n> something like\n>\n> SELECT '4713-01-01 BC'::date + (i || ' days')::interval FROM\n> generate_series(1,10) s(i);\n>\n> And then the same for the max date, but that fails because of the\n> date/timestamp conversion in date plus operator.\n>\n> However, maybe two simple generate_series() would work ...\n>\n\nSomething like this? 
select i::date from generate_series('4713-02-01\nBC'::date, '4713-01-01 BC'::date, '-1 day'::interval) i;\n i\n---------------\n 4713-02-01 BC\n 4713-01-31 BC\n 4713-01-30 BC\n 4713-01-29 BC\n 4713-01-28 BC\n 4713-01-27 BC\n 4713-01-26 BC\n 4713-01-25 BC\n 4713-01-24 BC\n 4713-01-23 BC\n 4713-01-22 BC\n 4713-01-21 BC\n 4713-01-20 BC\n 4713-01-19 BC\n 4713-01-18 BC\n 4713-01-17 BC\n 4713-01-16 BC\n 4713-01-15 BC\n 4713-01-14 BC\n 4713-01-13 BC\n 4713-01-12 BC\n 4713-01-11 BC\n 4713-01-10 BC\n 4713-01-09 BC\n 4713-01-08 BC\n 4713-01-07 BC\n 4713-01-06 BC\n 4713-01-05 BC\n 4713-01-04 BC\n 4713-01-03 BC\n 4713-01-02 BC\n 4713-01-01 BC\n(32 rows)\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 19 Oct 2023 14:52:42 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On 10/19/23 11:22, Ashutosh Bapat wrote:\n> On Thu, Oct 19, 2023 at 2:31 PM Tomas Vondra\n> <[email protected]> wrote:\n> \n>>\n>> Does that explain the algorithm? I'm not against clarifying the comment,\n>> of course.\n> \n> Thanks a lot for this explanation. It's clear now.\n> \n>> I tried to do that, but I ran into troubles with the \"date\" tests. I\n>> needed to build values that close to the min/max values, so I did\n>> something like\n>>\n>> SELECT '4713-01-01 BC'::date + (i || ' days')::interval FROM\n>> generate_series(1,10) s(i);\n>>\n>> And then the same for the max date, but that fails because of the\n>> date/timestamp conversion in date plus operator.\n>>\n>> However, maybe two simple generate_series() would work ...\n>>\n> \n> Something like this? select i::date from generate_series('4713-02-01\n> BC'::date, '4713-01-01 BC'::date, '-1 day'::interval) i;\n\nThat works, but if you try the same thing with the largest date, that'll\nfail\n\n select i::date from generate_series('5874896-12-01'::date,\n '5874897-01-01'::date,\n '1 day'::interval) i;\n\n ERROR: date out of range for timestamp\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 19 Oct 2023 13:21:51 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On Thu, Oct 19, 2023 at 4:51 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 10/19/23 11:22, Ashutosh Bapat wrote:\n> > On Thu, Oct 19, 2023 at 2:31 PM Tomas Vondra\n> > <[email protected]> wrote:\n> >\n> >>\n> >> Does that explain the algorithm? I'm not against clarifying the comment,\n> >> of course.\n> >\n> > Thanks a lot for this explanation. It's clear now.\n> >\n> >> I tried to do that, but I ran into troubles with the \"date\" tests. I\n> >> needed to build values that close to the min/max values, so I did\n> >> something like\n> >>\n> >> SELECT '4713-01-01 BC'::date + (i || ' days')::interval FROM\n> >> generate_series(1,10) s(i);\n> >>\n> >> And then the same for the max date, but that fails because of the\n> >> date/timestamp conversion in date plus operator.\n> >>\n> >> However, maybe two simple generate_series() would work ...\n> >>\n> >\n> > Something like this? 
select i::date from generate_series('4713-02-01\n> > BC'::date, '4713-01-01 BC'::date, '-1 day'::interval) i;\n>\n> That works, but if you try the same thing with the largest date, that'll\n> fail\n>\n> select i::date from generate_series('5874896-12-01'::date,\n> '5874897-01-01'::date,\n> '1 day'::interval) i;\n>\n> ERROR: date out of range for timestamp\n\nHmm, I see. This uses the generate_series(timestamp, timestamp, interval) version.\n\ndate + integer -> date though, so the following works. It's also an\nexample at https://www.postgresql.org/docs/16/functions-srf.html.\n#SELECT '5874896-12-01'::date + i FROM\ngenerate_series(1,10) s(i);\n\nI think we should provide generate_series(date, date, integer) which\nwill use date + integer -> date.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 19 Oct 2023 18:11:34 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On Thu, Oct 19, 2023 at 6:11 PM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> I think we should provide generate_series(date, date, integer) which\n> will use date + integer -> date.\n\nJust to be clear, I don't mean that this patch should add it.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 20 Oct 2023 15:22:31 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On 10/20/23 11:52, Ashutosh Bapat wrote:\n> On Thu, Oct 19, 2023 at 6:11 PM Ashutosh Bapat\n> <[email protected]> wrote:\n>>\n>> I think we should provide generate_series(date, date, integer) which\n>> will use date + integer -> date.\n> \n> Just to be clear, I don't mean that this patch should add it.\n> \n\nI'm not against adding such a generate_series() variant. For this patch\nI'll use something like the query you proposed, I think.\n\nI was thinking about the (date + interval) failure a bit more, and while\nI think it's confusing, it's not quite wrong. The problem is that the\ninterval may have hours/minutes, so it makes sense that the operator\nreturns timestamp. That's not what most operators do, where the data\ntype does not change. So a bit unexpected, but seems correct.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 22 Oct 2023 18:04:01 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "FWIW I've cleaned up and pushed all the patches we came up with in this\nthread. And I've backpatched all of them to 14+.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 27 Oct 2023 19:02:02 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" }, { "msg_contents": "On Fri, Oct 27, 2023 at 10:32 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> FWIW I've cleaned up and pushed all the patches we came up with in this\n> thread. 
And I've backpatched all of them to 14+.\n>\n\nThanks a lot Tomas.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 30 Oct 2023 10:16:00 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BRIN minmax multi - incorrect distance for infinite\n timestamp/date" } ]
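
As a rough illustration of the generate_series(date, date, integer) variant
floated at the end of this thread, here is a minimal SQL-level sketch built
on the date + integer -> date operator, so the series never passes through
timestamp and cannot overflow near the date limits. The function name
generate_series_date and the plain-SQL body are illustrative assumptions,
not the proposed in-core implementation (which would presumably be a C
function overloading generate_series itself):

CREATE FUNCTION generate_series_date(start_date date, stop_date date, step int)
RETURNS SETOF date
LANGUAGE sql IMMUTABLE STRICT
AS $$
    -- date - date yields an integer day count, and date + integer yields
    -- a date, so no timestamp conversion happens anywhere.
    SELECT start_date + i
    FROM generate_series(0, stop_date - start_date, step) AS s(i)
$$;

-- Works right up to the top of the date range, where the
-- generate_series(timestamp, timestamp, interval) form raises
-- "date out of range for timestamp":
SELECT * FROM generate_series_date('5874897-12-25'::date, '5874897-12-31'::date, 1);
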
[ { "msg_contents": "Hello. I started reading through the Glossary[^1] terms to learn from the\ndefinitions, and to double check them against what I'd written elsewhere. I\nfound myself making edits. :)\n\nI've put the edits together into a patch. My goal was to focus on wording\nsimplifications that are smoother to read, without big changes.\n\nI realized I should check with others though, so this is a mid-point\ncheck-in. For now I went through terms from \"A\" through \"I\".\n\n\nHere's a recap of the changes:\n\n\n\n - Changed places like “to make” to use the verb directly, i.e. “make”\n - When describing options for a command, changed to “option of” instead\n of “option to”\n - “system- or user-supplied”, removed the dash after system. Or I’d\n suggest system-supplied or user-supplied, to hyphenate both.\n - Changed “will access” to “access”\n - Changed “helps to prevent” to “helps prevent”\n - Changed “volume of records has been written” to “volume of records\n were written”\n - “It is required that this user exist” changed to “This user is\n required to exist...” (I’d also suggest “This user must exist before”) as a\n simplification, but that’s a bigger difference from what’s there now.\n - Changed “operating-system” to remove the hyphen, which is how it’s\n written elsewhere in the Glossary.\n - Many examples of “an SQL”. I changed those to “a SQL...”. For example\n I changed “An SQL command which” to “A SQL command that”. I'm not an\n English major so maybe I'm missing something here.\n - I often thought “that” was easier to read than “which”, and there are\n several examples in the patch. For example “Space in data pages that…”\n replaced “Space in data pages which…”\n - Simplifications like: “There also exist two secondary forks” to “There\n are two secondary forks”\n\nI was able to build the documentation locally and preview the HTML version.\n\n\nIf these types of changes are helpful, I can continue a consistent style\nthrough all the terms and provide a new (larger) v2 patch.\n\n\n\nThanks for taking a look.\n\nAndrew Atkinson\n\n[^1]: https://www.postgresql.org/docs/current/glossary.html", "msg_date": "Fri, 13 Oct 2023 23:16:56 -0500", "msg_from": "Andrew Atkinson <[email protected]>", "msg_from_op": true, "msg_subject": "[Doc] Glossary Term Definitions Edits" }, { "msg_contents": "On Sat, 14 Oct 2023, 5:20 pm Andrew Atkinson, <[email protected]>\nwrote:\n\n>\n> - Many examples of “an SQL”. I changed those to “a SQL...”. For\n> example I changed “An SQL command which” to “A SQL command that”. I'm not\n> an English major so maybe I'm missing something here.\n>\n> It would depend on how you pronounce SQL. For those that say es-que-el,\n\"An\" is the correct article. If you say sequel then it's \"a\". We've\nstandardised our docs to use \"an SQL\", so any changes we make would be the\nopposite way.\n\n
David", "msg_date": "Sat, 14 Oct 2023 18:24:32 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Doc] Glossary Term Definitions Edits" }, { "msg_contents": "On 2023-10-14 06:16 +0200, Andrew Atkinson write:\n> - When describing options for a command, changed to “option of” instead\n> of “option to”\n\nI think \"option to\" is not wrong (maybe less common). I've seen this\nin other texts and took it as \"the X option [that applies] to Y\".\n\n> - “system- or user-supplied”, removed the dash after system. Or I’d\n> suggest system-supplied or user-supplied, to hyphenate both.\n\nThat's a suspended hyphen and is common usage.\n\n> - Changed “volume of records has been written” to “volume of records\n> were written”\n\n\"Has been\" means that something happened just now. This is perfectly\nfine when talking about checkpoints IMO.\n\n> - Many examples of “an SQL”. I changed those to “a SQL...”. For example\n> I changed “An SQL command which” to “A SQL command that”. I'm not an\n> English major so maybe I'm missing something here.\n\nDepends on how you pronounce SQL (ess-cue-el or sequel). \"An SQL\"\nis more common in the docs whereas \"a SQL\" is more common in code\ncomments.\n\n-- \nErik\n\n\n", "msg_date": "Sat, 14 Oct 2023 07:55:53 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Doc] Glossary Term Definitions Edits" }, { "msg_contents": "> It would depend on how you pronounce SQL.\nGot it, makes sense.\n\n> We've standardised our docs\nMakes sense. This \"a vs. an\" could be a nice thing to add to a\n\"conventions\" or \"doc standards\" if it's not there already. I checked\nhttps://www.postgresql.org/docs/current/notation.html and\nhttps://wiki.postgresql.org/wiki/Main_Page Is there a docs page that has\nthat information? If there's an existing page where it could be added, I'd\nbe happy to add it.\n\n> That's a suspended hyphen and is common usage.\nSounds good, reset it back.\n\n > \"Has been\" means that something happened just now.\nSounds good, reset it back. \"has been\" is also used in the materialized\nterm, \"has been pre-computed\".\n\n> I think \"option to\" is not wrong\nOk, don't feel strongly. Reset it back.\n\n> That's a suspended hyphen and is common usage.\nOk, reset it back.\n\n\nCurious what people think about this. I thought the first phrase was\npossibly redundant.\n\n\n- On operating systems with a <literal>root</literal> user,\n\n- said user is not allowed to be the cluster owner.\n\n+ The user <literal>root</literal> is not allowed to be the cluster\nowner.\n\n\n\nI reviewed the definitions of assure vs. ensure, and I think ensure fits\nbetter, but I also noticed elsewhere the word “assurances” is used, as in\nan assurance about durability.\n\n\n\n- makes it visible to other transactions and assures its\n\n+ makes it visible to other transactions and ensures its\n\n\n\nRe: that/which, I put this into ChatGPT :) and apparently there is a\n“relative clause” vs. non-relative clause. My understanding was a\nnon-relative clause would typically be inside commas, and could be removed\nwithout changing the meaning.\n\n\nSince this section is talking about Bloat, and the space in data pages with\nnon-current row versions is part of bloat, I don’t think it could be\nremoved. So I think it’s a “relative clause” and “that” makes more sense.\n\nThis is another situation though where if there’s English majors or\ndocumentation experts, I’m happy to learn why I’m wrong. 
:)\n\n\n\n- Space in data pages which does not contain current row versions,\n\n+ Space in data pages that does not contain current row versions,\n\n\n\n\nSmaller patch attached!\n\nThanks.\n\n\n\n\n\n\nOn Sat, Oct 14, 2023 at 12:55 AM Erik Wienhold <[email protected]> wrote:\n\n> On 2023-10-14 06:16 +0200, Andrew Atkinson write:\n> > - When describing options for a command, changed to “option of”\n> instead\n> > of “option to”\n>\n> I think \"option to\" is not wrong (maybe less common). I've seen this\n> in other texts and took it as \"the X option [that applies] to Y\".\n>\n> > - “system- or user-supplied”, removed the dash after system. Or I’d\n> > suggest system-supplied or user-supplied, to hyphenate both.\n>\n> That's a suspended hyphen and is common usage.\n>\n> > - Changed “volume of records has been written” to “volume of records\n> > were written”\n>\n> \"Has been\" means that something happened just now. This is perfectly\n> fine when talking about checkpoints IMO.\n>\n> > - Many examples of “an SQL”. I changed those to “a SQL...”. For\n> example\n> > I changed “An SQL command which” to “A SQL command that”. I'm not an\n> > English major so maybe I'm missing something here.\n>\n> Depends on how you pronounce SQL (ess-cue-el or sequel). \"An SQL\"\n> is more common in the docs whereas \"a SQL\" is more common in code\n> comments.\n>\n> --\n> Erik\n>", "msg_date": "Sat, 14 Oct 2023 09:54:57 -0500", "msg_from": "Andrew Atkinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Doc] Glossary Term Definitions Edits" } ]
[ { "msg_contents": "Hackers,\n\nI was recently discussing the complexities of dealing with pg_control \nand backup_label with some hackers at PGConf NYC, when David Christensen \ncommented that backup_label was not a very good name since it gives the \nimpression of being informational and therefore something the user can \ndelete. In fact, we see this happen quite a lot, and there have been \nsome other discussions about it recently, see [1] and [2]. I bounced the \nidea of a rename off various hackers at the conference and in general \npeople seemed to think it was a good idea.\n\nAttached is a patch to rename backup_label to recovery_control. The \npurpose is to make it more obvious that the file should not be deleted. \nI'm open to other names, e.g. recovery.control. That makes the naming \ndistinct from tablespace_map, which is perhaps a good thing, but is also \nmore likely to be confused with recovery.signal.\n\nI did a pretty straight-forward search and replace on comments and \ndocumentation with only light editing. If this seems like a good idea \nand we choose a final name, I'll do a more thorough pass through the \ncomments and docs to try and make the usage more consistent.\n\nNote that there is one usage of backup label that remains, i.e. the text \nthat the user can set to describe the backup.\n\nRegards,\n-David\n\n[1] \nhttps://www.postgresql.org/message-id/1330cb48-4e47-03ca-f2fb-b144b49514d8%40pgmasters.net\n[2] \nhttps://www.postgresql.org/message-id/flat/CAM_vCudkSjr7NsNKSdjwtfAm9dbzepY6beZ5DP177POKy8%3D2aw%40mail.gmail.com#746e492bfcd2667635634f1477a61288", "msg_date": "Sat, 14 Oct 2023 14:19:42 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Rename backup_label to recovery_control" }, { "msg_contents": "At Sat, 14 Oct 2023 14:19:42 -0400, David Steele <[email protected]> wrote in \n> I was recently discussing the complexities of dealing with pg_control\n> and backup_label with some hackers at PGConf NYC, when David\n> Christensen commented that backup_label was not a very good name since\n> it gives the impression of being informational and therefore something\n> the user can delete. In fact, we see this happen quite a lot, and\n> there have been some other discussions about it recently, see [1] and\n> [2]. I bounced the idea of a rename off various hackers at the\n> conference and in general people seemed to think it was a good idea.\n> \n> Attached is a patch to rename backup_label to recovery_control. The\n\nJust an idea in a slightly different direction, but I'm wondering if\nwe can simply merge the content of backup_label into the control file.\nThe file is 8192 bytes long, yet only 256 bytes are used. As a result,\nwe anticipate no overhead. Such a configuration would forcibly prevent\nusers from removing the backup information.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 16 Oct 2023 13:16:42 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rename backup_label to recovery_control" }, { "msg_contents": "At Mon, 16 Oct 2023 13:16:42 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> Just an idea in a slightly different direction, but I'm wondering if\n> we can simply merge the content of backup_label into the control file.\n> The file is 8192 bytes long, yet only 256 bytes are used. As a result,\n> we anticipate no overhead. 
Such a configuration would forcibly prevent\n> users from removing the backup information.\n\nOn second thought, that would break the case of file-system level\nbackups, which require backup information separately from control\ndata.\n\nSorry for the noise.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 16 Oct 2023 13:26:24 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rename backup_label to recovery_control" }, { "msg_contents": "On Mon, Oct 16, 2023 at 01:16:42PM +0900, Kyotaro Horiguchi wrote:\n> Just an idea in a slightly different direction, but I'm wondering if\n> we can simply merge the content of backup_label into the control file.\n> The file is 8192 bytes long, yet only 256 bytes are used. As a result,\n> we anticipate no overhead. Such a configuration would forcibly prevent\n> users from removing the backup information.\n\nWith the critical assumptions behind PG_CONTROL_MAX_SAFE_SIZE, that\ndoes not sound like a good idea to me. And that's without the fact\nthat base backup labels could make the control file theoretically even\nlarger than PG_CONTROL_FILE_SIZE, even if that's unlikely going to\nhappen in practice.\n--\nMichael", "msg_date": "Mon, 16 Oct 2023 13:29:24 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rename backup_label to recovery_control" }, { "msg_contents": "On 10/16/23 00:26, Kyotaro Horiguchi wrote:\n> At Mon, 16 Oct 2023 13:16:42 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in\n>> Just an idea in a slightly different direction, but I'm wondering if\n>> we can simply merge the content of backup_label into the control file.\n>> The file is 8192 bytes long, yet only 256 bytes are used. As a result,\n>> we anticipate no overhead. Such a configuration would forcibly prevent\n>> users from removing the backup information.\n> \n> On second thought, that would break the case of file-system level\n> backups, which require backup information separately from control\n> data.\n\nExactly -- but we do have a proposal to do the opposite and embed \npg_control into backup_label [1] (or hopefully recovery_control).\n\nRegards,\n-David\n\n[1] \nhttps://www.postgresql.org/message-id/1330cb48-4e47-03ca-f2fb-b144b49514d8%40pgmasters.net\n\n\n", "msg_date": "Mon, 16 Oct 2023 09:43:03 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Rename backup_label to recovery_control" }, { "msg_contents": "On Sat, Oct 14, 2023 at 2:22 PM David Steele <[email protected]> wrote:\n> I was recently discussing the complexities of dealing with pg_control\n> and backup_label with some hackers at PGConf NYC, when David Christensen\n> commented that backup_label was not a very good name since it gives the\n> impression of being informational and therefore something the user can\n> delete. In fact, we see this happen quite a lot, and there have been\n> some other discussions about it recently, see [1] and [2]. I bounced the\n> idea of a rename off various hackers at the conference and in general\n> people seemed to think it was a good idea.\n\nPersonally, I feel like this is an area where we keep moving the parts\naround but I'm not sure we're really getting to anything better. We\ngot rid of recovery.conf. We got rid of exclusive backup mode. We\nreplaced pg_start_backup with pg_backup_start. 
It feels like every\nother release or so we whack something around here, but I'm not\nconvinced that any of it is really making much of an impact. If\nthere's been any decrease in people screwing up their backups, I\nhaven't noticed it.\n\nTo be fair, I will grant that renaming pg_clog to pg_xact_status and\npg_xlog to pg_wal does seem to have reduced the incidence of people\nnuking those directories, at least IME. So maybe this change would\nhelp too, for similar reasons. But I'm still concerned that we're\ndoing too much superficial tinkering in this area. Breaking\ncompatibility is not without cost.\n\nI also do wonder with recovery_control is really a better name. Maybe\nI just have backup_label too firmly stuck in my head, but is what that\nfile does really best described as recovery control? I'm not so sure\nabout that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 16 Oct 2023 10:19:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rename backup_label to recovery_control" }, { "msg_contents": "On 10/16/23 10:19, Robert Haas wrote:\n> On Sat, Oct 14, 2023 at 2:22 PM David Steele <[email protected]> wrote:\n>> I was recently discussing the complexities of dealing with pg_control\n>> and backup_label with some hackers at PGConf NYC, when David Christensen\n>> commented that backup_label was not a very good name since it gives the\n>> impression of being informational and therefore something the user can\n>> delete. In fact, we see this happen quite a lot, and there have been\n>> some other discussions about it recently, see [1] and [2]. I bounced the\n>> idea of a rename off various hackers at the conference and in general\n>> people seemed to think it was a good idea.\n> \n> Personally, I feel like this is an area where we keep moving the parts\n> around but I'm not sure we're really getting to anything better. We\n> got rid of recovery.conf. \n\nI agree that this was not an improvement. I was fine with bringing the \nrecovery options into the GUC fold but never really liked forcing them \ninto postgresql.auto.conf. But I lost that argument.\n\n> We got rid of exclusive backup mode. We\n> replaced pg_start_backup with pg_backup_start. \n\nI do think this was an improvement. For example it allows us to do [1], \nwhich I believe is a better overall solution to the problem of torn \nreads of pg_control. With exclusive backup we would not have this option.\n\n> It feels like every\n> other release or so we whack something around here, but I'm not\n> convinced that any of it is really making much of an impact. If\n> there's been any decrease in people screwing up their backups, I\n> haven't noticed it.\n\nIt's pretty subjective, but I feel much the same way. However, I think \nthe *areas* that people are messing up are changing as we remove \nobstacles and I feel like we should address them. backup_label has \nalways been a bit of a problem -- basically deciding should it be deleted?\n\nWith the removal of exclusive backup we removed the only valid use case \n(I think) for removing backup_label manually. Now, it should probably \nnever be removed manually, so we need to make adjustments to make that \nclearer to the user, also see [1].\n\nBetter messaging may also help, and I am also thinking about that.\n\n> To be fair, I will grant that renaming pg_clog to pg_xact_status and\n> pg_xlog to pg_wal does seem to have reduced the incidence of people\n> nuking those directories, at least IME. 
So maybe this change would\n> help too, for similar reasons. But I'm still concerned that we're\n> doing too much superficial tinkering in this area. Breaking\n> compatibility is not without cost.\n\nTrue enough, but ISTM that we have gotten few (or any) actual complaints \noutside of hackers speculating that there will be complaints. For the \nvarious maintainers of backup software this is just business as usual. \nThe changes to pg_basebackup are also pretty trivial.\n\n> I also do wonder with recovery_control is really a better name. Maybe\n> I just have backup_label too firmly stuck in my head, but is what that\n> file does really best described as recovery control? I'm not so sure\n> about that.\n\nThe thing it does that describes it as \"recovery control\" in my view is \nthat it contains the LSN where Postgres must start recovery (plus TLI, \nbackup method, etc.). There is some other informational stuff in there, \nbut the important fields are all about ensuring consistent recovery.\n\nAt the end of the day the entire point of backup *is* recovery and users \nwill interact with this file primarily in recovery scenarios.\n\nRegards,\n-David\n\n---\n\n[1] \nhttps://www.postgresql.org/message-id/1330cb48-4e47-03ca-f2fb-b144b49514d8%40pgmasters.net\n\n\n", "msg_date": "Mon, 16 Oct 2023 11:15:53 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Rename backup_label to recovery_control" }, { "msg_contents": "On Mon, Oct 16, 2023 at 11:15:53AM -0400, David Steele wrote:\n> On 10/16/23 10:19, Robert Haas wrote:\n> > We got rid of exclusive backup mode. We replaced pg_start_backup\n> > with pg_backup_start.\n> \n> I do think this was an improvement. For example it allows us to do\n> [1], which I believe is a better overall solution to the problem of\n> torn reads of pg_control. With exclusive backup we would not have this\n> option.\n\nWell maybe, but it also seems to mean that any other 3rd party (i.e. not\nPostgres-specific) backup tool seems to only support Postgres up till\nversion 14, as they cannot deal with non-exclusive mode - they are used\nto a simple pre/post-script approach.\n\nNot sure what to do about this, but as people/companies start moving to\n15, I am afraid we will get people complaining about this. I think\nhaving exclusive mode still be the default for pg_start_backup() (albeit\ndeprecated) in one release and then dropping it in the next was too\nfast.\n\nOr is somebody helping those \"enterprise\" backup solutions along in\nimplementing non-exclusive Postgres backups?\n\n\nMichael\n\n\n", "msg_date": "Mon, 16 Oct 2023 18:06:40 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rename backup_label to recovery_control" }, { "msg_contents": "On Mon, Oct 16, 2023 at 12:06 PM Michael Banck <[email protected]> wrote:\n> Not sure what to do about this, but as people/companies start moving to\n> 15, I am afraid we will get people complaining about this. I think\n> having exclusive mode still be the default for pg_start_backup() (albeit\n> deprecated) in one release and then dropping it in the next was too\n> fast.\n\nI completely agree, and I said so at the time, but got shouted down. I\nthink the argument that exclusive backups were breaking anything at\nall was very weak. 
Nobody was being forced to use them, and they broke\nnothing for people who didn't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 16 Oct 2023 12:12:37 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rename backup_label to recovery_control" }, { "msg_contents": "\n\nOn 10/16/23 12:06, Michael Banck wrote:\n> On Mon, Oct 16, 2023 at 11:15:53AM -0400, David Steele wrote:\n>> On 10/16/23 10:19, Robert Haas wrote:\n>>> We got rid of exclusive backup mode. We replaced pg_start_backup\n>>> with pg_backup_start.\n>>\n>> I do think this was an improvement. For example it allows us to do\n>> [1], which I believe is a better overall solution to the problem of\n>> torn reads of pg_control. With exclusive backup we would not have this\n>> option.\n> \n> Well maybe, but it also seems to mean that any other 3rd party (i.e. not\n> Postgres-specific) backup tool seems to only support Postgres up till\n> version 14, as they cannot deal with non-exclusive mode - they are used\n> to a simple pre/post-script approach.\n\nI'd be curious to know what enterprise solutions currently depend on \nthis method. At the very least they'd need to manage a WAL archive since \ncopying pg_wal is not a safe thing to do (without a snapshot), so it's \nnot just a matter of using start/stop scripts. And you'd probably want \nPITR, etc.\n\n> Not sure what to do about this, but as people/companies start moving to\n> 15, I am afraid we will get people complaining about this. I think\n> having exclusive mode still be the default for pg_start_backup() (albeit\n> deprecated) in one release and then dropping it in the next was too\n> fast.\n\nBut lots of companies are on PG15 and lots of hosting providers support \nit, apparently with no issues. Perhaps the companies you are referring \nto are lagging in adoption (a pretty common scenario) but I still see no \nevidence that there is a big problem looming.\n\nExclusive backup was deprecated for six releases, which should have been \nample time to switch over. All the backup solutions I am familiar with \nhave supported non-exclusive backup for years.\n\n> Or is somebody helping those \"enterprise\" backup solutions along in\n> implementing non-exclusive Postgres backups?\n\nI couldn't say, but there are many examples in open source projects of \nhow to do this. Somebody (Laurenz, I believe) also wrote a shell script \nto simulate exclusive backup behavior for those that want to continue \nusing it. Not what I would recommend, but he showed that it was possible.\n\nRegards,\n-David\n\n\n", "msg_date": "Mon, 16 Oct 2023 13:16:19 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Rename backup_label to recovery_control" }, { "msg_contents": "On 10/16/23 12:12, Robert Haas wrote:\n> On Mon, Oct 16, 2023 at 12:06 PM Michael Banck <[email protected]> wrote:\n>> Not sure what to do about this, but as people/companies start moving to\n>> 15, I am afraid we will get people complaining about this. I think\n>> having exclusive mode still be the default for pg_start_backup() (albeit\n>> deprecated) in one release and then dropping it in the next was too\n>> fast.\n> \n> I completely agree, and I said so at the time, but got shouted down. I\n> think the argument that exclusive backups were breaking anything at\n> all was very weak. 
Nobody was being forced to use them, and they broke\n> nothing for people who didn't.\n\nMy argument then (and now) is that exclusive backup prevented us from \nmaking material improvements in backup and recovery. It was complicated, \nduplicative (in code and docs), and entirely untested.\n\nSo you are correct that it was only dangerous to the people who were \nusing it (even if they did not know they were), but it was also a \nbarrier to progress.\n\nRegards,\n-David\n\n\n", "msg_date": "Mon, 16 Oct 2023 13:23:32 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Rename backup_label to recovery_control" }, { "msg_contents": "On Mon, 2023-10-16 at 12:12 -0400, Robert Haas wrote:\n> On Mon, Oct 16, 2023 at 12:06 PM Michael Banck <[email protected]> wrote:\n> > Not sure what to do about this, but as people/companies start moving to\n> > 15, I am afraid we will get people complaining about this. I think\n> > having exclusive mode still be the default for pg_start_backup() (albeit\n> > deprecated) in one release and then dropping it in the next was too\n> > fast.\n> \n> I completely agree, and I said so at the time, but got shouted down. I\n> think the argument that exclusive backups were breaking anything at\n> all was very weak. Nobody was being forced to use them, and they broke\n> nothing for people who didn't.\n\n+1\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 16 Oct 2023 19:28:06 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rename backup_label to recovery_control" }, { "msg_contents": "On 16.10.23 17:15, David Steele wrote:\n>> I also do wonder with recovery_control is really a better name. Maybe\n>> I just have backup_label too firmly stuck in my head, but is what that\n>> file does really best described as recovery control? I'm not so sure\n>> about that.\n> \n> The thing it does that describes it as \"recovery control\" in my view is \n> that it contains the LSN where Postgres must start recovery (plus TLI, \n> backup method, etc.). There is some other informational stuff in there, \n> but the important fields are all about ensuring consistent recovery.\n> \n> At the end of the day the entire point of backup *is* recovery and users \n> will interact with this file primarily in recovery scenarios.\n\nMaybe \"restore\" is better than \"recovery\", since recovery also happens \nseparate from backups, but restoring is something you do with a backup \n(and there is also restore_command etc.).\n\n\n\n", "msg_date": "Wed, 18 Oct 2023 09:07:38 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Rename backup_label to recovery_control" }, { "msg_contents": "On 10/18/23 03:07, Peter Eisentraut wrote:\n> On 16.10.23 17:15, David Steele wrote:\n>>> I also do wonder with recovery_control is really a better name. Maybe\n>>> I just have backup_label too firmly stuck in my head, but is what that\n>>> file does really best described as recovery control? I'm not so sure\n>>> about that.\n>>\n>> The thing it does that describes it as \"recovery control\" in my view \n>> is that it contains the LSN where Postgres must start recovery (plus \n>> TLI, backup method, etc.). 
There is some other informational stuff in \n>> there, but the important fields are all about ensuring consistent \n>> recovery.\n>>\n>> At the end of the day the entire point of backup *is* recovery and \n>> users will interact with this file primarily in recovery scenarios.\n> \n> Maybe \"restore\" is better than \"recovery\", since recovery also happens \n> separate from backups, but restoring is something you do with a backup \n> (and there is also restore_command etc.).\n\nI would not object to restore (there is restore_command) but I do think \nof what PostgreSQL does as \"recovery\" as opposed to \"restore\", which \ncomes before the recovery. Recovery is used a lot in the docs and there \nis also recovery.signal.\n\nBut based on the discussion in [1] I think we might be able to do away \nwith backup_label entirely, which would make this change moot.\n\nRegards,\n-David\n\n[1] \nhttps://www.postgresql.org/message-id/0f948866-7caf-0759-d53c-93c3e266ec3f%40pgmasters.net\n\n\n", "msg_date": "Wed, 18 Oct 2023 10:31:26 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Rename backup_label to recovery_control" } ]
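
For context on the workflow this thread keeps referring to, here is a rough
sketch of the PostgreSQL 15+ non-exclusive low-level backup API as driven
from psql; the 'nightly' label is just a placeholder. The point relevant to
the naming discussion is that pg_backup_stop() hands the backup_label
contents back to the backup tool, which must store them with the backup
itself, since the server no longer writes the file into the data directory:

-- Run in one session that stays connected for the whole file copy.
SELECT pg_backup_start(label => 'nightly', fast => false);

-- ... copy the data directory at the file-system level here ...

-- labelfile holds the backup_label text (spcmapfile the tablespace_map
-- text); the tool has to include these in the backup on its own.
SELECT labelfile, spcmapfile FROM pg_backup_stop(wait_for_archive => true);
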
[ { "msg_contents": "Hackers,\n\nFollowing up from a suggestion from Tom Lane[1] to improve the documentation of boolean predicate JSON path expressions, please find enclosed a draft patch to do so. It does three things:\n\n1. Converts all of the example path queries to use jsonb_path_query() and show the results, to make it clearer what the behaviors are.\n\n2. Replaces the list of deviations from the standards with a new subsection, with each deviation in its own sub-subsection. The regex section is unchanged, but I’ve greatly expanded the boolean expression JSON path section with examples comparing standard filter expressions and nonstandard boolean predicates. I’ve also added an exhortation not use boolean expressions with @? or standard path expressions with @@.\n\n3. While converting the modes section to use jsonb_path_query() and show the results, I also added an example of strict mode returning an error.\n\nFollow-ups I’d like to make:\n\n1. Expand the modes section to show how the types of results can vary depending on the mode, thanks to the flattening. Examples:\n\ndavid=# select jsonb_path_query('{\"a\":[1,2,3,4,5]}', '$.a ?(@[*] > 2)');\njsonb_path_query \n------------------\n3\n4\n5\n(3 rows)\n\ndavid=# select jsonb_path_query('{\"a\":[1,2,3,4,5]}', 'strict $.a ?(@[*] > 2)');\njsonb_path_query \n------------------\n[1, 2, 3, 4, 5]\n\n2. Improve the descriptions and examples for @?/jsonb_path_exists() and @@/jsonb_path_match().\n\nBest,\n\nDavid\n\n[1] https://www.postgresql.org/message-id/1229727.1680535592%40sss.pgh.pa.us", "msg_date": "Sat, 14 Oct 2023 16:40:05 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Oct 14, 2023, at 16:40, David E. Wheeler <[email protected]> wrote:\n\n> Following up from a suggestion from Tom Lane[1] to improve the documentation of boolean predicate JSON path expressions, please find enclosed a draft patch to do so.\n\nAnd now I see I can’t spell “Deviations”. Will fix along with any other requested revisions. GitHub diff here if you’re into that sort of thing:\n\n https://github.com/postgres/postgres/compare/master...theory:postgres:jsonpath-pred-docs\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Sat, 14 Oct 2023 16:45:35 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On 2023-10-14 22:40 +0200, David E. Wheeler write:\n> Following up from a suggestion from Tom Lane[1] to improve the\n> documentation of boolean predicate JSON path expressions, please find\n> enclosed a draft patch to do so.\n\nThanks for putting this together. See my review at the end.\n\n> It does three things:\n> \n> 1. Converts all of the example path queries to use jsonb_path_query()\n> and show the results, to make it clearer what the behaviors are.\n\nNice. This really does help to make some sense of it. I checked all\nqueries and they do work out except for two queries where the path\nexpression string is not properly quoted (but the intended output is\nstill correct).\n\n> 2. Replaces the list of deviations from the standards with a new\n> subsection, with each deviation in its own sub-subsection. The regex\n> section is unchanged, but I’ve greatly expanded the boolean expression\n> JSON path section with examples comparing standard filter expressions\n> and nonstandard boolean predicates. 
I’ve also added an exhortation not\n> use boolean expressions with @? or standard path expressions with @@.\n\nLGTM.\n\n> 3. While converting the modes section to use jsonb_path_query() and\n> show the results, I also added an example of strict mode returning an\n> error.\n> \n> Follow-ups I’d like to make:\n> \n> 1. Expand the modes section to show how the types of results can vary\n> depending on the mode, thanks to the flattening. Examples:\n> \n> david=# select jsonb_path_query('{\"a\":[1,2,3,4,5]}', '$.a ?(@[*] > 2)');\n> jsonb_path_query \n> ------------------\n> 3\n> 4\n> 5\n> (3 rows)\n> \n> david=# select jsonb_path_query('{\"a\":[1,2,3,4,5]}', 'strict $.a ?(@[*] > 2)');\n> jsonb_path_query \n> ------------------\n> [1, 2, 3, 4, 5]\n> \n> 2. Improve the descriptions and examples for @?/jsonb_path_exists()\n> and @@/jsonb_path_match().\n\n+1\n\n> [1] https://www.postgresql.org/message-id/1229727.1680535592%40sss.pgh.pa.us\n\nMy review:\n\n> diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\n> index affd1254bb..295f8ca5c9 100644\n> --- a/doc/src/sgml/func.sgml\n> +++ b/doc/src/sgml/func.sgml\n> @@ -17205,7 +17205,7 @@ array w/o UK? | t\n> For example, suppose you have some JSON data from a GPS tracker that you\n> would like to parse, such as:\n> <programlisting>\n> -{\n> + \\set json '{\n\nPerhaps make it explicit that the reader must run this in psql in order\nto use \\set and :'json' in the ensuing samples? Some of the existing\nexamples already use psql output but they do not rely on any psql\nfeatures.\n\n> \"track\": {\n> \"segments\": [\n> {\n> @@ -17220,7 +17220,7 @@ array w/o UK? | t\n> }\n> ]\n> }\n> -}\n> +}'\n> </programlisting>\n> </para>\n> \n> @@ -17229,7 +17229,10 @@ array w/o UK? | t\n> <literal>.<replaceable>key</replaceable></literal> accessor\n> operator to descend through surrounding JSON objects:\n> <programlisting>\n> -$.track.segments\n> +select jsonb_path_query(:'json'::jsonb, '$.track.segments');\n> + jsonb_path_query\n> +-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> + [{\"HR\": 73, \"location\": [47.763, 13.4034], \"start time\": \"2018-10-14 10:05:14\"}, {\"HR\": 135, \"location\": [47.706, 13.2635], \"start time\": \"2018-10-14 10:39:21\"}]\n> </programlisting>\n\nThis should use <screen>, <userinput>, and <computeroutput> if it shows\na psql session, e.g.:\n\n\t<screen>\n\t<userinput>select jsonb_path_query(:'json', '$.track.segments');</userinput>\n\t<computeroutput>\n\t jsonb_path_query\n\t-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\t [{\"HR\": 73, \"location\": [47.763, 13.4034], \"start time\": \"2018-10-14 10:05:14\"}, {\"HR\": 135, \"location\": [47.706, 13.2635], \"start time\": \"2018-10-14 10:39:21\"}]\n\t</computeroutput>\n\t</screen>\n\nAlso the cast to jsonb is not necessary and only adds clutter IMO.\n\n> </para>\n> \n> @@ -17239,7 +17242,11 @@ $.track.segments\n> the following path will return the location coordinates for all\n> the available track segments:\n> <programlisting>\n> -$.track.segments[*].location\n> +select jsonb_path_query(:'json'::jsonb, '$.track.segments[*].location');\n> + jsonb_path_query\n> +-------------------\n> + [47.763, 13.4034]\n> + [47.706, 13.2635]\n> </programlisting>\n> </para>\n> \n> @@ -17248,7 +17255,10 @@ $.track.segments[*].location\n> specify the 
corresponding subscript in the <literal>[]</literal>\n> accessor operator. Recall that JSON array indexes are 0-relative:\n> <programlisting>\n> -$.track.segments[0].location\n> +select jsonb_path_query(:'json'::jsonb, 'strict $.track.segments[0].location');\n> + jsonb_path_query\n> +-------------------\n> + [47.763, 13.4034]\n> </programlisting>\n> </para>\n> \n> @@ -17259,7 +17269,10 @@ $.track.segments[0].location\n> Each method name must be preceded by a dot. For example,\n> you can get the size of an array:\n> <programlisting>\n> -$.track.segments.size()\n> +select jsonb_path_query(:'json'::jsonb, 'strict $.track.segments.size()');\n> + jsonb_path_query\n> +------------------\n> + 2\n> </programlisting>\n> More examples of using <type>jsonpath</type> operators\n> and methods within path expressions appear below in\n> @@ -17302,7 +17315,10 @@ $.track.segments.size()\n> For example, suppose you would like to retrieve all heart rate values higher\n> than 130. You can achieve this using the following expression:\n> <programlisting>\n> -$.track.segments[*].HR ? (@ &gt; 130)\n> +select jsonb_path_query(:'json'::jsonb, '$.track.segments[*].HR ? (@ &gt; 130)');\n> + jsonb_path_query\n> +------------------\n> + 135\n> </programlisting>\n> </para>\n> \n> @@ -17312,7 +17328,10 @@ $.track.segments[*].HR ? (@ &gt; 130)\n> filter expression is applied to the previous step, and the path used\n> in the condition is different:\n> <programlisting>\n> -$.track.segments[*] ? (@.HR &gt; 130).\"start time\"\n> + select jsonb_path_query(:'json'::jsonb, '$.track.segments[*] ? (@.HR &gt; 130).\"start time\"');\n> + jsonb_path_query\n> +-----------------------\n> + \"2018-10-14 10:39:21\"\n> </programlisting>\n> </para>\n> \n> @@ -17321,7 +17340,10 @@ $.track.segments[*] ? (@.HR &gt; 130).\"start time\"\n> example, the following expression selects start times of all segments that\n> contain locations with relevant coordinates and high heart rate values:\n> <programlisting>\n> -$.track.segments[*] ? (@.location[1] &lt; 13.4) ? (@.HR &gt; 130).\"start time\"\n> +select jsonb_path_query(:'json'::jsonb, '$.track.segments[*] ? (@.location[1] &lt; 13.4) ? (@.HR &gt; 130).\"start time\"');\n> + jsonb_path_query\n> +-----------------------\n> + \"2018-10-14 10:39:21\"\n> </programlisting>\n> </para>\n> \n> @@ -17330,46 +17352,81 @@ $.track.segments[*] ? (@.location[1] &lt; 13.4) ? (@.HR &gt; 130).\"start time\"\n> The following example first filters all segments by location, and then\n> returns high heart rate values for these segments, if available:\n> <programlisting>\n> -$.track.segments[*] ? (@.location[1] &lt; 13.4).HR ? (@ &gt; 130)\n> +select jsonb_path_query(:'json'::jsonb, $.track.segments[*] ? (@.location[1] &lt; 13.4).HR ? (@ &gt; 130)');\n\nThe opening quote is missing from the jsonpath literal.\n\n> + jsonb_path_query\n> +------------------\n> + 135\n> </programlisting>\n> </para>\n> \n> <para>\n> You can also nest filter expressions within each other:\n> <programlisting>\n> -$.track ? (exists(@.segments[*] ? (@.HR &gt; 130))).segments.size()\n> +select jsonb_path_query(:'json'::jsonb, $.track ? (exists(@.segments[*] ? 
(@.HR &gt; 130))).segments.size()');\n\nMissing opening quote here as well.\n\n> + jsonb_path_query\n> +------------------\n> + 2\n> </programlisting>\n> This expression returns the size of the track if it contains any\n> segments with high heart rate values, or an empty sequence otherwise.\n> </para>\n> \n> - <para>\n> - <productname>PostgreSQL</productname>'s implementation of the SQL/JSON path\n> - language has the following deviations from the SQL/JSON standard:\n> - </para>\n> + <sect3 id=\"devations-from-the-standard\">\n> + <title>Devaiations from the SQL Standard</title>\n\nTypo in \"deviations\" (section ID and title).\n\n> + <para>\n> + <productname>PostgreSQL</productname>'s implementation of the SQL/JSON path\n> + language has the following deviations from the SQL/JSON standard:\n\nThe sentence should and in a period when this para is no longer followed\nby an item list.\n\n> + </para>\n> \n> - <itemizedlist>\n> - <listitem>\n> + <sect4 id=\"boolean-predicate-path-expressions\">\n> + <title>Boolean Predicate Path Expressions</title>\n> <para>\n> - A path expression can be a Boolean predicate, although the SQL/JSON\n> - standard allows predicates only in filters. This is necessary for\n> - implementation of the <literal>@@</literal> operator. For example,\n> - the following <type>jsonpath</type> expression is valid in\n> - <productname>PostgreSQL</productname>:\n> + As an extension to the SQL standard, a <productname>PostgreSQL</productname>\n> + path expression can be a Boolean predicate, whereas the SQL standard allows\n> + predicates only in filters. Where SQL standard path expressions return the\n> + relevant contents of the queried JSON value, predicate path expressions\n> + return the three-value three-valued result of the predicate:\n\nRedundant \"three-value\" before \"three-valued result\".\n\n> + <literal>true</literal>, <literal>false</literal>, or\n> + <literal>unknown</literal>. 
Compare this filter <type>jsonpath</type>\n> + exression:\n> <programlisting>\n> -$.track.segments[*].HR &lt; 70\n> +select jsonb_path_query(:'json'::jsonb, '$.track.segments ?(@[*].HR &gt; 130)');\n> + jsonb_path_query\n> +---------------------------------------------------------------------------------\n> + {\"HR\": 135, \"location\": [47.706, 13.2635], \"start time\": \"2018-10-14 10:39:21\"}\n> </programlisting>\n> - </para>\n> - </listitem>\n> + To a predicate expression, which returns <literal>true</literal>\n> +<programlisting>\n> +select jsonb_path_query(:'json'::jsonb, '$.track.segments[*].HR &gt; 130');\n> + jsonb_path_query\n> +------------------\n> + true\n> +</programlisting>\n> + </para>\n> \n> - <listitem>\n> - <para>\n> - There are minor differences in the interpretation of regular\n> - expression patterns used in <literal>like_regex</literal> filters, as\n> - described in <xref linkend=\"jsonpath-regular-expressions\"/>.\n> - </para>\n> - </listitem>\n> - </itemizedlist>\n> + <para>\n> + Predicate-only path expressions are necessary for implementation of the\n> + <literal>@@</literal> operator (and the\n> + <function>jsonb_path_match</function> function), and should not be used\n> + with the <literal>@?</literal> operator (or\n> + <function>jsonb_path_exists</function> function).\n> + </para>\n> +\n> + <para>\n> + Conversely, non-predicate <type>jsonpath</type> expressions should not be\n> + used with the <literal>@@</literal> operator (or the\n> + <function>jsonb_path_match</function> function).\n> + </para>\n> + </sect4>\n\nBoth paras should be wrapped in a single <note> so that they stand out\nfrom the rest of the text. Maybe even <warning>, but <note> is already\nused on this page for things that I'd consider warnings.\n\n> + <sect4 id=\"jsonpath-regular-expression-deviation\">\n> + <title>Regular Expression Interpretation</title>\n> + <para>\n> + There are minor differences in the interpretation of regular\n> + expression patterns used in <literal>like_regex</literal> filters, as\n> + described in <xref linkend=\"jsonpath-regular-expressions\"/>.\n> + </para>\n> + </sect4>\n\n<sect3 id=\"devations-from-the-standard\"> should be closed here,\notherwise the docs won't build. This can be checked with\n`make -C doc/src/sgml check`.\n\n> \n> <sect3 id=\"strict-and-lax-modes\">\n> <title>Strict and Lax Modes</title>\n> @@ -17431,18 +17488,30 @@ $.track.segments[*].HR &lt; 70\n> abstract from the fact that it stores an array of segments\n> when using the lax mode:\n> <programlisting>\n> -lax $.track.segments.location\n> + select jsonb_path_query(:'json'::jsonb, 'lax $.track.segments.location');\n> + jsonb_path_query \n\n`git diff --check` shows a couple of lines with trailing whitespace\n(mostly psql output).\n\n> +-------------------\n> + [47.763, 13.4034]\n> + [47.706, 13.2635]\n> </programlisting>\n> </para>\n> \n> <para>\n> - In the strict mode, the specified path must exactly match the structure of\n> + In strict mode, the specified path must exactly match the structure of\n> the queried JSON document to return an SQL/JSON item, so using this\n> - path expression will cause an error. 
To get the same result as in\n> - the lax mode, you have to explicitly unwrap the\n> + path expression will cause an error:\n> +<programlisting>\n> +select jsonb_path_query(:'json'::jsonb, 'strict $.track.segments.location');\n> +ERROR: jsonpath member accessor can only be applied to an object\n> +</programlisting> \n> + To get the same result as in the lax mode, you have to explicitly unwrap the\n> <literal>segments</literal> array:\n> <programlisting>\n> -strict $.track.segments[*].location\n> +select jsonb_path_query(:'json'::jsonb, 'strict $.track.segments[*].location');\n> + jsonb_path_query \n> +-------------------\n> + [47.763, 13.4034]\n> + [47.706, 13.2635]\n> </programlisting>\n> </para>\n> \n> @@ -17451,7 +17520,13 @@ strict $.track.segments[*].location\n> when using the lax mode. For instance, the following query selects every\n> <literal>HR</literal> value twice:\n> <programlisting>\n> -lax $.**.HR\n> +select jsonb_path_query(:'json'::jsonb, 'lax $.**.HR');\n> + jsonb_path_query \n> +------------------\n> + 73\n> + 135\n> + 73\n> + 135\n> </programlisting>\n> This happens because the <literal>.**</literal> accessor selects both\n> the <literal>segments</literal> array and each of its elements, while\n> @@ -17460,7 +17535,11 @@ lax $.**.HR\n> the <literal>.**</literal> accessor only in the strict mode. The\n> following query selects each <literal>HR</literal> value just once:\n> <programlisting>\n> -strict $.**.HR\n> +select jsonb_path_query(:'json'::jsonb, 'strict $.**.HR');\n> + jsonb_path_query \n> +------------------\n> + 73\n> + 135\n> </programlisting>\n> </para>\n> \n\n-- \nErik\n\n\n", "msg_date": "Sun, 15 Oct 2023 01:51:05 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Oct 14, 2023, at 19:51, Erik Wienhold <[email protected]> wrote:\n\n> Thanks for putting this together. See my review at the end.\n\nAppreciate the speedy review!\n\n> Nice. This really does help to make some sense of it. I checked all\n> queries and they do work out except for two queries where the path\n> expression string is not properly quoted (but the intended output is\n> still correct).\n\n🤦🏻‍♂️\n\n>> Follow-ups I’d like to make:\n>> \n>> 1. Expand the modes section to show how the types of results can vary\n>> depending on the mode, thanks to the flattening. Examples:\n>> \n>> david=# select jsonb_path_query('{\"a\":[1,2,3,4,5]}', '$.a ?(@[*] > 2)');\n>> jsonb_path_query \n>> ------------------\n>> 3\n>> 4\n>> 5\n>> (3 rows)\n>> \n>> david=# select jsonb_path_query('{\"a\":[1,2,3,4,5]}', 'strict $.a ?(@[*] > 2)');\n>> jsonb_path_query \n>> ------------------\n>> [1, 2, 3, 4, 5]\n>> \n>> 2. Improve the descriptions and examples for @?/jsonb_path_exists()\n>> and @@/jsonb_path_match().\n> \n> +1\n\nI planned to submit these changes in a separate patch, based on Tom Lane’s suggestion[1]. Would it be preferred to add them to this patch?\n\n> Perhaps make it explicit that the reader must run this in psql in order\n> to use \\set and :'json' in the ensuing samples? 
Some of the existing\n> examples already use psql output but they do not rely on any psql\n> features.\n\nGood call, done.\n\n> This should use <screen>, <userinput>, and <computeroutput> if it shows\n> a psql session, e.g.:\n> \n> <screen>\n> <userinput>select jsonb_path_query(:'json', '$.track.segments');</userinput>\n> <computeroutput>\n> jsonb_path_query\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> [{\"HR\": 73, \"location\": [47.763, 13.4034], \"start time\": \"2018-10-14 10:05:14\"}, {\"HR\": 135, \"location\": [47.706, 13.2635], \"start time\": \"2018-10-14 10:39:21\"}]\n> </computeroutput>\n> </screen>\n\nI pokwds around, and it appears the computeroutput bit is used for function output. So I followed the precedent in queries.sgml[2] and omitted the computeroutput tags but added prompt, e.g.,\n\n<screen>\n<prompt>=&gt;</prompt> <userinput>select jsonb_path_query(:'json', 'strict $.**.HR');</userinput>\njsonb_path_query\n------------------\n73\n135\n</screen>\n\n> Also the cast to jsonb is not necessary and only adds clutter IMO.\n\nRight, removed them all in function calls.\n\n>> + <para>\n>> + Predicate-only path expressions are necessary for implementation of the\n>> + <literal>@@</literal> operator (and the\n>> + <function>jsonb_path_match</function> function), and should not be used\n>> + with the <literal>@?</literal> operator (or\n>> + <function>jsonb_path_exists</function> function).\n>> + </para>\n>> +\n>> + <para>\n>> + Conversely, non-predicate <type>jsonpath</type> expressions should not be\n>> + used with the <literal>@@</literal> operator (or the\n>> + <function>jsonb_path_match</function> function).\n>> + </para>\n>> + </sect4>\n> \n> Both paras should be wrapped in a single <note> so that they stand out\n> from the rest of the text. Maybe even <warning>, but <note> is already\n> used on this page for things that I'd consider warnings.\n\nAgreed. Would be good if we could teach these functions and operators to reject path expressions they don’t support.\n\n>> + <sect4 id=\"jsonpath-regular-expression-deviation\">\n>> + <title>Regular Expression Interpretation</title>\n>> + <para>\n>> + There are minor differences in the interpretation of regular\n>> + expression patterns used in <literal>like_regex</literal> filters, as\n>> + described in <xref linkend=\"jsonpath-regular-expressions\"/>.\n>> + </para>\n>> + </sect4>\n> \n> <sect3 id=\"devations-from-the-standard\"> should be closed here,\n> otherwise the docs won't build. This can be checked with\n> `make -C doc/src/sgml check`.\n\nThanks. That produces a bunch of warnings for postgres.sgml and legal.sgml (and a failure to load the docbook DTD), but func.sgml is clean now.\n\n> `git diff --check` shows a couple of lines with trailing whitespace\n> (mostly psql output).\n\nI must’ve cleaned those after I sent the patch, good now. Updated patch attached, this time created by `git format-patch -v2`.\n\nBest,\n\nDavid\n\n[1] https://www.postgresql.org/message-id/1229727.1680535592%40sss.pgh.pa.us\n[2] https://www.postgresql.org/docs/current/queries-table-expressions.html#QUERIES-JOIN", "msg_date": "Sun, 15 Oct 2023 19:04:39 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On 2023-10-16 01:04 +0200, David E. 
Wheeler wrote:\n> On Oct 14, 2023, at 19:51, Erik Wienhold <[email protected]> wrote:\n> \n> > Thanks for putting this together. See my review at the end.\n> \n> Appreciate the speedy review!\n\nYou're welcome.\n\n> >> Follow-ups I’d like to make:\n> >> \n> >> 1. Expand the modes section to show how the types of results can vary\n> >> depending on the mode, thanks to the flattening. Examples:\n> >> \n> >> david=# select jsonb_path_query('{\"a\":[1,2,3,4,5]}', '$.a ?(@[*] > 2)');\n> >> jsonb_path_query \n> >> ------------------\n> >> 3\n> >> 4\n> >> 5\n> >> (3 rows)\n> >> \n> >> david=# select jsonb_path_query('{\"a\":[1,2,3,4,5]}', 'strict $.a ?(@[*] > 2)');\n> >> jsonb_path_query \n> >> ------------------\n> >> [1, 2, 3, 4, 5]\n> >> \n> >> 2. Improve the descriptions and examples for @?/jsonb_path_exists()\n> >> and @@/jsonb_path_match().\n> > \n> > +1\n> \n> I planned to submit these changes in a separate patch, based on Tom\n> Lane’s suggestion[1]. Would it be preferred to add them to this patch?\n\nYour call but I'm not against including it in this patch because it\nalready touches the modes section.\n\n> I poked around, and it appears the computeroutput bit is used for\n> function output. So I followed the precedent in queries.sgml[2] and\n> omitted the computeroutput tags but added prompt, e.g.,\n> <screen>\n> <prompt>=&gt;</prompt> <userinput>select jsonb_path_query(:'json', 'strict $.**.HR');</userinput>\n> jsonb_path_query\n> ------------------\n> 73\n> 135\n> </screen>\n\nOkay, not sure what the preferred style is but I saw <userinput> and\n<computeroutput> used together in doc/src/sgml/ref/createuser.sgml.\nBut it's not applied consistently in the rest of the docs.\n\n> >> + <para>\n> >> + Predicate-only path expressions are necessary for implementation of the\n> >> + <literal>@@</literal> operator (and the\n> >> + <function>jsonb_path_match</function> function), and should not be used\n> >> + with the <literal>@?</literal> operator (or\n> >> + <function>jsonb_path_exists</function> function).\n> >> + </para>\n> >> +\n> >> + <para>\n> >> + Conversely, non-predicate <type>jsonpath</type> expressions should not be\n> >> + used with the <literal>@@</literal> operator (or the\n> >> + <function>jsonb_path_match</function> function).\n> >> + </para>\n> >> + </sect4>\n> > \n> > Both paras should be wrapped in a single <note> so that they stand out\n> > from the rest of the text. Maybe even <warning>, but <note> is already\n> > used on this page for things that I'd consider warnings.\n> \n> Agreed. Would be good if we could teach these functions and operators\n> to reject path expressions they don’t support.\n\nRight, you mentioned that idea in [1] (separate types). Not sure what\nthe best strategy here is but it's likely to break existing queries.\nMaybe deprecating unsupported path expressions in the next major release\nand changing that to an error in the major release after that.\n\n> > This can be checked with `make -C doc/src/sgml check`.\n> \n> Thanks. That produces a bunch of warnings for postgres.sgml and\n> legal.sgml (and a failure to load the docbook DTD), but func.sgml is\n> clean now.\n\nHmm... I get no warnings on 1f89b73c4e. Did you install all tools as\ndescribed in [2]? 
The DTD needs to be installed as well.\n\n[1] https://www.postgresql.org/message-id/BAF11F2D-5EDD-4DBB-87FA-4F35845029AE%40justatheory.com\n[2] https://www.postgresql.org/docs/current/docguide-toolsets.html\n\n-- \nErik\n\n\n", "msg_date": "Mon, 16 Oct 2023 05:03:18 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Oct 15, 2023, at 23:03, Erik Wienhold <[email protected]> wrote:\n\n> Your call but I'm not against including it in this patch because it\n> already touches the modes section.\n\nOkay, added, let’s just put all our cards on the table. :-)\n\n>> Agreed. Would be good if we could teach these functions and operators\n>> to reject path expressions they don’t support.\n> \n> Right, you mentioned that idea in [1] (separate types). Not sure what\n> the best strategy here is but it's likely to break existing queries.\n> Maybe deprecating unsupported path expressions in the next major release\n> and changing that to an error in the major release after that.\n\nWell if the functions have a JsonPathItem struct, they can check its type attribute and reject those with a root type that’s a predicate in @? and reject it if it’s not a predicate in @@. Example of checking type here:\n\nhttps://github.com/postgres/postgres/blob/54b208f90963cb8b48b9794a5392b2fae4b40a98/src/backend/utils/adt/jsonpath_exec.c#L622\n\n>>> This can be checked with `make -C doc/src/sgml check`.\n>> \n>> Thanks. That produces a bunch of warnings for postgres.sgml and\n>> legal.sgml (and a failure to load the docbook DTD), but func.sgml is\n>> clean now.\n> \n> Hmm... I get no warnings on 1f89b73c4e. Did you install all tools as\n> described in [2]? The DTD needs to be installed as well.\n\nThanks, got it down to one:\n\npostgres.sgml:112: element sect4: validity error : Element sect4 content does not follow the DTD, expecting (sect4info? , (title , subtitle? , titleabbrev?) , (toc | lot | index | glossary | bibliography)* , (((calloutlist | glosslist | bibliolist | itemizedlist | orderedlist | segmentedlist | simplelist | variablelist | caution | important | note | tip | warning | literallayout | programlisting | programlistingco | screen | screenco | screenshot | synopsis | cmdsynopsis | funcsynopsis | classsynopsis | fieldsynopsis | constructorsynopsis | destructorsynopsis | methodsynopsis | formalpara | para | simpara | address | blockquote | graphic | graphicco | mediaobject | mediaobjectco | informalequation | informalexample | informalfigure | informaltable | equation | example | figure | table | msgset | procedure | sidebar | qandaset | task | anchor | bridgehead | remark | highlights | abstract | authorblurb | epigraph | indexterm | beginpage)+ , (refentry* | sect5* | simplesect*)) | refentry+ | sect5+ | simplesect+) , (toc | lot | index | glossary | bibliography)*), got (para para )\n &func;\n\nDavid", "msg_date": "Mon, 16 Oct 2023 15:59:27 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On 2023-10-16 21:59 +0200, David E. Wheeler wrote:\n> On Oct 16, 2023, at 18:07, Erik Wienhold <[email protected]> wrote:\n> \n> >> Okay, added, let’s just put all our cards on the table. 
:-)\n\nI'll have a look but the attached v3 is not a patch but some applefile.\n\n> Thanks, got it down to one:\n> \n> postgres.sgml:112: element sect4: validity error : Element sect4 content does not follow the DTD, expecting (sect4info? , (title , subtitle? , titleabbrev?) , (toc | lot | index | glossary | bibliography)* , (((calloutlist | glosslist | bibliolist | itemizedlist | orderedlist | segmentedlist | simplelist | variablelist | caution | important | note | tip | warning | literallayout | programlisting | programlistingco | screen | screenco | screenshot | synopsis | cmdsynopsis | funcsynopsis | classsynopsis | fieldsynopsis | constructorsynopsis | destructorsynopsis | methodsynopsis | formalpara | para | simpara | address | blockquote | graphic | graphicco | mediaobject | mediaobjectco | informalequation | informalexample | informalfigure | informaltable | equation | example | figure | table | msgset | procedure | sidebar | qandaset | task | anchor | bridgehead | remark | highlights | abstract | authorblurb | epigraph | indexterm | beginpage)+ , (refentry* | sect5* | simplesect*)) | refentry+ | sect5+ | simplesect+) , (toc | lot | index | glossary | bibliography)*), got (para para )\n> &func;\n\nOne of the added <sect4> is invalid by the looks of it. Maybe <title>\nis missing because it says \"got (para para )\" at the end.\n\n-- \nErik\n\n\n", "msg_date": "Tue, 17 Oct 2023 00:07:02 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Oct 16, 2023, at 18:07, Erik Wienhold <[email protected]> wrote:\n\n>> Okay, added, let’s just put all our cards on the table. :-)\n> \n> I'll have a look but the attached v3 is not a patch but some applefile.\n\nWeird, should be no different from previous attachments. I believe Apple Mail always uses application/octet-stream for attachments it doesn’t recognize, which includes .patch and .diff files, sadly.\n\n> One of the added <sect4> is invalid by the looks of it. Maybe <title>\n> is missing because it says \"got (para para )\" at the end.\n\nOh, I thought it would report issues from the files they were found in. You’re right, I forgot a title. Fixed in v4.\n\nDavid", "msg_date": "Mon, 16 Oct 2023 22:53:06 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Tue, Oct 17, 2023 at 10:56 AM David E. Wheeler <[email protected]> wrote:\n>\n>\n> Oh, I thought it would report issues from the files they were found in. You’re right, I forgot a title. Fixed in v4.\n>\n> David\n>\n\n+ Returns the result of a JSON path\n+ <link linkend=\"boolean-predicate-path-expressions\">predicate\n+ check</link> for the specified JSON value. If the result is\nnot Boolean,\n+ then <literal>NULL</literal> is returned. Do not use with non-predicate\n+ JSON path expressions.\n\n\"Do not use with non-predicate\", double negative is not easy to\ncomprehend. Maybe we can simplify it.\n\n16933: value. 
Use only SQL-standard JSON path expressions, not not\nthere are two \"not\".\n\n15842: SQL-standard JSON path expressions, not not\nthere are two \"not\".\n\n\n", "msg_date": "Thu, 19 Oct 2023 13:22:06 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Oct 19, 2023, at 01:22, jian he <[email protected]> wrote:\n\n> \"Do not use with non-predicate\", double negative is not easy to\n> comprehend. Maybe we can simplify it.\n> \n> 16933: value. Use only SQL-standard JSON path expressions, not not\n> there are two \"not\".\n> \n> 15842: SQL-standard JSON path expressions, not not\n> there are two \"not”.\n\n\nThank you, jian. Updated patch attached and also on GitHub.\n\n https://github.com/postgres/postgres/compare/master...theory:postgres:jsonpath-pred-docs\n\nBest,\n\nDavid", "msg_date": "Thu, 19 Oct 2023 09:39:29 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On 2023-10-19 15:39 +0200, David E. Wheeler wrote:\n> On Oct 19, 2023, at 01:22, jian he <[email protected]> wrote:\n> \n> Updated patch attached and also on GitHub.\n> \n> https://github.com/postgres/postgres/compare/master...theory:postgres:jsonpath-pred-docs\n\nJust wanted to take a look at v5. But it's an applefile again :P\n\n-- \nErik\n\n\n", "msg_date": "Fri, 20 Oct 2023 04:49:06 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Oct 19, 2023, at 10:49 PM, Erik Wienhold <[email protected]> wrote:\n\n> Just wanted to take a look at v5. But it's an applefile again :P\n\nI don’t get it. It was the other times too! Are you able to save it with a .patch suffix?\n\nD\n\n\n\n\n", "msg_date": "Thu, 19 Oct 2023 23:20:59 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On 2023-10-20 05:20 +0200, David E. Wheeler wrote:\n> On Oct 19, 2023, at 10:49 PM, Erik Wienhold <[email protected]> wrote:\n> \n> > Just wanted to take a look at v5. But it's an applefile again :P\n> \n> I don’t get it. It was the other times too! 
Are you able to save it\n> with a .patch suffix?\n\nSaving it is not the problem, but the actual file contents:\n\n\t$ xxd v5-0001-Improve-boolean-predicate-JSON-Path-docs.patch\n\t00000000: 0005 1600 0002 0000 0000 0000 0000 0000 ................\n\t00000010: 0000 0000 0000 0000 0002 0000 0009 0000 ................\n\t00000020: 0032 0000 000a 0000 0003 0000 003c 0000 .2...........<..\n\t00000030: 0036 0000 0000 0000 0000 0000 7635 2d30 .6..........v5-0\n\t00000040: 3030 312d 496d 7072 6f76 652d 626f 6f6c 001-Improve-bool\n\t00000050: 6561 6e2d 7072 6564 6963 6174 652d 4a53 ean-predicate-JS\n\t00000060: 4f4e 2d50 6174 682d 646f 6373 2e70 6174 ON-Path-docs.pat\n\t00000070: 6368 ch\n\nI don't even know what that represents, probably not some fancy file\ncompression.\n\n-- \nErik\n\n\n", "msg_date": "Fri, 20 Oct 2023 05:49:48 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Oct 19, 2023, at 23:49, Erik Wienhold <[email protected]> wrote:\n\n> I don't even know what that represents, probably not some fancy file\n> compression.\n\nOh, weird. Trying from a webmail client instead.\n\nBest,\n\nDavid", "msg_date": "Fri, 20 Oct 2023 09:49:35 -0400", "msg_from": "\"David Wheeler\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On 2023-10-20 15:49 +0200, David Wheeler wrote:\n> On Oct 19, 2023, at 23:49, Erik Wienhold <[email protected]> wrote:\n> \n> > I don't even know what that represents, probably not some fancy file\n> > compression.\n\nThat's an AppleSingle file according to [1][2]. It only contains the\nresource fork and file name but no data fork.\n\n> Oh, weird. Trying from a webmail client instead.\n\nThanks.\n\n> + Does JSON path return any item for the specified JSON value? Use only\n> + SQL-standard JSON path expressions, not\n> + <link linkend=\"boolean-predicate-path-expressions\">predicate check\n> + expressions.</link>\n\nAny reason for calling it \"predicate check expressions\" (e.g. the link\ntext) and sometimes \"predicate path expressions\" (e.g. the linked\nsection title)? I think it should be named consistently to avoid\nconfusion and also to simplify searching.\n\n> + Returns the result of a JSON path\n> + <link linkend=\"boolean-predicate-path-expressions\">predicate\n> + check</link> for the specified JSON value. If the result is not Boolean,\n> + then <literal>NULL</literal> is returned. Use only with\n> + <link linkend=\"boolean-predicate-path-expressions\">predicate check\n> + expressions.</link>\n\nLinking the same section twice in the same paragraph seems excessive.\n\n> +<prompt>=&gt;</prompt> <userinput>select jsonb_path_query(:'json', '$.track.segments');</userinput>\n> +select jsonb_path_query(:'json', '$.track.segments');\n\nPlease remove the second SELECT.\n\n> +<prompt>=&gt;</prompt> <userinput>select jsonb_path_query(:'json', 'strict $.track.segments[0].location');</userinput>\n> + jsonb_path_query\n> +-------------------\n> + [47.763, 13.4034]\n\nStrict mode is unnecessary to get that result and I'd omit it because\nthe different modes are not introduced yet at this point.\n\n> +<prompt>=&gt;</prompt> <userinput>select jsonb_path_query(:'json', 'strict $.track.segments.size()');</userinput>\n> + jsonb_path_query\n> +------------------\n> + 2\n\nStrict mode is unnecessary here as well.\n\n> + using the lax mode. 
To avoid surprising results, we recommend using\n> + the <literal>.**</literal> accessor only in the strict mode. The\n\nPlease change to \"in strict mode\" (without \"the\").\n\n[1] https://www.rfc-editor.org/rfc/rfc1740.txt\n[2] https://web.archive.org/web/20180311140826/http://kaiser-edv.de/documents/AppleSingle_AppleDouble.pdf\n\n-- \nErik\n", "msg_date": "Mon, 23 Oct 2023 02:36:26 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Oct 22, 2023, at 20:36, Erik Wienhold <[email protected]> wrote:\n\n> That's an AppleSingle file according to [1][2]. It only contains the\n> resource fork and file name but no data fork.\n\nAh, I had “Send large attachments with Mail Drop” enabled. To me 20K is not big but whatever. Let’s see if turning it off fixes the issue.\n\n> Any reason for calling it \"predicate check expressions\" (e.g. the link\n> text) and sometimes \"predicate path expressions\" (e.g. the linked\n> section title)? I think it should be named consistently to avoid\n> confusion and also to simplify searching.\n\nI think \"predicate path expressions\" is more descriptive, but \"predicate check expressions\" is what was in the docs before, so let’s stick with that.\n\n> Linking the same section twice in the same paragraph seems excessive.\n\nFair. Will link the second one.\n\n>> +<prompt>=&gt;</prompt> <userinput>select jsonb_path_query(:'json', '$.track.segments');</userinput>\n>> +select jsonb_path_query(:'json', '$.track.segments');\n> \n> Please remove the second SELECT.\n\nDone.\n\n>> +<prompt>=&gt;</prompt> <userinput>select jsonb_path_query(:'json', 'strict $.track.segments[0].location');</userinput>\n>> + jsonb_path_query\n>> +-------------------\n>> + [47.763, 13.4034]\n> \n> Strict mode is unnecessary to get that result and I'd omit it because\n> the different modes are not introduced yet at this point.\n\nYep, pasto.\n\n> Strict mode is unnecessary here as well.\n\nFixed.\n\n>> + using the lax mode. To avoid surprising results, we recommend using\n>> + the <literal>.**</literal> accessor only in the strict mode. The\n> \n> Please change to \"in strict mode\" (without \"the\").\n\nHrm, I prefer it without the article, too, but it is consistently used that way elsewhere, like here:\n\n https://github.com/postgres/postgres/blob/5b36e8f/doc/src/sgml/func.sgml#L17401\n\nI’d be happy to change them all, but was keeping it consistent for now.\n\nUpdated patch attached, thank you!\n\nDavid", "msg_date": "Mon, 23 Oct 2023 18:58:18 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On 2023-10-24 00:58 +0200, David E. Wheeler wrote:\n> On Oct 22, 2023, at 20:36, Erik Wienhold <[email protected]> wrote:\n> \n> > That's an AppleSingle file according to [1][2]. It only contains the\n> > resource fork and file name but no data fork.\n> \n> Ah, I had “Send large attachments with Mail Drop” enabled. To me 20K\n> is not big but whatever. Let’s see if turning it off fixes the issue.\n\nI suspected it had something to do with iCloud. 
Glad you solved it!\n\n> > Please change to \"in strict mode\" (without \"the\").\n> \n> Hrm, I prefer it without the article, too, but it is consistently used\n> that way elsewhere, like here:\n> \n> https://github.com/postgres/postgres/blob/5b36e8f/doc/src/sgml/func.sgml#L17401\n> \n> I’d be happy to change them all, but was keeping it consistent for now.\n\nRight. I haven't really noticed that the article case is more common.\nI thought that you may have missed that one because I saw this change\nthat removes the article:\n\n> - In the strict mode, the specified path must exactly match the structure of\n> + In strict mode, the specified path must exactly match the structure of\n\n> Updated patch attached, thank you!\n\nLGTM. Would you create a commitfest entry? I'll set the status to RfC.\n\n-- \nErik\n\n\n", "msg_date": "Tue, 24 Oct 2023 02:20:26 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Oct 23, 2023, at 20:20, Erik Wienhold <[email protected]> wrote:\n\n> I thought that you may have missed that one because I saw this change\n> that removes the article:\n> \n>> - In the strict mode, the specified path must exactly match the structure of\n>> + In strict mode, the specified path must exactly match the structure of\n\nOh, didn’t realize. Fixed.\n\n> LGTM. Would you create a commitfest entry? I'll set the status to RfC.\n\nDone. \n\n https://commitfest.postgresql.org/45/4624/\n\nBest,\n\nDavid", "msg_date": "Tue, 24 Oct 2023 22:36:24 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: tested, passed\n\nI took a look for this commit, it looks correct to me", "msg_date": "Sun, 03 Dec 2023 01:17:08 +0000", "msg_from": "shihao zhong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "\"David E. Wheeler\" <[email protected]> writes:\n> [ v7-0001-Improve-boolean-predicate-JSON-Path-docs.patch ]\n\nI started to review this, and got bogged down at\n\n@@ -17203,9 +17214,12 @@ array w/o UK? | t\n \n <para>\n For example, suppose you have some JSON data from a GPS tracker that you\n- would like to parse, such as:\n+ would like to parse, such as this JSON, set up as a\n+ <link linkend=\"app-psql-meta-command-set\"><application>psql</application>\n+ <command>\\set</command> variable</link> for use as <literal>:'json'</literal>\n+ in the examples below:\n <programlisting>\n-{\n+ \\set json '{\n \"track\": {\n \"segments\": [\n {\n\nI find the textual change rather unwieldy, but the bigger problem is\nthat this example doesn't actually work. 
If you try to copy-and-paste\nthis into psql, you get \"unterminated quoted string\", because psql\nmetacommands can't span line boundaries.\n\nPerhaps we could leave the existing display alone, and then add\n\n To follow the examples below, paste this into psql:\n <programlisting>\n \\set json '{ \"track\": { \"segments\": [ { \"location\": [ 47.763, 13.4034 ], \"start time\": \"2018-10-14 10:05:14\", \"HR\": 73 }, { \"location\": [ 47.706, 13.2635 ], \"start time\": \"2018-10-14 10:39:21\", \"HR\": 135 } ] }}'\n </programlisting>\n This will allow <literal>:'json'</literal> to be expanded into the\n above JSON value, plus suitable quoting.\n\nHowever, I'm not sure that's a great solution, because it's going to\nline-wrap on most displays, making copy-and-paste a bit iffy.\n\nI experimented with\n\nSELECT '\n ... multiline json value ...\n' AS json\n\\gexec\n\nbut that didn't seem to work either. Anybody have a better idea?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Jan 2024 16:15:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "I wrote:\n> I experimented with\n\n> SELECT '\n> ... multiline json value ...\n> ' AS json\n> \\gexec\n\n> but that didn't seem to work either. Anybody have a better idea?\n\nOh, never mind, \\gset is what I was reaching for. We can make\nit work with that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Jan 2024 16:22:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On 2024-01-19 22:15 +0100, Tom Lane wrote:\n> \"David E. Wheeler\" <[email protected]> writes:\n> > [ v7-0001-Improve-boolean-predicate-JSON-Path-docs.patch ]\n> \n> + \\set json '{\n> \"track\": {\n> \"segments\": [\n> {\n> \n> I find the textual change rather unwieldy, but the bigger problem is\n> that this example doesn't actually work. If you try to copy-and-paste\n> this into psql, you get \"unterminated quoted string\", because psql\n> metacommands can't span line boundaries.\n\nInteresting... copy-pasting the entire \\set command works for me with\npsql 16.1 in gnome-terminal and tmux. Typing it out manually gives me\nthe \"unterminated quoted string\" error. Maybe has to do with my stty\nsettings.\n\n> I experimented with\n> \n> SELECT '\n> ... multiline json value ...\n> ' AS json\n> \\gexec\n> \n> but that didn't seem to work either. Anybody have a better idea?\n\nFine with me (the \\gset variant).\n\n-- \nErik\n\n\n", "msg_date": "Sat, 20 Jan 2024 03:46:52 +0100", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Jan 19, 2024, at 21:46, Erik Wienhold <[email protected]> wrote:\n\n> Interesting... copy-pasting the entire \\set command works for me with\n> psql 16.1 in gnome-terminal and tmux. Typing it out manually gives me\n> the \"unterminated quoted string\" error. Maybe has to do with my stty\n> settings.\n\nYes, same on macOS Terminal.app and 16.1 compiled with readline. I didn’t realize that \\set didn’t support newlines, because it works fine when you paste something with newlines. Curious.\n\n>> I experimented with\n>> \n>> SELECT '\n>> ... multiline json value ...\n>> ' AS json\n>> \\gexec\n>> \n>> but that didn't seem to work either. 
Anybody have a better idea?\n> \n> Fine with me (the \\gset variant).\n\nMuch cleaner TBH.\n\ndavid=# select '{ \n \"track\": {\n \"segments\": [\n {\n \"location\": [ 47.763, 13.4034 ],\n \"start time\": \"2018-10-14 10:05:14\",\n \"HR\": 73\n },\n {\n \"location\": [ 47.706, 13.2635 ],\n \"start time\": \"2018-10-14 10:39:21\",\n \"HR\": 135\n }\n ]\n }\n}'::jsonb as json;\n json\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n {\"track\": {\"segments\": [{\"HR\": 73, \"location\": [47.763, 13.4034], \"start time\": \"2018-10-14 10:05:14\"}, {\"HR\": 135, \"location\": [47.706, 13.2635], \"start time\": \"2018-10-14 10:39:21\"}]}}\n(1 row)\n\ndavid=# \\gset\n\ndavid=# select :'json'::jsonb;\n jsonb\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n {\"track\": {\"segments\": [{\"HR\": 73, \"location\": [47.763, 13.4034], \"start time\": \"2018-10-14 10:05:14\"}, {\"HR\": 135, \"location\": [47.706, 13.2635], \"start time\": \"2018-10-14 10:39:21\"}]}}\n(1 row)\n\nSo great!\n\nWhile you’re in there, Tom, would it make sense to fold in something like [this patch][1] I posted last month to clarify which JSONPath comparison operators can take advantage of an index?\n\n--- a/doc/src/sgml/json.sgml\n+++ b/doc/src/sgml/json.sgml\n@@ -513,7 +513,7 @@ SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] == \"qui\"';\n</programlisting>\n For these operators, a GIN index extracts clauses of the form\n <literal><replaceable>accessors_chain</replaceable>\n- = <replaceable>constant</replaceable></literal> out of\n+ == <replaceable>constant</replaceable></literal> out of\n the <type>jsonpath</type> pattern, and does the index search based on\n the keys and values mentioned in these clauses. The accessors chain\n may include <literal>.<replaceable>key</replaceable></literal>,\n@@ -522,6 +522,9 @@ SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] == \"qui\"';\n The <literal>jsonb_ops</literal> operator class also\n supports <literal>.*</literal> and <literal>.**</literal> accessors,\n but the <literal>jsonb_path_ops</literal> operator class does not.\n+ Only the <literal>==</literal> and <literal>!=</literal> <link\n+ linkend=\"functions-sqljson-path-operators\">SQL/JSON Path Operators</link>\n+ can use the index.\n </para>\n\n <para>\n\nBest,\n\nDavid\n\n [1]: https://www.postgresql.org/message-id/[email protected]\n", "msg_date": "Sat, 20 Jan 2024 10:09:33 -0500", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "\"David E. 
Wheeler\" <[email protected]> writes:\n> While you’re in there, Tom, would it make sense to fold in something like [this patch][1] I posted last month to clarify which JSONPath comparison operators can take advantage of a index?\n\n> --- a/doc/src/sgml/json.sgml\n> +++ b/doc/src/sgml/json.sgml\n> @@ -513,7 +513,7 @@ SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] == \"qui\"';\n> </programlisting>\n> For these operators, a GIN index extracts clauses of the form\n> <literal><replaceable>accessors_chain</replaceable>\n> - = <replaceable>constant</replaceable></literal> out of\n> + == <replaceable>constant</replaceable></literal> out of\n> the <type>jsonpath</type> pattern, and does the index search based on\n> the keys and values mentioned in these clauses. The accessors chain\n> may include <literal>.<replaceable>key</replaceable></literal>,\n\nRight, clearly a typo.\n\n> @@ -522,6 +522,9 @@ SELECT jdoc->'guid', jdoc->'name' FROM api WHERE jdoc @@ '$.tags[*] == \"qui\"';\n> The <literal>jsonb_ops</literal> operator class also\n> supports <literal>.*</literal> and <literal>.**</literal> accessors,\n> but the <literal>jsonb_path_ops</literal> operator class does not.\n> + Only the <literal>==</literal> and <literal>!=</literal> <link\n> + linkend=\"functions-sqljson-path-operators\">SQL/JSON Path Operators</link>\n> + can use the index.\n> </para>\n\nYou sure about that? It would surprise me if we could effectively use\na not-equal condition with an index. If it is only == that works,\nthen the preceding statement seems sufficient.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Jan 2024 11:45:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "So, overall reaction to this patch: I like the approach of defining\n\"predicate check expressions\" as being a different thing from standard\njsonpath expressions. However, I'm not so thrilled with just saying\n\"don't use\" one type or the other with different jsonpath functions.\nAccording to my tests, some of these functions seem to give sensible\nresults anyway with the path type you say not to use, while some\ngive less-sensible results, and others give errors. We ought to try\nto document that, and maybe even clean up the less sane behaviors.\n(That is, I don't feel that a docs-only patch is necessarily the\nthing to do here.)\n\nAs an example, @? seems to behave sanely with a standard jsonpath:\n\nregression=# select '{\"a\":[1,2,3,4,5]}'::jsonb @? '$.a[*] ? (@ < 5)' ;\n ?column? \n----------\n t\n(1 row)\nregression=# select '{\"a\":[1,2,3,4,5]}'::jsonb @? '$.a[*] ? (@ > 5)' ;\n ?column? \n----------\n f\n(1 row)\n\nIt will take a predicate, but seems to always return true:\n\nregression=# select '{\"a\":[1,2,3,4,5]}'::jsonb @? '$.a[*] < 5' ;\n ?column? \n----------\n t\n(1 row)\n\nregression=# select '{\"a\":[1,2,3,4,5]}'::jsonb @? '$.a[*] > 5' ;\n ?column? \n----------\n t\n(1 row)\n\nSurely we're not helping anybody by leaving that behavior in place.\nMaking it do something useful, throwing an error, or returning NULL\nall seem superior to this. 
I observe that @@ returns NULL for the\npath type it doesn't like, so maybe that's what to do here.\n\n(Unsurprisingly, jsonb_path_exists acts similarly.)\n\nBTW, jsonb_path_query_array and jsonb_path_query_first seem to\ntake both types of path, like jsonb_path_query, so ISTM they need\ndocs changes too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 20 Jan 2024 12:34:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Jan 20, 2024, at 12:34, Tom Lane <[email protected]> wrote:\n\n> Surely we're not helping anybody by leaving that behavior in place.\n> Making it do something useful, throwing an error, or returning NULL\n> all seem superior to this. I observe that @@ returns NULL for the\n> path type it doesn't like, so maybe that's what to do here.\n\nI agree it would be far better for the behavior to be consistent, but frankly would like to see them raise an error. Ideally the hint would suggest the proper alternative operator or function to use, and maybe link to the docs that describe the difference between SQL-standard JSONPath and \"predicate check expressions\", and how they have separate operators and functions.\n\nI think of them as practically different data types (and wish they were, TBH). It makes sense that passing a JSON containment expression[1] would raise an error; so should passing the wrong flavor of JSONPath.\n\n> BTW, jsonb_path_query_array and jsonb_path_query_first seem to\n> take both types of path, like jsonb_path_query, so ISTM they need\n> docs changes too.\n\nHappy to update the patch, either to add those docs or, if we change the behavior to return a NULL or raise an error, then with that information, instead.\n\nBest,\n\nDavid\n\n [1]: https://www.postgresql.org/docs/current/datatype-json.html#JSON-CONTAINMENT\n\n\n\n", "msg_date": "Sun, 21 Jan 2024 10:16:56 -0500", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Jan 20, 2024, at 11:45, Tom Lane <[email protected]> wrote:\n\n> You sure about that? It would surprise me if we could effectively use\n> a not-equal condition with an index. If it is only == that works,\n> then the preceding statement seems sufficient.\n\nI’m not! I just assumed it in the same way creating an SQL = operator automatically respects NOT syntax (or so I recall). In fiddling a bit, I can’t get it to use an index:\n\nCREATE TABLE MOVIES (id SERIAL PRIMARY KEY, movie JSONB NOT NULL);\n\\copy movies(movie) from PROGRAM 'curl -s https://raw.githubusercontent.com/prust/wikipedia-movie-data/master/movies.json | jq -c \".[]\" | sed \"s|\\\\\\\\|\\\\\\\\\\\\\\\\|g\"';\ncreate index on movies using gin (movie);\nanalyze movies;\n\ndavid=# explain analyze select id from movies where movie @? '$ ?(@.genre[*] != \"Teen\")';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\nSeq Scan on movies (cost=0.00..3741.41 rows=4 width=4) (actual time=19.222..19.223 rows=0 loops=1)\n Filter: (movie @? '$?(@.\"genre\"[*] != \"Teen\")'::jsonpath)\n Rows Removed by Filter: 36273\nPlanning Time: 1.242 ms\nExecution Time: 19.247 ms\n(5 rows)\n\nBut that might be because the planner knows that the query is going to fetch most records, anyway. 
If I set most records to a single value:\n\ndavid=# update movies set movie = jsonb_set(movie, '{year}', '2020'::jsonb) where id < 3600;\nUPDATE 3599\ndavid=# analyze movies;\nANALYZE\ndavid=# explain analyze select id from movies where movie @? '$ ?(@.year != 2020)';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\nSeq Scan on movies (cost=0.00..3884.41 rows=32609 width=4) (actual time=0.065..43.730 rows=32399 loops=1)\n Filter: (movie @? '$?(@.\"year\" != 2020)'::jsonpath)\n Rows Removed by Filter: 3874\nPlanning Time: 1.759 ms\nExecution Time: 45.368 ms\n(5 rows)\n\nLooks like it still doesn’t use the index with !=. Pity.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Sun, 21 Jan 2024 14:02:12 -0500", "msg_from": "David E. Wheeler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Jan 20, 2024, at 12:34, Tom Lane <[email protected]> wrote:\n\n> It will take a predicate, but seems to always return true:\n> \n> regression=# select '{\"a\":[1,2,3,4,5]}'::jsonb @? '$.a[*] < 5' ;\n> ?column? \n> ----------\n> t\n> (1 row)\n> \n> regression=# select '{\"a\":[1,2,3,4,5]}'::jsonb @? '$.a[*] > 5' ;\n> ?column? \n> ----------\n> t\n> (1 row)\n\nJust for the sake of clarity, this return value is “correct,” because @? and other functions and operators that expect SQL standard statements evaluate the SET returned by the JSONPath statement, but predicate check expressions don’t return a set, but always a single scalar value (true, false, or null). From the POV of the code expecting SQL standard JSONPath results, that’s a set of one. @? sees that the set is not empty so returns true.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Sun, 21 Jan 2024 14:24:12 -0500", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "\"David E. Wheeler\" <[email protected]> writes:\n> On Jan 20, 2024, at 12:34, Tom Lane <[email protected]> wrote:\n>> Surely we're not helping anybody by leaving that behavior in place.\n>> Making it do something useful, throwing an error, or returning NULL\n>> all seem superior to this. I observe that @@ returns NULL for the\n>> path type it doesn't like, so maybe that's what to do here.\n\n> I agree it would be far better for the behavior to be consistent, but frankly would like to see them raise an error. Ideally the hint would suggest the proper alternative operator or function to use, and maybe link to the docs that describe the difference between SQL-standard JSONPath and \"predicate check expressions\", and how they have separate operators and functions.\n\nThat ship's probably sailed. However, I spent some time poking into\nthe odd behavior I showed for @?, and it seems to me that it's an\noversight in appendBoolResult. That just automatically returns jperOk\nin the !found short-circuit path for any boolean result, which is not\nthe behavior you'd get if the boolean value were actually returned\n(cf. jsonb_path_match_internal). I experimented with making it do\nwhat seems like the right thing, and found that there is only one\nregression test case that changes behavior:\n\n select jsonb '2' @? '$ == \"2\"';\n ?column? \n ----------\n- t\n+ f\n (1 row)\n \n\nNow, JSON does not think that numeric 2 equals string \"2\", so\nISTM the expected output here is flat wrong. 
It's certainly\ninconsistent with @@:\n\nregression=# select jsonb '2' @@ '$ == \"2\"';\n ?column? \n----------\n \n(1 row)\n\nSo I think we should consider a patch like the attached\n(probably with some more test cases added). I don't really\nunderstand this code however, so maybe I missed something.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 21 Jan 2024 14:34:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "\"David E. Wheeler\" <[email protected]> writes:\n> On Jan 20, 2024, at 12:34, Tom Lane <[email protected]> wrote:\n>> It will take a predicate, but seems to always return true:\n>> \n>> regression=# select '{\"a\":[1,2,3,4,5]}'::jsonb @? '$.a[*] < 5' ;\n>> ?column? \n>> ----------\n>> t\n>> (1 row)\n>> \n>> regression=# select '{\"a\":[1,2,3,4,5]}'::jsonb @? '$.a[*] > 5' ;\n>> ?column? \n>> ----------\n>> t\n>> (1 row)\n\n> Just for the sake of clarity, this return value is “correct,” because @? and other functions and operators that expect SQL standard statements evaluate the SET returned by the JSONPath statement, but predicate check expressions don’t return a set, but always a single scalar value (true, false, or null). From the POV of the code expecting SQL standard JSONPath results, that’s a set of one. @? sees that the set is not empty so returns true.\n\nI don't entirely buy this argument --- if that is the interpretation,\nof what use are predicate check expressions? It seems to me that we\nhave to consider them as being a shorthand notation for filter\nexpressions, or else they simply do not make sense as jsonpath.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Jan 2024 14:43:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Jan 21, 2024, at 14:43, Tom Lane <[email protected]> wrote:\n\n> I don't entirely buy this argument --- if that is the interpretation,\n> of what use are predicate check expressions? It seems to me that we\n> have to consider them as being a shorthand notation for filter\n> expressions, or else they simply do not make sense as jsonpath.\n\nI believe it becomes pretty apparent when using jsonb_path_query(). The filter expression returns a set (using the previous \\gset example):\n\ndavid=# select jsonb_path_query(:'json', '$.track.segments[*].HR ? (@ > 10)');\n jsonb_path_query \n------------------\n 73\n 135\n(2 rows)\n\nThe predicate check returns a boolean:\n\ndavid=# select jsonb_path_query(:'json', '$.track.segments[*].HR > 10');\n jsonb_path_query \n------------------\n true\n(1 row)\n\nThis is the only way the different behaviors make sense to me. @? 
expects a set, not a boolean, sees there is an item in the set, so returns true:\n\nI make this interpretation based on this bit of the docs:\n\n <para>\n<productname>PostgreSQL</productname>'s implementation of the SQL/JSON path\nlanguage has the following deviations from the SQL/JSON standard.\n</para>\n\n<sect4 id=\"boolean-predicate-check-expressions\">\n<title>Boolean Predicate Check Expressions</title>\n<para>\nAs an extension to the SQL standard, a <productname>PostgreSQL</productname>\npath expression can be a Boolean predicate, whereas the SQL standard allows\npredicates only in filters. Where SQL standard path expressions return the\nrelevant contents of the queried JSON value, predicate check expressions\nreturn the three-valued result of the predicate: <literal>true</literal>,\n<literal>false</literal>, or <literal>unknown</literal>. Compare this\nfilter <type>jsonpath</type> expression:\n<screen>\n<prompt>=&gt;</prompt> <userinput>select jsonb_path_query(:'json', '$.track.segments ?(@[*].HR &gt; 130)');</userinput>\njsonb_path_query\n---------------------------------------------------------------------------------\n{\"HR\": 135, \"location\": [47.706, 13.2635], \"start time\": \"2018-10-14 10:39:21\"}\n</screen>\nTo a predicate expression, which returns <literal>true</literal>\n<screen>\n<prompt>=&gt;</prompt> <userinput>select jsonb_path_query(:'json', '$.track.segments[*].HR &gt; 130');</userinput>\njsonb_path_query\n------------------\ntrue\n</screen>\n</para>\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Sun, 21 Jan 2024 14:58:10 -0500", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Jan 21, 2024, at 14:58, David E. Wheeler <[email protected]> wrote:\n\n> I make this interpretation based on this bit of the docs:\n\nSorry, that’s from my branch. Here it is in master:\n\n <listitem>\n<para>\nA path expression can be a Boolean predicate, although the SQL/JSON\nstandard allows predicates only in filters. This is necessary for\nimplementation of the <literal>@@</literal> operator. For example,\nthe following <type>jsonpath</type> expression is valid in\n<productname>PostgreSQL</productname>:\n<programlisting>\n$.track.segments[*].HR &lt; 70\n</programlisting>\n</para>\n</listitem>\n\nIn any event, something to do with @@, perhaps to have some compatibility with `jsonb @> jsonb`? I don’t know why @@ was important to have.\n\nDavid\n\n\n\n", "msg_date": "Sun, 21 Jan 2024 15:02:37 -0500", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "\"David E. Wheeler\" <[email protected]> writes:\n> In any event, something to do with @@, perhaps to have some compatibility with `jsonb @> jsonb`? I don’t know why @@ was important to have.\n\nYeah, that's certainly under-explained. But it seems like I'm not\ngetting traction for the idea of changing the behavior, so let's\ngo back to just documenting it. I spent some time going over your\ntext and also cleaning up nearby shaky English, and ended with v8\nattached. 
I'd be content to commit this if it looks good to you.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 24 Jan 2024 16:32:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Jan 24, 2024, at 16:32, Tom Lane <[email protected]> wrote:\n\n> \"David E. Wheeler\" <[email protected]> writes:\n> \n>> In any event, something to do with @@, perhaps to have some compatibility with `jsonb @> jsonb`? I don’t know why @@ was important to have.\n> \n> Yeah, that's certainly under-explained. But it seems like I'm not\n> getting traction for the idea of changing the behavior, so let's\n> go back to just documenting it.\n\nCurious about those discussions. On the one hand I find the distinction between the two behaviors to be odd, and to produce unexpected results when they’re not used in the proper context.\n\nIt reminds me of the Perl idea of context, where functions behave differently in scalar and list context, and if you expect list behavior in scalar context you’re gonna get a surprise. This is a bit of a challenge for those new to the language, as they’re not necessarily aware of the context.\n\n> I spent some time going over your\n> text and also cleaning up nearby shaky English, and ended with v8\n> attached. I'd be content to commit this if it looks good to you.\n\nThis looks very nice, thank you. A couple of comments.\n\n> + <para>\n> + Predicate check expressions are required in the\n> + <literal>@@</literal> operator (and the\n> + <function>jsonb_path_match</function> function), and should not be used\n> + with the <literal>@?</literal> operator (or the\n> + <function>jsonb_path_exists</function> function).\n> + </para>\n> + </note>\n> + </sect4>\n\nI had this bit here:\n\n <para>\n Conversely, non-predicate <type>jsonpath</type> expressions should not be\n used with the <literal>@@</literal> operator (or the\n <function>jsonb_path_match</function> function).\n </para>\n\nI think it’s important to let people know what the difference is in the behavior of the two forms, in every spot it’s likely to come up. SQL-standard JSON Path expressions should never be used in contexts (functions, operators) only designed to work with predicate check expressions, and the docs should say so IMO.\n\n> <para>\n> - The lax mode facilitates matching of a JSON document structure and path\n> - expression if the JSON data does not conform to the expected schema.\n> + The lax mode facilitates matching of a JSON document and path\n> + expression when the JSON data does not conform to the expected schema.\n\n\nWhat do you think of also dropping the article from all the references to “the strict mode” or “the lax mode”, to make them “strict mode” and “lax mode”, respectively?\n\nThanks for the review!\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Wed, 24 Jan 2024 18:39:47 -0500", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "\"David E. 
Wheeler\" <[email protected]> writes:\n> On Jan 24, 2024, at 16:32, Tom Lane <[email protected]> wrote:\n>> + <para>\n>> + Predicate check expressions are required in the\n>> + <literal>@@</literal> operator (and the\n>> + <function>jsonb_path_match</function> function), and should not be used\n>> + with the <literal>@?</literal> operator (or the\n>> + <function>jsonb_path_exists</function> function).\n>> + </para>\n>> + </note>\n>> + </sect4>\n\n> I had this bit here:\n\n> <para>\n> Conversely, non-predicate <type>jsonpath</type> expressions should not be\n> used with the <literal>@@</literal> operator (or the\n> <function>jsonb_path_match</function> function).\n> </para>\n\nI changed the preceding para to say \"... check expressions are\nrequired in ...\", which I thought was sufficient to cover that.\nAlso, the tabular description of the operator tells you not to do it.\n\n> What do you think of also dropping the article from all the references to “the strict mode” or “the lax mode”, to make them “strict mode” and “lax mode”, respectively?\n\nCertainly most of 'em don't need it. I'll make it so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Jan 2024 11:03:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" }, { "msg_contents": "On Jan 25, 2024, at 11:03, Tom Lane <[email protected]> wrote:\n\n> I changed the preceding para to say \"... check expressions are\n> required in ...\", which I thought was sufficient to cover that.\n> Also, the tabular description of the operator tells you not to do it.\n\nYeah, that’s good. I was perhaps leaning into being over-explicit after it took me a while to even figure out that there was a difference, let alone where matters.\n\n>> What do you think of also dropping the article from all the references to “the strict mode” or “the lax mode”, to make them “strict mode” and “lax mode”, respectively?\n> \n> Certainly most of 'em don't need it. I'll make it so.\n\nNice, thanks!\n\nDavid\n\n\n\n", "msg_date": "Fri, 26 Jan 2024 13:56:23 -0500", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Patch: Improve Boolean Predicate JSON Path Docs" } ]
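
A quick recap of the pairing this thread settles on, as a sketch rather than the committed wording (it reuses the :'json' psql variable set up earlier in the thread, and the results follow from the examples shown above): SQL-standard path expressions belong with @? and jsonb_path_exists(), while predicate check expressions belong with @@ and jsonb_path_match().

    -- filter expression: does the path return any item?
    select :'json'::jsonb @? '$.track.segments[*] ? (@.HR > 130)';             -- t
    select jsonb_path_exists(:'json'::jsonb, '$.track.segments[*] ? (@.HR > 130)');

    -- predicate check expression: a single three-valued boolean result
    select :'json'::jsonb @@ '$.track.segments[*].HR > 130';                   -- t
    select jsonb_path_match(:'json'::jsonb, '$.track.segments[*].HR > 130');

Crossing the expression kinds over the wrong operators is what produces the surprising always-true @? results discussed above.
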
[ { "msg_contents": "One of our customers complains that he spawned two `create index \nconcurrently` for two different tables and both got stuck in \"waiting for old \nsnapshots\".\nI wonder if two CIC can really block each other in `WaitForOlderSnapshots`?\nI found a similar question in the hackers archive:\n\nhttps://www.postgresql.org/message-id/flat/MWHPR20MB1421AEC7CEC67B159AC188F6A19A0%40MWHPR20MB1421.namprd20.prod.outlook.com \n<https://www.postgresql.org/message-id/flat/MWHPR20MB1421AEC7CEC67B159AC188F6A19A0%40MWHPR20MB1421.namprd20.prod.outlook.com>\n\nbut it is quite old (2016). Was the problem fixed since that time? And \nif not, why is it not mentioned in the CIC documentation that \nperforming several CIC in parallel can cause a \"deadlock\"?\n\nThanks in advance,\nKonstantin", "msg_date": "Sun, 15 Oct 2023 21:33:00 +0300", "msg_from": "Konstantin Knizhnik <[email protected]>", "msg_from_op": true, "msg_subject": "Can concurrent create index concurrently block each other?" }, { "msg_contents": "I noticed this on PG 10 recently, while I agree it is an obsolete version.\npg_blocking_pids() showed that one of the CICs on a table was blocked\nby a CIC on another table.\n\nI saw them both created over a period of time, after which I doubted\nwhat was reported by pg_blocking_pids().\n\nAs it was PG 10, I could not see the phase of CIC and, interestingly, no wait\nevents.\nAnyways, PG 10 is unsupported but I would try it on a newer version.\n\nCurious to understand why this would have happened.\n\nRegards,\nAvi Vallarapu.\n
On Sun, Oct 15, 2023 at 2:35 PM Konstantin Knizhnik <[email protected]> wrote:\n\n> One of our customers complains that he spawned two `create index\n> concurrently` for two different tables and both got stuck in\n> \"waiting for old snapshots\".\n> I wonder if two CIC can really block each other in `WaitForOlderSnapshots`?\n> I found a similar question in the hackers archive:\n> https://www.postgresql.org/message-id/flat/MWHPR20MB1421AEC7CEC67B159AC188F6A19A0%40MWHPR20MB1421.namprd20.prod.outlook.com\n>\n> but it is quite old (2016). Was the problem fixed since that time?\n> And if not, why is it not mentioned in the CIC documentation\n> that performing several CIC in parallel can cause a \"deadlock\"?\n>\n> Thanks in advance,\n> Konstantin\n\n-- \nRegards,\nAvinash Vallarapu\n+1-902-221-5976", "msg_date": "Sun, 15 Oct 2023 15:24:50 -0400", "msg_from": "Avinash Vallarapu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can concurrent create index concurrently block each other?" }, { "msg_contents": "Konstantin Knizhnik <[email protected]> writes:\n> One of our customers complains that he spawned two `create index \n> concurrently` for two different tables and both got stuck in \"waiting for old \n> snapshots\".\n> I wonder if two CIC can really block each other in `WaitForOlderSnapshots`?\n\nSince v14, we won't wait for another CIC unless it is processing a\npartial or expressional index. (According to the comments for\nWaitForOlderSnapshots, anyway.) What PG version is this, and what\nkind of indexes are being rebuilt?\n\nIn any case, if they were blocking each other that would be reported\nas a deadlock, since they'd use VirtualXactLock() which relies on\nthe heavyweight lock manager. What seems more likely is that your\ncustomer had some other old transaction sitting idle and blocking both\nof them. Looking into pg_locks would provide more definitive evidence\nabout what they are waiting for.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 15 Oct 2023 15:59:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can concurrent create index concurrently block each other?" }, { "msg_contents": "\nOn 15/10/2023 10:59 pm, Tom Lane wrote:\n> Konstantin Knizhnik <[email protected]> writes:\n>> One of our customers complains that he spawned two `create index\n>> concurrently` for two different tables and both got stuck in \"waiting for old\n>> snapshots\".\n>> I wonder if two CIC can really block each other in `WaitForOlderSnapshots`?\n> Since v14, we won't wait for another CIC unless it is processing a\n> partial or expressional index. (According to the comments for\n> WaitForOlderSnapshots, anyway.) What PG version is this, and what\n> kind of indexes are being rebuilt?\n>\n> In any case, if they were blocking each other that would be reported\n> as a deadlock, since they'd use VirtualXactLock() which relies on\n> the heavyweight lock manager. What seems more likely is that your\n> customer had some other old transaction sitting idle and blocking both\n> of them. Looking into pg_locks would provide more definitive evidence\n> about what they are waiting for.\n\nSorry for the false alarm. 
We have found a long-running truncation which \nactually blocks the CIC in this case.\nI asked this question because the customer wrote that there were no \nother long-lived active transactions, but that was not true.\n\n\n\n", "msg_date": "Mon, 16 Oct 2023 09:32:21 +0300", "msg_from": "Konstantin Knizhnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can concurrent create index concurrently block each other?" } ]
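Tom's advice above (look at what the sessions are actually waiting on, rather than assuming a CIC-vs-CIC deadlock) can be turned into a quick check. The following is a minimal diagnostic sketch, assuming PostgreSQL 9.6 or later, where pg_blocking_pids() and the pg_stat_activity columns used here are available; the query-text filter is only an illustration:

-- Which backends are running CIC, and who blocks them?
SELECT a.pid,
       a.state,
       a.wait_event_type,
       a.wait_event,
       pg_blocking_pids(a.pid) AS blocked_by,
       left(a.query, 60) AS query
FROM pg_stat_activity a
WHERE a.query ILIKE 'create index concurrently%';

-- Inspect the blockers themselves; an idle old transaction or a
-- long-running statement (a truncation, as it turned out in this thread)
-- shows up here:
SELECT pid, state, xact_start, left(query, 60) AS query
FROM pg_stat_activity
WHERE pid = ANY (SELECT unnest(pg_blocking_pids(p.pid))
                 FROM pg_stat_activity p
                 WHERE p.query ILIKE 'create index concurrently%');

Because WaitForOlderSnapshots() waits via VirtualXactLock(), which goes through the heavyweight lock manager, a genuine blocker is visible to pg_blocking_pids() rather than being an invisible snapshot wait.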
[ { "msg_contents": "Hi.\n\n (\n SELECT interval(0) '1 day 01:23:45.6789'\n union all\n SELECT interval(1) '1 day 01:23:45.6789'\n union all\n SELECT interval(2) '1 day 01:23:45.6789'\n union all\n SELECT interval(3) '1 day 01:23:45.6789'\n union all\n SELECT interval(4) '1 day 01:23:45.6789'\n )\n EXCEPT all\n (\n SELECT pg_catalog.interval('1 day 01:23:45.6789'::interval,2147418112)\n union all\n SELECT pg_catalog.interval('1 day 01:23:45.6789'::interval,2147418113)\n union all\n SELECT pg_catalog.interval('1 day 01:23:45.6789'::interval,2147418114)\n union all\n SELECT pg_catalog.interval('1 day 01:23:45.6789'::interval,2147418115)\n union all\n SELECT pg_catalog.interval('1 day 01:23:45.6789'::interval,2147418116)\n );\n\nhttps://dbfiddle.uk/zT8OByj1\nthe above works even in postgres 9.6. I debugged, then found out these\nmagic values like 2147418112.\n\nI thought:\nSELECT pg_catalog.interval('1 day 01:23:45.6789'::interval, 0)\nis same as\nSELECT interval(0) '1 day 01:23:45.6789'\n\nis this a bug in AdjustIntervalForTypmod?\n\n\n", "msg_date": "Mon, 16 Oct 2023 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "interval_scale not work as expected?" }, { "msg_contents": "jian he <[email protected]> writes:\n> I thought:\n> SELECT pg_catalog.interval('1 day 01:23:45.6789'::interval, 0)\n> is same as\n> SELECT interval(0) '1 day 01:23:45.6789'\n\n[ shrug ] No, it isn't. Interval typmods have to carry a lot\nmore than just the fractional precision, because of all the\nweird syntactic baggage that the SQL spec has for interval\ntypes (i.e., YEAR TO MONTH and other options). timestamp.h\nhas (some of) the details about what gets packed into an\ninterval typmod.\n\nEven with simpler types, there generally isn't a one-to-one\ncorrelation between user-visible precision and the encoded\ntypmod.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Oct 2023 20:28:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: interval_scale not work as expected?" } ]
[ { "msg_contents": "Add support event triggers on authenticated login\n\nThis commit introduces trigger on login event, allowing to fire some actions\nright on the user connection. This can be useful for logging or connection\ncheck purposes as well as for some personalization of environment. Usage\ndetails are described in the documentation included, but shortly usage is\nthe same as for other triggers: create function returning event_trigger and\nthen create event trigger on login event.\n\nIn order to prevent the connection time overhead when there are no triggers\nthe commit introduces pg_database.dathasloginevt flag, which indicates database\nhas active login triggers. This flag is set by CREATE/ALTER EVENT TRIGGER\ncommand, and unset at connection time when no active triggers found.\n\nAuthor: Konstantin Knizhnik, Mikhail Gribkov\nDiscussion: https://postgr.es/m/0d46d29f-4558-3af9-9c85-7774e14a7709%40postgrespro.ru\nReviewed-by: Pavel Stehule, Takayuki Tsunakawa, Greg Nancarrow, Ivan Panchenko\nReviewed-by: Daniel Gustafsson, Teodor Sigaev, Robert Haas, Andres Freund\nReviewed-by: Tom Lane, Andrey Sokolov, Zhihong Yu, Sergey Shinderuk\nReviewed-by: Gregory Stark, Nikita Malakhov, Ted Yu\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/e83d1b0c40ccda8955f1245087f0697652c4df86\n\nModified Files\n--------------\ndoc/src/sgml/bki.sgml | 2 +-\ndoc/src/sgml/catalogs.sgml | 13 ++\ndoc/src/sgml/ecpg.sgml | 2 +\ndoc/src/sgml/event-trigger.sgml | 94 ++++++++++++\nsrc/backend/commands/dbcommands.c | 17 ++-\nsrc/backend/commands/event_trigger.c | 179 +++++++++++++++++++++--\nsrc/backend/storage/lmgr/lmgr.c | 38 +++++\nsrc/backend/tcop/postgres.c | 4 +\nsrc/backend/utils/cache/evtcache.c | 2 +\nsrc/backend/utils/init/globals.c | 2 +\nsrc/backend/utils/init/postinit.c | 1 +\nsrc/bin/pg_dump/pg_dump.c | 5 +\nsrc/bin/psql/tab-complete.c | 4 +-\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_database.dat | 2 +-\nsrc/include/catalog/pg_database.h | 3 +\nsrc/include/commands/event_trigger.h | 1 +\nsrc/include/miscadmin.h | 2 +\nsrc/include/storage/lmgr.h | 2 +\nsrc/include/tcop/cmdtaglist.h | 1 +\nsrc/include/utils/evtcache.h | 3 +-\nsrc/test/authentication/t/005_login_trigger.pl | 189 +++++++++++++++++++++++++\nsrc/test/recovery/t/001_stream_rep.pl | 26 ++++\nsrc/test/regress/expected/event_trigger.out | 45 ++++++\nsrc/test/regress/sql/event_trigger.sql | 26 ++++\n25 files changed, 644 insertions(+), 21 deletions(-)", "msg_date": "Mon, 16 Oct 2023 00:18:33 +0000", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql: Add support event triggers on authenticated login" }, { "msg_contents": "On Mon, Oct 16, 2023 at 5:49 AM Alexander Korotkov\n<[email protected]> wrote:\n>\n> Add support event triggers on authenticated login\n\nHi, I'm seeing a compiler warning with CFLAGS -O3 but not with -O2.\n\nIn file included from dbcommands.c:20:\ndbcommands.c: In function ‘createdb’:\n../../../src/include/postgres.h:104:16: warning: ‘src_hasloginevt’ may\nbe used uninitialized in this function [-Wmaybe-uninitialized]\n 104 | return (Datum) (X ? 
1 : 0);\n | ^~~~~~~~~~~~~~~~~~~\ndbcommands.c:683:25: note: ‘src_hasloginevt’ was declared here\n 683 | bool src_hasloginevt;\n | ^~~~~~~~~~~~~~~\n\nThe configure command I used is ./configure --prefix=$PWD/inst/\nCFLAGS=\"-ggdb3 -O3\" > install.log && make -j 8 install > install.log\n2>&1 &:\n\nCONFIGURE = '--prefix=/home/ubuntu/postgres/inst/' 'CFLAGS=-ggdb3 -O3'\nCC = gcc\nCPPFLAGS = -D_GNU_SOURCE\nCFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wimplicit-fallthrough=3\n-Wcast-function-type -Wshadow=compatible-local -Wformat-security\n-fno-strict-aliasing -fwrapv -fexcess-precision=standard\n-Wno-format-truncation -Wno-stringop-truncation -ggdb3 -O3\nCFLAGS_SL = -fPIC\n\nThe compiler version is:\ngcc --version\ngcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nCopyright (C) 2021 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 12 Jan 2024 07:28:26 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add support event triggers on authenticated login" }, { "msg_contents": "Bharath Rupireddy <[email protected]> writes:\n> Hi, I'm seeing a compiler warning with CFLAGS -O3 but not with -O2.\n\n> In file included from dbcommands.c:20:\n> dbcommands.c: In function ‘createdb’:\n> ../../../src/include/postgres.h:104:16: warning: ‘src_hasloginevt’ may\n> be used uninitialized in this function [-Wmaybe-uninitialized]\n\nHmm, I also see that at -O3 (not at -O2) when using Fedora 39's\ngcc 13.2.1, but *not* when using RHEL8's gcc 8.5.0.\n\nI'm not sure how excited I am about curing that, though, because gcc\n13.2.1 spews several other totally baseless warnings (see attached).\nSome of them match up with warnings we're seeing on buildfarm member\nserinus, which I seem to recall that Andres had tracked to a known gcc\nbug.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 11 Jan 2024 21:55:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add support event triggers on authenticated login" }, { "msg_contents": "Hi,\n\nOn 2024-01-11 21:55:19 -0500, Tom Lane wrote:\n> Bharath Rupireddy <[email protected]> writes:\n> > Hi, I'm seeing a compiler warning with CFLAGS -O3 but not with -O2.\n> \n> > In file included from dbcommands.c:20:\n> > dbcommands.c: In function ‘createdb’:\n> > ../../../src/include/postgres.h:104:16: warning: ‘src_hasloginevt’ may\n> > be used uninitialized in this function [-Wmaybe-uninitialized]\n> \n> Hmm, I also see that at -O3 (not at -O2) when using Fedora 39's\n> gcc 13.2.1, but *not* when using RHEL8's gcc 8.5.0.\n\nIt's visible here with gcc >= 10. That's enough versions that I think we\nshould care. 
Interestingly enough, it seems to have recently have gotten\nfixed in gcc master (14 to be).\n\n\n> Some of them match up with warnings we're seeing on buildfarm member\n> serinus, which I seem to recall that Andres had tracked to a known gcc bug.\n\nSome, but I don't think all.\n\n\n> In file included from ../../../../src/include/executor/instrument.h:16,\n> from pgstat_io.c:19:\n> pgstat_io.c: In function 'pgstat_count_io_op_time':\n> pgstat_io.c:149:60: warning: array subscript 2 is above array bounds of 'instr_time[2][4][8]' [-Warray-bounds=]\n> 149 | INSTR_TIME_ADD(PendingIOStats.pending_times[io_object][io_context][io_op],\n> | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~\n\nHuh, I don't see that with any version of gcc I tried.\n\n\n> In file included from ../../../../src/include/access/htup_details.h:22,\n> from pl_exec.c:21:\n> In function 'assign_simple_var',\n> inlined from 'exec_set_found' at pl_exec.c:8349:2:\n> ../../../../src/include/varatt.h:230:36: warning: array subscript 0 is outside array bounds of 'char[0]' [-Warray-bounds=]\n> 230 | (((varattrib_1b_e *) (PTR))->va_tag)\n> | ^\n> ../../../../src/include/varatt.h:94:12: note: in definition of macro 'VARTAG_IS_EXPANDED'\n> 94 | (((tag) & ~1) == VARTAG_EXPANDED_RO)\n> | ^~~\n> ../../../../src/include/varatt.h:284:57: note: in expansion of macro 'VARTAG_1B_E'\n> 284 | #define VARTAG_EXTERNAL(PTR) VARTAG_1B_E(PTR)\n> | ^~~~~~~~~~~\n> ../../../../src/include/varatt.h:301:57: note: in expansion of macro 'VARTAG_EXTERNAL'\n> 301 | (VARATT_IS_EXTERNAL(PTR) && !VARTAG_IS_EXPANDED(VARTAG_EXTERNAL(PTR)))\n> | ^~~~~~~~~~~~~~~\n> pl_exec.c:8537:17: note: in expansion of macro 'VARATT_IS_EXTERNAL_NON_EXPANDED'\n> 8537 | VARATT_IS_EXTERNAL_NON_EXPANDED(DatumGetPointer(newvalue)))\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> In function 'exec_set_found':\n> cc1: note: source object is likely at address zero\n\nThis I see. If I hint to the compiler that var->datatype->typlen != 1 when\ncalled from exec_set_found(), the warning vanishes. E.g. with\n\n\tif (var->datatype->typlen == -1)\n\t\t__builtin_unreachable();\n\nI see one more warning:\n\n[1390/2375 42 58%] Compiling C object src/backend/postgres_lib.a.p/utils_adt_jsonb_util.c.o\n../../../../../home/andres/src/postgresql/src/backend/utils/adt/jsonb_util.c: In function 'compareJsonbContainers':\n../../../../../home/andres/src/postgresql/src/backend/utils/adt/jsonb_util.c:296:34: warning: 'va.type' may be used uninitialized [-Wmaybe-uninitialized]\n 296 | res = (va.type > vb.type) ? 1 : -1;\n | ~~^~~~~\n../../../../../home/andres/src/postgresql/src/backend/utils/adt/jsonb_util.c:204:33: note: 'va' declared here\n 204 | JsonbValue va,\n | ^~\n\n\nI can't really blame the compiler here. There's a fairly lengthy comment\nexplaining that va.type/vb.type are set, and it took me a while to understand:\n\n\t\t\t/*\n\t\t\t * It's safe to assume that the types differed, and that the va\n\t\t\t * and vb values passed were set.\n\t\t\t *\n\t\t\t * If the two values were of the same container type, then there'd\n\t\t\t * have been a chance to observe the variation in the number of\n\t\t\t * elements/pairs (when processing WJB_BEGIN_OBJECT, say). 
They're\n\t\t\t * either two heterogeneously-typed containers, or a container and\n\t\t\t * some scalar type.\n\t\t\t *\n\t\t\t * We don't have to consider the WJB_END_ARRAY and WJB_END_OBJECT\n\t\t\t * cases here, because we would have seen the corresponding\n\t\t\t * WJB_BEGIN_ARRAY and WJB_BEGIN_OBJECT tokens first, and\n\t\t\t * concluded that they don't match.\n\t\t\t */\n\nIt's not surprising that the compiler can't understand that you can't get\nra = WJB_DONE, rb = WJB_END_ARRAY or such.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 7 Feb 2024 12:31:38 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "gcc build warnings at -O3" }, { "msg_contents": "On Wed, Feb 7, 2024 at 10:31 PM Andres Freund <[email protected]> wrote:\n> On 2024-01-11 21:55:19 -0500, Tom Lane wrote:\n> > Bharath Rupireddy <[email protected]> writes:\n> > > Hi, I'm seeing a compiler warning with CFLAGS -O3 but not with -O2.\n> >\n> > > In file included from dbcommands.c:20:\n> > > dbcommands.c: In function ‘createdb’:\n> > > ../../../src/include/postgres.h:104:16: warning: ‘src_hasloginevt’ may\n> > > be used uninitialized in this function [-Wmaybe-uninitialized]\n> >\n> > Hmm, I also see that at -O3 (not at -O2) when using Fedora 39's\n> > gcc 13.2.1, but *not* when using RHEL8's gcc 8.5.0.\n>\n> It's visible here with gcc >= 10. That's enough versions that I think we\n> should care. Interestingly enough, it seems to have recently have gotten\n> fixed in gcc master (14 to be).\n\nI managed to reproduce this warning locally. Fixed.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 8 Feb 2024 22:00:56 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gcc build warnings at -O3" } ]
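For reference, a minimal usage sketch of the feature described in the commit message that opens this thread: a function returning event_trigger, attached to the new login event. The table and function names here are invented for illustration:

CREATE TABLE login_log (username text, login_time timestamptz);

CREATE FUNCTION log_login() RETURNS event_trigger
LANGUAGE plpgsql AS
$$
BEGIN
    -- fires once per authenticated connection to this database
    INSERT INTO login_log VALUES (session_user, now());
END;
$$;

CREATE EVENT TRIGGER login_audit ON login
    EXECUTE FUNCTION log_login();

The pg_database.dathasloginevt flag added by the same commit is what keeps connection time unaffected for databases that define no such trigger.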
[ { "msg_contents": "Hi,\n\nDavid and I had worked the uniquekey stuff since 2020[1], and later it\nis blocked by the NULL values stuff. Now the blocker should be removed\nby Var.varnullingrels, so it is time to work on this again. During the\npast 3 years, we have found more and more interesting usage of it. \n\nHere is a design document and a part of implementation.\n\nWhat is UniqueKey?\n-----------------\n\nUniqueKey represents a uniqueness information for a RelOptInfo. for\nexample: \n\nSELECT id FROM t;\n\nwhere the ID is the UniqueKey for the RelOptInfo (t). In the real word,\nit has the following attributes:\n\n1). It should be EquivalenceClass based. for example:\n\nSELECT a FROM t WHERE a = id;\n\nIn this case, the UniqueKey should be 1 EC with two members\n- EC(EM(a), EM(id)). \n\n\n2). Each UniqueKey may be made up with 1+ EquivalenceClass. for example:\n\nCREATE TABLE t(a int not null, b int not null);\nCREATE UNIQUE INDEX on t(a, b);\nSELECT * FROM t;\n\nWhere the UniqueKey for RelOptInfo (t) will be 2 ECs with each 1 has 1\nmember.\n\n- EC(em=a), EC(em=b)\n\n3). Each RelOptInfo may have 1+ UniqueKeys.\n\nCREATE TABLE t(a int not null, b int not null, c int not null);\nCREATE UNIQUE INDEX on t(a, b);\nCREATE UNIQUE INDEX on t(c);\n\nSELECT * FROM t;\n\nWhere the UniqueKey for RelOptInfo (t) will be\n- [EC(em=a), EC(em=b)].\n- [EC(em=c)]\n\n4). A special case is about the one-row case. It works like:\nSELECT * FROM t WHERE id = 1;\nHere every single expression in the RelOptInfo (t) is unique. \n\nWhere can we use it?\n--------------------\n1. mark the distinct as no-op. SELECT DISTINCT uniquekey FROM v; This\n optimization has been required several times in our threads. \n \n2. Figure out more pathkey within the onerow case, then some planning\n time can be reduced to be big extend. This user case is not absurd, I\n run into a real user case like this: \n \n CREATE TABLE small_t (id int primary key, b int, c int .. u int);\n CREATE INDEX ON small_t(b);\n CREATE INDEX ON small_t(c);\n ..\n\n SELECT * FROM small_t s\n JOIN t1 on t1.sb = s.b\n JOIN T2 on t2.sc = s.c\n ..\n JOIN t20 on t20.su = s.u\n WHERE s.id = 1;\n\n Without the above optimization, we don't know s.b /s.c is ordered\n already, so it might keep more different paths for small_t because of\n they have different interesting pathkey, and use more planning time\n for sorting to support merge join.\n \n With the above optimization, the planning time should be reduced since\n the seq scan can produce a ordered result for every expression. \n \n3. Figure out more interesting pathkey after join with normal UniqueKey.\n\n CREATE TABLE t(id int primary key, b int, c int);\n CREATE INDEX on t(c);\n ANALYZE t;\n\n explain (costs off)\n select t1.id, t2.c from t t1\n join t1 t2 on t1.id = t2.b\n and t2.c > 3\n order by t1.id, t2.c;\n\n QUERY PLAN \n --------------------------------------------------\n Sort Key: t1.id, t2.c <--- this sort can be avoided actually. \n -> Nested Loop\n Join Filter: (t1.id = t2.b)\n -> Index Only Scan using t_pkey on t t1\n -> Index Scan using t1_c_idx on t1 t2\n Index Cond: (c > 3)\n\n *Without knowing the t1.id is unique*, which means there are some\n duplicated data in t1.id, the duplication data in t1 will break the\n order of (t1.id, t2.c), but if we know the t1.id is unique, the sort\n will be not needed. I'm pretty happy with this finding.\n \n4. 
Optimize some group by case, like\n\n SELECT id, sum(b) FROM t GROUP BY id\n is same with\n SELECT id, b from t;\n\n I'm not sure how often it is in the real life, I'm not so excited with\n this for now.\n\n \nHow to present ECs in UniqueKey?\n--------------------------------\n\nI choose \"Bitmapset *eclass_indexes;\" finally, which is because\nBitmapset is memory compact and good at bms_union, bms_is_subset\nstuffs. The value in the bitmap is the positions in root->eq_classes. It\nis also be able to present the UniqueKey which is made up from multi\nrelations or upper relation. I'm pleased with the EC strategy because\nthe existing logic would even create a EC with single members which\nmeans we don't need to create any EquivalenceClass for our own. for\nexample, in the case of \n\nSELECT DISTINCT pk FROM t;\n\na EquivalenceClass with single member is created.\n\n\nHow to present single row in UniqueKey\n-------------------------------------\n\nI just use a 'Index relid', an non-zero value means the\nRelOptInfo[relid] is single row. For the case like\n\nSELECT * FROM t WHERE id = 1;\nThe UniqueKey is:\n- UniqueKey(eclass_indexes=NULL, relid=1)\n\nduring a join, any unique keys join with single row, it's uniqueness can\nbe kept.\n\nSELECT t1.uk, t2.a FROM t WHERE t2.id = 1 and any-qual(t1, t2);\n- UniqueKey (t1.uk)\n\nmore specially, join two single row like:\n\nSELECT * FROM t1 join t2 on true where t1.id = 1 and t2.id = 2;\n\nthe UniqueKey for the JoinRel will be:\n- UniqueKey(eclass_indexes=NULL, relid=1)\n- UniqueKey(eclass_indexes=NULL, relid=2)\n\nHowever, the current single row presentation can't works well with Upper\nrelation, which I think it would be acceptable. See the following case:\n\nSELECT count(*) FROM t1 JOIN t2 on true;\n\n\nHow to maintain the uniquekey?\n-------------------------------\nthe uniquekey is maintained from baserel to join rel then to upper\nrelation. In the base rel, it comes from unique index. From the join\nrelation, it is maintained with two rules:\n\n- the uniquekey in one side is still unique if it can't be duplicated\n after the join. for example:\n\n SELECT t1.pk FROM t1 JOIN t2 ON t1.a = t2.pk;\n UniqueKey on t1: t1.pk\n UniqueKey on t1 Join t2: t1.pk\n\n- The combined unique key from both sides are unique all the times.\n SELECT t1.pk , t2.pk FROM t1 join t2 on true;\n UniqueKey on t1 join t2: (t1.pk, t2.pk)\n\nSome other operations like DISTINCT, GROUP BY can produce UniqueKey as well.\n\nNULL values\n-----------\nI added notnullattrs in RelOptInfo, which present if these attributes may\nnot be NULL after the baserestrictinfo is executed. not-null-attributes\nmay be generated by not-null constraint in catalog or baserestrictinfo\n(only) filter. However it is possible become NULLs because of outer\njoin, then Var.varnullingrels is used in this case. see\n'var_is_nullable' function call. \n\nTo simplify the UniqueKey module, it doesn't care about the null values\nduring the maintaining, which means it may contains multi NULL values\nall the time by design. However whenever a user case care about that,\nthe user case can get the answer with the above logic, that is what\n'mark-distinct-as-noop' does. \n\nHow to reduce the overhead\n----------------------------------\nUniqueKey employs the similar strategy like PathKey, it only maintain\nthe interesting PathKey. Currently the interesting UniqueKey includes:\n1). It is subset of distinct_pathkeys.\n2). It is used in mergeable join clauses for unique key deduction (for\nthe join rel case, rule 1). 
In this case, it can be discarded quickly if\nthe join has been done.\n\nTo avoid to check if an uniquekey is subset of distinct clause again and\nagain, I cached the result into UnqiueKey struct during the UniqueKey\ncreation. \n\nSince our first goal is just used for marking distinct as no-op, so if\nthere is no distinct clause at all, unique key will be not maintained at\nthe beginning. so we can have some codes like:\n\nif (root->distinct_pathkeys == NULL)\nreturn;\n\nThis fast path is NOT added for now for better code coverage.\n\nWhat I have now:\n----------------\n\nThe current patch just maintain the UniqueKey at the baserel level and\nused it for mark-distinct-as-noop purpose. including the basic idea of\n\n- How the UniqueKey is defined. \n- How to find out the interesting pathkey in the base relation level.\n- How to figure out the unique key contains NULL values.\n\nAlso the test cases are prepared, see uniquekey.sql.\n\nSome deep issues can only be found during the development, but I still\nlike to gather more feedback to see if anything is wrong at the first\nplace. Like what role will the collation play on for UniqueKey.\n\nAny thought?\n\n\n\n\n\nThanks.\n\n[1]\nhttps://www.postgresql.org/message-id/CAApHDvrBXjAvH45dEAZFOk-hzOt1mJC7-fxZ2v49mc5njtA7VQ%40mail.gmail.com\n\nBest Regards\nAndy Fan", "msg_date": "Mon, 16 Oct 2023 11:09:50 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "UniqueKey v2" }, { "msg_contents": "hi.\nAfter `git am`, I still cannot build.\n\n../../Desktop/pg_sources/main/postgres/src/backend/optimizer/path/uniquekey.c:125:45:\nerror: variable ‘var’ set but not used\n[-Werror=unused-but-set-variable]\n 125 | Var *var;\n | ^~~\n\n\nYou also need to change src/backend/optimizer/path/meson.build.\n\ngit apply failed.\n\ngit am warning:\nApplying: uniquekey on base relation and used it for mark-distinct-as-op.\n.git/rebase-apply/patch:876: new blank line at EOF.\n+\nwarning: 1 line adds whitespace errors.\n\nI think you can use `git diff --check`\n(https://git-scm.com/docs/git-diff) to check for whitespace related\nerrors.\n\n\n", "msg_date": "Mon, 16 Oct 2023 13:09:43 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UniqueKey v2" }, { "msg_contents": "jian he <[email protected]> writes:\n\nHi jian,\n\n> hi.\n> After `git am`, I still cannot build.\n>\n> ../../Desktop/pg_sources/main/postgres/src/backend/optimizer/path/uniquekey.c:125:45:\n> error: variable ‘var’ set but not used\n> [-Werror=unused-but-set-variable]\n> 125 | Var *var;\n> | ^~~\n\nThanks for this report, looks clang 11 can't capture this error. I have\nswitched to clang 17 which would report this issue at the first place. \n\n>\n> You also need to change src/backend/optimizer/path/meson.build.\n\nGreat thanks.\n\n>\n> git apply failed.\n>\n> git am warning:\n> Applying: uniquekey on base relation and used it for mark-distinct-as-op.\n> .git/rebase-apply/patch:876: new blank line at EOF.\n> +\n> warning: 1 line adds whitespace errors.\n>\n> I think you can use `git diff --check`\n> (https://git-scm.com/docs/git-diff) to check for whitespace related\n> errors.\n\nthanks for the really good suggestion. 
Here is the newer version:\n\n\n\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 17 Oct 2023 11:17:13 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: UniqueKey v2" }, { "msg_contents": "On Tue, Oct 17, 2023 at 11:21 AM <[email protected]> wrote:\n>\n>\n> thanks for the really good suggestion. Here is the newer version:\n>\n\n--- a/src/backend/optimizer/path/meson.build\n+++ b/src/backend/optimizer/path/meson.build\n@@ -10,4 +10,5 @@ backend_sources += files(\n 'joinrels.c',\n 'pathkeys.c',\n 'tidpath.c',\n+ 'uniquekey.c'\n )\ndiff --git a/src/include/optimizer/paths.h b/src/include/optimizer/paths.h\nindex 3ac25d47..5ed550ca 100644\n--- a/src/include/optimizer/paths.h\n+++ b/src/include/optimizer/paths.h\n@@ -264,7 +264,10 @@ extern PathKey *make_canonical_pathkey(PlannerInfo *root,\n\n int strategy, bool nulls_first);\n extern void add_paths_to_append_rel(PlannerInfo *root, RelOptInfo *rel,\n\n List *live_childrels);\n-\n+/*\n+ * uniquekey.c\n+ * uniquekey.c related functions.\n+ */\n\n---------\ni did some simple tests using text data type.\n\nit works with the primary key, not with unique indexes.\nit does not work when the column is unique, not null.\n\nThe following is my test.\n\nbegin;\nCREATE COLLATION case_insensitive (provider = icu, locale =\n'und-u-ks-level2', deterministic = false);\nCREATE COLLATION upper_first (provider = icu, locale = 'und-u-kf-upper');\ncommit;\nbegin;\nCREATE TABLE test_uniquekey3(a text, b text);\nCREATE TABLE test_uniquekey4(a text, b text);\nCREATE TABLE test_uniquekey5(a text, b text);\nCREATE TABLE test_uniquekey6(a text, b text);\nCREATE TABLE test_uniquekey7(a text not null, b text not null);\nCREATE TABLE test_uniquekey8(a text not null, b text not null);\nCREATE TABLE test_uniquekey9(a text primary key COLLATE upper_first, b\ntext not null);\nCREATE TABLE test_uniquekey10(a text primary key COLLATE\ncase_insensitive, b text not null);\ncreate unique index on test_uniquekey3 (a COLLATE case_insensitive nulls first)\n nulls distinct\n with (fillfactor = 80);\ncreate unique index on test_uniquekey4 (a COLLATE case_insensitive nulls first)\n nulls not distinct\n with (fillfactor = 80);\ncreate unique index on test_uniquekey5 (a COLLATE upper_first nulls first)\n nulls distinct;\ncreate unique index on test_uniquekey6 (a COLLATE upper_first nulls first)\n nulls not distinct;\ncreate unique index on test_uniquekey7 (a COLLATE upper_first nulls\nfirst) nulls distinct;\ncreate unique index on test_uniquekey8 (a COLLATE case_insensitive\nnulls first) nulls not distinct;\ninsert into test_uniquekey3(a,b) select g::text, (g+10)::text from\ngenerate_series(1,1e5) g;\ninsert into test_uniquekey4(a,b) select g::text, (g+10)::text from\ngenerate_series(1,1e5) g;\ninsert into test_uniquekey5(a,b) select g::text, (g+10)::text from\ngenerate_series(1,1e5) g;\ninsert into test_uniquekey6(a,b) select g::text, (g+10)::text from\ngenerate_series(1,1e5) g;\ninsert into test_uniquekey7(a,b) select g::text, (g+10)::text from\ngenerate_series(1,1e5) g;\ninsert into test_uniquekey8(a,b) select g::text, (g+10)::text from\ngenerate_series(1,1e5) g;\ninsert into test_uniquekey9(a,b) select g::text, (g+10)::text from\ngenerate_series(1,1e5) g;\ninsert into test_uniquekey10(a,b) select g::text, (g+10)::text from\ngenerate_series(1,1e5) g;\ninsert into test_uniquekey3(a) VALUES(null),(null),(null);\ninsert into test_uniquekey4(a) VALUES(null);\ninsert into test_uniquekey5(a) VALUES(null),(null),(null);\ninsert into test_uniquekey6(a) 
VALUES(null);\ncommit;\n\nANALYZE test_uniquekey3, test_uniquekey4, test_uniquekey5\n ,test_uniquekey6,test_uniquekey7, test_uniquekey8\n ,test_uniquekey9, test_uniquekey10;\n\nexplain (costs off) select distinct a from test_uniquekey3;\nexplain (costs off) select distinct a from test_uniquekey4;\nexplain (costs off) select distinct a from test_uniquekey5;\nexplain (costs off) select distinct a from test_uniquekey6;\nexplain (costs off) select distinct a from test_uniquekey7;\nexplain (costs off) select distinct a from test_uniquekey8;\nexplain (costs off) select distinct a from test_uniquekey9;\nexplain (costs off) select distinct a from test_uniquekey10;\nexplain (costs off) select distinct a from test_uniquekey3 where a < '2000';\nexplain (costs off) select distinct a from test_uniquekey4 where a < '2000';\nexplain (costs off) select distinct a from test_uniquekey5 where a < '2000';\nexplain (costs off) select distinct a from test_uniquekey6 where a < '2000';\nexplain (costs off) select distinct a from test_uniquekey7 where a < '2000';\nexplain (costs off) select distinct a from test_uniquekey8 where a < '2000';\nexplain (costs off) select distinct a from test_uniquekey9 where a < '2000';\nexplain (costs off) select distinct a from test_uniquekey10 where a < '2000';\n\n--very high selectivity\nexplain (costs off) select distinct a from test_uniquekey3 where a < '1001';\nexplain (costs off) select distinct a from test_uniquekey4 where a < '1001';\nexplain (costs off) select distinct a from test_uniquekey5 where a < '1001';\nexplain (costs off) select distinct a from test_uniquekey6 where a < '1001';\nexplain (costs off) select distinct a from test_uniquekey7 where a < '1001';\nexplain (costs off) select distinct a from test_uniquekey8 where a < '1001';\nexplain (costs off) select distinct a from test_uniquekey9 where a < '1001';\nexplain (costs off) select distinct a from test_uniquekey10 where a < '1001';\nexplain (costs off,ANALYZE) select distinct a from test_uniquekey9\nwhere a < '1001';\nexplain (costs off,ANALYZE) select distinct a from test_uniquekey10\nwhere a < '1001';\n\n\n", "msg_date": "Thu, 19 Oct 2023 12:47:01 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UniqueKey v2" }, { "msg_contents": "\n> i did some simple tests using text data type.\n>\n> it works with the primary key, not with unique indexes.\n> it does not work when the column is unique, not null.\n>\n> The following is my test.\n\nCan you simplify your test case please? I can't undertand what \"doesn't\nwork\" mean here and for which case. FWIW, this feature has nothing with\nthe real data, I don't think inserting any data is helpful unless I\nmissed anything. \n\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Fri, 20 Oct 2023 16:29:16 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: UniqueKey v2" }, { "msg_contents": "On Fri, Oct 20, 2023 at 4:33 PM <[email protected]> wrote:\n>\n>\n> > i did some simple tests using text data type.\n> >\n> > it works with the primary key, not with unique indexes.\n> > it does not work when the column is unique, not null.\n> >\n> > The following is my test.\n>\n> Can you simplify your test case please? I can't undertand what \"doesn't\n> work\" mean here and for which case. 
FWIW, this feature has nothing with\n> the real data, I don't think inserting any data is helpful unless I\n> missed anything.\n\nSorry for not explaining it very well.\n\"make distinct as no-op.\"\nmy understanding: it means: if fewer rows meet the criteria \"columnX <\n const_a;\" , after analyze the table, it should use index only scan\nfor the queryA?\n--queryA:\nselect distinct columnX from the_table where columnX < const_a;\n\nThere are several ways for columnX to be unique: primark key, unique\nkey, unique key nulls distinct, unique key nulls not distinct, unique\nkey and not null.\n\nAfter applying your patch, only the primary key case will make the\nqueryA explain output using the index-only scan.\n\n\n", "msg_date": "Mon, 23 Oct 2023 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UniqueKey v2" }, { "msg_contents": "\njian he <[email protected]> writes:\n\n> On Fri, Oct 20, 2023 at 4:33 PM <[email protected]> wrote:\n>>\n>>\n>> > i did some simple tests using text data type.\n>> >\n>> > it works with the primary key, not with unique indexes.\n>> > it does not work when the column is unique, not null.\n>> >\n>> > The following is my test.\n>>\n>> Can you simplify your test case please? I can't undertand what \"doesn't\n>> work\" mean here and for which case. FWIW, this feature has nothing with\n>> the real data, I don't think inserting any data is helpful unless I\n>> missed anything.\n>\n> Sorry for not explaining it very well.\n> \"make distinct as no-op.\"\n> my understanding: it means: if fewer rows meet the criteria \"columnX <\n> const_a;\" , after analyze the table, it should use index only scan\n\nNo, \"mark distinct as no-op\" means the distinct node can be discarded\nautomatically since it is not needed any more. The simplest case would\nbe \"select distinct pk from t\", where it should be same as \"select pk\nfrom t\". You can check the testcase for the more cases. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 07 Nov 2023 11:44:42 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: UniqueKey v2" }, { "msg_contents": "[email protected] writes:\n\nHi,\n\nHere is the v3, the mainly changes is it maintains the UniqueKey on\njoinrel level, which probabaly is the most important part of this\nfeature. It shows how the UnqiueKey on joinrel is generated and how it\nis discarded due to non-interesting-uniquekey and also show much details\nabout the single-row case.\n\nI will always maintain README.uniquekey under src/backend/optimizer/path/\nto include the latest state of this feature to save the time for\nreviewer from going through from the begining. I also use the word \"BAD\nCASE\" in uniquekey.sql to demo which sistuation is not handled well so\nfar, that probably needs more attention at the first review. \n\n\n\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 09 Nov 2023 19:33:34 +0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: UniqueKey v2" }, { "msg_contents": "[email protected] wrote:\n\n> Here is the v3, ...\n\nI'm trying to enhance the join removal functionality (will post my patch in a\nseparate thread soon) and I consider your patch very helpful in this\narea.\n\nFollowing is my review. Attached are also some fixes and one enhancement:\npropagation of the unique keys (UK) from a subquery to the parent query\n(0004). 
(Note that 0001 is just your patch rebased).\n\n\n* I think that, before EC is considered suitable for an UK, its ec_opfamilies\n field needs to be checked. I try to do that in\n find_ec_position_matching_expr(), see 0004.\n\n\n* Set-returning functions (SRFs) can make a \"distinct path\" necessary even if\n the join output is unique.\n\n\n* RelOptInfo.notnullattrs\n\n My understanding is that this field contains the set attributes whose\n uniqueness is guaranteed by the unique key. They are acceptable because they\n are either 1) not allowed to be NULL due to NOT NULL constraint or 2) NULL\n value makes the containing row filtered out, so the row cannot break\n uniqueness of the output. Am I right?\n\n If so, I suggest to change the field name to something less generic, maybe\n 'uniquekey_attrs' or 'uniquekey_candidate_attrs', and adding a comment that\n more checks are needed before particular attribute can actually be used in\n UniqueKey.\n\n\n* add_uniquekey_for_uniqueindex()\n\n I'd appreciate an explanation why the \"single-row UK\" is created. I think\n the reason for unique_exprs==NIL is that a restriction clause VAR=CONST\n exists for each column of the unique index. Whether I'm right or not, a\n comment should state clearly what the reason is.\n\n\n* uniquekey_useful_for_merging()\n\n How does uniqueness relate to merge join? In README.uniquekey you seem to\n point out that a single row is always sorted, but I don't think this\n function is related to that fact. (Instead, I'd expect that pathkeys are\n added to all paths for a single-row relation, but I'm not sure you do that\n in the current version of the patch.)\n\n\n* is_uniquekey_useful_afterjoin()\n\n Now that my patch (0004) allows propagation of the unique keys from a\n subquery to the upper query, I was wondering if the UniqueKey structure\n needs the 'use_for_distinct field' I mean we should now propagate the unique\n keys to the parent query whether the subquery has DISTINCT clause or not. I\n noticed that the field is checked by is_uniquekey_useful_afterjoin(), so I\n changed the function to always returned true. However nothing changed in the\n output of regression tests (installcheck). Do you insist that the\n 'use_for_distinct' field is needed?\n\n\n* uniquekey_contains_multinulls()\n\n ** Instead of calling the function when trying to use the UK, how about\n checking the ECs when considering creation of the UK? If the tests fail,\n just don't create the UK.\n\n ** What does the 'multi' word in the function name mean?\n\n\n* relation_is_distinct_for()\n\n The function name is too similar to rel_is_distinct_for(). I think the name\n should indicate that you are checking the relation against a set of\n pathkeys. Maybe rel_is_distinct_for_pathkeys() (and remove 'distinct' from\n the argument name)? At the same time, it might be good to rename\n rel_is_distinct_for() to rel_is_distinct_for_clauses().\n\n\n* uniquekey_contains_in()\n\n Shouldn't this be uniquekey_contained_in()? And likewise, shouldn't the\n comment be \" ... if UniqueKey is contained in the list of EquivalenceClass\"\n ?\n\n (In general, even though I'm not English native speaker, I think I see quite\n a few grammar issues, which often make reading of the comments/documentation\n a bit difficult.)\n\n\n* Combining the UKs\n\n IMO this is the most problematic part of the patch. 
You call\n populate_joinrel_uniquekeys() for the same join multiple times, each time\n with a different 'restrictlist', and you try to do two things at the same\n time: 1) combine the UKs of the input relations into the UKs of the join\n relation, 2) check if the join relation can be marked single-row.\n\n I think that both 1) and 2) should be independent from join order, and thus\n both computations should only take place once for given set of input\n relations. And I think they should be done separately:\n\n 1) Compute the join UKs\n\n As you admit in a comment in populate_joinrel_uniquekeys(), neither join\n method nor clauses should matter. So I think you only need to pick the\n \"component UKs\" (i.e. UKs of the input relations) which are usable above\n that join (i.e. neither the join itself nor any join below sets any column\n of the UK to NULL) and combine them.\n\n Of course one problem is that the number of combinations can grow\n exponentially as new relations are joined. I'm not sure it's necessary to\n combine the UKs (and to discard some of them) immediately. Instead, maybe we\n can keep lists of UKs only for base relations, and postpone picking the\n suitable UKs and combining them until we actually need to check the relation\n uniqueness.\n\n 2) Check if the join relation is single-row\n\n I in order to get rid of the dependency on 'restrictlist', I think you can\n use ECs. Consider a query from your regression tests:\n\nCREATE TABLE uk_t (id int primary key, a int not null, b int not null, c int, d int, e int);\n\nSELECT distinct t1.d FROM uk_t t1 JOIN uk_t t2 ON t1.e = t2.id and t1.id = 1;\n\n The idea here seems to be that no more than one row of t1 matches the query\n clauses. Therefore, if t2(id) is unique, the clause t1.e=t2.id ensures that\n no more than one row of t2 matches the query (because t1 cannot provide the\n clause with more than one input value of 'e'). And therefore, the join also\n produces at most one row.\n\n My theory is that relation is single-row if it has an UK such that each of\n its ECs meets at least one of the following conditions:\n\n a) contains a constant\n\n b) contains a column of a relation which has already been proven single-row.\n\n b) is referenced by an UK of a relation which has already been proven\n single-row.\n\n I think that in the example above, an EC {t1.e, t2.id} should exist. So when\n checking whether 't2' is single-row, the condition b) cam be ised: the UK of\n 't2' should reference the EC {t1.e, t2.id}, which in turn contains the\n column t1.e. And 't1' is unique because its EC meets the condition a). (Of\n course you need to check em_jdomain before you use particular EM.)\n\n\nAre you going to submit the patch to the first CF of PG 18?\n\nPlease let me know if I can contribute to the effort by reviewing or writing\nsome code.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Fri, 19 Apr 2024 13:39:18 +0200", "msg_from": "Antonin Houska <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UniqueKey v2" }, { "msg_contents": "\nHello Antonin,\n\n> [email protected] wrote:\n>\n>> Here is the v3, ...\n>\n> I'm trying to enhance the join removal functionality (will post my patch in a\n> separate thread soon) and I consider your patch very helpful in this\n> area.\n\nThanks for these words. The point 2) and 3) is pretty interesting to me\nat [1] and \"enhance join removal\" is another interesting user case. \n\n> Following is my review. 
Attached are also some fixes and one enhancement:\n> propagation of the unique keys (UK) from a subquery to the parent query\n> (0004). (Note that 0001 is just your patch rebased).\n\nThanks for that! more enhancment like uniquekey in partitioned table is\nneeded. This post is mainly used to check if more people is still\ninterested with this. \n\n> Are you going to submit the patch to the first CF of PG 18?\n\nSince there are still known work to do, I'm not sure if it is OK to\nsubmit in CF. What do you think about this part?\n\n>\n> Please let me know if I can contribute to the effort by reviewing or writing\n> some code.\n\nAbsolutely yes! please feel free to review / writing any of them and do\nremember add yourself into the author list if you do that. \n\nThanks for your review suggestion, I will get to this very soon if once\nI get time, I hope it is in 4 weeks. \n\n[1]\nhttps://www.postgresql.org/message-id/7mlamswjp81p.fsf%40e18c07352.et15sqa\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Sat, 20 Apr 2024 12:19:02 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UniqueKey v2" }, { "msg_contents": "\nHello Antonin,\n\nThanks for interesting with this topic!\n\n> * I think that, before EC is considered suitable for an UK, its ec_opfamilies\n> field needs to be checked. I try to do that in\n> find_ec_position_matching_expr(), see 0004.\n\nCould you make the reason clearer for adding 'List *opfamily_lists;'\ninto UniqueKey? You said \"This is needed to create ECs in the parent\nquery if the upper relation represents a subquery.\" but I didn't get the\nit. Since we need to maintain the UniqueKey in the many places, I'd like\nto keep it as simple as possbile. Of course, anything essentical should\nbe added for sure. \n\n>\n> * Set-returning functions (SRFs) can make a \"distinct path\" necessary even if\n> the join output is unique.\n\nYou are right at this point, I will fix it in the coming version.\n\n>\n> * RelOptInfo.notnullattrs\n>\n> My understanding is that this field contains the set attributes whose\n> uniqueness is guaranteed by the unique key. They are acceptable because they\n> are either 1) not allowed to be NULL due to NOT NULL constraint or 2) NULL\n> value makes the containing row filtered out, so the row cannot break\n> uniqueness of the output. Am I right?\n>\n> If so, I suggest to change the field name to something less generic, maybe\n> 'uniquekey_attrs' or 'uniquekey_candidate_attrs', and adding a comment that\n> more checks are needed before particular attribute can actually be used in\n> UniqueKey.\n\nI don't think so, UniqueKey is just one of the places to use this\nnot-null property, see 3af704098 for the another user case of it. \n\n(Because of 3af704098, we should leverage notnullattnums somehow in this\npatch, which will be included in the next version as well).\n\n> * add_uniquekey_for_uniqueindex()\n>\n> I'd appreciate an explanation why the \"single-row UK\" is created. I think\n> the reason for unique_exprs==NIL is that a restriction clause VAR=CONST\n> exists for each column of the unique index. Whether I'm right or not, a\n> comment should state clearly what the reason is.\n\nYou are understanding it correctly. I will add comments in the next\nversion.\n\n>\n> * uniquekey_useful_for_merging()\n>\n> How does uniqueness relate to merge join? In README.uniquekey you seem to\n> point out that a single row is always sorted, but I don't think this\n> function is related to that fact. 
(Instead, I'd expect that pathkeys are\n> added to all paths for a single-row relation, but I'm not sure you do that\n> in the current version of the patch.)\n\nThe merging is for \"mergejoinable join clauses\", see function\neclass_useful_for_merging. Usually I think it as operator \"t1.a = t2.a\";\n\n> * is_uniquekey_useful_afterjoin()\n>\n> Now that my patch (0004) allows propagation of the unique keys from a\n> subquery to the upper query, I was wondering if the UniqueKey structure\n> needs the 'use_for_distinct field' I mean we should now propagate the unique\n> keys to the parent query whether the subquery has DISTINCT clause or not. I\n> noticed that the field is checked by is_uniquekey_useful_afterjoin(), so I\n> changed the function to always returned true. However nothing changed in the\n> output of regression tests (installcheck). Do you insist that the\n> 'use_for_distinct' field is needed?\n>\n>\n> * uniquekey_contains_multinulls()\n>\n> ** Instead of calling the function when trying to use the UK, how about\n> checking the ECs when considering creation of the UK? If the tests fail,\n> just don't create the UK.\n\nI don't think so since we maintain the UniqueKey from bottom to top, you\ncan double check if my reason is appropriate. \n\nCREATE TABLE t1(a int);\nCREATE INDEX ON t1(a);\n\nSELECT distinct t1.a FROM t1 JOIN t2 using(a);\n\nWe need to create the UniqueKey on the baserel for t1 and the NULL\nvalues is filtered out in the joinrel. so we have to creating it with\nallowing NULL values first. \n\n> ** What does the 'multi' word in the function name mean?\n\nmulti means multiple, I thought we use this short name in the many\nplaces, for ex bt_multi_page_stats after a quick search. \n\n> * relation_is_distinct_for()\n>\n> The function name is too similar to rel_is_distinct_for(). I think the name\n> should indicate that you are checking the relation against a set of\n> pathkeys. Maybe rel_is_distinct_for_pathkeys() (and remove 'distinct' from\n> the argument name)? At the same time, it might be good to rename\n> rel_is_distinct_for() to rel_is_distinct_for_clauses().\n\nOK.\n\n> * uniquekey_contains_in()\n>\n> Shouldn't this be uniquekey_contained_in()? And likewise, shouldn't the\n> comment be \" ... if UniqueKey is contained in the list of EquivalenceClass\"\n> ?\n\nOK.\n>\n> (In general, even though I'm not English native speaker, I think I see quite\n> a few grammar issues, which often make reading of the comments/documentation\n\nYour English is really good:)\n\n>\n>\n> * Combining the UKs\n>\n> IMO this is the most problematic part of the patch. You call\n> populate_joinrel_uniquekeys() for the same join multiple times,\n\nWhy do you think so? The below code is called in \"make_join_rel\"\n\npopulate_joinrel_uniquekeys(root, joinrel, rel1, rel2, ...);\n\nso it should be only called once per joinrel.\n\nIs your original question is about populate_joinrel_uniquekey_for_rel\nrather than populate_joinrel_uniquekeys? We have the below codes:\n\n\touteruk_still_valid = populate_joinrel_uniquekey_for_rel(root, joinrel, outerrel,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t innerrel, restrictlist);\n\tinneruk_still_valid = populate_joinrel_uniquekey_for_rel(root, joinrel, innerrel,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t outerrel, restrictlist);\n\nThis is mainly because of the following theory. Quoted from\nREADME.uniquekey. Let's called this as \"rule 1\".\n\n\"\"\"\nHow to maintain the uniquekey?\n-------------------------------\n.. 
From the join relation, it is maintained with two rules:\n\n- the uniquekey in one side is still unique if it can't be duplicated\n after the join. for example:\n\n SELECT t1.pk FROM t1 JOIN t2 ON t1.a = t2.pk;\n UniqueKey on t1: t1.pk\n UniqueKey on t1 Join t2: t1.pk\n\"\"\"\n\nAND the blow codes:\n\n\n\tif (outeruk_still_valid || inneruk_still_valid)\n\n\t\t/*\n\t\t * the uniquekey on outers or inners have been added into joinrel so\n\t\t * the combined uniuqekey from both sides is not needed.\n\t\t */\n\t\treturn;\n\n\nWe don't create the component uniquekey if any one side of the boths\nsides is unique already. For example:\n\n\"(t1.id) in joinrel(t1, t2) is unique\" OR \"(t2.id) in joinrel is\nunique\", there is no need to create component UniqueKey (t1.id, t2.id); \n\n> each time\n> with a different 'restrictlist', and you try to do two things at the same\n> time: 1) combine the UKs of the input relations into the UKs of the join\n> relation, 2) check if the join relation can be marked single-row.\n>\n> I think that both 1) and 2) should be independent from join order, and thus\n> both computations should only take place once for given set of input\n> relations. And I think they should be done separately:\n>\n> 1) Compute the join UKs\n>\n> As you admit in a comment in populate_joinrel_uniquekeys(), neither join\n> method nor clauses should matter. So I think you only need to pick the\n> \"component UKs\" (i.e. UKs of the input relations) which are usable above\n> that join (i.e. neither the join itself nor any join below sets any column\n> of the UK to NULL) and combine them.\n\nWe need to do this only after the \"if (!outeruk_still_valid &&\n!inneruk_still_valid)\" check, as explained above. \n\n>\n> Of course one problem is that the number of combinations can grow\n> exponentially as new relations are joined.\n\nYes, that's why \"rule 1\" needed and \"How to reduce the overhead\" in\nUniqueKey.README is introduced. \n\n>\n> 2) Check if the join relation is single-row\n>\n> I in order to get rid of the dependency on 'restrictlist', I think you can\n> use ECs. Consider a query from your regression tests:\n>\n> CREATE TABLE uk_t (id int primary key, a int not null, b int not null, c int, d int, e int);\n>\n> SELECT distinct t1.d FROM uk_t t1 JOIN uk_t t2 ON t1.e = t2.id and t1.id = 1;\n>\n> The idea here seems to be that no more than one row of t1 matches the query\n> clauses. Therefore, if t2(id) is unique, the clause t1.e=t2.id ensures that\n> no more than one row of t2 matches the query (because t1 cannot provide the\n> clause with more than one input value of 'e'). And therefore, the join also\n> produces at most one row.\n\nYou are correct and IMO my current code are able to tell it is a single\nrow as well.\n\n1. Since t1.id = 1, so t1 is single row, so t1.d is unqiuekey as a\nconsequence.\n2. Given t2.id is unique, t1.e = t2.id so t1's unqiuekey can be kept\nafter the join because of rule 1 on joinrel. 
and t1 is singlerow, so the\njoinrel is singlerow as well.\n\nI'm interested with \"get rid of the dependency on 'restrictlist', I\nthink you can use ECs.\", let's see what we can improve.\n>\n> My theory is that relation is single-row if it has an UK such that each of\n> its ECs meets at least one of the following conditions:\n>\n> a) contains a constant\n\nTrue.\n>\n> b) contains a column of a relation which has already been proven single-row.\n\nTrue, not sure if it is easy to tell.\n\n> b) is referenced by an UK of a relation which has already been proven\n> single-row.\n\nI can't follow here...\n\n>\n> I think that in the example above, an EC {t1.e, t2.id} should exist. So when\n> checking whether 't2' is single-row, the condition b) cam be ised: the UK of\n> 't2' should reference the EC {t1.e, t2.id}, which in turn contains the\n> column t1.e. And 't1' is unique because its EC meets the condition a). (Of\n> course you need to check em_jdomain before you use particular EM.)\n\nI think the existing rule 1 for joinrel works well with the singlerow\ncase naturally, what can be improved if we add the theory you suggested\nhere? \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Mon, 06 May 2024 15:48:10 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UniqueKey v2" }, { "msg_contents": "Andy Fan <[email protected]> wrote:\n\n> > * I think that, before EC is considered suitable for an UK, its ec_opfamilies\n> > field needs to be checked. I try to do that in\n> > find_ec_position_matching_expr(), see 0004.\n> \n> Could you make the reason clearer for adding 'List *opfamily_lists;'\n> into UniqueKey? You said \"This is needed to create ECs in the parent\n> query if the upper relation represents a subquery.\" but I didn't get the\n> it. Since we need to maintain the UniqueKey in the many places, I'd like\n> to keep it as simple as possbile. Of course, anything essentical should\n> be added for sure. \n\nIf unique keys are generated for a subquery output, they also need to be\ncreated for the corresponding relation in the upper query (\"sub\" in the\nfollowing example):\n\nselect * from tab1 left join (select * from tab2) sub;\n\nHowever, to create an unique key for \"sub\", you need an EC for each expression\nof the key. And to create an EC, you in turn need the list of operator\nfamilies.\n\nEven if the parent query already had ECs for the columns of \"sub\" which are\ncontained in the unique key, you need to make sure that those ECs are\n\"compatible\" with the ECs of the subquery which generated the unique key. That\nis, if an EC of the subquery considers certain input values equal, the EC of\nthe parent query must also be able to determine if they are equal or not.\n\n> > * RelOptInfo.notnullattrs\n> >\n> > My understanding is that this field contains the set attributes whose\n> > uniqueness is guaranteed by the unique key. They are acceptable because they\n> > are either 1) not allowed to be NULL due to NOT NULL constraint or 2) NULL\n> > value makes the containing row filtered out, so the row cannot break\n> > uniqueness of the output. 
Am I right?\n> >\n> > If so, I suggest to change the field name to something less generic, maybe\n> > 'uniquekey_attrs' or 'uniquekey_candidate_attrs', and adding a comment that\n> > more checks are needed before particular attribute can actually be used in\n> > UniqueKey.\n> \n> I don't think so, UniqueKey is just one of the places to use this\n> not-null property, see 3af704098 for the another user case of it. \n> \n> (Because of 3af704098, we should leverage notnullattnums somehow in this\n> patch, which will be included in the next version as well).\n\nIn your patch you modify 'notnullattrs' in add_base_clause_to_rel(), but that\ndoes not happen to 'notnullattnums' in the current master branch. Thus I think\nthat 'notnullattrs' is specific to the unique keys feature, so the field name\nshould be less generic.\n\n> >\n> > * uniquekey_useful_for_merging()\n> >\n> > How does uniqueness relate to merge join? In README.uniquekey you seem to\n> > point out that a single row is always sorted, but I don't think this\n> > function is related to that fact. (Instead, I'd expect that pathkeys are\n> > added to all paths for a single-row relation, but I'm not sure you do that\n> > in the current version of the patch.)\n> \n> The merging is for \"mergejoinable join clauses\", see function\n> eclass_useful_for_merging. Usually I think it as operator \"t1.a = t2.a\";\n\nMy question is: why is the uniqueness important specifically to merge join? I\nunderstand that join evaluation can be more efficient if we know that one\ninput relation is unique (i.e. we only scan that relation until we find the\nfirst match), but this is not specific to merge join.\n\n> > * is_uniquekey_useful_afterjoin()\n> >\n> > Now that my patch (0004) allows propagation of the unique keys from a\n> > subquery to the upper query, I was wondering if the UniqueKey structure\n> > needs the 'use_for_distinct field' I mean we should now propagate the unique\n> > keys to the parent query whether the subquery has DISTINCT clause or not. I\n> > noticed that the field is checked by is_uniquekey_useful_afterjoin(), so I\n> > changed the function to always returned true. However nothing changed in the\n> > output of regression tests (installcheck). Do you insist that the\n> > 'use_for_distinct' field is needed?\n\nI miss your answer to this comment.\n\n> > * uniquekey_contains_multinulls()\n> >\n> > ** Instead of calling the function when trying to use the UK, how about\n> > checking the ECs when considering creation of the UK? If the tests fail,\n> > just don't create the UK.\n> \n> I don't think so since we maintain the UniqueKey from bottom to top, you\n> can double check if my reason is appropriate. \n> \n> CREATE TABLE t1(a int);\n> CREATE INDEX ON t1(a);\n> \n> SELECT distinct t1.a FROM t1 JOIN t2 using(a);\n> \n> We need to create the UniqueKey on the baserel for t1 and the NULL\n> values is filtered out in the joinrel. so we have to creating it with\n> allowing NULL values first. \n\nok\n\n> > ** What does the 'multi' word in the function name mean?\n> \n> multi means multiple, I thought we use this short name in the many\n> places, for ex bt_multi_page_stats after a quick search. \n\nWhy not simply uniquekey_contains_nulls() ?\n\nActually I wouldn't say that an instance of UniqueKey contains any value (NULL\nor NOT NULL) because it describes the whole relation rather than particular\nrow. I consider UniqueKey to be a set of expressions. 
How about\nuniquekey_expression_nullable() ?\n\n> >\n> >\n> > * Combining the UKs\n> >\n> > IMO this is the most problematic part of the patch. You call\n> > populate_joinrel_uniquekeys() for the same join multiple times,\n> \n> Why do you think so? The below code is called in \"make_join_rel\"\n\nConsider join of tables \"a\", \"b\" and \"c\". My understanding is that\nmake_join_rel() is called once with rel1={a} and rel2={b join c}, then with\nrel1={a join b} and rel2={c}, etc. I wanted to say that each call should\nproduce the same set of unique keys.\n\nI need to check this part more in detail.\n\n> Is your original question is about populate_joinrel_uniquekey_for_rel\n> rather than populate_joinrel_uniquekeys? We have the below codes:\n> \n> \touteruk_still_valid = populate_joinrel_uniquekey_for_rel(root, joinrel, outerrel,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t innerrel, restrictlist);\n> \tinneruk_still_valid = populate_joinrel_uniquekey_for_rel(root, joinrel, innerrel,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t outerrel, restrictlist);\n> \n> This is mainly because of the following theory. Quoted from\n> README.uniquekey. Let's called this as \"rule 1\".\n> \n> \"\"\"\n> How to maintain the uniquekey?\n> -------------------------------\n> .. From the join relation, it is maintained with two rules:\n> \n> - the uniquekey in one side is still unique if it can't be duplicated\n> after the join. for example:\n> \n> SELECT t1.pk FROM t1 JOIN t2 ON t1.a = t2.pk;\n> UniqueKey on t1: t1.pk\n> UniqueKey on t1 Join t2: t1.pk\n> \"\"\"\n> \n> AND the blow codes:\n> \n> \n> \tif (outeruk_still_valid || inneruk_still_valid)\n> \n> \t\t/*\n> \t\t * the uniquekey on outers or inners have been added into joinrel so\n> \t\t * the combined uniuqekey from both sides is not needed.\n> \t\t */\n> \t\treturn;\n> \n> \n> We don't create the component uniquekey if any one side of the boths\n> sides is unique already. For example:\n> \n> \"(t1.id) in joinrel(t1, t2) is unique\" OR \"(t2.id) in joinrel is\n> unique\", there is no need to create component UniqueKey (t1.id, t2.id); \n\nok, I need to check more in detail how this part works.\n\n> >\n> > Of course one problem is that the number of combinations can grow\n> > exponentially as new relations are joined.\n> \n> Yes, that's why \"rule 1\" needed and \"How to reduce the overhead\" in\n> UniqueKey.README is introduced. \n\nWhat if we are interested in unique keys of a subquery, but the subquery has\nno DISTINCT clause?\n\n> >\n> > 2) Check if the join relation is single-row\n> >\n> > I in order to get rid of the dependency on 'restrictlist', I think you can\n> > use ECs. Consider a query from your regression tests:\n> >\n> > CREATE TABLE uk_t (id int primary key, a int not null, b int not null, c int, d int, e int);\n> >\n> > SELECT distinct t1.d FROM uk_t t1 JOIN uk_t t2 ON t1.e = t2.id and t1.id = 1;\n> >\n> > The idea here seems to be that no more than one row of t1 matches the query\n> > clauses. Therefore, if t2(id) is unique, the clause t1.e=t2.id ensures that\n> > no more than one row of t2 matches the query (because t1 cannot provide the\n> > clause with more than one input value of 'e'). And therefore, the join also\n> > produces at most one row.\n> \n> You are correct and IMO my current code are able to tell it is a single\n> row as well.\n> \n> 1. Since t1.id = 1, so t1 is single row, so t1.d is unqiuekey as a\n> consequence.\n> 2. Given t2.id is unique, t1.e = t2.id so t1's unqiuekey can be kept\n> after the join because of rule 1 on joinrel. 
and t1 is singlerow, so the\n> joinrel is singlerow as well.\n> \n> I'm interested with \"get rid of the dependency on 'restrictlist', I\n> think you can use ECs.\", let's see what we can improve.\n> >\n> > My theory is that relation is single-row if it has an UK such that each of\n> > its ECs meets at least one of the following conditions:\n> >\n> > a) contains a constant\n> \n> True.\n> >\n> > b) contains a column of a relation which has already been proven single-row.\n> \n> True, not sure if it is easy to tell.\n> \n> > b) is referenced by an UK of a relation which has already been proven\n> > single-row.\n> \n> I can't follow here...\n\nThis is similar to EC containing a constant: if an EC is used by a single-row\nUK, all its member can only have a single value.\n\n> >\n> > I think that in the example above, an EC {t1.e, t2.id} should exist. So when\n> > checking whether 't2' is single-row, the condition b) cam be used: the UK of\n> > 't2' should reference the EC {t1.e, t2.id}, which in turn contains the\n> > column t1.e. And 't1' is unique because its EC meets the condition a). (Of\n> > course you need to check em_jdomain before you use particular EM.)\n> \n> I think the existing rule 1 for joinrel works well with the singlerow\n> case naturally, what can be improved if we add the theory you suggested\n> here?\n\nThis is still the explanation of the idea how to mark join unique key as a\nsingle-row separately from the other logic. As noted above, I need to learn\nmore about the unique keys of a join.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 13 May 2024 11:55:50 +0200", "msg_from": "Antonin Houska <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UniqueKey v2" }, { "msg_contents": "Antonin Houska <[email protected]> wrote:\n\n> Andy Fan <[email protected]> wrote:\n> > >\n> > > * Combining the UKs\n> > >\n> > > IMO this is the most problematic part of the patch. You call\n> > > populate_joinrel_uniquekeys() for the same join multiple times,\n> > \n> > Why do you think so? The below code is called in \"make_join_rel\"\n> \n> Consider join of tables \"a\", \"b\" and \"c\". My understanding is that\n> make_join_rel() is called once with rel1={a} and rel2={b join c}, then with\n> rel1={a join b} and rel2={c}, etc. I wanted to say that each call should\n> produce the same set of unique keys.\n> \n> I need to check this part more in detail.\n\nI think I understand now. By calling populate_joinrel_uniquekeys() for various\norderings, you can find out that various input relation unique keys can\nrepresent the whole join. For example, if the ordering is\n\nA JOIN (B JOIN C)\n\nyou can prove that the unique keys of A can be used for the whole join, while\nfor the ordering\n\nB JOIN (A JOIN C)\n\nyou can prove the same for the unique keys of B, and so on.\n\n> > Is your original question is about populate_joinrel_uniquekey_for_rel\n> > rather than populate_joinrel_uniquekeys? We have the below codes:\n> > \n> > \touteruk_still_valid = populate_joinrel_uniquekey_for_rel(root, joinrel, outerrel,\n> > \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t innerrel, restrictlist);\n> > \tinneruk_still_valid = populate_joinrel_uniquekey_for_rel(root, joinrel, innerrel,\n> > \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t outerrel, restrictlist);\n> > \n> > This is mainly because of the following theory. Quoted from\n> > README.uniquekey. Let's called this as \"rule 1\".\n> > \n> > \"\"\"\n> > How to maintain the uniquekey?\n> > -------------------------------\n> > .. 
From the join relation, it is maintained with two rules:\n> > \n> > - the uniquekey in one side is still unique if it can't be duplicated\n> > after the join. for example:\n> > \n> > SELECT t1.pk FROM t1 JOIN t2 ON t1.a = t2.pk;\n> > UniqueKey on t1: t1.pk\n> > UniqueKey on t1 Join t2: t1.pk\n> > \"\"\"\n> > \n> > AND the blow codes:\n> > \n> > \n> > \tif (outeruk_still_valid || inneruk_still_valid)\n> > \n> > \t\t/*\n> > \t\t * the uniquekey on outers or inners have been added into joinrel so\n> > \t\t * the combined uniuqekey from both sides is not needed.\n> > \t\t */\n> > \t\treturn;\n> > \n> > \n> > We don't create the component uniquekey if any one side of the boths\n> > sides is unique already. For example:\n> > \n> > \"(t1.id) in joinrel(t1, t2) is unique\" OR \"(t2.id) in joinrel is\n> > unique\", there is no need to create component UniqueKey (t1.id, t2.id); \n> \n> ok, I need to check more in detail how this part works.\n\nThis optimization makes sense to me.\n\n> > >\n> > > Of course one problem is that the number of combinations can grow\n> > > exponentially as new relations are joined.\n> > \n> > Yes, that's why \"rule 1\" needed and \"How to reduce the overhead\" in\n> > UniqueKey.README is introduced. \n\nI think there should yet be some guarantee that the number of unique keys does\nnot grow exponentially. Perhaps a constant that allows a relation (base or\njoin) to have at most N unique keys. (I imagine N to be rather small, e.g. 3\nor 4.) And when picking the \"best N keys\", one criterion could be the number\nof expressions in the key (the shorter key the better).\n\n> > >\n> > > 2) Check if the join relation is single-row\n> > >\n> > > I in order to get rid of the dependency on 'restrictlist', I think you can\n> > > use ECs. Consider a query from your regression tests:\n> > >\n> > > CREATE TABLE uk_t (id int primary key, a int not null, b int not null, c int, d int, e int);\n> > >\n> > > SELECT distinct t1.d FROM uk_t t1 JOIN uk_t t2 ON t1.e = t2.id and t1.id = 1;\n> > >\n> > > The idea here seems to be that no more than one row of t1 matches the query\n> > > clauses. Therefore, if t2(id) is unique, the clause t1.e=t2.id ensures that\n> > > no more than one row of t2 matches the query (because t1 cannot provide the\n> > > clause with more than one input value of 'e'). And therefore, the join also\n> > > produces at most one row.\n> > \n> > You are correct and IMO my current code are able to tell it is a single\n> > row as well.\n> > \n> > 1. Since t1.id = 1, so t1 is single row, so t1.d is unqiuekey as a\n> > consequence.\n> > 2. Given t2.id is unique, t1.e = t2.id so t1's unqiuekey can be kept\n> > after the join because of rule 1 on joinrel. and t1 is singlerow, so the\n> > joinrel is singlerow as well.\n\nok, I think I understand now.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 13 May 2024 19:42:15 +0200", "msg_from": "Antonin Houska <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UniqueKey v2" }, { "msg_contents": "\nAntonin Houska <[email protected]> writes:\n\n>> Could you make the reason clearer for adding 'List *opfamily_lists;'\n>> into UniqueKey? You said \"This is needed to create ECs in the parent\n>> query if the upper relation represents a subquery.\" but I didn't get the\n>> it. Since we need to maintain the UniqueKey in the many places, I'd like\n>> to keep it as simple as possbile. Of course, anything essentical should\n>> be added for sure. 
\n>\n> If unique keys are generated for a subquery output, they also need to be\n> created for the corresponding relation in the upper query (\"sub\" in the\n> following example):\n\nOK.\n>\n> select * from tab1 left join (select * from tab2) sub;\n>\n> However, to create an unique key for \"sub\", you need an EC for each expression\n> of the key.\n\nOK.\n> And to create an EC, you in turn need the list of operator\n> families.\n\nI'm thinking if we need to \"create\" any EC. Can you find out a user case\nwhere the outer EC is missed and the UniqueKey is still interesting? I\ndon't have an example now. \n\nconvert_subquery_pathkeys has a similar sistuation and has the following\ncodes:\n\n\t\t\t\touter_ec =\n\t\t\t\t\tget_eclass_for_sort_expr(root,\n\t\t\t\t\t\t\t\t\t\t\t (Expr *) outer_var,\n\t\t\t\t\t\t\t\t\t\t\t sub_eclass->ec_opfamilies,\n\t\t\t\t\t\t\t\t\t\t\t sub_member->em_datatype,\n\t\t\t\t\t\t\t\t\t\t\t sub_eclass->ec_collation,\n\t\t\t\t\t\t\t\t\t\t\t 0,\n\t\t\t\t\t\t\t\t\t\t\t rel->relids,\n\t\t\t\t\t\t\t\t\t\t\t NULL,\n\t\t\t\t\t\t\t\t\t\t\t false);\n\n\t\t\t\t/*\n\t\t\t\t * If we don't find a matching EC, sub-pathkey isn't\n\t\t\t\t * interesting to the outer query\n\t\t\t\t */\n\t\t\t\tif (outer_ec)\n\t\t\t\t\tbest_pathkey =\n\t\t\t\t\t\tmake_canonical_pathkey(root,\n\t\t\t\t\t\t\t\t\t\t\t outer_ec,\n\t\t\t\t\t\t\t\t\t\t\t sub_pathkey->pk_opfamily,\n\t\t\t\t\t\t\t\t\t\t\t sub_pathkey->pk_strategy,\n\t\t\t\t\t\t\t\t\t\t\t sub_pathkey->pk_nulls_first);\n\t\t\t}\n\n> Even if the parent query already had ECs for the columns of \"sub\" which are\n> contained in the unique key, you need to make sure that those ECs are\n> \"compatible\" with the ECs of the subquery which generated the unique key. That\n> is, if an EC of the subquery considers certain input values equal, the EC of\n> the parent query must also be able to determine if they are equal or not.\n>\n>> > * RelOptInfo.notnullattrs\n>> >\n>> > My understanding is that this field contains the set attributes whose\n>> > uniqueness is guaranteed by the unique key. They are acceptable because they\n>> > are either 1) not allowed to be NULL due to NOT NULL constraint or 2) NULL\n>> > value makes the containing row filtered out, so the row cannot break\n>> > uniqueness of the output. Am I right?\n>> >\n>> > If so, I suggest to change the field name to something less generic, maybe\n>> > 'uniquekey_attrs' or 'uniquekey_candidate_attrs', and adding a comment that\n>> > more checks are needed before particular attribute can actually be used in\n>> > UniqueKey.\n>> \n>> I don't think so, UniqueKey is just one of the places to use this\n>> not-null property, see 3af704098 for the another user case of it. \n>> \n>> (Because of 3af704098, we should leverage notnullattnums somehow in this\n>> patch, which will be included in the next version as well).\n>\n> In your patch you modify 'notnullattrs' in add_base_clause_to_rel(), but that\n> does not happen to 'notnullattnums' in the current master branch. Thus I think\n> that 'notnullattrs' is specific to the unique keys feature, so the field name\n> should be less generic.\n\nOK.\n\n>> >\n>> > * uniquekey_useful_for_merging()\n>> >\n>> > How does uniqueness relate to merge join? In README.uniquekey you seem to\n>> > point out that a single row is always sorted, but I don't think this\n>> > function is related to that fact. 
(Instead, I'd expect that pathkeys are\n>> > added to all paths for a single-row relation, but I'm not sure you do that\n>> > in the current version of the patch.)\n>> \n>> The merging is for \"mergejoinable join clauses\", see function\n>> eclass_useful_for_merging. Usually I think it as operator \"t1.a = t2.a\";\n>\n> My question is: why is the uniqueness important specifically to merge join? I\n> understand that join evaluation can be more efficient if we know that one\n> input relation is unique (i.e. we only scan that relation until we find the\n> first match), but this is not specific to merge join.\n\nSo the answer is the \"merging\" in uniquekey_useful_for_merging() has\nnothing with merge join. \n\n>> > * is_uniquekey_useful_afterjoin()\n>> >\n>> > Now that my patch (0004) allows propagation of the unique keys from a\n>> > subquery to the upper query, I was wondering if the UniqueKey structure\n>> > needs the 'use_for_distinct field' I mean we should now propagate the unique\n>> > keys to the parent query whether the subquery has DISTINCT clause or not. I\n>> > noticed that the field is checked by is_uniquekey_useful_afterjoin(), so I\n>> > changed the function to always returned true. However nothing changed in the\n>> > output of regression tests (installcheck). Do you insist that the\n>> > 'use_for_distinct' field is needed?\n>\n> I miss your answer to this comment.\n\nAfter we considers the uniquekey from subquery, 'use_for_distinct' field\nis not needed.\n\n>> > ** What does the 'multi' word in the function name mean?\n>> \n>> multi means multiple, I thought we use this short name in the many\n>> places, for ex bt_multi_page_stats after a quick search. \n>\n> Why not simply uniquekey_contains_nulls() ?\n\n> Actually I wouldn't say that an instance of UniqueKey contains any value (NULL\n> or NOT NULL) because it describes the whole relation rather than particular\n> row. I consider UniqueKey to be a set of expressions. How about\n> uniquekey_expression_nullable() ?\n\nuniquekey_expression_nullable() is a better name, I will use it in the\nnext version.\n\nIIUC, we have reached to the agreement based on your latest response for\nthe most of the questions. Please point me if I missed anything. \n\n>> > Of course one problem is that the number of combinations can grow\n>> > exponentially as new relations are joined.\n>> \n>> Yes, that's why \"rule 1\" needed and \"How to reduce the overhead\" in\n>> UniqueKey.README is introduced. \n>\n> What if we are interested in unique keys of a subquery, but the subquery has\n> no DISTINCT clause?\n\nI agree we should remove the prerequisite of \"use_for_distinct\". \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 14 May 2024 11:15:58 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UniqueKey v2" }, { "msg_contents": "\n>> Consider join of tables \"a\", \"b\" and \"c\". My understanding is that\n>> make_join_rel() is called once with rel1={a} and rel2={b join c}, then with\n>> rel1={a join b} and rel2={c}, etc. I wanted to say that each call should\n>> produce the same set of unique keys.\n>> \n>> I need to check this part more in detail.\n>\n> I think I understand now. By calling populate_joinrel_uniquekeys() for various\n> orderings, you can find out that various input relation unique keys can\n> represent the whole join. 
For example, if the ordering is\n>\n> A JOIN (B JOIN C)\n>\n> you can prove that the unique keys of A can be used for the whole join, while\n> for the ordering\n>\n> B JOIN (A JOIN C)\n>\n> you can prove the same for the unique keys of B, and so on.\n\nYes.\n\n>> > We don't create the component uniquekey if any one side of the boths\n>> > sides is unique already. For example:\n>> > \n>> > \"(t1.id) in joinrel(t1, t2) is unique\" OR \"(t2.id) in joinrel is\n>> > unique\", there is no need to create component UniqueKey (t1.id, t2.id); \n>> \n>> ok, I need to check more in detail how this part works.\n>\n> This optimization makes sense to me.\n\nOK.\n\n>> > >\n>> > > Of course one problem is that the number of combinations can grow\n>> > > exponentially as new relations are joined.\n>> > \n>> > Yes, that's why \"rule 1\" needed and \"How to reduce the overhead\" in\n>> > UniqueKey.README is introduced. \n>\n> I think there should yet be some guarantee that the number of unique keys does\n> not grow exponentially. Perhaps a constant that allows a relation (base or\n> join) to have at most N unique keys. (I imagine N to be rather small, e.g. 3\n> or 4.) And when picking the \"best N keys\", one criterion could be the number\n> of expressions in the key (the shorter key the better).\n\nI don't want to introduce this complextity right now. I'm more\ninerested with how to work with them effectivity. main effort includes: \n\n- the design of bitmapset which is memory usage friendly and easy for\ncombinations.\n- Optimize the singlerow cases to reduce N UnqiueKeys to 1 UniqueKey.\n\nI hope we can pay more attention to this optimization (at most N\nUniqueKeys) when the major inforastruce has been done. \n\n>> > You are correct and IMO my current code are able to tell it is a single\n>> > row as well.\n>> > \n>> > 1. Since t1.id = 1, so t1 is single row, so t1.d is unqiuekey as a\n>> > consequence.\n>> > 2. Given t2.id is unique, t1.e = t2.id so t1's unqiuekey can be kept\n>> > after the join because of rule 1 on joinrel. and t1 is singlerow, so the\n>> > joinrel is singlerow as well.\n>\n> ok, I think I understand now.\n\nOK.\n\nAt last, this probably is my first non-trival patchs which has multiple\nauthors, I don't want myself is the bottleneck for the coorperation, so\nif you need something to do done sooner, please don't hesitate to ask me\nfor it explicitly.\n\nHere is my schedule about this. I can provide the next version based\nour discussion and your patches at the eariler of next week. and update\nthe UniqueKey.README to make sure the overall design clearer. What I\nhope you to pay more attention is the UniqueKey.README besides the\ncode. I hope the UniqueKey.README can reduce the effort for others to\nunderstand the overall design enormously.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 14 May 2024 11:38:08 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UniqueKey v2" }, { "msg_contents": "Andy Fan <[email protected]> wrote:\n> Antonin Houska <[email protected]> writes:\n> \n> >> Could you make the reason clearer for adding 'List *opfamily_lists;'\n> >> into UniqueKey? You said \"This is needed to create ECs in the parent\n> >> query if the upper relation represents a subquery.\" but I didn't get the\n> >> it. Since we need to maintain the UniqueKey in the many places, I'd like\n> >> to keep it as simple as possbile. Of course, anything essentical should\n> >> be added for sure. 
\n> >\n> > If unique keys are generated for a subquery output, they also need to be\n> > created for the corresponding relation in the upper query (\"sub\" in the\n> > following example):\n> \n> OK.\n> >\n> > select * from tab1 left join (select * from tab2) sub;\n> >\n> > However, to create an unique key for \"sub\", you need an EC for each expression\n> > of the key.\n> \n> OK.\n> > And to create an EC, you in turn need the list of operator\n> > families.\n> \n> I'm thinking if we need to \"create\" any EC. Can you find out a user case\n> where the outer EC is missed and the UniqueKey is still interesting? I\n> don't have an example now. \n> \n> convert_subquery_pathkeys has a similar sistuation and has the following\n> codes:\n> \n> \t\t\t\touter_ec =\n> \t\t\t\t\tget_eclass_for_sort_expr(root,\n> \t\t\t\t\t\t\t\t\t\t\t (Expr *) outer_var,\n> \t\t\t\t\t\t\t\t\t\t\t sub_eclass->ec_opfamilies,\n> \t\t\t\t\t\t\t\t\t\t\t sub_member->em_datatype,\n> \t\t\t\t\t\t\t\t\t\t\t sub_eclass->ec_collation,\n> \t\t\t\t\t\t\t\t\t\t\t 0,\n> \t\t\t\t\t\t\t\t\t\t\t rel->relids,\n> \t\t\t\t\t\t\t\t\t\t\t NULL,\n> \t\t\t\t\t\t\t\t\t\t\t false);\n> \n> \t\t\t\t/*\n> \t\t\t\t * If we don't find a matching EC, sub-pathkey isn't\n> \t\t\t\t * interesting to the outer query\n> \t\t\t\t */\n> \t\t\t\tif (outer_ec)\n> \t\t\t\t\tbest_pathkey =\n> \t\t\t\t\t\tmake_canonical_pathkey(root,\n> \t\t\t\t\t\t\t\t\t\t\t outer_ec,\n> \t\t\t\t\t\t\t\t\t\t\t sub_pathkey->pk_opfamily,\n> \t\t\t\t\t\t\t\t\t\t\t sub_pathkey->pk_strategy,\n> \t\t\t\t\t\t\t\t\t\t\t sub_pathkey->pk_nulls_first);\n> \t\t\t}\n\nI think that convert_subquery_pathkeys() just does not try that hard to\nachieve its goal.\n\nThe example where it's important to create the EC in the outer query is what I\nadded to the subselect.sql regression test in the 0004- diff in [1]:\n\ncreate table tabx as select * from generate_series(1,100) idx;\ncreate table taby as select * from generate_series(1,100) idy;\ncreate unique index on taby using btree (idy);\ncreate view view_barrier with (security_barrier=true) as select * from taby;\nanalyze tabx, taby;\nexplain (costs off, verbose on) select * from tabx x left join view_barrier y on idy = idx;\n\nIf you modify find_ec_position_matching_expr() to return -1 instead of\ncreating the EC, you will get this plan\n\nHash Left Join\n Output: x.idx, taby.idy\n Hash Cond: (x.idx = taby.idy)\n -> Seq Scan on public.tabx x\n Output: x.idx\n -> Hash\n Output: taby.idy\n -> Seq Scan on public.taby\n Output: taby.idy\n\ninstead of this\n\nHash Left Join\n Output: x.idx, taby.idy\n Inner Unique: true\n Hash Cond: (x.idx = taby.idy)\n -> Seq Scan on public.tabx x\n Output: x.idx\n -> Hash\n Output: taby.idy\n -> Seq Scan on public.taby\n Output: taby.idy\n\n> >> > * uniquekey_useful_for_merging()\n> >> >\n> >> > How does uniqueness relate to merge join? In README.uniquekey you seem to\n> >> > point out that a single row is always sorted, but I don't think this\n> >> > function is related to that fact. (Instead, I'd expect that pathkeys are\n> >> > added to all paths for a single-row relation, but I'm not sure you do that\n> >> > in the current version of the patch.)\n> >> \n> >> The merging is for \"mergejoinable join clauses\", see function\n> >> eclass_useful_for_merging. Usually I think it as operator \"t1.a = t2.a\";\n> >\n> > My question is: why is the uniqueness important specifically to merge join? I\n> > understand that join evaluation can be more efficient if we know that one\n> > input relation is unique (i.e. 
we only scan that relation until we find the\n> > first match), but this is not specific to merge join.\n> \n> So the answer is the \"merging\" in uniquekey_useful_for_merging() has\n> nothing with merge join. \n\nI don't understand. The function comment does mention merge join:\n\n/*\n * uniquekey_useful_for_merging\n *\tCheck if the uniquekey is useful for mergejoins above the given relation.\n *\n * similar with pathkeys_useful_for_merging.\n */\nstatic bool\nuniquekey_useful_for_merging(PlannerInfo *root, UniqueKey * ukey, RelOptInfo *rel)\n\n\n[1] https://www.postgresql.org/message-id/7971.1713526758%40antos\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 04 Jun 2024 16:14:19 +0200", "msg_from": "Antonin Houska <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UniqueKey v2" } ]
[ { "msg_contents": "Implement TODO item:\nPL/pgSQL\nIncomplete item Allow handling of %TYPE arrays, e.g. tab.col%TYPE[]\n\nAs a first step, deal only with [], such as\nxxx.yyy%TYPE[]\nxxx%TYPE[]\n\nIt can be extended to support multi-dimensional and complex syntax in \nthe future.\n\n\n--\nQuan Zongliang", "msg_date": "Mon, 16 Oct 2023 18:15:53 +0800", "msg_from": "Quan Zongliang <[email protected]>", "msg_from_op": true, "msg_subject": "PL/pgSQL: Incomplete item Allow handling of %TYPE arrays, e.g.\n tab.col%TYPE[]" }, { "msg_contents": "> On 16 Oct 2023, at 12:15, Quan Zongliang <[email protected]> wrote:\n\n> Implement TODO item:\n> PL/pgSQL\n> Incomplete item Allow handling of %TYPE arrays, e.g. tab.col%TYPE[]\n\nCool! While I haven't looked at the patch yet, I've wanted this myself many\ntimes in the past when working with large plpgsql codebases.\n\n> As a first step, deal only with [], such as\n> xxx.yyy%TYPE[]\n> xxx%TYPE[]\n> \n> It can be extended to support multi-dimensional and complex syntax in the future.\n\nWas this omitted due to complexity of implementation or for some other reason?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 16 Oct 2023 13:53:11 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL: Incomplete item Allow handling of %TYPE arrays, e.g.\n tab.col%TYPE[]" }, { "msg_contents": "po 16. 10. 2023 v 13:56 odesílatel Daniel Gustafsson <[email protected]>\nnapsal:\n\n> > On 16 Oct 2023, at 12:15, Quan Zongliang <[email protected]> wrote:\n>\n> > Implement TODO item:\n> > PL/pgSQL\n> > Incomplete item Allow handling of %TYPE arrays, e.g. tab.col%TYPE[]\n>\n> Cool! While I haven't looked at the patch yet, I've wanted this myself\n> many\n> times in the past when working with large plpgsql codebases.\n>\n> > As a first step, deal only with [], such as\n> > xxx.yyy%TYPE[]\n> > xxx%TYPE[]\n> >\n> > It can be extended to support multi-dimensional and complex syntax in\n> the future.\n>\n> Was this omitted due to complexity of implementation or for some other\n> reason?\n>\n\nThere is no reason for describing enhancement. The size and dimensions of\npostgresql arrays are dynamic, depends on the value, not on declaration.\nNow, this information is ignored, and can be compatibility break to check\nand enforce this info.\n\n\n> --\n> Daniel Gustafsson\n>\n>\n>\n>\n\npo 16. 10. 2023 v 13:56 odesílatel Daniel Gustafsson <[email protected]> napsal:> On 16 Oct 2023, at 12:15, Quan Zongliang <[email protected]> wrote:\n\n> Implement TODO item:\n> PL/pgSQL\n> Incomplete item Allow handling of %TYPE arrays, e.g. tab.col%TYPE[]\n\nCool!  While I haven't looked at the patch yet, I've wanted this myself many\ntimes in the past when working with large plpgsql codebases.\n\n> As a first step, deal only with [], such as\n> xxx.yyy%TYPE[]\n> xxx%TYPE[]\n> \n> It can be extended to support multi-dimensional and complex syntax in the future.\n\nWas this omitted due to complexity of implementation or for some other reason?There is no reason for describing enhancement. The size and dimensions of postgresql arrays are dynamic, depends on the value, not on declaration. Now, this information is ignored, and can be compatibility break to check and enforce this info.\n\n--\nDaniel Gustafsson", "msg_date": "Mon, 16 Oct 2023 14:05:57 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL: Incomplete item Allow handling of %TYPE arrays,\n e.g. 
tab.col%TYPE[]" }, { "msg_contents": "Attached new patch\n More explicit error messages based on type.\n\n\nOn 2023/10/16 18:15, Quan Zongliang wrote:\n> \n> \n> Implement TODO item:\n> PL/pgSQL\n> Incomplete item Allow handling of %TYPE arrays, e.g. tab.col%TYPE[]\n> \n> As a first step, deal only with [], such as\n> xxx.yyy%TYPE[]\n> xxx%TYPE[]\n> \n> It can be extended to support multi-dimensional and complex syntax in \n> the future.\n> \n> \n> -- \n> Quan Zongliang", "msg_date": "Tue, 17 Oct 2023 09:19:29 +0800", "msg_from": "Quan Zongliang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PL/pgSQL: Incomplete item Allow handling of %TYPE arrays, e.g.\n tab.col%TYPE[]" }, { "msg_contents": "\nError messages still seem ambiguous.\n do not support multi-dimensional arrays in PL/pgSQL\n\nIsn't that better?\n do not support multi-dimensional %TYPE arrays in PL/pgSQL\n\n\nOn 2023/10/17 09:19, Quan Zongliang wrote:\n> \n> Attached new patch\n>   More explicit error messages based on type.\n> \n> \n> On 2023/10/16 18:15, Quan Zongliang wrote:\n>>\n>>\n>> Implement TODO item:\n>> PL/pgSQL\n>> Incomplete item Allow handling of %TYPE arrays, e.g. tab.col%TYPE[]\n>>\n>> As a first step, deal only with [], such as\n>> xxx.yyy%TYPE[]\n>> xxx%TYPE[]\n>>\n>> It can be extended to support multi-dimensional and complex syntax in \n>> the future.\n>>\n>>\n>> -- \n>> Quan Zongliang\n\n\n\n", "msg_date": "Tue, 17 Oct 2023 09:24:42 +0800", "msg_from": "Quan Zongliang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PL/pgSQL: Incomplete item Allow handling of %TYPE arrays, e.g.\n tab.col%TYPE[]" }, { "msg_contents": "\n\nOn 2023/10/16 20:05, Pavel Stehule wrote:\n> \n> \n> po 16. 10. 2023 v 13:56 odesílatel Daniel Gustafsson <[email protected] \n> <mailto:[email protected]>> napsal:\n> \n> > On 16 Oct 2023, at 12:15, Quan Zongliang <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> > Implement TODO item:\n> > PL/pgSQL\n> > Incomplete item Allow handling of %TYPE arrays, e.g. tab.col%TYPE[]\n> \n> Cool!  While I haven't looked at the patch yet, I've wanted this\n> myself many\n> times in the past when working with large plpgsql codebases.\n> \n> > As a first step, deal only with [], such as\n> > xxx.yyy%TYPE[]\n> > xxx%TYPE[]\n> >\n> > It can be extended to support multi-dimensional and complex\n> syntax in the future.\n> \n> Was this omitted due to complexity of implementation or for some\n> other reason?\n> \nBecause of complexity.\n\n> \n> There is no reason for describing enhancement. The size and dimensions \n> of postgresql arrays are dynamic, depends on the value, not on \n> declaration. Now, this information is ignored, and can be compatibility \n> break to check and enforce this info.\n> \nYes. I don't think it's necessary.\nIf anyone needs it, we can continue to enhance it in the future.\n\n> \n> --\n> Daniel Gustafsson\n> \n> \n> \n\n\n\n", "msg_date": "Tue, 17 Oct 2023 09:29:25 +0800", "msg_from": "Quan Zongliang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PL/pgSQL: Incomplete item Allow handling of %TYPE arrays, e.g.\n tab.col%TYPE[]" }, { "msg_contents": "út 17. 10. 2023 v 3:30 odesílatel Quan Zongliang <[email protected]>\nnapsal:\n\n>\n>\n> On 2023/10/16 20:05, Pavel Stehule wrote:\n> >\n> >\n> > po 16. 10. 
2023 v 13:56 odesílatel Daniel Gustafsson <[email protected]\n> > <mailto:[email protected]>> napsal:\n> >\n> > > On 16 Oct 2023, at 12:15, Quan Zongliang <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > > Implement TODO item:\n> > > PL/pgSQL\n> > > Incomplete item Allow handling of %TYPE arrays, e.g.\n> tab.col%TYPE[]\n> >\n> > Cool! While I haven't looked at the patch yet, I've wanted this\n> > myself many\n> > times in the past when working with large plpgsql codebases.\n> >\n> > > As a first step, deal only with [], such as\n> > > xxx.yyy%TYPE[]\n> > > xxx%TYPE[]\n> > >\n> > > It can be extended to support multi-dimensional and complex\n> > syntax in the future.\n> >\n> > Was this omitted due to complexity of implementation or for some\n> > other reason?\n> >\n> Because of complexity.\n>\n> >\n> > There is no reason for describing enhancement. The size and dimensions\n> > of postgresql arrays are dynamic, depends on the value, not on\n> > declaration. Now, this information is ignored, and can be compatibility\n> > break to check and enforce this info.\n> >\n> Yes. I don't think it's necessary.\n> If anyone needs it, we can continue to enhance it in the future.\n>\n\nI don't think it is possible to do it. But there is another missing\nfunctionality, if I remember well. There is no possibility to declare\nvariables for elements of array.\n\nI propose syntax xxx.yyy%ELEMENTTYPE and xxx%ELEMENTTYPE\n\nWhat do you think about it?\n\nRegards\n\nPavel\n\n\n> >\n> > --\n> > Daniel Gustafsson\n> >\n> >\n> >\n>\n>\n\nút 17. 10. 2023 v 3:30 odesílatel Quan Zongliang <[email protected]> napsal:\n\nOn 2023/10/16 20:05, Pavel Stehule wrote:\n> \n> \n> po 16. 10. 2023 v 13:56 odesílatel Daniel Gustafsson <[email protected] \n> <mailto:[email protected]>> napsal:\n> \n>      > On 16 Oct 2023, at 12:15, Quan Zongliang <[email protected]\n>     <mailto:[email protected]>> wrote:\n> \n>      > Implement TODO item:\n>      > PL/pgSQL\n>      > Incomplete item Allow handling of %TYPE arrays, e.g. tab.col%TYPE[]\n> \n>     Cool!  While I haven't looked at the patch yet, I've wanted this\n>     myself many\n>     times in the past when working with large plpgsql codebases.\n> \n>      > As a first step, deal only with [], such as\n>      > xxx.yyy%TYPE[]\n>      > xxx%TYPE[]\n>      >\n>      > It can be extended to support multi-dimensional and complex\n>     syntax in the future.\n> \n>     Was this omitted due to complexity of implementation or for some\n>     other reason?\n> \nBecause of complexity.\n\n> \n> There is no reason for describing enhancement. The size and dimensions \n> of postgresql arrays are dynamic, depends on the value, not on \n> declaration. Now, this information is ignored, and can be compatibility \n> break to check and enforce this info.\n> \nYes. I don't think it's necessary.\nIf anyone needs it, we can continue to enhance it in the future.I don't think it is possible to do it.  But there is another missing functionality, if I remember well. There is no possibility to declare variables for elements of array.I propose syntax xxx.yyy%ELEMENTTYPE and xxx%ELEMENTTYPEWhat do you think about it?RegardsPavel\n\n> \n>     --\n>     Daniel Gustafsson\n> \n> \n>", "msg_date": "Tue, 17 Oct 2023 06:15:06 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL: Incomplete item Allow handling of %TYPE arrays,\n e.g. tab.col%TYPE[]" }, { "msg_contents": "\n\nOn 2023/10/17 12:15, Pavel Stehule wrote:\n> \n> \n> út 17. 10. 
2023 v 3:30 odesílatel Quan Zongliang <[email protected] \n> <mailto:[email protected]>> napsal:\n> \n> \n> \n> On 2023/10/16 20:05, Pavel Stehule wrote:\n> >\n> >\n> > po 16. 10. 2023 v 13:56 odesílatel Daniel Gustafsson\n> <[email protected] <mailto:[email protected]>\n> > <mailto:[email protected] <mailto:[email protected]>>> napsal:\n> >\n> >      > On 16 Oct 2023, at 12:15, Quan Zongliang\n> <[email protected] <mailto:[email protected]>\n> >     <mailto:[email protected]\n> <mailto:[email protected]>>> wrote:\n> >\n> >      > Implement TODO item:\n> >      > PL/pgSQL\n> >      > Incomplete item Allow handling of %TYPE arrays, e.g.\n> tab.col%TYPE[]\n> >\n> >     Cool!  While I haven't looked at the patch yet, I've wanted this\n> >     myself many\n> >     times in the past when working with large plpgsql codebases.\n> >\n> >      > As a first step, deal only with [], such as\n> >      > xxx.yyy%TYPE[]\n> >      > xxx%TYPE[]\n> >      >\n> >      > It can be extended to support multi-dimensional and complex\n> >     syntax in the future.\n> >\n> >     Was this omitted due to complexity of implementation or for some\n> >     other reason?\n> >\n> Because of complexity.\n> \n> >\n> > There is no reason for describing enhancement. The size and\n> dimensions\n> > of postgresql arrays are dynamic, depends on the value, not on\n> > declaration. Now, this information is ignored, and can be\n> compatibility\n> > break to check and enforce this info.\n> >\n> Yes. I don't think it's necessary.\n> If anyone needs it, we can continue to enhance it in the future.\n> \n> \n> I don't think it is possible to do it.  But there is another missing \n> functionality, if I remember well. There is no possibility to declare \n> variables for elements of array.\nThe way it's done now is more like laziness.\n\nIs it possible to do that?\nIf the parser encounters %TYPE[][]. It can be parsed. Then let \nparse_datatype do the rest.\n\nFor example, partitioned_table.a%TYPE[][100][]. Parse the type \nname(int4) of partitioned_table.a%TYPE and add the following [][100][]. \nPassing \"int4[][100][]\" to parse_datatype will give us the array \ndefinition we want.\n\nIsn't this code a little ugly?\n\n> \n> I propose syntax xxx.yyy%ELEMENTTYPE and xxx%ELEMENTTYPE\n> \n> What do you think about it?\nNo other relational database can be found with such an implementation. \nBut it seems like a good idea. It can bring more convenience to write \nstored procedure.\n\n> \n> Regards\n> \n> Pavel\n> \n> \n> >\n> >     --\n> >     Daniel Gustafsson\n> >\n> >\n> >\n> \n\n\n\n", "msg_date": "Tue, 17 Oct 2023 17:20:27 +0800", "msg_from": "Quan Zongliang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PL/pgSQL: Incomplete item Allow handling of %TYPE arrays, e.g.\n tab.col%TYPE[]" }, { "msg_contents": "Hi\n\n\n> Isn't this code a little ugly?\n>\n> >\n> > I propose syntax xxx.yyy%ELEMENTTYPE and xxx%ELEMENTTYPE\n> >\n> > What do you think about it?\n> No other relational database can be found with such an implementation.\n> But it seems like a good idea. It can bring more convenience to write\n> stored procedure.\n>\n\nNo other databases support arrays :-)\n\nRegards\n\nPavel\n\n\n> >\n> > Regards\n> >\n> > Pavel\n> >\n> >\n> > >\n> > > --\n> > > Daniel Gustafsson\n> > >\n> > >\n> > >\n> >\n>\n>\n\nHi\n\nIsn't this code a little ugly?\n\n> \n> I propose syntax xxx.yyy%ELEMENTTYPE and xxx%ELEMENTTYPE\n> \n> What do you think about it?\nNo other relational database can be found with such an implementation. 
\nBut it seems like a good idea. It can bring more convenience to write \nstored procedure.No other databases support arrays :-) RegardsPavel\n\n> \n> Regards\n> \n> Pavel\n> \n> \n>      >\n>      >     --\n>      >     Daniel Gustafsson\n>      >\n>      >\n>      >\n>", "msg_date": "Tue, 17 Oct 2023 20:04:34 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL: Incomplete item Allow handling of %TYPE arrays,\n e.g. tab.col%TYPE[]" }, { "msg_contents": "Hi\n\nút 17. 10. 2023 v 3:20 odesílatel Quan Zongliang <[email protected]>\nnapsal:\n\n>\n> Attached new patch\n> More explicit error messages based on type.\n>\n>\n> On 2023/10/16 18:15, Quan Zongliang wrote:\n> >\n> >\n> > Implement TODO item:\n> > PL/pgSQL\n> > Incomplete item Allow handling of %TYPE arrays, e.g. tab.col%TYPE[]\n> >\n> > As a first step, deal only with [], such as\n> > xxx.yyy%TYPE[]\n> > xxx%TYPE[]\n> >\n> > It can be extended to support multi-dimensional and complex syntax in\n> > the future.\n> >\n>\n\nI did some deeper check:\n\n- I don't like too much parser's modification (I am sending alternative own\nimplementation) - the SQL parser allows richer syntax, and for full\nfunctionality is only few lines more\n\n- original patch doesn't solve %ROWTYPE\n\n(2023-11-20 10:04:36) postgres=# select * from foo;\n┌────┬────┐\n│ a │ b │\n╞════╪════╡\n│ 10 │ 20 │\n│ 30 │ 40 │\n└────┴────┘\n(2 rows)\n\n(2023-11-20 10:08:29) postgres=# do $$\ndeclare v foo%rowtype[];\nbegin\n v := array(select row(a,b) from foo);\n raise notice '%', v;\nend;\n$$;\nNOTICE: {\"(10,20)\",\"(30,40)\"}\nDO\n\n- original patch doesn't solve type RECORD\nthe error message should be more intuitive, although the arrays of record\ntype can be supported, but it probably needs bigger research.\n\n(2023-11-20 10:10:34) postgres=# do $$\ndeclare r record; v r%type[];\nbegin\n v := array(select row(a,b) from foo);\n raise notice '%', v;\nend;\n$$;\nERROR: syntax error at or near \"%\"\nLINE 2: declare r record; v r%type[];\n ^\nCONTEXT: invalid type name \"r%type[]\"\n\n- missing documentation\n\n- I don't like using the word \"partitioned\" in the regress test name\n\"partitioned_table\". It is confusing\n\nRegards\n\nPavel\n\n\n\n> >\n> > --\n> > Quan Zongliang", "msg_date": "Mon, 20 Nov 2023 10:33:00 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL: Incomplete item Allow handling of %TYPE arrays,\n e.g. tab.col%TYPE[]" }, { "msg_contents": "On 2023/11/20 17:33, Pavel Stehule wrote:\n\n> \n> \n> I did some deeper check:\n> \n> - I don't like too much parser's modification (I am sending alternative \n> own implementation) - the SQL parser allows richer syntax, and for full \n> functionality is only few lines more\nAgree.\n\n> \n> - original patch doesn't solve %ROWTYPE\n> \n> (2023-11-20 10:04:36) postgres=# select * from foo;\n> ┌────┬────┐\n> │ a  │ b  │\n> ╞════╪════╡\n> │ 10 │ 20 │\n> │ 30 │ 40 │\n> └────┴────┘\n> (2 rows)\n> \n> (2023-11-20 10:08:29) postgres=# do $$\n> declare v foo%rowtype[];\n> begin\n>   v := array(select row(a,b) from foo);\n>   raise notice '%', v;\n> end;\n> $$;\n> NOTICE:  {\"(10,20)\",\"(30,40)\"}\n> DO\n> \ntwo little fixes\n1. spelling mistake\n ARRAY [ icons ] --> ARRAY [ iconst ]\n2. 
code bug\n if (!OidIsValid(dtype->typoid)) --> if (!OidIsValid(array_typeid))\n\n\n> - original patch doesn't solve type RECORD\n> the error message should be more intuitive, although the arrays of \n> record type can be supported, but it probably needs bigger research.\n> \n> (2023-11-20 10:10:34) postgres=# do $$\n> declare r record; v r%type[];\n> begin\n>   v := array(select row(a,b) from foo);\n>   raise notice '%', v;\n> end;\n> $$;\n> ERROR:  syntax error at or near \"%\"\n> LINE 2: declare r record; v r%type[];\n>                              ^\n> CONTEXT:  invalid type name \"r%type[]\"\n> \nCurrently only scalar variables are supported.\nThis error is consistent with the r%type error. And record arrays are \nnot currently supported.\nSupport for r%type should be considered first. For now, let r%type[] \nreport the same error as record[].\nI prefer to implement it with a new patch.\n\n> - missing documentation\nMy English is not good. I wrote it down, please correct it. Add a note \nin the \"Record Types\" documentation that arrays and \"Copying Types\" are \nnot supported yet.\n\n> \n> - I don't like using the word \"partitioned\" in the regress test name \n> \"partitioned_table\". It is confusing\nfixed\n\n> \n> Regards\n> \n> Pavel", "msg_date": "Thu, 23 Nov 2023 20:27:51 +0800", "msg_from": "Quan Zongliang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PL/pgSQL: Incomplete item Allow handling of %TYPE arrays, e.g.\n tab.col%TYPE[]" }, { "msg_contents": "čt 23. 11. 2023 v 13:28 odesílatel Quan Zongliang <[email protected]>\nnapsal:\n\n>\n>\n> On 2023/11/20 17:33, Pavel Stehule wrote:\n>\n> >\n> >\n> > I did some deeper check:\n> >\n> > - I don't like too much parser's modification (I am sending alternative\n> > own implementation) - the SQL parser allows richer syntax, and for full\n> > functionality is only few lines more\n> Agree.\n>\n> >\n> > - original patch doesn't solve %ROWTYPE\n> >\n> > (2023-11-20 10:04:36) postgres=# select * from foo;\n> > ┌────┬────┐\n> > │ a │ b │\n> > ╞════╪════╡\n> > │ 10 │ 20 │\n> > │ 30 │ 40 │\n> > └────┴────┘\n> > (2 rows)\n> >\n> > (2023-11-20 10:08:29) postgres=# do $$\n> > declare v foo%rowtype[];\n> > begin\n> > v := array(select row(a,b) from foo);\n> > raise notice '%', v;\n> > end;\n> > $$;\n> > NOTICE: {\"(10,20)\",\"(30,40)\"}\n> > DO\n> >\n> two little fixes\n> 1. spelling mistake\n> ARRAY [ icons ] --> ARRAY [ iconst ]\n> 2. code bug\n> if (!OidIsValid(dtype->typoid)) --> if (!OidIsValid(array_typeid))\n>\n>\n> > - original patch doesn't solve type RECORD\n> > the error message should be more intuitive, although the arrays of\n> > record type can be supported, but it probably needs bigger research.\n> >\n> > (2023-11-20 10:10:34) postgres=# do $$\n> > declare r record; v r%type[];\n> > begin\n> > v := array(select row(a,b) from foo);\n> > raise notice '%', v;\n> > end;\n> > $$;\n> > ERROR: syntax error at or near \"%\"\n> > LINE 2: declare r record; v r%type[];\n> > ^\n> > CONTEXT: invalid type name \"r%type[]\"\n> >\n> Currently only scalar variables are supported.\n> This error is consistent with the r%type error. And record arrays are\n> not currently supported.\n> Support for r%type should be considered first. For now, let r%type[]\n> report the same error as record[].\n> I prefer to implement it with a new patch.\n>\n\nok\n\n\n>\n> > - missing documentation\n> My English is not good. I wrote it down, please correct it. 
Add a note\n> in the \"Record Types\" documentation that arrays and \"Copying Types\" are\n> not supported yet.\n>\n> >\n> > - I don't like using the word \"partitioned\" in the regress test name\n> > \"partitioned_table\". It is confusing\n> fixed\n>\n\nI modified the documentation a little bit - we don't need to extra propose\nSQL array syntax, I think.\nI rewrote regress tests - we don't need to test unsupported functionality\n(related to RECORD).\n\n- all tests passed\n\nRegards\n\nPavel\n\n\n>\n> >\n> > Regards\n> >\n> > Pavel", "msg_date": "Thu, 23 Nov 2023 20:39:38 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL: Incomplete item Allow handling of %TYPE arrays,\n e.g. tab.col%TYPE[]" }, { "msg_contents": "On 2023/11/24 03:39, Pavel Stehule wrote:\n\n> \n> I modified the documentation a little bit - we don't need to extra \n> propose SQL array syntax, I think.\n> I rewrote regress tests - we don't need to test unsupported \n> functionality (related to RECORD).\n> \n> - all tests passed\n> \nI wrote two examples of errors:\n user_id users.user_id%ROWTYPE[];\n user_id users.user_id%ROWTYPE ARRAY[4][3];\n\nFixed.\n\n> Regards\n> \n> Pavel\n> \n> \n> >\n> > Regards\n> >\n> > Pavel\n>", "msg_date": "Fri, 24 Nov 2023 09:11:59 +0800", "msg_from": "Quan Zongliang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PL/pgSQL: Incomplete item Allow handling of %TYPE arrays, e.g.\n tab.col%TYPE[]" }, { "msg_contents": "pá 24. 11. 2023 v 2:12 odesílatel Quan Zongliang <[email protected]>\nnapsal:\n\n>\n>\n> On 2023/11/24 03:39, Pavel Stehule wrote:\n>\n> >\n> > I modified the documentation a little bit - we don't need to extra\n> > propose SQL array syntax, I think.\n> > I rewrote regress tests - we don't need to test unsupported\n> > functionality (related to RECORD).\n> >\n> > - all tests passed\n> >\n> I wrote two examples of errors:\n> user_id users.user_id%ROWTYPE[];\n> user_id users.user_id%ROWTYPE ARRAY[4][3];\n>\n\nthere were more issues in this part - the name \"user_id\" is a bad name for\na composite variable. I renamed it.\n+ I wrote a test related to usage type without array support.\n\nNow, I think so this simple patch is ready for committers\n\nRegards\n\nPavel\n\n\n\n> Fixed.\n>\n> > Regards\n> >\n> > Pavel\n> >\n> >\n> > >\n> > > Regards\n> > >\n> > > Pavel\n> >", "msg_date": "Fri, 24 Nov 2023 06:00:42 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL: Incomplete item Allow handling of %TYPE arrays,\n e.g. tab.col%TYPE[]" }, { "msg_contents": "Pavel Stehule <[email protected]> writes:\n> Now, I think so this simple patch is ready for committers\n\nI pushed this with some editorialization -- mostly, rewriting the\ndocumentation and comments. I found that the existing docs for %TYPE\nwere not great. There are two separate use-cases, one for referencing\na table column and one for referencing a previously-declared variable,\nand the docs were about as clear as mud about explaining that.\n\nI also looked into the problem Pavel mentioned that it doesn't work\nfor RECORD. If you just write \"record[]\" you get an error message\nthat at least indicates it's an unsupported case:\n\nregression=# do $$declare r record[]; begin end$$;\nERROR: variable \"r\" has pseudo-type record[]\nCONTEXT: compilation of PL/pgSQL function \"inline_code_block\" near line 1\n\nMaybe we could improve on that, but it would be a lot of work and\nI'm not terribly excited about it. 
However, %TYPE fails entirely\nfor both \"record\" and named composite types, and the reason turns\nout to be just that plpgsql_parse_wordtype fails to handle the\nPLPGSQL_NSTYPE_REC case. So that's easily fixed.\n\nI also wonder what the heck the last half of plpgsql_parse_wordtype\nis for at all. It looks for a named type, which means you can do\n\nregression=# do $$declare x float8%type; begin end$$;\nDO\n\nbut that's just stupid. You could leave off the %TYPE and get\nthe same result. Moreover, it is inconsistent because\nplpgsql_parse_cwordtype has no equivalent behavior:\n\nregression=# do $$declare x pg_catalog.float8%type; begin end$$;\nERROR: syntax error at or near \"%\"\nLINE 1: do $$declare x pg_catalog.float8%type; begin end$$;\n ^\nCONTEXT: invalid type name \"pg_catalog.float8%type\"\n\nIt's also undocumented and untested (the code coverage report\nshows this part is never reached). So I propose we remove it.\n\nThat leads me to the attached proposed follow-on patch.\n\nAnother thing we could think about, but I've not done it here,\nis to make plpgsql_parse_wordtype and friends throw error\ninstead of just returning NULL when they don't find the name.\nRight now, if NULL is returned, we end up passing the whole\nstring to parse_datatype, leading to unhelpful errors like\nthe one shown above. We could do better than that I think,\nperhaps like \"argument of %TYPE is not a known variable\".\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 04 Jan 2024 16:02:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL: Incomplete item Allow handling of %TYPE arrays,\n e.g. tab.col%TYPE[]" }, { "msg_contents": "čt 4. 1. 2024 v 22:02 odesílatel Tom Lane <[email protected]> napsal:\n\n> Pavel Stehule <[email protected]> writes:\n> > Now, I think so this simple patch is ready for committers\n>\n> I pushed this with some editorialization -- mostly, rewriting the\n> documentation and comments. I found that the existing docs for %TYPE\n> were not great. There are two separate use-cases, one for referencing\n> a table column and one for referencing a previously-declared variable,\n> and the docs were about as clear as mud about explaining that.\n>\n> I also looked into the problem Pavel mentioned that it doesn't work\n> for RECORD. If you just write \"record[]\" you get an error message\n> that at least indicates it's an unsupported case:\n>\n> regression=# do $$declare r record[]; begin end$$;\n> ERROR: variable \"r\" has pseudo-type record[]\n> CONTEXT: compilation of PL/pgSQL function \"inline_code_block\" near line 1\n>\n> Maybe we could improve on that, but it would be a lot of work and\n> I'm not terribly excited about it. However, %TYPE fails entirely\n> for both \"record\" and named composite types, and the reason turns\n> out to be just that plpgsql_parse_wordtype fails to handle the\n> PLPGSQL_NSTYPE_REC case. So that's easily fixed.\n>\n> I also wonder what the heck the last half of plpgsql_parse_wordtype\n> is for at all. It looks for a named type, which means you can do\n>\n> regression=# do $$declare x float8%type; begin end$$;\n> DO\n>\n> but that's just stupid. You could leave off the %TYPE and get\n> the same result. 
Moreover, it is inconsistent because\n> plpgsql_parse_cwordtype has no equivalent behavior:\n>\n> regression=# do $$declare x pg_catalog.float8%type; begin end$$;\n> ERROR: syntax error at or near \"%\"\n> LINE 1: do $$declare x pg_catalog.float8%type; begin end$$;\n> ^\n> CONTEXT: invalid type name \"pg_catalog.float8%type\"\n>\n> It's also undocumented and untested (the code coverage report\n> shows this part is never reached). So I propose we remove it.\n>\n> That leads me to the attached proposed follow-on patch.\n>\n> Another thing we could think about, but I've not done it here,\n> is to make plpgsql_parse_wordtype and friends throw error\n> instead of just returning NULL when they don't find the name.\n> Right now, if NULL is returned, we end up passing the whole\n> string to parse_datatype, leading to unhelpful errors like\n> the one shown above. We could do better than that I think,\n> perhaps like \"argument of %TYPE is not a known variable\".\n>\n\n+1\n\nRegards\n\nPavel\n\n>\n> regards, tom lane\n>\n>\n\nčt 4. 1. 2024 v 22:02 odesílatel Tom Lane <[email protected]> napsal:Pavel Stehule <[email protected]> writes:\n> Now, I think so this simple patch is ready for committers\n\nI pushed this with some editorialization -- mostly, rewriting the\ndocumentation and comments.  I found that the existing docs for %TYPE\nwere not great.  There are two separate use-cases, one for referencing\na table column and one for referencing a previously-declared variable,\nand the docs were about as clear as mud about explaining that.\n\nI also looked into the problem Pavel mentioned that it doesn't work\nfor RECORD.  If you just write \"record[]\" you get an error message\nthat at least indicates it's an unsupported case:\n\nregression=# do $$declare r record[]; begin end$$;\nERROR:  variable \"r\" has pseudo-type record[]\nCONTEXT:  compilation of PL/pgSQL function \"inline_code_block\" near line 1\n\nMaybe we could improve on that, but it would be a lot of work and\nI'm not terribly excited about it.  However, %TYPE fails entirely\nfor both \"record\" and named composite types, and the reason turns\nout to be just that plpgsql_parse_wordtype fails to handle the\nPLPGSQL_NSTYPE_REC case.  So that's easily fixed.\n\nI also wonder what the heck the last half of plpgsql_parse_wordtype\nis for at all.  It looks for a named type, which means you can do\n\nregression=# do $$declare x float8%type; begin end$$;\nDO\n\nbut that's just stupid.  You could leave off the %TYPE and get\nthe same result.  Moreover, it is inconsistent because\nplpgsql_parse_cwordtype has no equivalent behavior:\n\nregression=# do $$declare x pg_catalog.float8%type; begin end$$;\nERROR:  syntax error at or near \"%\"\nLINE 1: do $$declare x pg_catalog.float8%type; begin end$$;\n                                        ^\nCONTEXT:  invalid type name \"pg_catalog.float8%type\"\n\nIt's also undocumented and untested (the code coverage report\nshows this part is never reached).  So I propose we remove it.\n\nThat leads me to the attached proposed follow-on patch.\n\nAnother thing we could think about, but I've not done it here,\nis to make plpgsql_parse_wordtype and friends throw error\ninstead of just returning NULL when they don't find the name.\nRight now, if NULL is returned, we end up passing the whole\nstring to parse_datatype, leading to unhelpful errors like\nthe one shown above.  
We could do better than that I think,\nperhaps like \"argument of %TYPE is not a known variable\".+1RegardsPavel\n\n                        regards, tom lane", "msg_date": "Fri, 5 Jan 2024 05:57:31 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL: Incomplete item Allow handling of %TYPE arrays,\n e.g. tab.col%TYPE[]" } ]
[ { "msg_contents": "Hi all!\n\nI'm a DevOps Manager/Engineer by trade (though the place I work is not,\nunfortunately, using Postgres). I've been thinking quite a bit about what\nour ideal architecture at work will look like and what scaling looks like,\nboth for work and for home projects (where I *am* looking at using Postgres\n:) ).\n\nTonight, (unrelated to work) I took the time to draw up a diagram of an\narchitecture that I think would help Postgres move one step towards both\nmore scalability, and better ease of use.\n\nSince I'm not so hot at drawing ASCII art diagrams, I thought maybe the way\nto go would be to drop it into a Google Presentation and make that public.\n\nIt's a couple of diagrams (existing and proposed architecture), and a list\nof what I think the advantages and disadvantages are.\n\nhttps://docs.google.com/presentation/d/1ew31STf8qC2keded5GfQiSUwb3fukmO0PFnZw12yAs8/edit?usp=sharing\n\nTo keep it short, the proposal is that the stages from Parse through Plan\nbe done in a separate process (and potentially on a separate server) from\nthe Execute stage. The idea is:\n- The Parse/Plan servers don't care whether they're read or write\n- The Parse/Plan know which Execute server is the writer (and which the\nreaders), and forward to the correct server for execution\n\nI even wonder if this might not mean that the Parse/Plan servers can be\ndeployed as K8s containers, with the Execute server being the external\nnon-k8s server.\n\nNote that in this e-mail, I've referred to:\n- The Parse/Plan server (which my diagram calls the Postgres SQL server)\n- The Execute server (which my diagram calls the Storage server)\n\nI'm not sure what naming makes sense, but I intentionally used a couple of\ndifferent names in hopes that one of them would get the idea across --\nplease disregard whichever names don't make sense, and feel free to suggest\nnew ones.\n\nI'm expecting that people will pick the idea apart, and wanted to know what\npeople think of it.\n\nThanks!\n\nHi all!  I'm a DevOps Manager/Engineer by trade (though the place I work is not, unfortunately, using Postgres).  I've been thinking quite a bit about what our ideal architecture at work will look like and what scaling looks like, both for work and for home projects (where I *am* looking at using Postgres :) ).  Tonight, (unrelated to work) I took the time to draw up a diagram of an architecture that I think would help Postgres move one step towards both more scalability, and better ease of use.  Since I'm not so hot at drawing ASCII art diagrams, I thought maybe the way to go would be to drop it into a Google Presentation and make that public.  It's a couple of diagrams (existing and proposed architecture), and a list of what I think the advantages and disadvantages are.  https://docs.google.com/presentation/d/1ew31STf8qC2keded5GfQiSUwb3fukmO0PFnZw12yAs8/edit?usp=sharingTo keep it short, the proposal is that the stages from Parse through Plan be done in a separate  process (and potentially on a separate server) from the Execute stage.  The idea is:- The Parse/Plan servers don't care whether they're read or write- The Parse/Plan know which Execute server is the writer (and which the readers), and forward to the correct server for executionI even wonder if this might not mean that the Parse/Plan servers can be deployed as K8s containers, with the Execute server being the external non-k8s server.  
Note that in this e-mail, I've referred to:- The Parse/Plan server (which my diagram calls the Postgres SQL server)- The Execute server (which my diagram calls the Storage server)I'm not sure what naming makes sense, but I intentionally used a couple of different names in hopes that one of them would get the idea across -- please disregard whichever names don't make sense, and feel free to suggest new ones.  I'm expecting that people will pick the idea apart, and wanted to know what people think of it.  Thanks!", "msg_date": "Mon, 16 Oct 2023 21:40:16 +1100", "msg_from": "Timothy Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres Architecture" }, { "msg_contents": "On Mon, Oct 16, 2023 at 6:42 AM Timothy Nelson <[email protected]>\nwrote:\n\n> I'm expecting that people will pick the idea apart, and wanted to know\n> what people think of it.\n>\n\nThanks for the proposal. This is actually a model that's been around for a\nvery long time. And, in fact, variations of it (e.g. parsing done in one\nplace and generated plan fragments shipped to remote execution nodes where\nthe data resides) are already used by things like Postgres-XL. There have\nalso been a number of academic implementations where parsing is done\nlocally and raw parse trees are sent to the server as well. While these\nthings do reduce CPU, there are a number of negative aspects to deal with\nthat make such an architecture more difficult to manage.\n\n-- \nJonah H. Harris\n\nOn Mon, Oct 16, 2023 at 6:42 AM Timothy Nelson <[email protected]> wrote:I'm expecting that people will pick the idea apart, and wanted to know what people think of it. Thanks for the proposal. This is actually a model that's been around for a very long time. And, in fact, variations of it (e.g. parsing done in one place and generated plan fragments shipped to remote execution nodes where the data resides) are already used by things like Postgres-XL. There have also been a number of academic implementations where parsing is done locally and raw parse trees are sent to the server as well. While these things do reduce CPU, there are a number of negative aspects to deal with that make such an architecture more difficult to manage.-- Jonah H. Harris", "msg_date": "Mon, 16 Oct 2023 11:07:37 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Architecture" }, { "msg_contents": "Great! I'm not surprised it's been around a long time -- I didn't think I\ncould be the only one to think of it.\n\nThanks for the heads-up on Postgres-XL -- I'd missed that one somehow.\n\nI'm going to include the words \"architecture\" and \"replication\" so that\npeople searching the archives in the future have more chance of finding\nthis conversation.\n\nThanks!\n\nOn Tue, 17 Oct 2023 at 02:07, Jonah H. Harris <[email protected]>\nwrote:\n\n> On Mon, Oct 16, 2023 at 6:42 AM Timothy Nelson <[email protected]>\n> wrote:\n>\n>> I'm expecting that people will pick the idea apart, and wanted to know\n>> what people think of it.\n>>\n>\n> Thanks for the proposal. This is actually a model that's been around for a\n> very long time. And, in fact, variations of it (e.g. parsing done in one\n> place and generated plan fragments shipped to remote execution nodes where\n> the data resides) are already used by things like Postgres-XL. There have\n> also been a number of academic implementations where parsing is done\n> locally and raw parse trees are sent to the server as well. 
While these\n> things do reduce CPU, there are a number of negative aspects to deal with\n> that make such an architecture more difficult to manage.\n>\n> --\n> Jonah H. Harris\n>\n>\n", "msg_date": "Tue, 17 Oct 2023 08:39:49 +1100", "msg_from": "Timothy Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Architecture" }, { "msg_contents": "Timothy Nelson <[email protected]> writes:\n> Great! I'm not surprised it's been around a long time -- I didn't think I\n> could be the only one to think of it.\n> Thanks for the heads-up on Postgres-XL -- I'd missed that one somehow.\n\nFWIW, we also have some in-core history with passing plans around,\nfor parallel-query workers. The things I'd take away from that\nare:\n\n1. It's expensive. In the parallel-query case it's hard to tease apart\nthe cost of passing across a plan from the cost of starting a worker,\nbut it's certainly high. You would need a way of only invoking this\nmechanism for expensive-anyway queries, which puts a hole in the idea\nyou seemed to have of having a hard separation between parse/plan\nprocesses and execute processes.\n\n2. Constant-folding at plan time is another reason you can't have\na hard separation: the planner might run arbitrary user-defined\ncode.\n\n3. Locking is a pain. In the Postgres architecture, table locks\nacquired during parse/plan have to be held through to execution,\nor concurrent DDL might invalidate your plan out from under you.\nWe finesse that in the parallel-query case by expecting the leader\nprocess to keep hold of all the needed locks, and then having some\nkluges that allow child workers to acquire the same locks without\nblocking. (The workers perhaps don't really need those locks,\nbut acquiring them avoids the need to poke holes in various\nyou-must-have-a-lock-to-do-this sanity checks.) I fear this area\nmight be a great deal harder if you're trying to pass plans from a\nparse/plan process to an arms-length execute process.\n\n4. Sharing execute workers between sessions (which I think was an\nimplicit part of your idea) is hard; hard enough that we haven't\neven tried. There's too much context-sensitive state in a backend\nand too little way of isolating which things depend on the current\nuser, current database etc. 
Probably this could be cleaned up\nwith enough work, but it'd not be a small project.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 16 Oct 2023 18:34:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Architecture" }, { "msg_contents": "On Tue, Oct 17, 2023 at 08:39:49AM +1100, Timothy Nelson wrote:\n> Great!  I'm not surprised it's been around a long time -- I didn't think I\n> could be the only one to think of it.  \n> \n> Thanks for the heads-up on Postgres-XL -- I'd missed that one somehow. \n> \n> I'm going to include the words \"architecture\" and \"replication\" so that people\n> searching the archives in the future have more chance of finding this\n> conversation. \n\nYou can get some of this using foreign data wrappers to other Postgres\nservers.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 18 Oct 2023 12:51:30 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Architecture" } ]
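Bruce's pointer to foreign data wrappers in the thread above can be made concrete with a short sketch; the server name, connection options, and table definition below are invented for illustration and are not taken from the thread:

create extension postgres_fdw;

create server exec_node foreign data wrapper postgres_fdw
    options (host 'executor.example.com', dbname 'appdb');

create user mapping for current_user server exec_node
    options (user 'app', password 'secret');

-- parsing and planning happen on the local node; qualifying scans and
-- aggregates are shipped to the remote server for execution
create foreign table orders (id int, total numeric)
    server exec_node
    options (schema_name 'public', table_name 'orders');

select count(*) from orders;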
[ { "msg_contents": "Some small (grammatical) changes in event-trigger.sgml\n\n(also one delete of 'community-we' (which I think is just confusing for \nthe not-postgresql-community reader).\n\n\nErik", "msg_date": "Mon, 16 Oct 2023 17:34:14 +0200", "msg_from": "Erik Rijkers <[email protected]>", "msg_from_op": true, "msg_subject": "event trigger sgml touch-up" } ]
[ { "msg_contents": "Hi!\n\nThis email is a first pass at a user-visible design for how our backup and\nrestore process, as enabled by the Low Level API, can be modified to make\nit more mistake-proof. In short, it requires pg_start_backup to further\nexpand upon what it means for the system to be in the midst of a backup,\npg_stop_backup to reverse those things, and modifying the startup process\nto deal with the server having crashed while the system is in that backup\nstate. Notes at the end extend the design to handle concurrent backups.\n\nThe core functional changes are:\n1) pg_backup_start modifies a newly added \"in backup\" state flag in\npg_control to on.\n2) pg_backup_stop modifies that flag back to off.\n3) postmaster will refuse to start if that flag is on, unless one of:\n a) crash.signal exists in the data directory\n b) recovery.signal exists in the data directory\n c) standby.signal exists in the data directory\n4) Signal file processing causes the in-backup flag in pg_control to be set\nto off\n\nThe newly added crash.signal file is required to handle the case where the\nserver crashes after pg_backup_start and before pg_backup_stop. It\ninitiates a crash recovery of the instance just as is done today but with\nthe added change of flipping the flag to off when recovery is complete just\nbefore going live.\n\nThe error message for the failed startup while in backup will tell the dba\nthat one of the three signal files must exist.\nWhen processing recovery.signal or standby.signal the presence of the\nbackup_label and tablespace_map files are mandatory and the system will\nalso fail to start should they be missing.\n\nFor non-functional changes I would also suggest doing the following:\npg_backup_start will create a \"pg_backup_metadata\" directory if there is\nnot already one, or will empty it if there is.\npg_backup_start will create a crash.signal file in that directory\npg_backup_stop will create files within pg_backup_metadata upon its\ncompletion:\nbackup_label\ntablespace_map\nrecovery.signal\nstandby.signal\n\nAll of the instructions regarding what to place in those files should be\nremoved and instead the system should write them - no copy-paste.\n\nThe instructions modified to say \"copy the backup_label and tablespace_map\nfiles to the root of the backup directory and the recovery and standby\nsignal files to the pg_backup_metadata directory in the backup.\nAdditionally, we document crash recovery by saying \"move crash.signal from\npg_backup_metadata to the root of the data directory\". We should explicitly\nadvise excluding or removing pg_backup_metadata/crash.signal from the\nbackup as well.\n\nExtending the above to handle concurrent backup, for pg_control we'd sill\nuse the on/off flag but we have to have a shared in-memory session lock on\nsomething so that only the last surviving process actually changes it to\noff while also dealing with sessions that terminate without issuing\npg_backup_stop and without the server itself crashing. (I'm unfamiliar with\nhow this is handled today but I presume a mechanism exists already that\njust needs to be extended).\n\nFor the non-functional stuff, pg_backup_start returns a process id, and\nsubdirectories under pg_backup_metadata are created named with such. Add a\npg_backup_cleanup() function that executes while not in backup mode to\nclean up those subdirectories. 
Any subdirectory in the backup that isn't\nthe specified process id from pg_start_backup should be excluded/removed.\n\nDavid J.\n\nHi!This email is a first pass at a user-visible design for how our backup and restore process, as enabled by the Low Level API, can be modified to make it more mistake-proof.  In short, it requires pg_start_backup to further expand upon what it means for the system to be in the midst of a backup, pg_stop_backup to reverse those things, and modifying the startup process to deal with the server having crashed while the system is in that backup state.  Notes at the end extend the design to handle concurrent backups.The core functional changes are:1) pg_backup_start modifies a newly added \"in backup\" state flag in pg_control to on.2) pg_backup_stop modifies that flag back to off.3) postmaster will refuse to start if that flag is on, unless one of:  a) crash.signal exists in the data directory  b) recovery.signal exists in the data directory  c) standby.signal exists in the data directory4) Signal file processing causes the in-backup flag in pg_control to be set to offThe newly added crash.signal file is required to handle the case where the server crashes after pg_backup_start and before pg_backup_stop.  It initiates a crash recovery of the instance just as is done today but with the added change of flipping the flag to off when recovery is complete just before going live.The error message for the failed startup while in backup will tell the dba that one of the three signal files must exist.When processing recovery.signal or standby.signal the presence of the backup_label and tablespace_map files are mandatory and the system will also fail to start should they be missing.For non-functional changes I would also suggest doing the following:pg_backup_start will create a \"pg_backup_metadata\" directory if there is not already one, or will empty it if there is.pg_backup_start will create a crash.signal file in that directorypg_backup_stop  will create files within pg_backup_metadata upon its completion:backup_labeltablespace_maprecovery.signalstandby.signalAll of the instructions regarding what to place in those files should be removed and instead the system should write them - no copy-paste.The instructions modified to say \"copy the backup_label and tablespace_map files to the root of the backup directory and the recovery and standby signal files to the pg_backup_metadata directory in the backup.  Additionally, we document crash recovery by saying \"move crash.signal from pg_backup_metadata to the root of the data directory\". We should explicitly advise excluding or removing pg_backup_metadata/crash.signal from the backup as well.Extending the above to handle concurrent backup, for pg_control we'd sill use the on/off flag but we have to have a shared in-memory session lock on something so that only the last surviving process actually changes it to off while also dealing with sessions that terminate without issuing pg_backup_stop and without the server itself crashing. (I'm unfamiliar with how this is handled today but I presume a mechanism exists already that just needs to be extended).For the non-functional stuff, pg_backup_start returns a process id, and subdirectories under pg_backup_metadata are created named with such.  Add a pg_backup_cleanup() function that executes while not in backup mode to clean up those subdirectories.  
Any subdirectory in the backup that isn't the specified process id from pg_start_backup should be excluded/removed.David J.", "msg_date": "Mon, 16 Oct 2023 09:26:47 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Improving Physical Backup/Restore within the Low Level API" }, { "msg_contents": "On Mon, 2023-10-16 at 09:26 -0700, David G. Johnston wrote:\n> This email is a first pass at a user-visible design for how our backup and restore\n> process, as enabled by the Low Level API, can be modified to make it more mistake-proof.\n> In short, it requires pg_start_backup to further expand upon what it means for the\n> system to be in the midst of a backup, pg_stop_backup to reverse those things,\n> and modifying the startup process to deal with the server having crashed while the\n> system is in that backup state.  Notes at the end extend the design to handle concurrent backups.\n> \n> The core functional changes are:\n> 1) pg_backup_start modifies a newly added \"in backup\" state flag in pg_control to on.\n> 2) pg_backup_stop modifies that flag back to off.\n> 3) postmaster will refuse to start if that flag is on, unless one of:\n>   a) crash.signal exists in the data directory\n>   b) recovery.signal exists in the data directory\n>   c) standby.signal exists in the data directory\n> 4) Signal file processing causes the in-backup flag in pg_control to be set to off\n> \n> The newly added crash.signal file is required to handle the case where the server\n> crashes after pg_backup_start and before pg_backup_stop.  It initiates a crash recovery\n> of the instance just as is done today but with the added change of flipping the flag\n> to off when recovery is complete just before going live.\n\nI see a couple of problems and/or things that need clarification with that idea:\n\n- Two backups can run concurrently. How do you reconcile that with the \"in backup\"\n flag and crash.signal?\n- I guess crash.signal is created during pg_start_backup(). So that file will be\n included in the backup. How do you handle that during recovery? Ignore it if\n another signal file is present? And if the user forgets to create a signal file\n for recovery, how do you prevent PostgreSQL from performing crash recovery?\n\nYours,\nLaurenz Albe \n\n\n", "msg_date": "Mon, 16 Oct 2023 19:26:35 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving Physical Backup/Restore within the Low Level API" }, { "msg_contents": "On Mon, Oct 16, 2023 at 10:26 AM Laurenz Albe <[email protected]>\nwrote:\n\n> On Mon, 2023-10-16 at 09:26 -0700, David G. Johnston wrote:\n> > This email is a first pass at a user-visible design for how our backup\n> and restore\n> > process, as enabled by the Low Level API, can be modified to make it\n> more mistake-proof.\n> > In short, it requires pg_start_backup to further expand upon what it\n> means for the\n> > system to be in the midst of a backup, pg_stop_backup to reverse those\n> things,\n> > and modifying the startup process to deal with the server having crashed\n> while the\n> > system is in that backup state. 
Notes at the end extend the design to\n> handle concurrent backups.\n> >\n> > The core functional changes are:\n> > 1) pg_backup_start modifies a newly added \"in backup\" state flag in\n> pg_control to on.\n> > 2) pg_backup_stop modifies that flag back to off.\n> > 3) postmaster will refuse to start if that flag is on, unless one of:\n> >   a) crash.signal exists in the data directory\n> >   b) recovery.signal exists in the data directory\n> >   c) standby.signal exists in the data directory\n> > 4) Signal file processing causes the in-backup flag in pg_control to be\n> set to off\n> >\n> > The newly added crash.signal file is required to handle the case where\n> the server\n> > crashes after pg_backup_start and before pg_backup_stop.  It initiates a\n> crash recovery\n> > of the instance just as is done today but with the added change of\n> flipping the flag\n> > to off when recovery is complete just before going live.\n>\n> I see a couple of problems and/or things that need clarification with that\n> idea:\n>\n> - Two backups can run concurrently.  How do you reconcile that with the\n> \"in backup\"\n> flag and crash.signal?\n> - I guess crash.signal is created during pg_start_backup().  So that file\n> will be\n> included in the backup.  How do you handle that during recovery?  Ignore\n> it if\n> another signal file is present?  And if the user forgets to create a\n> signal file\n> for recovery, how do you prevent PostgreSQL from performing crash\n> recovery?\n>\n>\ncrash.signal is created in the pg_backup_metadata directory, not the root\ndirectory.  Should the server crash while any backup is in progress\npg_control would be aware of that fact (in_backup=true would still be\nthere, instead of in_backup=false which only comes back after all backups\nhave completed) and the server will not restart without user intervention -\nspecifically, moving the crash.signal file from (one of) the\npg_backup_metadata subdirectories to the root directory.  As there is\nnothing special about the crash.signal files in the pg_backup_metadata\nsubdirectories \"touch crash.signal\" could be used.\n\nThe backed up pg_control file will have in_backup=true (I haven't pondered\nthe torn reads dynamic of this - I'm supposing that placing a copy of\npg_control into the pg_backup_metadata directory might be part of solving\nthat problem).\n\nDavid J.", "msg_date": "Mon, 16 Oct 2023 11:18:38 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving Physical Backup/Restore within the Low Level API" }, { "msg_contents": "On Mon, 2023-10-16 at 11:18 -0700, David G. Johnston wrote:\n> > I see a couple of problems and/or things that need clarification with that idea:\n> > \n> > - Two backups can run concurrently.  How do you reconcile that with the \"in backup\"\n> >   flag and crash.signal?\n> > - I guess crash.signal is created during pg_start_backup().  So that file will be\n> >   included in the backup.  How do you handle that during recovery?  Ignore it if\n> >   another signal file is present?  
And if the user forgets to create a signal file\n> >   for recovery, how do you prevent PostgreSQL from performing crash recovery?\n> \n> crash.signal is created in the pg_backup_metadata directory, not the root directory.\n> Should the server crash while any backup is in progress pg_control would be aware\n> of that fact (in_backup=true would still be there, instead of in_backup=false which\n> only comes back after all backups have completed) and the server will not restart\n> without user intervention - specifically, moving the crash.signal file from (one of)\n> the pg_backup_metadata subdirectories to the root directory.  As there is nothing\n> special about the crash.signal files in the pg_backup_metadata subdirectories\n> \"touch crash.signal\" could be used.\n\nI see - I missed the part with the pg_backup_metadata directory.\n\nI think it won't meet with favor if there are cases that require manual intervention\nfor starting the server.  That was the main argument for getting rid of the exclusive\nbackup API, which had a similar problem.\n\n\nAlso, how do you envision two concurrent backups with your setup?\n\nYours,\nLaurenz Albe", "msg_date": "Mon, 16 Oct 2023 21:09:09 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving Physical Backup/Restore within the Low Level API" }, { "msg_contents": "On Mon, Oct 16, 2023 at 12:09 PM Laurenz Albe <[email protected]>\nwrote:\n\n> I think it won't meet with favor if there are cases that require manual\n> intervention\n> for starting the server.  That was the main argument for getting rid of\n> the exclusive\n> backup API, which had a similar problem.\n>\n\nIn the rare case of a crash of the source database while one or more\nbackups are in progress.  Restoring the backup requires manual\nintervention with signal files today.\n\nI get a desire for the live production server to not need intervention to\nrecover from a crash but I can't help but feel that this requirement plus\nthe goal of making this as non-interventionist as possible during recovery\nare incompatible.  But I haven't given it a great amount of thought as I\nfelt the limited scope and situation were an acceptable cost for keeping\nthe process straight-forward (i.e., starting up a backup mode instance\nrequires a signal file that dictates the kind of recovery to perform).  We\ncan either make the live backup contents invalid until something happens\nafter pg_backup_stop ends that makes it valid or we have to make the\ncurrent system being backed up invalid so long as it's in backup mode.  The\nlatter seemed easier and doesn't require actions outside of our control.\n\n\n> Also, how do you envision two concurrent backups with your setup?\n>\n\nI don't know if I understand the question - if ensuring that \"in backup\" is\nturned on when the first backup starts and is turned off when the last\nbackup ends isn't sufficient for concurrent usage I don't know what else I\nneed to deal with.  Apparently concurrent backups already work today and\nI'm not seeing how, aside from the process ids for the metadata directories\n(i.e., the user needs to remove all but their own process subdirectory from\npg_backup_metadata) and state flag they wouldn't continue to work as-is.\n\nDavid J.", "msg_date": "Mon, 16 Oct 2023 12:36:14 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving Physical Backup/Restore within the Low Level API" }, { "msg_contents": "On Mon, Oct 16, 2023 at 12:36 PM David G. Johnston <\[email protected]> wrote:\n\n> On Mon, Oct 16, 2023 at 12:09 PM Laurenz Albe <[email protected]>\n> wrote:\n>\n>> I think it won't meet with favor if there are cases that require manual\n>> intervention\n>> for starting the server.  That was the main argument for getting rid of\n>> the exclusive\n>> backup API, which had a similar problem.\n>>\n>\n> In the rare case of a crash of the source database while one or more\n> backups are in progress.\n>\n\nOr even more simply, just document that the process executing\npg_backup_start, and eventually pg_backup_end, should add crash.signal to\nthe data directory if it notices its session die out from under it\n(there probably can be a bit more intelligence involved in case the session\ncrash was isolated).  A normal server shutdown should remove any\ncrash.signal files it sees (and ensure in_backup=\"false\"...).  A non-normal\nshutdown is going to end up in crash recovery anyway so having the signal\nfile there won't harm anything even if pg_control is showing\n\"in_backup=false\".\n\nIn short, I probably don't know the details well enough to code the\nsolution but this seems solvable for those users that need automatic reboot\nand crash recovery during an incomplete backup.  But no, by default, and\nprobably so far as pg_basebackup is concerned, a server crash during backup\nresults in requiring outside intervention in order to get the server to\nrestart.\n\nDavid J.", "msg_date": "Mon, 16 Oct 2023 14:18:28 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving Physical Backup/Restore within the Low Level API" }, { "msg_contents": "On Mon, Oct 16, 2023 at 5:21 PM David G. Johnston\n<[email protected]> wrote:\n> But no, by default, and probably so far as pg_basebackup is concerned, a server crash during backup results in requiring outside intervention in order to get the server to restart.\n\nOthers may differ, but I think such a proposal is dead on arrival. As\nLaurenz says, that's just reinventing one of the main problems with\nexclusive backup mode.\n\nThe underlying issue here is that, fundamentally, there's no way for\npostgres itself to tell the difference between the backup directory on\nthe primary and an exact copy of it on a standby. There has to be some\nmechanism by which the user tells us whether this is the original\ndirectory or a clone of it -- and that's what backup_label,\nrecovery.signal, and standby.signal are for. Your proposal rejiggers\nthe details of how we distinguish primary from standby, but it\ndoesn't, and can't, avoid the need for users to actually follow the\ndirections, and I don't see why they'd be any more likely to follow\nthe directions that this proposal would require than the directions\nwe're giving them now.\n\nI wish I had a better idea here, because the status quo is definitely\nnot great. The only thought that really occurs to me is that we might\ndo better if PostgreSQL did more of the work itself and left fewer\nsteps to the user to perform. If you could click the \"take a backup\nhere\" button and the \"restore a backup there\" button and not think\nabout what was actually happening, you'd not have the opportunity to\nmess up. 
But, as I understand it, the main motivation for the\ncontinued existence of the low-level API is that the data directory\nmight be really big, and you might need to clone it using some kind of\nspecial magic that your system has available instead of copying all\nthe bytes. And that makes it hard to move more of the responsibility\ninto PostgreSQL itself, because we don't know how that special magic\nworks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 17 Oct 2023 14:28:22 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving Physical Backup/Restore within the Low Level API" }, { "msg_contents": "On 10/17/23 14:28, Robert Haas wrote:\n> On Mon, Oct 16, 2023 at 5:21 PM David G. Johnston\n> <[email protected]> wrote:\n>> But no, by default, and probably so far as pg_basebackup is concerned, a server crash during backup results in requiring outside intervention in order to get the server to restart.\n> \n> Others may differ, but I think such a proposal is dead on arrival. As\n> Laurenz says, that's just reinventing one of the main problems with\n> exclusive backup mode.\n\nI concur -- this proposal resurrects the issues we had with exclusive \nbackups without solving the issues being debated elsewhere, e.g. torn \nreads of pg_control or users removing backup_label when they should not.\n\nRegards,\n-David\n\n\n", "msg_date": "Tue, 17 Oct 2023 15:30:10 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving Physical Backup/Restore within the Low Level API" }, { "msg_contents": "On Tue, Oct 17, 2023 at 12:30 PM David Steele <[email protected]> wrote:\n\n> On 10/17/23 14:28, Robert Haas wrote:\n> > On Mon, Oct 16, 2023 at 5:21 PM David G. Johnston\n> > <[email protected]> wrote:\n> >> But no, by default, and probably so far as pg_basebackup is concerned,\n> a server crash during backup results in requiring outside intervention in\n> order to get the server to restart.\n> >\n> > Others may differ, but I think such a proposal is dead on arrival. As\n> > Laurenz says, that's just reinventing one of the main problems with\n> > exclusive backup mode.\n>\n> I concur -- this proposal resurrects the issues we had with exclusive\n> backups without solving the issues being debated elsewhere, e.g. torn\n> reads of pg_control or users removing backup_label when they should not.\n>\n>\nThank you all for the feedback.\n\nAdmittedly I don't understand the problem of torn reads well enough to\nsolve it here but I figured by moving the \"must not remove\" stuff out of\nbackup_label and into pg_control the odds of it being removed from the\nbackup and the backup still booting basically go to zero. I do agree that\nrenaming backup_label to something like \"recovery_stuff_do_not_delete.conf\"\nprobably does that just as well without the downside.\n\nPlacing a copy of all relevant files into pg_backup_metadata seems like a\ndecent shield against accidents and a way to reliably self-document the\nbackup even if the behavioral changes are not desired. 
Though doing that\nand handling multiple concurrent backups probably makes the cost too high\nto move away from relying just on documentation.\n\nI suppose I'd consider having to add one file to the data directory to be\nan improvement over having to remove two of them - in terms of what it\ntakes to recover from system failure during a backup.\n\nDavid J", "msg_date": "Tue, 17 Oct 2023 13:05:39 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving Physical Backup/Restore within the Low Level API" } ]
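For orientation, the existing low-level backup workflow that the thread above proposes to harden looks roughly like this (pg_backup_start and pg_backup_stop are the PostgreSQL 15+ names; the pg_backup_metadata directory and crash.signal file discussed above are proposal-only and do not exist in any released server):

-- the session must stay connected for the duration of the file copy
select pg_backup_start(label => 'nightly', fast => false);

-- ... copy the data directory with an external tool ...

-- returns the backup_label and tablespace_map contents, which the
-- operator must currently write into the backup by hand -- the very
-- step the proposal tries to automate
select labelfile, spcmapfile from pg_backup_stop(wait_for_archive => true);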
[ { "msg_contents": "Currently we have this odd behavior (for a superuser):\n\nregression=# ALTER SYSTEM SET foo.bar TO 'baz';\nERROR: unrecognized configuration parameter \"foo.bar\"\nregression=# SET foo.bar TO 'baz';\nSET\nregression=# ALTER SYSTEM SET foo.bar TO 'baz';\nALTER SYSTEM\n\nThat is, you can't ALTER SYSTEM SET a random custom GUC unless there\nis already a placeholder GUC for it, because the find_option call in\nAlterSystemSetConfigFile fails. This is surely pretty inconsistent.\nEither the first ALTER SYSTEM SET ought to succeed, or the second one\nought to fail too, because we don't have any more knowledge about the\ncustom GUC than we did before.\n\nIn the original discussion about this [1], I initially leaned towards\n\"they should both fail\", but I reconsidered: there doesn't seem to be\nany harm in allowing ALTER SYSTEM SET to succeed for any custom GUC\nname, as long as you're superuser.\n\nHence, attached is a patch for that. Much of it is refactoring to\navoid duplicating the code that checks for a reserved GUC name, which\nI think should still be done here --- otherwise, we're losing a lot of\nthe typo detection that that check was intended to provide. (That is,\nif you have loaded an extension that defines \"foo\" as a prefix, we\nshould honor the extension's opinion about whether \"foo.bar\" is\nvalid.) I also fixed the code for GRANT ON PARAMETER so that it\nfollows the same rules and throws the same errors for invalid cases.\n\nThere's a chunk of AlterSystemSetConfigFile that now needs indenting\none more tab stop, but I didn't do that yet for ease of review.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/169746329791.169914.16613647309012285391%40wrigleys.postgresql.org", "msg_date": "Mon, 16 Oct 2023 20:19:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Allow ALTER SYSTEM SET on unrecognized custom GUCs" }, { "msg_contents": "On 17/10/2023 07:19, Tom Lane wrote:\n> Currently we have this odd behavior (for a superuser):\n> \n> regression=# ALTER SYSTEM SET foo.bar TO 'baz';\n> ERROR: unrecognized configuration parameter \"foo.bar\"\n> regression=# SET foo.bar TO 'baz';\n> SET\n> regression=# ALTER SYSTEM SET foo.bar TO 'baz';\n> ALTER SYSTEM\n> \n> That is, you can't ALTER SYSTEM SET a random custom GUC unless there\n> is already a placeholder GUC for it, because the find_option call in\n> AlterSystemSetConfigFile fails. This is surely pretty inconsistent.\n> Either the first ALTER SYSTEM SET ought to succeed, or the second one\n> ought to fail too, because we don't have any more knowledge about the\n> custom GUC than we did before.\n> \n> In the original discussion about this [1], I initially leaned towards\n> \"they should both fail\", but I reconsidered: there doesn't seem to be\n> any harm in allowing ALTER SYSTEM SET to succeed for any custom GUC\n> name, as long as you're superuser.\n> \n> Hence, attached is a patch for that. Much of it is refactoring to\n> avoid duplicating the code that checks for a reserved GUC name, which\n> I think should still be done here --- otherwise, we're losing a lot of\n> the typo detection that that check was intended to provide. (That is,\n> if you have loaded an extension that defines \"foo\" as a prefix, we\n> should honor the extension's opinion about whether \"foo.bar\" is\n> valid.) 
I also fixed the code for GRANT ON PARAMETER so that it\n> follows the same rules and throws the same errors for invalid cases.\n> \n> There's a chunk of AlterSystemSetConfigFile that now needs indenting\n> one more tab stop, but I didn't do that yet for ease of review.\n> \n> Thoughts?\n\nI have reviewed this patch. It looks good in general. Now, we can change \nthe placeholder value with the SET command and have one more tool (which \nmay be unusual) to pass some data through the session.\nKeeping away from the reason why DBMS allows such behaviour, I have one \nquestion:\n\"SET foo.bar TO 'smth'\" can immediately alter the placeholder's value. \nBut what is the reason that \"ALTER SYSTEM SET foo.bar TO 'smth'\" doesn't \ndo the same?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Wed, 18 Oct 2023 11:55:48 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow ALTER SYSTEM SET on unrecognized custom GUCs" }, { "msg_contents": "Andrei Lepikhov <[email protected]> writes:\n> \"SET foo.bar TO 'smth'\" can immediately alter the placeholder's value. \n> But what is the reason that \"ALTER SYSTEM SET foo.bar TO 'smth'\" doesn't \n> do the same?\n\nBecause it's not supposed to take effect until you issue a reload\ncommand (and maybe not even then, depending on which GUC we're\ntalking about). I certainly think it wouldn't make sense for your\nown session to adopt the value ahead of others.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 18 Oct 2023 01:15:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allow ALTER SYSTEM SET on unrecognized custom GUCs" }, { "msg_contents": "On 18/10/2023 12:15, Tom Lane wrote:\n> Andrei Lepikhov <[email protected]> writes:\n>> \"SET foo.bar TO 'smth'\" can immediately alter the placeholder's value.\n>> But what is the reason that \"ALTER SYSTEM SET foo.bar TO 'smth'\" doesn't\n>> do the same?\n> \n> Because it's not supposed to take effect until you issue a reload\n> command (and maybe not even then, depending on which GUC we're\n> talking about). I certainly think it wouldn't make sense for your\n> own session to adopt the value ahead of others.\n\nThanks for the answer.\nIntroducing the assignable_custom_variable_name can be helpful. The code \nlooks good. I think it deserves to be committed - after the indentation \nfix, of course.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Wed, 18 Oct 2023 12:55:52 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow ALTER SYSTEM SET on unrecognized custom GUCs" }, { "msg_contents": "I do like the idea that we should keep the set and the altar system with\nthe same behavior. But one thing I am worried about is the typo detected\nhere because I usually make that type of mistake myself. 
I believe we\nshould have an extra log to explicitly tell the user this is a `custom\nvariable` guc.\n\nBtw, another aspect I want to better understand is if the superuser session\ncalled pg_reload_conf with custom variables, does that mean these custom\nvariables will override the other active transaction's SET command?\n\nThanks,\nShihao\n\nOn Wed, Oct 18, 2023 at 1:59 AM Andrei Lepikhov <[email protected]>\nwrote:\n\n> On 18/10/2023 12:15, Tom Lane wrote:\n> > Andrei Lepikhov <[email protected]> writes:\n> >> \"SET foo.bar TO 'smth'\" can immediately alter the placeholder's value.\n> >> But what is the reason that \"ALTER SYSTEM SET foo.bar TO 'smth'\" doesn't\n> >> do the same?\n> >\n> > Because it's not supposed to take effect until you issue a reload\n> > command (and maybe not even then, depending on which GUC we're\n> > talking about). I certainly think it wouldn't make sense for your\n> > own session to adopt the value ahead of others.\n>\n> Thanks for the answer.\n> Introducing the assignable_custom_variable_name can be helpful. The code\n> looks good. I think it deserves to be committed - after the indentation\n> fix, of course.\n>\n> --\n> regards,\n> Andrey Lepikhov\n> Postgres Professional\n>\n>\n>\n>\n\nI do like the idea that we should keep the set and the altar system with the same behavior. But one thing I am worried about is the typo detected here because I usually make that type of mistake myself. I believe we should have an extra log to explicitly tell the user this is a `custom variable` guc.Btw, another aspect I want to better understand is if the superuser session called pg_reload_conf with custom variables, does that mean these custom variables will override the other active transaction's SET command?Thanks,ShihaoOn Wed, Oct 18, 2023 at 1:59 AM Andrei Lepikhov <[email protected]> wrote:On 18/10/2023 12:15, Tom Lane wrote:\n> Andrei Lepikhov <[email protected]> writes:\n>> \"SET foo.bar TO 'smth'\" can immediately alter the placeholder's value.\n>> But what is the reason that \"ALTER SYSTEM SET foo.bar TO 'smth'\" doesn't\n>> do the same?\n> \n> Because it's not supposed to take effect until you issue a reload\n> command (and maybe not even then, depending on which GUC we're\n> talking about).  I certainly think it wouldn't make sense for your\n> own session to adopt the value ahead of others.\n\nThanks for the answer.\nIntroducing the assignable_custom_variable_name can be helpful. The code \nlooks good. I think it deserves to be committed - after the indentation \nfix, of course.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional", "msg_date": "Thu, 19 Oct 2023 09:58:05 -0400", "msg_from": "shihao zhong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow ALTER SYSTEM SET on unrecognized custom GUCs" }, { "msg_contents": "shihao zhong <[email protected]> writes:\n> I do like the idea that we should keep the set and the altar system with\n> the same behavior. But one thing I am worried about is the typo detected\n> here because I usually make that type of mistake myself. I believe we\n> should have an extra log to explicitly tell the user this is a `custom\n> variable` guc.\n\nI don't think there's any chance of getting away with that. As noted\nupthread, a lot of people use placeholder GUCs as a substitute for a\nproper session-variable feature. 
If we ever get real session variables,\nwe could start to nudge people away from using placeholders; but right\nnow too many people would complain about the noise of a warning.\n\n> Btw, another aspect I want to better understand is if the superuser session\n> called pg_reload_conf with custom variables, does that mean these custom\n> variables will override the other active transaction's SET command?\n\nNo, a per-session SET will override a value coming from the config file.\nThat's independent of whether it's a regular or custom GUC.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Oct 2023 12:00:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allow ALTER SYSTEM SET on unrecognized custom GUCs" }, { "msg_contents": "Thanks for the answer. The code looks good to me.\n\nThanks,\nShihao\n\nOn Thu, Oct 19, 2023 at 12:00 PM Tom Lane <[email protected]> wrote:\n\n> shihao zhong <[email protected]> writes:\n> > I do like the idea that we should keep the set and the altar system with\n> > the same behavior. But one thing I am worried about is the typo detected\n> > here because I usually make that type of mistake myself. I believe we\n> > should have an extra log to explicitly tell the user this is a `custom\n> > variable` guc.\n>\n> I don't think there's any chance of getting away with that. As noted\n> upthread, a lot of people use placeholder GUCs as a substitute for a\n> proper session-variable feature. If we ever get real session variables,\n> we could start to nudge people away from using placeholders; but right\n> now too many people would complain about the noise of a warning.\n>\n> > Btw, another aspect I want to better understand is if the superuser\n> session\n> > called pg_reload_conf with custom variables, does that mean these custom\n> > variables will override the other active transaction's SET command?\n>\n> No, a per-session SET will override a value coming from the config file.\n> That's independent of whether it's a regular or custom GUC.\n>\n> regards, tom lane\n>\n\nThanks for the answer. The code looks good to me.Thanks,ShihaoOn Thu, Oct 19, 2023 at 12:00 PM Tom Lane <[email protected]> wrote:shihao zhong <[email protected]> writes:\n> I do like the idea that we should keep the set and the altar system with\n> the same behavior. But one thing I am worried about is the typo detected\n> here because I usually make that type of mistake myself. I believe we\n> should have an extra log to explicitly tell the user this is a `custom\n> variable` guc.\n\nI don't think there's any chance of getting away with that.  As noted\nupthread, a lot of people use placeholder GUCs as a substitute for a\nproper session-variable feature.  
If we ever get real session variables,\nwe could start to nudge people away from using placeholders; but right\nnow too many people would complain about the noise of a warning.\n\n> Btw, another aspect I want to better understand is if the superuser session\n> called pg_reload_conf with custom variables, does that mean these custom\n> variables will override the other active transaction's SET command?\n\nNo, a per-session SET will override a value coming from the config file.\nThat's independent of whether it's a regular or custom GUC.\n\n                        regards, tom lane", "msg_date": "Thu, 19 Oct 2023 12:05:44 -0400", "msg_from": "shihao zhong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow ALTER SYSTEM SET on unrecognized custom GUCs" }, { "msg_contents": "> On 17 Oct 2023, at 05:19, Tom Lane <[email protected]> wrote:\n> \n> In the original discussion about this [1], I initially leaned towards\n> \"they should both fail\", but I reconsidered: there doesn't seem to be\n> any harm in allowing ALTER SYSTEM SET to succeed for any custom GUC\n> name, as long as you're superuser.\n\n+1 for allowing non-existent custom GUCs.\nFrom time to time we have to roll out custom binaries controlled by GUCs that do not exist in normal binaries. Juggling with postgresql.conf would be painful in this case.\n\n\nBest regards, Andrey Borodin.\nOn 17 Oct 2023, at 05:19, Tom Lane <[email protected]> wrote:In the original discussion about this [1], I initially leaned towards\"they should both fail\", but I reconsidered: there doesn't seem to beany harm in allowing ALTER SYSTEM SET to succeed for any custom GUCname, as long as you're superuser.+1 for allowing non-existent custom GUCs.From time to time we have to roll out custom binaries controlled by GUCs that do not exist in normal binaries. Juggling with postgresql.conf would be painful in this case.Best regards, Andrey Borodin.", "msg_date": "Thu, 19 Oct 2023 22:29:13 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow ALTER SYSTEM SET on unrecognized custom GUCs" }, { "msg_contents": "\nOn 2023-10-16 Mo 20:19, Tom Lane wrote:\n> Currently we have this odd behavior (for a superuser):\n>\n> regression=# ALTER SYSTEM SET foo.bar TO 'baz';\n> ERROR: unrecognized configuration parameter \"foo.bar\"\n> regression=# SET foo.bar TO 'baz';\n> SET\n> regression=# ALTER SYSTEM SET foo.bar TO 'baz';\n> ALTER SYSTEM\n>\n> That is, you can't ALTER SYSTEM SET a random custom GUC unless there\n> is already a placeholder GUC for it, because the find_option call in\n> AlterSystemSetConfigFile fails. This is surely pretty inconsistent.\n> Either the first ALTER SYSTEM SET ought to succeed, or the second one\n> ought to fail too, because we don't have any more knowledge about the\n> custom GUC than we did before.\n>\n> In the original discussion about this [1], I initially leaned towards\n> \"they should both fail\", but I reconsidered: there doesn't seem to be\n> any harm in allowing ALTER SYSTEM SET to succeed for any custom GUC\n> name, as long as you're superuser.\n>\n> Hence, attached is a patch for that. Much of it is refactoring to\n> avoid duplicating the code that checks for a reserved GUC name, which\n> I think should still be done here --- otherwise, we're losing a lot of\n> the typo detection that that check was intended to provide. 
(That is,\n> if you have loaded an extension that defines \"foo\" as a prefix, we\n> should honor the extension's opinion about whether \"foo.bar\" is\n> valid.) I also fixed the code for GRANT ON PARAMETER so that it\n> follows the same rules and throws the same errors for invalid cases.\n>\n> There's a chunk of AlterSystemSetConfigFile that now needs indenting\n> one more tab stop, but I didn't do that yet for ease of review.\n>\n> Thoughts?\n>\n> \t\t\t\n\n\nHaven't read the patch but in principle I agree.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 23 Oct 2023 10:19:54 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allow ALTER SYSTEM SET on unrecognized custom GUCs" } ]
[ { "msg_contents": "Hi Alvaro,\n\nProblem 1\n========\n#create table tpart (a serial primary key, src varchar) partition by range(a);\nCREATE TABLE\n#create table t_p4 (a int primary key, src varchar);\nCREATE TABLE\n#\\d tpart\n Partitioned table \"public.tpart\"\n Column | Type | Collation | Nullable |\nDefault\n--------+-------------------+-----------+----------+----------------------------------\n a | integer | | not null |\nnextval('tpart_a_seq'::regclass)\n src | character varying | | |\nPartition key: RANGE (a)\nIndexes:\n \"tpart_pkey\" PRIMARY KEY, btree (a)\nNumber of partitions: 0\n\n#\\d t_p4;\n Table \"public.t_p4\"\n Column | Type | Collation | Nullable | Default\n--------+-------------------+-----------+----------+---------\n a | integer | | not null |\n src | character varying | | |\nIndexes:\n \"t_p4_pkey\" PRIMARY KEY, btree (a)\n\nNotice that both tpart and t_p4 have their column 'a' marked NOT NULL resp.\n\n#select conname, contype, conrelid::regclass from pg_constraint where\nconrelid in ('tpart'::regclass, 't_p4'::regclass);\n conname | contype | conrelid\n------------------+---------+----------\n tpart_a_not_null | n | tpart\n tpart_pkey | p | tpart\n t_p4_pkey | p | t_p4\n(3 rows)\n\nBut tparts NOT NULL constraint is recorded in pg_constraint but not\nt_p4's. Is this expected?\n\nBoth of them have there column a marked not null in pg_attribute\n#select attrelid::regclass, attname, attnotnull from pg_attribute\nwhere attrelid in ('tpart'::regclass, 't_p4'::regclass) and attname =\n'a';\n attrelid | attname | attnotnull\n----------+---------+------------\n tpart | a | t\n t_p4 | a | t\n(2 rows)\n\n From the next set of commands it can be inferred that the NOT NULL\nconstraint of tpart came because of serial column whereas t_p4's\ncolumn a was marked NOT NULL because of primary key. I didn't\ninvestigate the source code.\n#create table t_serial(a serial, src varchar);\nCREATE TABLE\n#select conname, contype, conrelid::regclass from pg_constraint where\nconrelid in ('t_serial'::regclass);\n conname | contype | conrelid\n---------------------+---------+----------\n t_serial_a_not_null | n | t_serial\n(1 row)\n\n#select attrelid::regclass, attname, attnotnull from pg_attribute\nwhere attrelid in ('t_serial'::regclass) and attname = 'a';\n attrelid | attname | attnotnull\n----------+---------+------------\n t_serial | a | t\n(1 row)\n\nHere's what I was trying to do actually.\n#alter table tpart attach partition t_p4 for values from (7) to (9);\nERROR: column \"a\" in child table must be marked NOT NULL\nThis is a surprise since t_p4.a is marked as NOT NULL. That happens\nbecause MergeConstraintsIntoExisting() only looks at pg_constraint and\nnot pg_attribute. Should this function look at pg_attribute as well?\n\nThis behaviour is different from PG 14. I chanced to have a PG 14\nbuild and hence tried that. 
I haven't tried PG 15 though.\n#select version();\n version\n-------------------------------------------------------------------------------------------------------\n PostgreSQL 14.8 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n9.4.0-1ubuntu1~20.04.1) 9.4.0, 64-bit\n(1 row)\n\n#create table tpart (a serial primary key, src varchar) partition by range(a);\nCREATE TABLE\n#create table t_p4 (a int primary key, src varchar);\nCREATE TABLE\n#\\d tpart\n Partitioned table \"public.tpart\"\n Column | Type | Collation | Nullable |\nDefault\n--------+-------------------+-----------+----------+----------------------------------\n a | integer | | not null |\nnextval('tpart_a_seq'::regclass)\n src | character varying | | |\nPartition key: RANGE (a)\nIndexes:\n \"tpart_pkey\" PRIMARY KEY, btree (a)\nNumber of partitions: 0\n\n#\\d t_p4\n Table \"public.t_p4\"\n Column | Type | Collation | Nullable | Default\n--------+-------------------+-----------+----------+---------\n a | integer | | not null |\n src | character varying | | |\nIndexes:\n \"t_p4_pkey\" PRIMARY KEY, btree (a)\n\n#select conname, contype, conrelid::regclass from pg_constraint where\nconrelid in ('tpart'::regclass, 't_p4'::regclass);\n conname | contype | conrelid\n------------+---------+----------\n tpart_pkey | p | tpart\n t_p4_pkey | p | t_p4\n(2 rows)\n ^\n#select attrelid::regclass, attname, attnotnull from pg_attribute\nwhere attrelid in ('tpart'::regclass, 't_p4'::regclass) and attname =\n'a';\n attrelid | attname | attnotnull\n----------+---------+------------\n tpart | a | t\n t_p4 | a | t\n(2 rows)\n\n#alter table tpart attach partition t_p4 for values from (7) to (9);\nALTER TABLE\npostgres@1073836=#\\d tpart\n Partitioned table \"public.tpart\"\n Column | Type | Collation | Nullable |\nDefault\n--------+-------------------+-----------+----------+----------------------------------\n a | integer | | not null |\nnextval('tpart_a_seq'::regclass)\n src | character varying | | |\nPartition key: RANGE (a)\nIndexes:\n \"tpart_pkey\" PRIMARY KEY, btree (a)\nNumber of partitions: 1 (Use \\d+ to list them.)\n\n#\\d t_p4\n Table \"public.t_p4\"\n Column | Type | Collation | Nullable | Default\n--------+-------------------+-----------+----------+---------\n a | integer | | not null |\n src | character varying | | |\nPartition of: tpart FOR VALUES FROM (7) TO (9)\nIndexes:\n \"t_p4_pkey\" PRIMARY KEY, btree (a)\n\nNotice that ALTER TABLE succeeded and t_p4 was attached to tpart as a partition.\n\nIs this backward compatibility break intentional? I haven't followed\nNOT NULL constraint thread closely. I might have missed some\ndiscussion.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 17 Oct 2023 12:30:58 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "odd behaviour with serial, non null and partitioned table" }, { "msg_contents": "Hello,\n\nOn 2023-Oct-17, Ashutosh Bapat wrote:\n\n> Problem 1\n> ========\n> #create table tpart (a serial primary key, src varchar) partition by range(a);\n> CREATE TABLE\n> #create table t_p4 (a int primary key, src varchar);\n> CREATE TABLE\n\n> But tparts NOT NULL constraint is recorded in pg_constraint but not\n> t_p4's. Is this expected?\n\nYes. tpart gets it from SERIAL, which implicitly requires a NOT NULL\nmarker. 
If you just say PRIMARY KEY as you did for t_p4, the column\ngets marked attnotnull, but there's no explicit NOT NULL constraint.\n\n\n> Here's what I was trying to do actually.\n> #alter table tpart attach partition t_p4 for values from (7) to (9);\n> ERROR: column \"a\" in child table must be marked NOT NULL\n> This is a surprise since t_p4.a is marked as NOT NULL. That happens\n> because MergeConstraintsIntoExisting() only looks at pg_constraint and\n> not pg_attribute. Should this function look at pg_attribute as well?\n\nHmm ... well, not that way. Maybe attaching a partition should cause a\nNOT NULL constraint to spawn automatically (we do this in other cases).\nThere's no need to verify the existing rows for it, since attnotnull is\nalready checked; but it would mean that if you DETACH the partition, the\nconstraint would remain, so the table would dump slightly differently\nthan if you hadn't ATTACHed and DETACHed it. But that sounds OK to me.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nThou shalt check the array bounds of all strings (indeed, all arrays), for\nsurely where thou typest \"foo\" someone someday shall type\n\"supercalifragilisticexpialidocious\" (5th Commandment for C programmers)\n\n\n", "msg_date": "Tue, 17 Oct 2023 15:26:27 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: odd behaviour with serial, non null and partitioned table" } ]
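Putting Alvaro's explanation together with the original report, a minimal workaround on current master is to declare the NOT NULL constraint explicitly before attaching, so that a pg_constraint row exists for MergeConstraintsIntoExisting() to find (a sketch reusing the thread's tables; this is exactly what the follow-up thread below ends up doing):

-- t_p4.a already has attnotnull = true from PRIMARY KEY, but no constraint row
ALTER TABLE t_p4 ALTER COLUMN a SET NOT NULL;   -- records an explicit NOT NULL constraint
ALTER TABLE tpart ATTACH PARTITION t_p4 FOR VALUES FROM (7) TO (9);  -- now succeeds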
[ { "msg_contents": "Hi All,\n#create table tpart (a serial primary key, src varchar) partition by range(a);\nCREATE TABLE\n#create table t_p4 (a int primary key, src varchar);\nCREATE TABLE\nTo appease the gods of surprises I need to add a NOT NULL constraint. See [1].\n#alter table t_p4 alter column a set not null;\nALTER TABLE\n#alter table tpart attach partition t_p4 for values from (7) to (9);\nALTER TABLE\n#\\d t_p4\n Table \"public.t_p4\"\n Column | Type | Collation | Nullable | Default\n--------+-------------------+-----------+----------+---------\n a | integer | | not null |\n src | character varying | | |\nPartition of: tpart FOR VALUES FROM (7) TO (9)\nIndexes:\n \"t_p4_pkey\" PRIMARY KEY, btree (a)\n\nThe partition was attached but the gods of surprises forgot to set the\ndefault value for a, which gets set when we create a partition\ndirectly.\n#create table t_p3 partition of tpart for values from (5) to (7);\nCREATE TABLE\n#\\d t_p3\n Table \"public.t_p3\"\n Column | Type | Collation | Nullable |\nDefault\n--------+-------------------+-----------+----------+----------------------------------\n a | integer | | not null |\nnextval('tpart_a_seq'::regclass)\n src | character varying | | |\nPartition of: tpart FOR VALUES FROM (5) TO (7)\nIndexes:\n \"t_p3_pkey\" PRIMARY KEY, btree (a)\n\nGods of surprises have another similar gift.\n#create table t_p2(a serial primary key, src varchar);\nCREATE TABLE\n#alter table tpart attach partition t_p2 for values from (3) to (5);\nALTER TABLE\n#\\d t_p2\n Table \"public.t_p2\"\n Column | Type | Collation | Nullable |\nDefault\n--------+-------------------+-----------+----------+---------------------------------\n a | integer | | not null |\nnextval('t_p2_a_seq'::regclass)\n src | character varying | | |\nPartition of: tpart FOR VALUES FROM (3) TO (5)\nIndexes:\n \"t_p2_pkey\" PRIMARY KEY, btree (a)\nObserve that t_p2 uses a different sequence, not the sequence used by\nthe parttiioned table tpart.\n\nI think this behaviour is an unexpected result of using inheritance\nfor partitioning. Also partitions not getting default values from the\npartitioned table may be fine except in the case of serial columns.\nUnlike inheritance hierarchy, a partitioned table is expected to be a\nsingle table. Thus a serial column is expected to have monotonically\nincreasing values across the partitions. So partitions should use the\nsame sequence as the parent table. If the new partition being attached\nuses a different sequence than the partitioned table, we should\nprohibit it from being attached.\n\nThis raises the question of what should be the behaviour on detach\npartitions. I haven't studied the behaviour of inherited properties.\nBut it looks like the partition being detached should let go of the\ninherited properties and keep the non-inherited one (even those which\nwere retained after merging).\n\nI found this behaviour when experimenting with serial columns when\nreading [2]. 
The result of this discussion will have some impact on\nhow we deal with IDENTITY columns in partitioned tables.\n\n[1] https://www.postgresql.org/message-id/CAExHW5uRUtDfU0R8zXofQxCV3S1B%2BPa%2BX%2BNrpMwzKraLc25%3DEg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/70be435b-05db-06f2-7c01-9bb8ee2fccce%40enterprisedb.com\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 17 Oct 2023 12:55:46 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "serial and partitioned table" }, { "msg_contents": "On 17.10.23 09:25, Ashutosh Bapat wrote:\n> #create table tpart (a serial primary key, src varchar) partition by range(a);\n> CREATE TABLE\n> #create table t_p4 (a int primary key, src varchar);\n> CREATE TABLE\n> To appease the gods of surprises I need to add a NOT NULL constraint. See [1].\n> #alter table t_p4 alter column a set not null;\n> ALTER TABLE\n> #alter table tpart attach partition t_p4 for values from (7) to (9);\n> ALTER TABLE\n> #\\d t_p4\n> Table \"public.t_p4\"\n> Column | Type | Collation | Nullable | Default\n> --------+-------------------+-----------+----------+---------\n> a | integer | | not null |\n> src | character varying | | |\n> Partition of: tpart FOR VALUES FROM (7) TO (9)\n> Indexes:\n> \"t_p4_pkey\" PRIMARY KEY, btree (a)\n> \n> The partition was attached but the gods of surprises forgot to set the\n> default value for a, which gets set when we create a partition\n> directly.\n> #create table t_p3 partition of tpart for values from (5) to (7);\n> CREATE TABLE\n> #\\d t_p3\n> Table \"public.t_p3\"\n> Column | Type | Collation | Nullable |\n> Default\n> --------+-------------------+-----------+----------+----------------------------------\n> a | integer | | not null |\n> nextval('tpart_a_seq'::regclass)\n> src | character varying | | |\n> Partition of: tpart FOR VALUES FROM (5) TO (7)\n> Indexes:\n> \"t_p3_pkey\" PRIMARY KEY, btree (a)\n\nPartitions can have default values different from the parent table. So \nit would not be correct for the attach operation to just overwrite the \ndefaults on the table being attached. One might think, it should only \nadjust the default if no default was explicitly specified. But we don't \nhave a way to tell apart \"no default\" from \"null default was actually \nintended\".\n\nSo, while I agree that there is some potential for confusion here, I \nthink this might be intentional behavior. Or at least there is no \nbetter possible behavior.\n\n> \n> Gods of surprises have another similar gift.\n> #create table t_p2(a serial primary key, src varchar);\n> CREATE TABLE\n> #alter table tpart attach partition t_p2 for values from (3) to (5);\n> ALTER TABLE\n> #\\d t_p2\n> Table \"public.t_p2\"\n> Column | Type | Collation | Nullable |\n> Default\n> --------+-------------------+-----------+----------+---------------------------------\n> a | integer | | not null |\n> nextval('t_p2_a_seq'::regclass)\n> src | character varying | | |\n> Partition of: tpart FOR VALUES FROM (3) TO (5)\n> Indexes:\n> \"t_p2_pkey\" PRIMARY KEY, btree (a)\n> Observe that t_p2 uses a different sequence, not the sequence used by\n> the parttiioned table tpart.\n\nI think this is also correct if you consider the definition of serial as \na macro that creates a sequence. 
Of course, the behavior is silly, \nwhich is why we are plotting to get rid of the current definition of serial.\n\n\n", "msg_date": "Mon, 13 Nov 2023 11:09:28 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: serial and partitioned table" }, { "msg_contents": "On Mon, Nov 13, 2023 at 3:39 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 17.10.23 09:25, Ashutosh Bapat wrote:\n> > #create table tpart (a serial primary key, src varchar) partition by range(a);\n> > CREATE TABLE\n> > #create table t_p4 (a int primary key, src varchar);\n> > CREATE TABLE\n> > To appease the gods of surprises I need to add a NOT NULL constraint. See [1].\n> > #alter table t_p4 alter column a set not null;\n> > ALTER TABLE\n> > #alter table tpart attach partition t_p4 for values from (7) to (9);\n> > ALTER TABLE\n> > #\\d t_p4\n> > Table \"public.t_p4\"\n> > Column | Type | Collation | Nullable | Default\n> > --------+-------------------+-----------+----------+---------\n> > a | integer | | not null |\n> > src | character varying | | |\n> > Partition of: tpart FOR VALUES FROM (7) TO (9)\n> > Indexes:\n> > \"t_p4_pkey\" PRIMARY KEY, btree (a)\n> >\n> > The partition was attached but the gods of surprises forgot to set the\n> > default value for a, which gets set when we create a partition\n> > directly.\n> > #create table t_p3 partition of tpart for values from (5) to (7);\n> > CREATE TABLE\n> > #\\d t_p3\n> > Table \"public.t_p3\"\n> > Column | Type | Collation | Nullable |\n> > Default\n> > --------+-------------------+-----------+----------+----------------------------------\n> > a | integer | | not null |\n> > nextval('tpart_a_seq'::regclass)\n> > src | character varying | | |\n> > Partition of: tpart FOR VALUES FROM (5) TO (7)\n> > Indexes:\n> > \"t_p3_pkey\" PRIMARY KEY, btree (a)\n>\n> Partitions can have default values different from the parent table. So\n> it would not be correct for the attach operation to just overwrite the\n> defaults on the table being attached. One might think, it should only\n> adjust the default if no default was explicitly specified. But we don't\n> have a way to tell apart \"no default\" from \"null default was actually\n> intended\".\n>\n> So, while I agree that there is some potential for confusion here, I\n> think this might be intentional behavior. Or at least there is no\n> better possible behavior.\n\nOk.\n\n>\n> >\n> > Gods of surprises have another similar gift.\n> > #create table t_p2(a serial primary key, src varchar);\n> > CREATE TABLE\n> > #alter table tpart attach partition t_p2 for values from (3) to (5);\n> > ALTER TABLE\n> > #\\d t_p2\n> > Table \"public.t_p2\"\n> > Column | Type | Collation | Nullable |\n> > Default\n> > --------+-------------------+-----------+----------+---------------------------------\n> > a | integer | | not null |\n> > nextval('t_p2_a_seq'::regclass)\n> > src | character varying | | |\n> > Partition of: tpart FOR VALUES FROM (3) TO (5)\n> > Indexes:\n> > \"t_p2_pkey\" PRIMARY KEY, btree (a)\n> > Observe that t_p2 uses a different sequence, not the sequence used by\n> > the parttiioned table tpart.\n>\n> I think this is also correct if you consider the definition of serial as\n> a macro that creates a sequence. 
Of course, the behavior is silly,\n> which is why we are plotting to get rid of the current definition of serial.\n\nOk.\n\nIf we implement the identity behaviour as per the discussion in\npartitioning and identity thread [1], behaviour of serial column will\nbe different from the identity column. The behaviour of the identity\ncolumn would be saner, of course. When and if we redirect the serial\nto identity there will be some surprises because of these differences.\nI think, this is moving things in a better direction. We just need to\nacknowledge and agree on this.\n\n[1] https://www.postgresql.org/message-id/flat/8801cade-20d2-4c9c-a583-b3754beb9be3%40eisentraut.org#9ce279be53b86dd9ab5fce027c94687d\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 16 Nov 2023 16:40:48 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: serial and partitioned table" } ]
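Until the serial/identity behaviour is sorted out, a partition attached this way can be pointed at the parent's sequence by hand. This is only an illustrative sketch following the thread's examples, not something the participants endorsed:

-- make t_p2 draw values from the partitioned table's sequence
ALTER TABLE t_p2 ALTER COLUMN a SET DEFAULT nextval('tpart_a_seq'::regclass);

-- verify which sequence now feeds the column
SELECT column_default FROM information_schema.columns
WHERE table_name = 't_p2' AND column_name = 'a';

Note that t_p2_a_seq still exists and remains owned by t_p2.a, and values it has already handed out are not reconciled with tpart_a_seq, so uniqueness still rests on the primary key.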
[ { "msg_contents": "Hi, hackers! \n\nI've stumbled into an interesting problem. Currently, if Postgres has nothing to write, it would skip the checkpoint creation defined by the checkpoint timeout setting. However, we might face a temporary archiving problem (for example, some network issues) that might lead to a pile of wal files stuck in pg_wal. After this temporary issue has gone, we would still be unable to archive them since we effectively skip the checkpoint because we have nothing to write.\n\nThat might lead to a problem - suppose you've run out of disk space because of the temporary failure of the archiver. After this temporary failure has gone, Postgres would be unable to recover from it automatically and will require human attention to initiate a CHECKPOINT call.\n\nI suggest changing this behavior by trying to clean up the old WAL even if we skip the main checkpoint routine. I've attached the patch that does exactly that.\n\nWhat do you think?\n\nTo reproduce the issue, you might repeat the following steps:\n\n1. Init Postgres:\npg_ctl initdb -D /Users/usernamedt/test_archiver\n\n2. Add the archiver script to simulate failure:\n➜  ~ cat /Users/usernamedt/command.sh\n#!/bin/bash\n\nfalse\n\n3. Then alter the PostgreSQL conf:\n\narchive_mode = on\ncheckpoint_timeout = 30s\narchive_command = /Users/usernamedt/command.sh\nlog_min_messages = debug1\n\n4. Then start Postgres:\n/usr/local/pgsql/bin/pg_ctl -D /Users/usernamedt/test_archiver -l logfile start\n\n5. Insert some data:\npgbench -i -s 30 -d postgres\n\n6. Trigger checkpoint to flush all data:\npsql -c \"checkpoint;\"\n\n7. Alter the archiver script to simulate the end of archiver issues:\n➜  ~ cat /Users/usernamedt/command.sh\n#!/bin/bash\n\ntrue\n\n8. Check that the WAL files are actually archived but not removed:\n➜  ~ ls -lha /Users/usernamedt/test_archiver/pg_wal/archive_status | head\ntotal 0\ndrwx------@ 48 usernamedt  LD\\Domain Users   1.5K Oct 17 17:44 .\ndrwx------@ 50 usernamedt  LD\\Domain Users   1.6K Oct 17 17:43 ..\n-rw-------@  1 usernamedt  LD\\Domain Users     0B Oct 17 17:42 000000010000000000000040.done\n...\n-rw-------@  1 usernamedt  LD\\Domain Users     0B Oct 17 17:43 00000001000000000000006D.done\n\n2023-10-17 18:03:44.621 +04 [71737] DEBUG:  checkpoint skipped because system is idle\n\nThanks,\n\nDaniil Zakhlystov", "msg_date": "Tue, 17 Oct 2023 14:09:21 +0000", "msg_from": "\"Zakhlystov, Daniil (Nebius)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Force the old transactions logs cleanup even if checkpoint is skipped" }, { "msg_contents": "Hi,\n\nI went through the Cfbot and saw that some test are failing for it\n(link: https://cirrus-ci.com/task/4631357628874752):\n\ntest: postgresql:recovery / recovery/019_replslot_limit\n\n# test failed\n----------------------------------- stderr -----------------------------------\n# poll_query_until timed out executing this query:\n# SELECT '0/15000D8' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('standby_1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n#\n# with stderr:\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 29 just after 7.\n\nI tried to test it locally and this test is timing out in my local\nmachine as well.\n\nThanks\nShlok Kumar Kyal\n\n\n", "msg_date": "Thu, 2 Nov 2023 17:55:20 +0530", "msg_from": "Shlok Kyal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 
Force the old transactions logs cleanup even if checkpoint is\n skipped" }, { "msg_contents": "Hi!\n\nThanks for your review. I've actually missed the logic to retain some WALs required for replication slots or wal_keep_size setting. I've attached the updated version of the patch with an additional call to KeepLogSeg(). Now it passed all the tests, at least in my fork (https://cirrus-ci.com/build/4770244019879936).\n\n\nDaniil Zakhlystov\n\n________________________________________\nFrom: Shlok Kyal <[email protected]>\nSent: Thursday, November 2, 2023 1:25 PM\nTo: Zakhlystov, Daniil (Nebius)\nCc: [email protected]; [email protected]; Mokrushin, Mikhail (Nebius)\nSubject: Re: Force the old transactions logs cleanup even if checkpoint is skipped\n\nCAUTION: This email originated from outside mail organization. Do not click links or open attachments unless you recognize the sender.\n\nHi,\n\nI went through the Cfbot and saw that some test are failing for it\n(link: https://cirrus-ci.com/task/4631357628874752):\n\ntest: postgresql:recovery / recovery/019_replslot_limit\n\n# test failed\n----------------------------------- stderr -----------------------------------\n# poll_query_until timed out executing this query:\n# SELECT '0/15000D8' <= replay_lsn AND state = 'streaming'\n# FROM pg_catalog.pg_stat_replication\n# WHERE application_name IN ('standby_1', 'walreceiver')\n# expecting this output:\n# t\n# last actual query output:\n#\n# with stderr:\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 29 just after 7.\n\nI tried to test it locally and this test is timing out in my local\nmachine as well.\n\nThanks\nShlok Kumar Kyal", "msg_date": "Tue, 7 Nov 2023 09:43:46 +0000", "msg_from": "\"Zakhlystov, Daniil (Nebius)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Force the old transactions logs cleanup even if checkpoint is\n skipped" }, { "msg_contents": "On Tue, Oct 17, 2023 at 02:09:21PM +0000, Zakhlystov, Daniil (Nebius) wrote:\n> I've stumbled into an interesting problem. Currently, if Postgres\n> has nothing to write, it would skip the checkpoint creation defined\n> by the checkpoint timeout setting. However, we might face a\n> temporary archiving problem (for example, some network issues) that\n> might lead to a pile of wal files stuck in pg_wal. After this\n> temporary issue has gone, we would still be unable to archive them\n> since we effectively skip the checkpoint because we have nothing to\n> write.\n\nI am not sure to understand your last sentence here. Once the\narchiver is back up, you mean that the WAL segments that were not\npreviously archived still are still not archived? Or do you mean that\nbecause of a succession of checkpoint skipped we are just enable to\nremove them from pg_wal.\n\n> That might lead to a problem - suppose you've run out of disk space\n> because of the temporary failure of the archiver. After this\n> temporary failure has gone, Postgres would be unable to recover from\n> it automatically and will require human attention to initiate a\n> CHECKPOINT call.\n>\n> I suggest changing this behavior by trying to clean up the old WAL\n> even if we skip the main checkpoint routine. I've attached the patch\n> that does exactly that.\n> \n> What do you think?\n\nI am not convinced that this is worth the addition in the skipped\npath. 
If your system is idle and a set of checkpoints is skipped, the\ndata folder is not going to be under extra space pressure because of\ndatabase activity (okay, unlogged tables even if these would generate\nsome WAL for init pages), because there is nothing happening in it\nwith no \"important\" WAL generated. Note that the backend is very\nunlikely going to generate WAL only marked with XLOG_MARK_UNIMPORTANT.\n\nMore to the point: what's the origin of the disk space issues? System\nlogs, unlogged tables or something else? It is usually a good\npractice to move logs to a different partition. At the end, it sounds\nto me that removing segments more aggressively is just kicking the can\nelsewhere, without taking care of the origin of the disk issues.\n--\nMichael", "msg_date": "Wed, 8 Nov 2023 09:21:32 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Force the old transactions logs cleanup even if checkpoint is\n skipped" }, { "msg_contents": "Hi!\n\nThanks for your review.\n\n> I am not sure to understand your last sentence here. Once the\n> archiver is back up, you mean that the WAL segments that were not\n> previously archived still are still not archived? Or do you mean that\n> because of a succession of checkpoint skipped we are just enable to\n> remove them from pg_wal.\n\nYes, the latter is correct - we are unable to clean up the already archived WALs\ndue to the checkpoint being skipped. \n\n> I am not convinced that this is worth the addition in the skipped\n> path. If your system is idle and a set of checkpoints is skipped, the\n> data folder is not going to be under extra space pressure because of\n> database activity (okay, unlogged tables even if these would generate\n> some WAL for init pages), because there is nothing happening in it\n> with no \"important\" WAL generated. Note that the backend is very\n> unlikely going to generate WAL only marked with XLOG_MARK_UNIMPORTANT.\n\n> More to the point: what's the origin of the disk space issues? System\n> logs, unlogged tables or something else? It is usually a good\n> practice to move logs to a different partition. At the end, it sounds\n> to me that removing segments more aggressively is just kicking the can\n> elsewhere, without taking care of the origin of the disk issues.\n\nThis problem arises when disk space issues are caused by temporary failed archiving.\nAs a result, the pg_wal becomes filled with WALs. This situation\nleads to Postgres being unable to perform any write operations since there is no more\nfree disk space left. Usually, cloud providers switch the cluster to a Read-Only mode\nif there is less than 3-4% of the available disk space left, but this also does not resolve\nthis problem.\n\nThe actual problem is that after archiving starts working normally again, Postgres is\nunable to free the accumulated WAL and switch to Read-Write mode due to the\ncheckpoint being skipped, leading to a vicious circle. However, nothing prevents\nPostgres from exiting such situations on its own. 
This patch addresses this specific\nbehavior, enabling Postgres to resolve such situations autonomously.\n\nThank you,\n\nDaniil Zakhlystov\n\n\n", "msg_date": "Wed, 8 Nov 2023 12:44:09 +0000", "msg_from": "\"Zakhlystov, Daniil (Nebius)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Force the old transactions logs cleanup even if checkpoint is\n skipped" }, { "msg_contents": "On Wed, Nov 08, 2023 at 12:44:09PM +0000, Zakhlystov, Daniil (Nebius) wrote:\n>> I am not sure to understand your last sentence here. Once the\n>> archiver is back up, you mean that the WAL segments that were not\n>> previously archived still are still not archived? Or do you mean that\n>> because of a succession of checkpoint skipped we are just enable to\n>> remove them from pg_wal.\n> \n> Yes, the latter is correct - we are unable to clean up the already archived WALs\n> due to the checkpoint being skipped. \n\nYes, theoretically you could face this situation if you have an\nirregular WAL activity with cycles where nothing happens and an\narchive command that keeps failing while there is WAL generated, but\nworks while WAL is not generated.\n\n>> I am not convinced that this is worth the addition in the skipped\n>> path. If your system is idle and a set of checkpoints is skipped, the\n>> data folder is not going to be under extra space pressure because of\n>> database activity (okay, unlogged tables even if these would generate\n>> some WAL for init pages), because there is nothing happening in it\n>> with no \"important\" WAL generated. Note that the backend is very\n>> unlikely going to generate WAL only marked with XLOG_MARK_UNIMPORTANT.\n> \n>> More to the point: what's the origin of the disk space issues? System\n>> logs, unlogged tables or something else? It is usually a good\n>> practice to move logs to a different partition. At the end, it sounds\n>> to me that removing segments more aggressively is just kicking the can\n>> elsewhere, without taking care of the origin of the disk issues.\n> \n> This problem arises when disk space issues are caused by temporary failed archiving.\n> As a result, the pg_wal becomes filled with WALs. This situation\n> leads to Postgres being unable to perform any write operations since there is no more\n> free disk space left. Usually, cloud providers switch the cluster to a Read-Only mode\n> if there is less than 3-4% of the available disk space left, but this also does not resolve\n> this problem.\n\n> The actual problem is that after archiving starts working normally again, Postgres is\n> unable to free the accumulated WAL and switch to Read-Write mode due to the\n> checkpoint being skipped, leading to a vicious circle. However, nothing prevents\n> Postgres from exiting such situations on its own. This patch addresses this specific\n> behavior, enabling Postgres to resolve such situations autonomously.\n\nYep, but it does not really solve your disk space issues in a reliable\nway.\n\nI am not really convinced that this is worth complicating the skipped\npath for this goal. In my experience, I've seen complaints where WAL\narchiving bloat was coming from the archive command not able to keep\nup with the amount generated by the backend, particularly because the\ncommand invocation was taking longer than it takes to generate a new\nsegment. Even if there is a hole of activity in the server, if too\nmuch WAL has been generated it may not be enough to catch up depending\non the number of segments that need to be processed. 
Others are free\nto chime in with extra opinions, of course.\n\nWhile on it, I think that your patch would cause incorrect and early\nremoval of segments. It computes the name of the last segment to\nremove based on last_important_lsn, ignoring KeepLogSeg(), meaning\nthat it ignores any WAL retention required by replication slots or\nwal_keep_size. And this causes the calculation of an incorrect segno\nhorizon.\n--\nMichael", "msg_date": "Thu, 9 Nov 2023 09:30:09 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Force the old transactions logs cleanup even if checkpoint is\n skipped" }, { "msg_contents": "Hello,\n\n> On 9 Nov 2023, at 01:30, Michael Paquier <[email protected]> wrote:\n> \n> I am not really convinced that this is worth complicating the skipped\n> path for this goal. In my experience, I've seen complaints where WAL\n> archiving bloat was coming from the archive command not able to keep\n> up with the amount generated by the backend, particularly because the\n> command invocation was taking longer than it takes to generate a new\n> segment. Even if there is a hole of activity in the server, if too\n> much WAL has been generated it may not be enough to catch up depending\n> on the number of segments that need to be processed. Others are free\n> to chime in with extra opinions, of course.\n\nI agree that there might multiple reasons of pg_wal bloat. Please note that\nI am not addressing the WAL archiving issue at all. My proposal is to add a \nsmall improvement to the WAL cleanup routine for WALs that have been already\narchived successfully to free the disk space.\n\nYes, it might be not a common case, but a fairly realistic one. It occurred multiple times\nin our production when we had temporary issues with archiving. This small\ncomplication of the skipped path will help Postgres to return to a normal operational\nstate without any human operator / external control routine intervention.\n\n> On 9 Nov 2023, at 01:30, Michael Paquier <[email protected]> wrote:\n> \n> While on it, I think that your patch would cause incorrect and early\n> removal of segments. It computes the name of the last segment to\n> remove based on last_important_lsn, ignoring KeepLogSeg(), meaning\n> that it ignores any WAL retention required by replication slots or\n> wal_keep_size. And this causes the calculation of an incorrect segno\n> horizon.\n\nPlease check the latest patch version, I believe that it has been already fixed there.\n\nThanks,\n\nDaniil Zakhlystov\n\n\n\n", "msg_date": "Thu, 9 Nov 2023 11:50:10 +0000", "msg_from": "\"Zakhlystov, Daniil (Nebius)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Force the old transactions logs cleanup even if checkpoint is\n skipped" }, { "msg_contents": "Hi,\n\nOn 2023-11-09 11:50:10 +0000, Zakhlystov, Daniil (Nebius) wrote:\n> > On 9 Nov 2023, at 01:30, Michael Paquier <[email protected]> wrote:\n> >\n> > I am not really convinced that this is worth complicating the skipped\n> > path for this goal. In my experience, I've seen complaints where WAL\n> > archiving bloat was coming from the archive command not able to keep\n> > up with the amount generated by the backend, particularly because the\n> > command invocation was taking longer than it takes to generate a new\n> > segment. Even if there is a hole of activity in the server, if too\n> > much WAL has been generated it may not be enough to catch up depending\n> > on the number of segments that need to be processed. 
Others are free\n> > to chime in with extra opinions, of course.\n>\n> I agree that there might multiple reasons of pg_wal bloat. Please note that\n> I am not addressing the WAL archiving issue at all. My proposal is to add a\n> small improvement to the WAL cleanup routine for WALs that have been already\n> archived successfully to free the disk space.\n>\n> Yes, it might be not a common case, but a fairly realistic one. It occurred multiple times\n> in our production when we had temporary issues with archiving. This small\n> complication of the skipped path will help Postgres to return to a normal operational\n> state without any human operator / external control routine intervention.\n\nI agree that the scenario is worth addressing - it's quite a nasty situation.\n\nBut I'm not sure this is the way to address it. If a checkpoint does have to\nhappen, we might not get to the point of removing the old segments, because we\nmight fail to emit the WAL record due to running out of space. And if that\ndoesn't happen - do we really want to wait till a checkpoint finishes to free\nup space?\n\nWhat if we instead made archiver delete WAL files after archiving, if they're\nold enough? Some care would be needed to avoid checkpointer and archiver\ntrampling on each other, but that doesn't seem too hard.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Nov 2023 15:58:43 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Force the old transactions logs cleanup even if checkpoint is\n skipped" } ]
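For anyone wanting to observe the stuck state this thread describes, a rough SQL probe over the real catalog views looks like the following (the regexp matching plain segment file names is an assumption about the naming scheme):

-- has archiving recovered, and when did it last fail?
SELECT archived_count, failed_count, last_archived_wal, last_failed_wal
FROM pg_stat_archiver;

-- how much WAL is still sitting in pg_wal?
SELECT count(*) AS segments, pg_size_pretty(sum(size)) AS total
FROM pg_ls_waldir()
WHERE name ~ '^[0-9A-F]{24}$';

CHECKPOINT;  -- the manual intervention the proposed patch tries to make unnecessary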
[ { "msg_contents": "Hello.\n\nI've been unable to build PostgreSQL using Meson on Windows. As I'm\nunsure of the cause, I'm providing this as a report.\n\nIn brief, the ninja command fails with the following error message on\nmy Windows environment.\n\n>ninja -v\nninja: error: 'src/backend/postgres_lib.a.p/meson_pch-c.obj', needed by 'src/backend/postgres.exe', missing and no known rule to make it\n\nI'd like to note that I haven't made any modification to meson.build,\ntherefore, b_pch should be false. (I didn't confirm that because the\npython scripts are executables on my environment..) However,\nbuild.ninja actually contains several entries referencing\nmeson_pch-c.obj, despite lacking corresponding entries to build them:\n\n> build src/backend/postgres.exe | src/backend/postgres.pdb: c_LINKER_RSP src/backend/postgres.exe.p/win32ver.res src/backend/postgres_lib.a.p/meson_pch-c.obj ..\n\nAn excerpted output from the command \"ninja\".\n\n>ninja\nVersion: 1.2.1\n...\nBuild type: native build\nProject name: postgresql\nProject version: 17devel\nC compiler for the host machine: cl (msvc 19.34.31935 \"Microsoft(R) C/C++ Optimizing Compiler Version 19.34.31935 for x64\")\nC linker for the host machine: link link 14.34.31935.0\nHost machine cpu family: x86_64\nHost machine cpu: x86_64\n...\nProgram python found: YES (C:\\Users\\horiguti\\AppData\\Local\\Programs\\Python\\Python310\\python.EXE)\n...\nProgram C:\\Users\\horiguti\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\meson found: YES (C:\\Users\\horiguti\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\meson.exe)\n...\nFound ninja-1.11.1.git.kitware.jobserver-1 at C:\\Users\\horiguti\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\ninja.EXE\nCleaning... 0 files.\nninja: error: 'src/backend/postgres_lib.a.p/meson_pch-c.obj', needed by 'src/backend/postgres.exe', missing and no known rule to make it\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 18 Oct 2023 11:31:48 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "A trouble about meson on Windows" } ]
[ { "msg_contents": "Dear hackers,\n\nWhile discussing [1], I found that in tap tests, wal_level was set to logical for\nsubscribers too. The setting is not needed for subscriber side, and it may cause\nmisunderstanding for newcomers. Therefore, I wanted to propose the patch which\nremoves unnecessary \"allows_streaming => 'logical'\".\nI grepped with the string and checked the necessity of them one by one.\n\nHow do you think?\n\n[1]: https://commitfest.postgresql.org/45/4273/\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED", "msg_date": "Wed, 18 Oct 2023 02:59:52 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Remove wal_level settings for subscribers in tap tests" }, { "msg_contents": "On Wed, Oct 18, 2023 at 02:59:52AM +0000, Hayato Kuroda (Fujitsu) wrote:\n> While discussing [1], I found that in tap tests, wal_level was set to logical for\n> subscribers too. The setting is not needed for subscriber side, and it may cause\n> misunderstanding for newcomers. Therefore, I wanted to propose the patch which\n> removes unnecessary \"allows_streaming => 'logical'\".\n> I grepped with the string and checked the necessity of them one by one.\n> \n> How do you think?\n> \n> [1]: https://commitfest.postgresql.org/45/4273/\n\nHmm, okay. On top of your argument, this may be a good idea for a\ndifferent reason: it makes the tests a bit cheaper as \"logical\"\ngenerates a bit more WAL. Still the gain is marginal. \n--\nMichael", "msg_date": "Wed, 18 Oct 2023 15:39:16 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove wal_level settings for subscribers in tap tests" }, { "msg_contents": "On Wed, Oct 18, 2023 at 03:39:16PM +0900, Michael Paquier wrote:\n> Hmm, okay. On top of your argument, this may be a good idea for a\n> different reason: it makes the tests a bit cheaper as \"logical\"\n> generates a bit more WAL. Still the gain is marginal. \n\nAnd applied this one.\n--\nMichael", "msg_date": "Fri, 20 Oct 2023 10:12:12 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove wal_level settings for subscribers in tap tests" }, { "msg_contents": "Dear Michael,\n\nI found it was pushed. Thanks!\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Mon, 23 Oct 2023 02:11:16 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Remove wal_level settings for subscribers in tap tests" } ]
[ { "msg_contents": "Hi,\n\nThere is one hint message in 002_pg_upgrade.pl that is not consistent with the\ntesting purpose.\n\n# --check command works here, cleans up pg_upgrade_output.d.\ncommand_ok(\n\t[\n\t\t'pg_upgrade', '--no-sync', '-d', $oldnode->data_dir,\n...\nok(!-d $newnode->data_dir . \"/pg_upgrade_output.d\",\n-\t\"pg_upgrade_output.d/ not removed after pg_upgrade --check success\");\n+\t\"pg_upgrade_output.d/ removed after pg_upgrade --check success\");\n\nThe test is to confirm the output file has been removed for pg_upgrade --check while\nthe message here is not consistent. Attach a small patch to fix it.\n\nBest Regards,\nHou Zhijie", "msg_date": "Wed, 18 Oct 2023 07:27:45 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Fix one hint message in 002_pg_upgrade.pl" }, { "msg_contents": "On Wed, Oct 18, 2023 at 07:27:45AM +0000, Zhijie Hou (Fujitsu) wrote:\n> The test is to confirm the output file has been removed for pg_upgrade --check while\n> the message here is not consistent. Attach a small patch to fix it.\n\nIndeed, will fix. Thanks!\n--\nMichael", "msg_date": "Wed, 18 Oct 2023 17:02:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix one hint message in 002_pg_upgrade.pl" } ]
[ { "msg_contents": "jit: Support opaque pointers in LLVM 16.\n\nRemove use of LLVMGetElementType() and provide the type of all pointers\nto LLVMBuildXXX() functions when emitting IR, as required by modern LLVM\nversions[1].\n\n * For LLVM <= 14, we'll still use the old LLVMBuildXXX() functions.\n * For LLVM == 15, we'll continue to do the same, explicitly opting\n out of opaque pointer mode.\n * For LLVM >= 16, we'll use the new LLVMBuildXXX2() functions that take\n the extra type argument.\n\nThe difference is hidden behind some new IR emitting wrapper functions\nl_load(), l_gep(), l_call() etc. The change is mostly mechanical,\nexcept that at each site the correct type had to be provided.\n\nIn some places we needed to do some extra work to get functions types,\nincluding some new wrappers for C++ APIs that are not yet exposed by in\nLLVM's C API, and some new \"example\" functions in llvmjit_types.c\nbecause it's no longer possible to start from the function pointer type\nand ask for the function type.\n\nBack-patch to 12, because it's a little tricker in 11 and we agreed not\nto put the latest LLVM support into the upcoming final release of 11.\n\n[1] https://llvm.org/docs/OpaquePointers.html\n\nReviewed-by: Dmitry Dolgov <[email protected]>\nReviewed-by: Ronan Dunklau <[email protected]>\nReviewed-by: Andres Freund <[email protected]>\nDiscussion: https://postgr.es/m/CA%2BhUKGKNX_%3Df%2B1C4r06WETKTq0G4Z_7q4L4Fxn5WWpMycDj9Fw%40mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/37d5babb5cfa4c6795b3cb6de964ba019d3d60ab\n\nModified Files\n--------------\nsrc/backend/jit/llvm/llvmjit.c | 59 ++---\nsrc/backend/jit/llvm/llvmjit_deform.c | 119 +++++-----\nsrc/backend/jit/llvm/llvmjit_expr.c | 401 ++++++++++++++++++++--------------\nsrc/backend/jit/llvm/llvmjit_types.c | 39 +++-\nsrc/backend/jit/llvm/llvmjit_wrap.cpp | 12 +\nsrc/backend/jit/llvm/meson.build | 2 +-\nsrc/include/jit/llvmjit.h | 7 +\nsrc/include/jit/llvmjit_emit.h | 106 ++++++---\n8 files changed, 481 insertions(+), 264 deletions(-)", "msg_date": "Wed, 18 Oct 2023 10:33:25 +0000", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql: jit: Support opaque pointers in LLVM 16." }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> jit: Support opaque pointers in LLVM 16.\n\nI chanced to notice that the configure script (and meson too) is\nstill doing\n\n PGAC_PROG_VARCC_VARFLAGS_OPT(CLANG, BITCODE_CFLAGS, [-Xclang -no-opaque-pointers])\n PGAC_PROG_VARCXX_VARFLAGS_OPT(CLANGXX, BITCODE_CXXFLAGS, [-Xclang -no-opaque-pointers])\n\nShouldn't we remove that now?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Oct 2023 10:45:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: jit: Support opaque pointers in LLVM 16." 
}, { "msg_contents": "On 2023-Oct-18, Thomas Munro wrote:\n\n> jit: Support opaque pointers in LLVM 16.\n> \n> Remove use of LLVMGetElementType() and provide the type of all pointers\n> to LLVMBuildXXX() functions when emitting IR, as required by modern LLVM\n> versions[1].\n> \n> * For LLVM <= 14, we'll still use the old LLVMBuildXXX() functions.\n\nI have LLVM 14 (whatever Debian ships[*]), and running headerscheck results\nin a bunch of warnings from this:\n\nIn file included from /tmp/headerscheck.s89Gdv/test.c:2:\n/pgsql/source/master/src/include/jit/llvmjit_emit.h: In function ‘l_call’:\n/pgsql/source/master/src/include/jit/llvmjit_emit.h:141:9: warning: ‘LLVMBuildCall’ is deprecated [-Wdeprecated-declarations]\n 141 | return LLVMBuildCall(b, fn, args, nargs, name);\n | ^~~~~~\nIn file included from /usr/include/llvm-c/Core.h:18,\n from /pgsql/source/master/src/include/jit/llvmjit_emit.h:18:\n/usr/include/llvm-c/Core.h:3991:1: note: declared here\n 3991 | LLVM_ATTRIBUTE_C_DEPRECATED(\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\nThese warnings go away if I change the conditional from\nLLVM_VERSION_MAJOR < 16 to 14.\n\nLet's ... do that? As in the attached patch.\n\nIn 13, there's a comment about it being deprecated, but no macro to make\nthe compiler whine:\nhttps://github.com/hdoc/llvm-project/blob/release/13.x/llvm/include/llvm-c/Core.h#L3953\n\nThis changed in 14:\nhttps://github.com/hdoc/llvm-project/blob/release/14.x/llvm/include/llvm-c/Core.h#L3898\n\n\n[*] apt policy llvm:\nllvm:\n Installed: 1:14.0-55.7~deb12u1\n Candidate: 1:14.0-55.7~deb12u1\n Version table:\n *** 1:14.0-55.7~deb12u1 500\n 500 http://ftp.de.debian.org/debian bookworm/main amd64 Packages\n 100 /var/lib/dpkg/status\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/", "msg_date": "Mon, 6 Nov 2023 19:11:54 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: jit: Support opaque pointers in LLVM 16." } ]
[ { "msg_contents": "Hi hackers,\n\nAfter committing the on-login trigger\n(e83d1b0c40ccda8955f1245087f0697652c4df86) the event_trigger regress test\nbecame sensible to any other parallel tests, not only DDL. Thus it should\nbe placed in a separate parallel schedule group.\n\nThe current problem is that a race condition may occur on some systems,\nwhen oidjoins test starts a moment later than normally and affects logins\ncount for on-login trigger test. The problem is quite a rare one and I only\nfaced it once. But rare or not - the problem is a problem and it should be\naddressed.\n\nSuch race condition can be simulated by adding \"select pg_sleep(2);\" and\n\"\\c\" at the very beginning of oidjoins.sql and adding \"select pg_sleep(5);\"\nafter creation of the login trigger in event_trigger.sql.\nThe resulting symptoms are quite recognizable: regression.diffs file will\ncontain unexpected welcome message for oidjoins test and unexpectedly\nincreased result of \"SELECT COUNT(*) FROM user_logins;\" for event_triggers\ntest. (These are accompanied with the expected responses to the newly added\ncommands of course)\n\nTo get rid of the unexpected results the oidjoins and event_triggers tests\nshould be splitted into separate paralell schedule groups. This is exactly\nwhat the proposed (attached) patch is doing.\n\nWhat do you think?\n--\n best regards,\n Mikhail A. Gribkov\n\ne-mail: [email protected]", "msg_date": "Wed, 18 Oct 2023 22:36:58 +0300", "msg_from": "Mikhail Gribkov <[email protected]>", "msg_from_op": true, "msg_subject": "Avoid race condition for event_triggers regress test" }, { "msg_contents": "Hi,\n\n> The current problem is that a race condition may occur on some systems, when oidjoins test starts a moment later than normally and affects logins count for on-login trigger test. The problem is quite a rare one and I only faced it once. But rare or not - the problem is a problem and it should be addressed.\n\nThanks for the patch and the steps to reproduce.\n\nI tested the patch and it does what is claimed. Including the steps to\nreproduce as a separate patch with .txt extension so cfbot will ignore\nit.\n\nI think it's a good find and a good fix.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 19 Oct 2023 17:01:28 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoid race condition for event_triggers regress test" } ]
[ { "msg_contents": "Hi.\n\nI happened upon a function comment referring to non-existent code\n(that code was moved to another location many years ago).\n\nProbably better to move that comment too. Thoughts?\n\nPSA.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 19 Oct 2023 13:30:59 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "boolin comment not moved when code was refactored" }, { "msg_contents": "On Thu, Oct 19, 2023 at 10:35 AM Peter Smith <[email protected]> wrote:\n\n> Hi.\n>\n> I happened upon a function comment referring to non-existent code\n> (that code was moved to another location many years ago).\n>\n> Probably better to move that comment too. Thoughts?\n\n\nAgreed. +1 to move that comment.\n\nThanks\nRichard\n\nOn Thu, Oct 19, 2023 at 10:35 AM Peter Smith <[email protected]> wrote:Hi.\n\nI happened upon a function comment referring to non-existent code\n(that code was moved to another location many years ago).\n\nProbably better to move that comment too. Thoughts?Agreed. +1 to move that comment.ThanksRichard", "msg_date": "Thu, 19 Oct 2023 10:57:57 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: boolin comment not moved when code was refactored" }, { "msg_contents": "Richard Guo <[email protected]> writes:\n> On Thu, Oct 19, 2023 at 10:35 AM Peter Smith <[email protected]> wrote:\n>> I happened upon a function comment referring to non-existent code\n>> (that code was moved to another location many years ago).\n>> \n>> Probably better to move that comment too. Thoughts?\n\n> Agreed. +1 to move that comment.\n\nHm, I'm inclined to think that the comment lines just above:\n\n * boolin - converts \"t\" or \"f\" to 1 or 0\n *\n * Check explicitly for \"true/false\" and TRUE/FALSE, 1/0, YES/NO, ON/OFF.\n * Reject other values.\n\nare also well past their sell-by date. The one-line summary\n\"converts \"t\" or \"f\" to 1 or 0\" is not remotely accurate anymore.\nPerhaps we should just drop it? Or else reword to something\nvaguer, like \"input function for boolean\". The \"Check explicitly\"\npara no longer describes logic in this function. We could move\nit to parse_bool_with_len, but that seems to have a suitable\ncomment already.\n\nIn short, maybe the whole comment should just be\n\n/*\n *\tboolin - input function for type boolean\n */\n\nAgreed with your original point, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 18 Oct 2023 23:55:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: boolin comment not moved when code was refactored" }, { "msg_contents": "On Thu, Oct 19, 2023 at 2:55 PM Tom Lane <[email protected]> wrote:\n>\n> Richard Guo <[email protected]> writes:\n> > On Thu, Oct 19, 2023 at 10:35 AM Peter Smith <[email protected]> wrote:\n> >> I happened upon a function comment referring to non-existent code\n> >> (that code was moved to another location many years ago).\n> >>\n> >> Probably better to move that comment too. Thoughts?\n>\n> > Agreed. +1 to move that comment.\n>\n> Hm, I'm inclined to think that the comment lines just above:\n>\n> * boolin - converts \"t\" or \"f\" to 1 or 0\n> *\n> * Check explicitly for \"true/false\" and TRUE/FALSE, 1/0, YES/NO, ON/OFF.\n> * Reject other values.\n>\n> are also well past their sell-by date. The one-line summary\n> \"converts \"t\" or \"f\" to 1 or 0\" is not remotely accurate anymore.\n> Perhaps we should just drop it? 
Or else reword to something\n> vaguer, like \"input function for boolean\". The \"Check explicitly\"\n> para no longer describes logic in this function. We could move\n> it to parse_bool_with_len, but that seems to have a suitable\n> comment already.\n>\n\nYes, I had the same thought about the rest of the comment being\noutdated but just wanted to test the water to see if a small change\nwas accepted before I did too much.\n\n> In short, maybe the whole comment should just be\n>\n> /*\n> * boolin - input function for type boolean\n> */\n>\n\nHow about \"boolin - converts a boolean string value to 1 or 0\"\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Thu, 19 Oct 2023 15:17:33 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: boolin comment not moved when code was refactored" }, { "msg_contents": "On 10/19/23 06:17, Peter Smith wrote:\n>> In short, maybe the whole comment should just be\n>>\n>> /*\n>> * boolin - input function for type boolean\n>> */\n>>\n> How about \"boolin - converts a boolean string value to 1 or 0\"\n\n\nPersonally, I do not like exposing the implementation of a boolean (it \nis a base type that is not a numeric), so I prefer Tom's suggestion.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Thu, 19 Oct 2023 06:26:54 +0200", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: boolin comment not moved when code was refactored" }, { "msg_contents": "On Thu, Oct 19, 2023 at 3:26 PM Vik Fearing <[email protected]> wrote:\n>\n> On 10/19/23 06:17, Peter Smith wrote:\n> >> In short, maybe the whole comment should just be\n> >>\n> >> /*\n> >> * boolin - input function for type boolean\n> >> */\n> >>\n> > How about \"boolin - converts a boolean string value to 1 or 0\"\n>\n>\n> Personally, I do not like exposing the implementation of a boolean (it\n> is a base type that is not a numeric), so I prefer Tom's suggestion.\n\nOK. Done that way.\n\nPSA v2.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 19 Oct 2023 16:17:43 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: boolin comment not moved when code was refactored" }, { "msg_contents": "Peter Smith <[email protected]> writes:\n> PSA v2.\n\nPushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Oct 2023 11:31:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: boolin comment not moved when code was refactored" }, { "msg_contents": "On Fri, Oct 20, 2023 at 2:31 AM Tom Lane <[email protected]> wrote:\n>\n> Peter Smith <[email protected]> writes:\n> > PSA v2.\n>\n> Pushed.\n>\n\nThanks for pushing.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 20 Oct 2023 09:08:15 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: boolin comment not moved when code was refactored" } ]
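For reference, the spread of spellings the boolean input function accepts today, which is why the old "converts \"t\" or \"f\"" summary had drifted so far (plain server behaviour on any recent release):

SELECT 'true'::boolean, 'YES'::boolean, 'on'::boolean, '1'::boolean;   -- all t
SELECT 'false'::boolean, 'No'::boolean, 'off'::boolean, '0'::boolean;  -- all f
SELECT '  t '::boolean;   -- surrounding whitespace is trimmed as well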
[ { "msg_contents": "Hi,\n\nI believed that spread (not fast) checkpoints are the default in\npg_basebackup, but noticed that --help does not specify which is which -\ncontrary to the reference documentation.\n\nSo I propose the small attached patch to clarify that.\n\n\nMichael", "msg_date": "Thu, 19 Oct 2023 11:39:32 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": true, "msg_subject": "[patch] pg_basebackup: mention that spread checkpoints are the\n default in --help" }, { "msg_contents": "Hi,\n\n> I believed that spread (not fast) checkpoints are the default in\n> pg_basebackup, but noticed that --help does not specify which is which -\n> contrary to the reference documentation.\n>\n> So I propose the small attached patch to clarify that.\n\nYou are right and I believe this is a good change.\n\nMaybe we should also display the defaults for -X,\n--manifest-checksums, etc for consistency.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 19 Oct 2023 16:21:19 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [patch] pg_basebackup: mention that spread checkpoints are the\n default in --help" }, { "msg_contents": "Hi,\n\nOn Thu, Oct 19, 2023 at 04:21:19PM +0300, Aleksander Alekseev wrote:\n> > I believed that spread (not fast) checkpoints are the default in\n> > pg_basebackup, but noticed that --help does not specify which is which -\n> > contrary to the reference documentation.\n> >\n> > So I propose the small attached patch to clarify that.\n> \n> You are right and I believe this is a good change.\n> \n> Maybe we should also display the defaults for -X,\n> --manifest-checksums, etc for consistency.\n\nHrm right, but those have multiple options and they do not enumerate\nthem in the help string as do -F and -c - not sure what general project\npolicy here is for mentioning defaults in --help, I will check some of\nthe other commands.\n\n\nMichael\n\n\n", "msg_date": "Thu, 19 Oct 2023 22:30:04 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [patch] pg_basebackup: mention that spread checkpoints are the\n default in --help" }, { "msg_contents": "On Thu, Oct 19, 2023 at 10:30:04PM +0200, Michael Banck wrote:\n> Hrm right, but those have multiple options and they do not enumerate\n> them in the help string as do -F and -c - not sure what general project\n> policy here is for mentioning defaults in --help, I will check some of\n> the other commands.\n\nThen comes the point that this bloats the --help output. A bunch of\nsystem commands I use on a daily-basis outside Postgres don't do that,\nso it's kind of hard to put a line on what's good or not in this area\nwhile we have the SGML and man pages to do the job, with always more\ndetails.\n--\nMichael", "msg_date": "Fri, 20 Oct 2023 08:29:53 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [patch] pg_basebackup: mention that spread checkpoints are the\n default in --help" }, { "msg_contents": "Hi,\n\n> On Thu, Oct 19, 2023 at 10:30:04PM +0200, Michael Banck wrote:\n> > Hrm right, but those have multiple options and they do not enumerate\n> > them in the help string as do -F and -c - not sure what general project\n> > policy here is for mentioning defaults in --help, I will check some of\n> > the other commands.\n>\n> Then comes the point that this bloats the --help output. 
A bunch of\n> system commands I use on a daily-basis outside Postgres don't do that,\n> so it's kind of hard to put a line on what's good or not in this area\n> while we have the SGML and man pages to do the job, with always more\n> details.\n\nRight. Then I suggest merging the patch as is.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 20 Oct 2023 12:03:05 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [patch] pg_basebackup: mention that spread checkpoints are the\n default in --help" }, { "msg_contents": "On 19.10.23 11:39, Michael Banck wrote:\n> Hi,\n> \n> I believed that spread (not fast) checkpoints are the default in\n> pg_basebackup, but noticed that --help does not specify which is which -\n> contrary to the reference documentation.\n> \n> So I propose the small attached patch to clarify that.\n\n > printf(_(\" -c, --checkpoint=fast|spread\\n\"\n >- \" set fast or spread \ncheckpointing\\n\"));\n >+ \" set fast or spread (default) \ncheckpointing\\n\"));\n\nCould we do like\n\n -c, --checkpoint=fast|spread\n set fast or spread checkpointing\n (default: spread)\n\nThis seems to be easier to read.\n\n\n\n", "msg_date": "Wed, 25 Oct 2023 16:36:41 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [patch] pg_basebackup: mention that spread checkpoints are the\n default in --help" }, { "msg_contents": "Hi,\n\nOn Wed, Oct 25, 2023 at 04:36:41PM +0200, Peter Eisentraut wrote:\n> On 19.10.23 11:39, Michael Banck wrote:\n> > Hi,\n> > \n> > I believed that spread (not fast) checkpoints are the default in\n> > pg_basebackup, but noticed that --help does not specify which is which -\n> > contrary to the reference documentation.\n> > \n> > So I propose the small attached patch to clarify that.\n> \n> > printf(_(\" -c, --checkpoint=fast|spread\\n\"\n> >- \" set fast or spread checkpointing\\n\"));\n> >+ \" set fast or spread (default)\n> checkpointing\\n\"));\n> \n> Could we do like\n> \n> -c, --checkpoint=fast|spread\n> set fast or spread checkpointing\n> (default: spread)\n> \n> This seems to be easier to read.\n\nYeah, we could do that. 
But then again the question pops up what to do\nabout the other option that mentions defaults (-F) and the others which\nhave a default but it is not spelt out yet (-X, -Z at least) (output is\nstill from v15, additional options have been added since):\n\n -F, --format=p|t output format (plain (default), tar)\n -X, --wal-method=none|fetch|stream\n include required WAL files with specified method\n -Z, --compress=0-9 compress tar output with given compression level\n\nSo, my personal opinion is that we should really document -c because it\nis quite user-noticable compared to the others.\n \nSo attached is a new version with just your proposed change for now.\n\n\nMichael", "msg_date": "Thu, 26 Oct 2023 10:27:51 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [patch] pg_basebackup: mention that spread checkpoints are the\n default in --help" }, { "msg_contents": "Hi,\n\nOn Thu, 26 Oct 2023 at 13:58, Michael Banck <[email protected]> wrote:\n>\n> Hi,\n>\n> On Wed, Oct 25, 2023 at 04:36:41PM +0200, Peter Eisentraut wrote:\n> > On 19.10.23 11:39, Michael Banck wrote:\n> > > Hi,\n> > >\n> > > I believed that spread (not fast) checkpoints are the default in\n> > > pg_basebackup, but noticed that --help does not specify which is which -\n> > > contrary to the reference documentation.\n> > >\n> > > So I propose the small attached patch to clarify that.\n> >\n> > > printf(_(\" -c, --checkpoint=fast|spread\\n\"\n> > >- \" set fast or spread checkpointing\\n\"));\n> > >+ \" set fast or spread (default)\n> > checkpointing\\n\"));\n> >\n> > Could we do like\n> >\n> > -c, --checkpoint=fast|spread\n> > set fast or spread checkpointing\n> > (default: spread)\n> >\n> > This seems to be easier to read.\n>\n> Yeah, we could do that. 
But then again the question pops up what to do\n> about the other option that mentions defaults (-F) and the others which\n> have a default but it is not spelt out yet (-X, -Z at least) (output is\n> still from v15, additional options have been added since):\n>\n> -F, --format=p|t output format (plain (default), tar)\n> -X, --wal-method=none|fetch|stream\n> include required WAL files with specified method\n> -Z, --compress=0-9 compress tar output with given compression level\n>\n> So, my personal opinion is that we should really document -c because it\n> is quite user-noticable compared to the others.\n>\n> So attached is a new version with just your proposed change for now.\n>\n>\n> Michael\n\nI went through the Cfbot for this patch and found out that the build\nis failing with the following error (Link:\nhttps://cirrus-ci.com/task/4648506929971200?logs=build#L1217):\n\n[08:34:47.625] FAILED: src/bin/pg_basebackup/pg_basebackup.p/pg_basebackup.c.o\n[08:34:47.625] ccache cc -Isrc/bin/pg_basebackup/pg_basebackup.p\n-Isrc/include -I../src/include -Isrc/interfaces/libpq\n-I../src/interfaces/libpq -Isrc/include/catalog -Isrc/include/nodes\n-Isrc/include/utils -fdiagnostics-color=always -pipe\n-D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -g -fno-strict-aliasing\n-fwrapv -fexcess-precision=standard -D_GNU_SOURCE -Wmissing-prototypes\n-Wpointer-arith -Werror=vla -Wendif-labels -Wmissing-format-attribute\n-Wimplicit-fallthrough=3 -Wcast-function-type\n-Wshadow=compatible-local -Wformat-security\n-Wdeclaration-after-statement -Wno-format-truncation\n-Wno-stringop-truncation -pthread -MD -MQ\nsrc/bin/pg_basebackup/pg_basebackup.p/pg_basebackup.c.o -MF\nsrc/bin/pg_basebackup/pg_basebackup.p/pg_basebackup.c.o.d -o\nsrc/bin/pg_basebackup/pg_basebackup.p/pg_basebackup.c.o -c\n../src/bin/pg_basebackup/pg_basebackup.c\n[08:34:47.625] ../src/bin/pg_basebackup/pg_basebackup.c: In function ‘usage’:\n[08:34:47.625] ../src/bin/pg_basebackup/pg_basebackup.c:411:5:\nwarning: statement with no effect [-Wunused-value]\n[08:34:47.625] 411 | \" (default: spread)\\n\"));\n[08:34:47.625] | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n[08:34:47.625] ../src/bin/pg_basebackup/pg_basebackup.c:411:51: error:\nexpected ‘;’ before ‘)’ token\n[08:34:47.625] 411 | \" (default: spread)\\n\"));\n[08:34:47.625] | ^\n[08:34:47.625] | ;\n[08:34:47.625] ../src/bin/pg_basebackup/pg_basebackup.c:411:51: error:\nexpected statement before ‘)’ token\n[08:34:47.625] ../src/bin/pg_basebackup/pg_basebackup.c:411:52: error:\nexpected statement before ‘)’ token\n[08:34:47.625] 411 | \" (default: spread)\\n\"));\n[08:34:47.625] | ^\n[08:34:47.629] [1210/1832] Compiling C object\nsrc/bin/pg_dump/libpgdump_common.a.p/parallel.c.o\n[08:34:47.639] [1211/1832] Compiling C object\nsrc/bin/pg_basebackup/pg_recvlogical.p/pg_recvlogical.c.o\n[08:34:47.641] [1212/1832] Linking static target\nsrc/bin/pg_basebackup/libpg_basebackup_common.a\n[08:34:47.658] [1213/1832] Compiling C object\nsrc/bin/pg_dump/libpgdump_common.a.p/compress_io.c.o\n[08:34:47.669] [1214/1832] Compiling C object\nsrc/bin/pg_dump/libpgdump_common.a.p/compress_lz4.c.o\n[08:34:47.678] [1215/1832] Compiling C object\nsrc/bin/pg_dump/libpgdump_common.a.p/compress_zstd.c.o\n[08:34:47.692] [1216/1832] Compiling C object\nsrc/bin/pg_dump/libpgdump_common.a.p/dumputils.c.o\n[08:34:47.692] ninja: build stopped: subcommand failed.\n\nI also see that patch is marked 'Ready for Committer' on commitfest.\n\nJust wanted to make sure, you are aware of this error.\n\nThanks,\nShlok Kumar 
Kyal\n\n\n", "msg_date": "Tue, 31 Oct 2023 16:59:24 +0530", "msg_from": "Shlok Kyal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [patch] pg_basebackup: mention that spread checkpoints are the\n default in --help" }, { "msg_contents": "Hi,\n\nOn Tue, Oct 31, 2023 at 04:59:24PM +0530, Shlok Kyal wrote:\n> I went through the Cfbot for this patch and found out that the build\n> is failing with the following error (Link:\n> https://cirrus-ci.com/task/4648506929971200?logs=build#L1217):\n\nOops, sorry. Attached is a working third version of this patch.\n\n\nMichael", "msg_date": "Tue, 31 Oct 2023 20:01:38 +0100", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [patch] pg_basebackup: mention that spread checkpoints are the\n default in --help" }, { "msg_contents": "On Tue, Oct 31, 2023 at 8:01 PM Michael Banck <[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, Oct 31, 2023 at 04:59:24PM +0530, Shlok Kyal wrote:\n> > I went through the Cfbot for this patch and found out that the build\n> > is failing with the following error (Link:\n> > https://cirrus-ci.com/task/4648506929971200?logs=build#L1217):\n>\n> Oops, sorry. Attached is a working third version of this patch.\n\nWhile I think Peters argument about one reading better than the other\none, that does also increase the \"help message bloat\" mentioned by\nMichael. So I think we're better off actually using the original\nversion, so I'm going to go ahead and push that one (and also to avoid\nendless bikeshedding)-\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 10 Jan 2024 13:33:25 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [patch] pg_basebackup: mention that spread checkpoints are the\n default in --help" } ]
[ { "msg_contents": "We removed support for the HP-UX OS in v16, but left in support\nfor the PA-RISC architecture, mainly because I thought that its\nspinlock mechanism is weird enough to be a good stress test\nfor our spinlock infrastructure. It still is that, but my\none remaining HPPA machine has gone to the great recycle heap\nin the sky. There seems little point in keeping around nominal\nsupport for an architecture that we can't test and no one is\nusing anymore.\n\nHence, the attached removes the remaining support for HPPA.\nAny objections?\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 19 Oct 2023 11:16:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Remove last traces of HPPA support" }, { "msg_contents": "On Thu, Oct 19, 2023 at 11:16:28AM -0400, Tom Lane wrote:\n> We removed support for the HP-UX OS in v16, but left in support\n> for the PA-RISC architecture, mainly because I thought that its\n> spinlock mechanism is weird enough to be a good stress test\n> for our spinlock infrastructure. It still is that, but my\n> one remaining HPPA machine has gone to the great recycle heap\n> in the sky. There seems little point in keeping around nominal\n> support for an architecture that we can't test and no one is\n> using anymore.\n\nLooks OK for the C parts.\n\n> Hence, the attached removes the remaining support for HPPA.\n> Any objections?\n\nWould a refresh of config/config.guess and config/config.sub be\nsuited? This stuff still has references to HPPA.\n--\nMichael", "msg_date": "Fri, 20 Oct 2023 08:37:09 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Thu, Oct 19, 2023 at 11:16:28AM -0400, Tom Lane wrote:\n>> Hence, the attached removes the remaining support for HPPA.\n>> Any objections?\n\n> Would a refresh of config/config.guess and config/config.sub be\n> suited? This stuff still has references to HPPA.\n\nAFAIK we just absorb those files verbatim from upstream. There is plenty\nof stuff in them for systems we don't support; it's not worth trying\nto clean that out.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Oct 2023 20:05:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "On Thu, Oct 19, 2023 at 11:16:28AM -0400, Tom Lane wrote:\n> We removed support for the HP-UX OS in v16, but left in support\n> for the PA-RISC architecture, mainly because I thought that its\n> spinlock mechanism is weird enough to be a good stress test\n> for our spinlock infrastructure. It still is that, but my\n> one remaining HPPA machine has gone to the great recycle heap\n> in the sky. There seems little point in keeping around nominal\n> support for an architecture that we can't test and no one is\n> using anymore.\n> \n> Hence, the attached removes the remaining support for HPPA.\n> Any objections?\n\nI wouldn't do this. NetBSD/hppa still claims to exist, as does the OpenBSD\nequivalent. I presume its pkgsrc compiles this code. 
The code is basically\nzero-maintenance, so there's not much to gain from deleting it preemptively.\n\n\n", "msg_date": "Thu, 19 Oct 2023 17:23:04 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> On Thu, Oct 19, 2023 at 11:16:28AM -0400, Tom Lane wrote:\n>> Hence, the attached removes the remaining support for HPPA.\n\n> I wouldn't do this. NetBSD/hppa still claims to exist, as does the OpenBSD\n> equivalent. I presume its pkgsrc compiles this code. The code is basically\n> zero-maintenance, so there's not much to gain from deleting it preemptively.\n\nI doubt it: I don't think anyone is routinely building very much of\npkgsrc for backwater hardware like HPPA, on either distro. It takes\ntoo much time (as cross-build doesn't work IME) and there are too few\npotential users. I certainly had to build all my own packages during\nmy experiments with running those systems on my machine.\n\nMoreover, if they are compiling it they aren't testing it.\nI filed a pile of bugs against NetBSD kernel and toolchains\non the way to getting the late lamented chickadee animal running.\nWhile it was pretty much working when I retired chickadee, it was\nobviously ground that nobody else had trodden in a long time.\n\nAs for OpenBSD, while I did have a working installation of 6.4\nat one time, I completely failed to get 7.1 running on that\nhardware. I think it's maintained only for very small values\nof \"maintained\".\n\nLastly, even when they're working those systems are about half\nthe speed of HP-UX on the same hardware; and even when using HP-UX\nthere is no HPPA hardware that's not insanely slow by modern\nstandards. I can't believe that anyone would want to run modern\nPG on that stack, and I don't believe that anyone but me has\ntried in a long time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Oct 2023 21:22:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "On Fri, Oct 20, 2023 at 4:21 AM Tom Lane <[email protected]> wrote:\n> We removed support for the HP-UX OS in v16, but left in support\n> for the PA-RISC architecture, mainly because I thought that its\n> spinlock mechanism is weird enough to be a good stress test\n> for our spinlock infrastructure. It still is that, but my\n> one remaining HPPA machine has gone to the great recycle heap\n> in the sky. There seems little point in keeping around nominal\n> support for an architecture that we can't test and no one is\n> using anymore.\n>\n> Hence, the attached removes the remaining support for HPPA.\n\n+1\n\n\n", "msg_date": "Fri, 20 Oct 2023 15:44:31 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "I wrote:\n> Noah Misch <[email protected]> writes:\n>> On Thu, Oct 19, 2023 at 11:16:28AM -0400, Tom Lane wrote:\n>>> Hence, the attached removes the remaining support for HPPA.\n\n>> I wouldn't do this. NetBSD/hppa still claims to exist, as does the OpenBSD\n>> equivalent. I presume its pkgsrc compiles this code. The code is basically\n>> zero-maintenance, so there's not much to gain from deleting it preemptively.\n\n> I doubt it: I don't think anyone is routinely building very much of\n> pkgsrc for backwater hardware like HPPA, on either distro.\n\nI dug a bit further on this point. 
The previous discussion about\nour policy for old-hardware support was here:\n\nhttps://www.postgresql.org/message-id/flat/959917.1657522169%40sss.pgh.pa.us#47f7af4817dc8dc0d8901d1ee965971e\n\nThe existence of a NetBSD/sh3el package for Postgres didn't stop\nus from dropping SuperH support. Moreover, the page showing the\nexistence of that package:\n\nhttps://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc/databases/postgresql14-server/index.html\n\nalso shows a build for VAX, which we know positively would not\nhave passed regression tests, so they certainly weren't testing\nthose builds. (And, to the point here, it does *not* show any\nbuild for hppa.)\n\nThe bottom line, though, is that IMV we agreed in that thread to a\npolicy that no architecture will be considered supported unless\nit has a representative in the buildfarm. We've since enforced\nthat policy in the case of loongarch64, so it seems established.\nWith my HPPA animal gone, and nobody very likely to step up with\na replacement, HPPA no longer meets that threshold requirement.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Oct 2023 15:31:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Hi,\n\nOn 2023-10-19 17:23:04 -0700, Noah Misch wrote:\n> On Thu, Oct 19, 2023 at 11:16:28AM -0400, Tom Lane wrote:\n> > We removed support for the HP-UX OS in v16, but left in support\n> > for the PA-RISC architecture, mainly because I thought that its\n> > spinlock mechanism is weird enough to be a good stress test\n> > for our spinlock infrastructure. It still is that, but my\n> > one remaining HPPA machine has gone to the great recycle heap\n> > in the sky. There seems little point in keeping around nominal\n> > support for an architecture that we can't test and no one is\n> > using anymore.\n> > \n> > Hence, the attached removes the remaining support for HPPA.\n> > Any objections?\n> \n> I wouldn't do this. NetBSD/hppa still claims to exist, as does the OpenBSD\n> equivalent. I presume its pkgsrc compiles this code. The code is basically\n> zero-maintenance, so there's not much to gain from deleting it preemptively.\n\nIn addition to the point Tom has made, I think it's also not correct that hppa\ndoesn't impose a burden: hppa is the only of our architectures that doesn't\nactually support atomic operations, requiring us to have infrastructure to\nbackfill atomics using spinlocks. This does preclude some uses of atomics,\ne.g. in signal handlers - I think Thomas wanted to do so for some concurrency\nprimitive.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 20 Oct 2023 12:40:00 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> In addition to the point Tom has made, I think it's also not correct that hppa\n> doesn't impose a burden: hppa is the only of our architectures that doesn't\n> actually support atomic operations, requiring us to have infrastructure to\n> backfill atomics using spinlocks. This does preclude some uses of atomics,\n> e.g. in signal handlers - I think Thomas wanted to do so for some concurrency\n> primitive.\n\nHmm, are you saying there's more of port/atomics/ that could be\nremoved? What exactly? 
Do we really want to assume that all\nfuture architectures will have atomic operations?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Oct 2023 15:59:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Hi,\n\nOn 2023-10-20 15:59:42 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > In addition to the point Tom has made, I think it's also not correct that hppa\n> > doesn't impose a burden: hppa is the only of our architectures that doesn't\n> > actually support atomic operations, requiring us to have infrastructure to\n> > backfill atomics using spinlocks. This does preclude some uses of atomics,\n> > e.g. in signal handlers - I think Thomas wanted to do so for some concurrency\n> > primitive.\n> \n> Hmm, are you saying there's more of port/atomics/ that could be\n> removed? What exactly?\n\nI was thinking we could remove the whole fallback path for atomic operations,\nbut it's a bit less, because we likely don't want to mandate support for 64bit\natomics yet. That'd still allow removing more than half of\nsrc/include/port/atomics/fallback.h and src/backend/port/atomics.c - and more\nif we finally decided to require a spinlock implementation.\n\n\n> Do we really want to assume that all future architectures will have atomic\n> operations?\n\nYes. Outside of the tiny microcontrollers, which obviously won't run postgres,\nI cannot see any future architecture not having support for atomic operations.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 20 Oct 2023 14:33:23 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2023-10-20 15:59:42 -0400, Tom Lane wrote:\n>> Hmm, are you saying there's more of port/atomics/ that could be\n>> removed? What exactly?\n\n> I was thinking we could remove the whole fallback path for atomic operations,\n> but it's a bit less, because we likely don't want to mandate support for 64bit\n> atomics yet.\n\nYeah. That'd be tantamount to desupporting 32-bit arches altogether,\nI think. I'm not ready to go there yet.\n\n> That'd still allow removing more than half of\n> src/include/port/atomics/fallback.h and src/backend/port/atomics.c - and more\n> if we finally decided to require a spinlock implementation.\n\nIn the wake of 1c72d82c2, it seems likely that requiring some kind of\nspinlock implementation is not such a big lift. Certainly, a machine\nwithout that hasn't been a fit target for production in a very long\ntime, so maybe we should just drop that semaphore-based emulation.\n\n>> Do we really want to assume that all future architectures will have atomic\n>> operations?\n\n> Yes. Outside of the tiny microcontrollers, which obviously won't run postgres,\n> I cannot see any future architecture not having support for atomic operations.\n\nI'd like to refine what that means a bit more. Are we assuming that\na machine providing any of the gcc atomic intrinsics (of a given\nwidth) will provide all of them? 
Or is there a specific subset that\nwe can emulate the rest on top of?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Oct 2023 17:46:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Hi,\n\nOn 2023-10-20 17:46:59 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2023-10-20 15:59:42 -0400, Tom Lane wrote:\n> >> Hmm, are you saying there's more of port/atomics/ that could be\n> >> removed? What exactly?\n> \n> > I was thinking we could remove the whole fallback path for atomic operations,\n> > but it's a bit less, because we likely don't want to mandate support for 64bit\n> > atomics yet.\n> \n> Yeah. That'd be tantamount to desupporting 32-bit arches altogether,\n> I think. I'm not ready to go there yet.\n\nIt shouldn't be tantamount to that - many 32bit archs support 64bit atomic\noperations. E.g. x86 supported it since the 586 (in 1993). However, arm only\naddded them to 32 bit, in an extension, comparatively recently...\n\n\n> > That'd still allow removing more than half of\n> > src/include/port/atomics/fallback.h and src/backend/port/atomics.c - and more\n> > if we finally decided to require a spinlock implementation.\n> \n> In the wake of 1c72d82c2, it seems likely that requiring some kind of\n> spinlock implementation is not such a big lift. Certainly, a machine\n> without that hasn't been a fit target for production in a very long\n> time, so maybe we should just drop that semaphore-based emulation.\n\nYep. And the performance drop due to not having spinlock is also getting worse\nover time, with CPU bound workloads having become a lot more common due to\nlarger amounts of memory and much much faster IO.\n\n\n> >> Do we really want to assume that all future architectures will have atomic\n> >> operations?\n> \n> > Yes. Outside of the tiny microcontrollers, which obviously won't run postgres,\n> > I cannot see any future architecture not having support for atomic operations.\n> \n> I'd like to refine what that means a bit more. Are we assuming that a\n> machine providing any of the gcc atomic intrinsics (of a given width) will\n> provide all of them? Or is there a specific subset that we can emulate the\n> rest on top of?\n\nRight now we don't require that. 
As long as we know how to do atomic compare\nexchange, we backfill all other atomic operations using compare-exchange -\nalbeit less efficiently (there's no retries for atomic-add when implemented\ndirectly, but there are retries when using cmpxchg, the difference can be\nsignificant under contention).\n\nPractically speaking I think it's quite unlikely that a compiler + arch\ncombination will have only some intrinsics of some width - I think all\ncompilers have infrastructure to fall back to compare-exchange when there's no\ndedicated atomic operation for some intrinsic.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 20 Oct 2023 15:03:07 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "On Fri, Oct 20, 2023 at 12:40:00PM -0700, Andres Freund wrote:\n> On 2023-10-19 17:23:04 -0700, Noah Misch wrote:\n> > On Thu, Oct 19, 2023 at 11:16:28AM -0400, Tom Lane wrote:\n> > > We removed support for the HP-UX OS in v16, but left in support\n> > > for the PA-RISC architecture, mainly because I thought that its\n> > > spinlock mechanism is weird enough to be a good stress test\n> > > for our spinlock infrastructure. It still is that, but my\n> > > one remaining HPPA machine has gone to the great recycle heap\n> > > in the sky. There seems little point in keeping around nominal\n> > > support for an architecture that we can't test and no one is\n> > > using anymore.\n> > > \n> > > Hence, the attached removes the remaining support for HPPA.\n> > > Any objections?\n> > \n> > I wouldn't do this. NetBSD/hppa still claims to exist, as does the OpenBSD\n> > equivalent. I presume its pkgsrc compiles this code. The code is basically\n> > zero-maintenance, so there's not much to gain from deleting it preemptively.\n> \n> In addition to the point Tom has made, I think it's also not correct that hppa\n> doesn't impose a burden: hppa is the only of our architectures that doesn't\n> actually support atomic operations, requiring us to have infrastructure to\n> backfill atomics using spinlocks. This does preclude some uses of atomics,\n> e.g. in signal handlers - I think Thomas wanted to do so for some concurrency\n> primitive.\n\nIf the next thing is a patch removing half of the fallback atomics, that is a\nsolid reason to remove hppa. The code removed in the last proposed patch was\nnot that and was code that never changes, hence my reaction.\n\n\n", "msg_date": "Fri, 20 Oct 2023 18:42:25 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> If the next thing is a patch removing half of the fallback atomics, that is a\n> solid reason to remove hppa.\n\nAgreed, though I don't think we have a clear proposal as to what\nelse to remove.\n\n> The code removed in the last proposed patch was\n> not that and was code that never changes, hence my reaction.\n\nMmm ... I'd agree that the relevant stanzas of s_lock.h/.c haven't\nchanged in a long time, but port/atomics/ is of considerably newer\nvintage and is still receiving a fair amount of churn. Moreover,\nmuch of what I proposed to remove from there is HPPA-only code with\nexactly no parallel in other arches (specifically, the bits in\natomics/fallback.h). So I don't feel comfortable that it will\ncontinue to work without benefit of testing. 
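(To make the backfill Andres describes concrete: with compare-exchange as the only primitive, fetch-add becomes a retry loop, roughly like this sketch against the pg_atomic API — the my_ name is invented, and this is not the real generic.h code:

static inline uint32
my_atomic_fetch_add_u32(pg_atomic_uint32 *ptr, int32 add_)
{
	uint32		old = pg_atomic_read_u32(ptr);	/* unlocked read */

	/* on failure, compare_exchange refreshes 'old' with the current value */
	while (!pg_atomic_compare_exchange_u32(ptr, &old, old + add_))
		;

	return old;
}

A native fetch-add needs no retry, while this loop can spin under contention — the inefficiency referred to above.)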
We're taking a risk\njust hoping that it will continue to work in the back branches until\nthey hit EOL. Expecting that it'll continue to work going forward,\nsans testing, seems like the height of folly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Oct 2023 22:06:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Hi,\n\nOn 2023-10-20 22:06:55 -0400, Tom Lane wrote:\n> Noah Misch <[email protected]> writes:\n> > If the next thing is a patch removing half of the fallback atomics, that is a\n> > solid reason to remove hppa.\n> \n> Agreed, though I don't think we have a clear proposal as to what\n> else to remove.\n> \n> > The code removed in the last proposed patch was\n> > not that and was code that never changes, hence my reaction.\n> \n> Mmm ... I'd agree that the relevant stanzas of s_lock.h/.c haven't\n> changed in a long time, but port/atomics/ is of considerably newer\n> vintage and is still receiving a fair amount of churn. Moreover,\n> much of what I proposed to remove from there is HPPA-only code with\n> exactly no parallel in other arches (specifically, the bits in\n> atomics/fallback.h). So I don't feel comfortable that it will\n> continue to work without benefit of testing. We're taking a risk\n> just hoping that it will continue to work in the back branches until\n> they hit EOL. Expecting that it'll continue to work going forward,\n> sans testing, seems like the height of folly.\n\nIt'd be one thing to continue supporting an almost-guaranteed-to-be-unused\nplatform, if we expected it to become more popular or complete enough to be\nusable like e.g. risc-v a few years ago. But I doubt we'll find anybody out\nthere believing that there's a potential future upward trend for HPPA.\n\nIMO a single person looking at HPPA code for a few minutes is a cost that more\nthan outweighs the potential benefits of continuing \"supporting\" this dead\narch. Even code that doesn't need to change has costs, particularly if it's\nintermingled with actually important code (which spinlocks certainly are).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 20 Oct 2023 22:56:31 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> It'd be one thing to continue supporting an almost-guaranteed-to-be-unused\n> platform, if we expected it to become more popular or complete enough to be\n> usable like e.g. risc-v a few years ago. But I doubt we'll find anybody out\n> there believing that there's a potential future upward trend for HPPA.\n\nIndeed. I would have bet that Postgres on HPPA was extinct in the wild,\nuntil I noticed this message a few days ago:\n\nhttps://www.postgresql.org/message-id/BYAPR02MB42624ED41C15BFA82DAE2C359BD5A%40BYAPR02MB4262.namprd02.prod.outlook.com\n\nBut we already cut that user off at the knees by removing HP-UX support.\n\nThe remaining argument for worrying about this architecture being in\nuse in the field is the idea that somebody is using it on top of\nNetBSD or OpenBSD. 
But having used both of those systems (or tried\nto), I feel absolutely confident in asserting that nobody is using\nit in production today, let alone hoping to continue using it.\n\n> IMO a single person looking at HPPA code for a few minutes is a cost that more\n> than outweighs the potential benefits of continuing \"supporting\" this dead\n> arch. Even code that doesn't need to change has costs, particularly if it's\n> intermingled with actually important code (which spinlocks certainly are).\n\nYup, that. It's not zero cost to carry this stuff.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Oct 2023 02:18:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Hi, \n\nOn October 20, 2023 11:18:19 PM PDT, Tom Lane <[email protected]> wrote:\n>Andres Freund <[email protected]> writes:\n>> It'd be one thing to continue supporting an almost-guaranteed-to-be-unused\n>> platform, if we expected it to become more popular or complete enough to be\n>> usable like e.g. risc-v a few years ago. But I doubt we'll find anybody out\n>> there believing that there's a potential future upward trend for HPPA.\n>\n>Indeed. I would have bet that Postgres on HPPA was extinct in the wild,\n>until I noticed this message a few days ago:\n>\n>https://www.postgresql.org/message-id/BYAPR02MB42624ED41C15BFA82DAE2C359BD5A%40BYAPR02MB4262.namprd02.prod.outlook.com\n>\n>But we already cut that user off at the knees by removing HP-UX support.\n\nNot that it matters really, but I'd assume that was hpux on ia64, not hppa?\n\nGreetings,\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Fri, 20 Oct 2023 23:22:43 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On October 20, 2023 11:18:19 PM PDT, Tom Lane <[email protected]> wrote:\n>> Indeed. I would have bet that Postgres on HPPA was extinct in the wild,\n>> until I noticed this message a few days ago:\n>> https://www.postgresql.org/message-id/BYAPR02MB42624ED41C15BFA82DAE2C359BD5A%40BYAPR02MB4262.namprd02.prod.outlook.com\n>> But we already cut that user off at the knees by removing HP-UX support.\n\n> Not that it matters really, but I'd assume that was hpux on ia64, not hppa?\n\nHmm, maybe ... impossible to tell from the given information, but ia64\nwas at least still in production till recently, so you might be right.\n\nIn any case, I heard no bleating when we nuked ia64 support.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Oct 2023 02:32:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "On Sat, Oct 21, 2023 at 02:18:19AM -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > It'd be one thing to continue supporting an almost-guaranteed-to-be-unused\n> > platform, if we expected it to become more popular or complete enough to be\n> > usable like e.g. risc-v a few years ago. But I doubt we'll find anybody out\n> > there believing that there's a potential future upward trend for HPPA.\n> \n> Indeed. 
I would have bet that Postgres on HPPA was extinct in the wild,\n> until I noticed this message a few days ago:\n> \n> https://www.postgresql.org/message-id/BYAPR02MB42624ED41C15BFA82DAE2C359BD5A%40BYAPR02MB4262.namprd02.prod.outlook.com\n> \n> But we already cut that user off at the knees by removing HP-UX support.\n> \n> The remaining argument for worrying about this architecture being in\n> use in the field is the idea that somebody is using it on top of\n> NetBSD or OpenBSD. But having used both of those systems (or tried\n> to), I feel absolutely confident in asserting that nobody is using\n> it in production today, let alone hoping to continue using it.\n> \n> > IMO a single person looking at HPPA code for a few minutes is a cost that more\n> > than outweighs the potential benefits of continuing \"supporting\" this dead\n> > arch. Even code that doesn't need to change has costs, particularly if it's\n> > intermingled with actually important code (which spinlocks certainly are).\n> \n> Yup, that. It's not zero cost to carry this stuff.\n\n+1 for dropping it.\n\n\n", "msg_date": "Wed, 29 May 2024 09:58:39 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> On Sat, Oct 21, 2023 at 02:18:19AM -0400, Tom Lane wrote:\n>> Andres Freund <[email protected]> writes:\n>>> IMO a single person looking at HPPA code for a few minutes is a cost that more\n>>> than outweighs the potential benefits of continuing \"supporting\" this dead\n>>> arch. Even code that doesn't need to change has costs, particularly if it's\n>>> intermingled with actually important code (which spinlocks certainly are).\n\n>> Yup, that. It's not zero cost to carry this stuff.\n\n> +1 for dropping it.\n\nDone at commit edadeb0710.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 Jul 2024 13:56:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "On Tue, Jul 2, 2024 at 5:56 AM Tom Lane <[email protected]> wrote:\n> Done at commit edadeb0710.\n\nHere are some experimental patches to try out some ideas mentioned\nupthread, that are approximately unlocked by that cleanup.\n\n1. We could get rid of --disable-spinlocks. It is hard to imagine a\nhypothetical new port that would actually be a useful place to run\nPostgreSQL where you can't implement spinlocks. (This one isn't\nexactly unlocked by PA-RISC's departure, it's just tangled up with the\nrelevant cruft.)\n\n2. We could get rid of --disable-atomics, and require at least 32 bit\nlock-free (non-emulated) atomics. AFAIK there are no relevant systems\nthat don't have them. Hypothetical new systems would be unlikely to\nomit them, unless they are eg embedded systems that don't intend to be\nable to run an OS.\n\nPersonally I would like to do this, because I'd like to be able to use\npg_atomic_fetch_or_u32() in a SIGALRM handler in my\nlatchify-all-the-things patch (a stepping stone in the multi-threading\nproject as discussed in the Vancouver unconference). That's not\nallowed if it might be a locking fallback. It's not strictly\nnecessary for my project, and I could find another way if I have to,\nbut when contemplating doing extra work to support imaginary computers\nthat I don't truly believe in... 
and since this general direction was\nsuggested already, both on this thread and in the comments in the\ntree...\n\nOnce you decide to do #2, ie require atomics, perhaps you could also\nimplement spinlocks with them, rendering point #1 moot, and delete all\nthat hand-rolled TAS stuff. (Then you'd have spinlocks implemented\nwith flag/u32 atomics APIs, but potentially also u64 atomics\nimplemented with spinlocks! Circular, but not impossible AFAICT.\nAssuming we can't require 64 bit lock-free atomics any time soon that\nis, not considered). 🤯🤯🤯But maybe there are still good reasons to\nhave hand-rolled specialisations in some cases? I have not researched\nthat idea and eg compared the generated instructions... I do\nappreciate that that code reflects a lot of accumulated wisdom and\nexperience that I don't claim to possess, and this bit is vapourware\nanyway.\n\n3. While tinkering with the above, and contemplating contact with\nhypothetical future systems and even existing oddball systems, it\npractically suggests itself that we could allow <stdatomic.h> as a way\nof providing atomics (port/atomics.h clearly anticipated that, it was\njust too soon). Note: that's different from requiring C11, but it\nmeans that the new rule would be that your system should have *either*\nC11 <stdatomic.h> or a hand-rolled implementation in port/atomics/*.h.\nThis is not a proposal, just an early stage experiment to test the\nwaters!\n\nSome early thoughts about that, not fully explored:\n* Since C11 uses funky generics, perhaps we might want to add some\ntype checks to make sure you don't accidentally confuse u32 and u64\nsomewhere.\n* I couldn't immediately see how to use the standard atomic_flag for\nour stuff due to lack of relaxed load, so it's falling back to the\ngeneric u32 implementation (a small waste of space). atomic_bool or\natomic_char should work fine though, not tried. I guess\npg_atomic_flag might be a minor historical mistake, assuming it was\nsupposed to be just like the standard type of the same name. Or maybe\nI'm missing something.\n* The pg_spin_delay_impl() part definitely still needs hand-rolled\nmagic still when using <stdatomic.h> (I'm not aware of any standard\nway to do that). But I'm not sure it even belongs in the \"atomics\"\nheaders anyway? It's not the same kind of thing, is it?\n* The comments seem to imply that we need to tell the compiler not to\ngenerate any code for read/write barriers on TSO systems (compiler\nbarrier only), but AFAICS the right thing happens anyway when coded as\nstandard acquire/release barriers. x86: nothing. ARM: something.\nWhat am I missing?\n* It'd be interesting to learn about anything else that modern tool\nchains might do worse than our hand-rolled wisdom.\n* Special support for Sun's compiler could be dropped if we could just\nuse their <stdatomic.h>. The same applies for MSVC 2022+ AFAICS, so\nmaybe in ~3 years from now we could drop the Windows-specific code.\n* Uhh, yeah, so that would also apply to any modern GCC/Clang, so in\neffect everyone would be using <stdatomic.h> except any hand-rolled\nspecial bits that we decide to keep for performance reasons, and the\nrest would become dead code and liable for garbage collection. So\nthat would amount to a confusing policy like: \"we require\n<stdatomic.h> with at least lock-free int in practice, but we'd\nconsider patches to add a non-C11-way to do this stuff if you invent a\nnew kind of computer/toolchain and refuse to support C11\". Hmm. 
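To illustrate the circular-but-workable idea, a minimal C11 spinlock sketch — deliberately built on atomic_char rather than atomic_flag, since the latter offers no relaxed load for the inner wait; all names are invented:

#include <stdatomic.h>

typedef atomic_char my_spinlock;	/* 0 = free, 1 = held */

static inline void
my_spin_lock(my_spinlock *lock)
{
	while (atomic_exchange_explicit(lock, 1, memory_order_acquire) != 0)
	{
		/* wait with plain relaxed loads; atomic_flag can't do this */
		while (atomic_load_explicit(lock, memory_order_relaxed) != 0)
			;					/* a spin-delay hint would go here */
	}
}

static inline void
my_spin_unlock(my_spinlock *lock)
{
	atomic_store_explicit(lock, 0, memory_order_release);
}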
(I\nhave another version of this type of thinking happening in another\npending patch, the pg_threads.h one, more on that shortly...)", "msg_date": "Wed, 3 Jul 2024 15:15:52 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> Here are some experimental patches to try out some ideas mentioned\n> upthread, that are approximately unlocked by that cleanup.\n\nFWIW, I'm good with getting rid of --disable-spinlocks and\n--disable-atomics. That's a fair amount of code and needing to\nsupport it causes problems, as you say. I am very much less\nexcited about ripping out our spinlock and/or atomics code in favor\nof <stdatomic.h>; I just don't see the gain there, and I do see risk\nin ceding control of the semantics and performance of those\nprimitives.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jul 2024 04:08:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "On Wed, Jul 3, 2024 at 8:09 PM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > Here are some experimental patches to try out some ideas mentioned\n> > upthread, that are approximately unlocked by that cleanup.\n>\n> FWIW, I'm good with getting rid of --disable-spinlocks and\n> --disable-atomics. That's a fair amount of code and needing to\n> support it causes problems, as you say. I am very much less\n> excited about ripping out our spinlock and/or atomics code in favor\n> of <stdatomic.h>; I just don't see the gain there, and I do see risk\n> in ceding control of the semantics and performance of those\n> primitives.\n\nOK, <stdatomic.h> part on ice for now. Here's an update of the rest,\nthis time also removing the barrier fallbacks as discussed in the LTO\nthread[1].\n\nI guess we should also consider reimplementing the spinlock on the\natomic API, but I can see that Andres is poking at spinlock code right\nnow so I'll keep out of his way...\n\nSide issue: I noticed via CI failure when I tried to require\nread/write barriers to be provided (a choice I backed out of), that on\nMSVC we seem to be using the full memory barrier fallback for those.\nHuh? For x86, I think they should be using pg_compiler_barrier() (no\ncode gen, just prevent reordering), not pg_memory_barrier(), no?\nPerhaps I'm missing something but I suspect we might be failing to\ninclude arch-x86.h on that compiler when we should... maybe it needs\nto detect _M_AMD64 too?
For ARM, from a quick look, the only way to\nreach real acquire/release barriers seems to be to use the C11\ninterface (which would also be fine on x86 where it should degrade to\na no-op compiler barrier or signal fence as the standard calls it),\nbut IIRC the Windows/ARM basics haven't gone in yet anyway.\n\n[1] https://www.postgresql.org/message-id/flat/721bf39a-ed8a-44b0-8b8e-be3bd81db748%40technowledgy.de#66ba381b05e8ee08b11503b846acc4a1", "msg_date": "Tue, 30 Jul 2024 09:50:08 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "On 30/07/2024 00:50, Thomas Munro wrote:\n> On Wed, Jul 3, 2024 at 8:09 PM Tom Lane <[email protected]> wrote:\n>> Thomas Munro <[email protected]> writes:\n>>> Here are some experimental patches to try out some ideas mentioned\n>>> upthread, that are approximately unlocked by that cleanup.\n>>\n>> FWIW, I'm good with getting rid of --disable-spinlocks and\n>> --disable-atomics. That's a fair amount of code and needing to\n>> support it causes problems, as you say. I am very much less\n>> excited about ripping out our spinlock and/or atomics code in favor\n>> of <stdatomic.h>; I just don't see the gain there, and I do see risk\n>> in ceding control of the semantics and performance of those\n>> primitives.\n> \n> OK, <stdatomic.h> part on ice for now. Here's an update of the rest,\n> this time also removing the barrier fallbacks as discussed in the LTO\n> thread[1].\n\nLooks good to me.\n\n> I guess we should also consider reimplementing the spinlock on the\n> atomic API, but I can see that Andres is poking at spinlock code right\n> now so I'll keep out of his way...\n> \n> Side issue: I noticed via CI failure when I tried to require\n> read/write barriers to be provided (a choice I backed out of), that on\n> MSVC we seem to be using the full memory barrier fallback for those.\n> Huh? For x86, I think they should be using pg_compiler_barrier() (no\n> code gen, just prevent reordering), not pg_memory_barrier(), no?\n\nAgreed, arch-x86.h is quite clear on that.\n\n> Perhaps I'm missing something but I suspect we might be failing to\n> include arch-x86.h on that compiler when we should... maybe it needs\n> to detect _M_AMD64 too?\n\nAha, yes I think that's it. Apparently, __x86_64__ is not defined on \nMSVC. To prove that, I added garbage to the \"#ifdef __x86_64__\" guarded \nblock in atomics.h. The compilation passes on MSVC, but not on other \nplatforms: https://cirrus-ci.com/build/6310061188841472.\n\nThat means that we're not getting the x86-64 instructions in \nsrc/port/pg_crc32c_sse42.c on MSVC either.\n\nI think we should do:\n\n#ifdef _M_AMD64\n#define __x86_64__\n#endif\n\nsomewhere, perhaps in src/include/port/win32.h.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 30 Jul 2024 02:16:18 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "On Tue, Jul 30, 2024 at 11:16 AM Heikki Linnakangas <[email protected]> wrote:\n> On 30/07/2024 00:50, Thomas Munro wrote:\n> > On Wed, Jul 3, 2024 at 8:09 PM Tom Lane <[email protected]> wrote:\n> >> Thomas Munro <[email protected]> writes:\n> > OK, <stdatomic.h> part on ice for now. Here's an update of the rest,\n> > this time also removing the barrier fallbacks as discussed in the LTO\n> > thread[1].\n>\n> Looks good to me.\n\nThanks.
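Both candidate shapes of that fix, side by side; placement and exact spelling here are illustrative guesses rather than the committed change:

/* normalize the macro once, e.g. in src/include/port/win32.h */
#if defined(_M_AMD64) && !defined(__x86_64__)
#define __x86_64__
#endif

/* ...or test the MSVC macro wherever the architecture is detected */
#if defined(__x86_64__) || defined(_M_AMD64)
#include "port/atomics/arch-x86.h"
#endif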
I'll wait just a bit longer to see if anyone else has comments.\n\n> > Perhaps I'm missing something but I suspect we might be failing to\n> > include arch-x86.h on that compiler when we should... maybe it needs\n> > to detect _M_AMD64 too?\n>\n> Aha, yes I think that's it. Apparently, __x86_64__ is not defined on\n> MSVC. To prove that, I added garbage to the \"#ifdef __x86_64__\" guarded\n> block in atomics.h. The compilation passes on MSVC, but not on other\n> platforms: https://cirrus-ci.com/build/6310061188841472.\n>\n> That means that we're not getting the x86-64 instructions in\n> src/port/pg_crc32c_sse42.c on MSVC either.\n>\n> I think we should do:\n>\n> #ifdef _M_AMD64\n> #define __x86_64__\n> #endif\n>\n> somewhere, perhaps in src/include/port/win32.h.\n\nHmm. I had come up with the opposite solution, because we already\ntested for _M_AMD64 explicitly elsewhere, and also I was thinking we\nwould back-patch, and I don't want to cause problems for external code\nthat thinks that __x86_64__ implies it can bust out some GCC inline\nassembler or something. But I don't have a strong opinion, your idea\nis certainly simpler to implement and I also wouldn't mind much if we\njust fixed it in master only, for fear of subtle breakage...\n\nSame problem probably exists for i386. I don't think CI, build farm\nor the EDB packaging team do 32 bit Windows, so that makes it a little\nhard to know if your blind code changes have broken or fixed\nanything... on the other hand it's pretty simple...\n\nI wondered if the pre-Meson system might have somehow defined\n__x86_64__, but I'm not seeing it. Commit b64d92f1a56 explicitly\nmentions that it was tested on MSVC, so I guess maybe it was just\nalways \"working\" but not quite taking the intended code paths? Funny\nthough, that code that calls _mm_pause() on AMD64 or the __asm thing\nthat only works on i386 doesn't look like blind code to me. Curious.", "msg_date": "Tue, 30 Jul 2024 12:39:44 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "On Tue, Jul 30, 2024 at 12:39 PM Thomas Munro <[email protected]> wrote:\n> On Tue, Jul 30, 2024 at 11:16 AM Heikki Linnakangas <[email protected]> wrote:\n> > Looks good to me.\n>\n> Thanks. I'll wait just a bit longer to see if anyone else has comments.\n\nAnd pushed.\n\nI am aware of a couple of build farm animals that will now fail\nbecause they deliberately test --disable-spinlocks: francolin and\nrorqual, which will need adjustment or retirement on master. I'll\nwatch out for other surprises on the farm...\n\n\n", "msg_date": "Tue, 30 Jul 2024 23:08:36 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "On Tue, Jul 30, 2024 at 9:50 AM Thomas Munro <[email protected]> wrote:\n> I guess we should also consider reimplementing the spinlock on the\n> atomic API, but I can see that Andres is poking at spinlock code right\n> now so I'll keep out of his way...\n\nHere is a first attempt at that. I haven't compared the generated asm\nyet, but it seems to work OK. I solved some mysteries (or probably\njust rediscovered things that others already knew) along the way:\n\n1. The reason we finished up with OK-looking MSVC atomics code that\nwas probably never actually reachable might be that it was\ncopied-and-pasted from the spinlock code. This patch de-duplicates\nthat (and much more).\n\n2. 
The pg_atomic_unlocked_test_flag() function was surprising to me:\nit returns true if it's not currently set (according to a relaxed\nload). Most of this patch was easy, but figuring out that I had\nreverse polarity here was a multi-coffee operation :-) I can't call\nit wrong though, as it's not based on <stdatomic.h>, and it's clearly\ndocumented, so *shrug*.\n\n3. As for why we have a function that <stdatomic.h> doesn't, I\nspeculate that it might have been intended for implementing this exact\npatch, ie wanting to perform that relaxed load while spinning as\nrecommended by Intel. (If we strictly had to use <stdatomic.h>\nfunctions, we couldn't use atomic_flag due to the lack of a relaxed\nload operation on that type, so we'd probably have to use atomic_char\ninstead. Perhaps one day we will cross that bridge.)\n\n4. Another reason would be that you need it to implement\nSpinLockFree() and S_LOCK_FREE(). They don't seem to have had any\nreal callers since the beginning of open source PostgreSQL!, except\nfor a test of limited value in a new world without ports developing\ntheir own spinlock code. Let's remove them! I see this was already\nthreatened by Andres in 3b37a6de.\n\nArcheological notes: I went back further and found that POSTGRES 4.2\nused them only twice for assertions. These S_LOCK() etc interfaces\nseem to derive from Dynix's parallel programming library, but it\ndidn't have S_LOCK_FREE() either. It looks like the Berkeley guys\nadded _FREE() for *internal* use when dealing with PA-RISC, where free\nspinlocks were non-zero, but we later developed a different way of\ndealing with that.", "msg_date": "Wed, 31 Jul 2024 17:52:34 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "On 31/07/2024 08:52, Thomas Munro wrote:\n> On Tue, Jul 30, 2024 at 9:50 AM Thomas Munro <[email protected]> wrote:\n>> I guess we should also consider reimplementing the spinlock on the\n>> atomic API, but I can see that Andres is poking at spinlock code right\n>> now so I'll keep out of his way...\n> \n> Here is a first attempt at that.\n\nLooks good, thanks!\n\n> I haven't compared the generated asm yet, but it seems to work OK.\nThe old __i386__ implementation of TAS() said:\n\n> \t * When this was last tested, we didn't have separate TAS() and TAS_SPIN()\n> \t * macros. Nowadays it probably would be better to do a non-locking test\n> \t * in TAS_SPIN() but not in TAS(), like on x86_64, but no-one's done the\n> \t * testing to verify that. Without some empirical evidence, better to\n> \t * leave it alone.\n\nIt seems that you did what the comment suggested. That seems fine. For \nsake of completeness, if someone has an i386 machine lying around, it \nwould be nice to verify that. Or an official CPU manufacturer's \nimplementation guide, or references to other implementations or something.\n\n> 2. The pg_atomic_unlocked_test_flag() function was surprising to me:\n> it returns true if it's not currently set (according to a relaxed\n> load). Most of this patch was easy, but figuring out that I had\n> reverse polarity here was a multi-coffee operation :-) I can't call\n> it wrong though, as it's not based on <stdatomic.h>, and it's clearly\n> documented, so *shrug*.\n\nHuh, yeah that's unexpected.\n\n> 3. 
As for why we have a function that <stdatomic.h> doesn't, I\n> speculate that it might have been intended for implementing this exact\n> patch, ie wanting to perform that relaxed load while spinning as\n> recommended by Intel. (If we strictly had to use <stdatomic.h>\n> functions, we couldn't use atomic_flag due to the lack of a relaxed\n> load operation on that type, so we'd probably have to use atomic_char\n> instead. Perhaps one day we will cross that bridge.)\n\nAs a side note, I remember when I've tried to use pg_atomic_flag in the \npast, I wanted to do an atomic compare-and-exchange on it, to clear the \nvalue and return the old value. Surprisingly, there's no function to do \nthat. There's pg_atomic_test_set_flag(), but no \npg_atomic_test_clear_flag(). C11 has both \"atomic_flag\" and \n\"atomic_bool\", and I guess what I actually wanted was atomic_bool.\n\n> - * On platforms with weak memory ordering, the TAS(), TAS_SPIN(), and\n> - * S_UNLOCK() macros must further include hardware-level memory fence\n> - * instructions to prevent similar re-ordering at the hardware level.\n> - * TAS() and TAS_SPIN() must guarantee that loads and stores issued after\n> - * the macro are not executed until the lock has been obtained. Conversely,\n> - * S_UNLOCK() must guarantee that loads and stores issued before the macro\n> - * have been executed before the lock is released.\n\nThat old comment means that both SpinLockAcquire() and SpinLockRelease() \nacted as full memory barriers, and looking at the implementations, that \nwas indeed so. With the new implementation, SpinLockAcquire() will have \n\"acquire semantics\" and SpinLockRelease will have \"release semantics\". \nThat's very sensible, and I don't believe it will break anything, but \nit's a change in semantics nevertheless.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 31 Jul 2024 11:47:48 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "On Wed, Jul 31, 2024 at 8:47 PM Heikki Linnakangas <[email protected]> wrote:\n> On 31/07/2024 08:52, Thomas Munro wrote:\n> The old __i386__ implementation of TAS() said:\n>\n> > * When this was last tested, we didn't have separate TAS() and TAS_SPIN()\n> > * macros. Nowadays it probably would be better to do a non-locking test\n> > * in TAS_SPIN() but not in TAS(), like on x86_64, but no-one's done the\n> > * testing to verify that. Without some empirical evidence, better to\n> > * leave it alone.\n>\n> It seems that you did what the comment suggested. That seems fine. For\n> sake of completeness, if someone has an i386 machine lying around, it\n> would be nice to verify that. Or an official CPU manufacturer's\n> implementation guide, or references to other implementations or something.\n\nHmm, the last \"real\" 32 bit CPU is from ~20 years ago. Now the only\n32 bit x86 systems we should nominally care about are modern CPUs that\ncan also run 32 bit instruction; is there a reason to think they'd\nbehave differently at this level? 
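The non-locking test under discussion — spin on a plain read and only retry the locked operation once the lock looks free — can be sketched against the pg_atomic API like this (my_ names invented, not the committed code):

static inline bool
my_tas(pg_atomic_uint32 *lock)
{
	/* locked exchange; true means we acquired it */
	return pg_atomic_exchange_u32(lock, 1) == 0;
}

static inline bool
my_tas_spin(pg_atomic_uint32 *lock)
{
	/* cheap relaxed read first, to avoid hammering the bus while held */
	if (pg_atomic_read_u32(lock) != 0)
		return false;
	return my_tas(lock);
}

The caller loops on my_tas_spin() with a spin delay between attempts, while a first-attempt TAS() skips the read on the theory that an uncontended lock is usually free.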
Looking at the current Intel\noptimisation guide's discussion of spinlock implementation at page\n2-34 of [1], it doesn't distinguish between 32 and 64, and it has that\ndouble-check thing.\n\n> > - * On platforms with weak memory ordering, the TAS(), TAS_SPIN(), and\n> > - * S_UNLOCK() macros must further include hardware-level memory fence\n> > - * instructions to prevent similar re-ordering at the hardware level.\n> > - * TAS() and TAS_SPIN() must guarantee that loads and stores issued after\n> > - * the macro are not executed until the lock has been obtained. Conversely,\n> > - * S_UNLOCK() must guarantee that loads and stores issued before the macro\n> > - * have been executed before the lock is released.\n>\n> That old comment means that both SpinLockAcquire() and SpinLockRelease()\n> acted as full memory barriers, and looking at the implementations, that\n> was indeed so. With the new implementation, SpinLockAcquire() will have\n> \"acquire semantics\" and SpinLockRelease will have \"release semantics\".\n> That's very sensible, and I don't believe it will break anything, but\n> it's a change in semantics nevertheless.\n\nYeah. It's interesting that our pg_atomic_clear_flag(f) is like\nstandard atomic_flag_clear_explicit(f, memory_order_release), not like\natomic_flag_clear(f) which is short for atomic_flag_clear_explicit(f,\nmemory_order_seq_cst). Example spinlock code I've seen written in\nmodern C or C++ therefore uses the _explicit variants, so it can get\nacquire/release, which is what people usually want from a lock-like\nthing. What's a good way to test the performance in PostgreSQL? In a\nnaive loop that just test-and-sets and clears a flag a billion times\nin a loop and does nothing else, I see 20-40% performance increase\ndepending on architecture when comparing _seq_cst with\n_acquire/_release. You're right that this semantic change deserves\nexplicit highlighting, in comments somewhere... I wonder if we have\nanywhere that is counting on the stronger barrier...\n\n[1] https://www.intel.com/content/www/us/en/content-details/671488/intel-64-and-ia-32-architectures-optimization-reference-manual-volume-1.html\n\n\n", "msg_date": "Wed, 31 Jul 2024 22:32:19 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Hi,\n\nOn 2024-07-31 17:52:34 +1200, Thomas Munro wrote:\n> 2. The pg_atomic_unlocked_test_flag() function was surprising to me:\n> it returns true if it's not currently set (according to a relaxed\n> load). Most of this patch was easy, but figuring out that I had\n> reverse polarity here was a multi-coffee operation :-) I can't call\n> it wrong though, as it's not based on <stdatomic.h>, and it's clearly\n> documented, so *shrug*.\n\nI have no idea why I did it that way round. This was a long time ago...\n\n\n> 4. Another reason would be that you need it to implement\n> SpinLockFree() and S_LOCK_FREE(). They don't seem to have had any\n> real callers since the beginning of open source PostgreSQL!, except\n> for a test of limited value in a new world without ports developing\n> their own spinlock code. Let's remove them! 
I see this was already\n> threatened by Andres in 3b37a6de.\n\nNote that I would like to add a user for S_LOCK_FREE(), to detect repeated\nSpinLockRelease():\nhttps://postgr.es/m/20240729182952.hua325647e2ggbsy%40awork3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 31 Jul 2024 12:07:30 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Hi,\n\nOn 2024-07-30 23:08:36 +1200, Thomas Munro wrote:\n> On Tue, Jul 30, 2024 at 12:39 PM Thomas Munro <[email protected]> wrote:\n> > On Tue, Jul 30, 2024 at 11:16 AM Heikki Linnakangas <[email protected]> wrote:\n> > > Looks good to me.\n> >\n> > Thanks. I'll wait just a bit longer to see if anyone else has comments.\n> \n> And pushed.\n\nYay!\n\n\n> I am aware of a couple of build farm animals that will now fail\n> because they deliberately test --disable-spinlocks: francolin and\n> rorqual, which will need adjustment or retirement on master. I'll\n> watch out for other surprises on the farm...\n\nI've now adjusted rorqual, francolin, piculet to not run on master anymore -\nthey're just there to test combinations of --disable-atomics and\n--disable-spinlocks, so there seems not much point in just disabling those\noptions for HEAD.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 31 Jul 2024 12:20:27 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Hi,\n\nOn 2024-07-31 22:32:19 +1200, Thomas Munro wrote:\n> > That old comment means that both SpinLockAcquire() and SpinLockRelease()\n> > acted as full memory barriers, and looking at the implementations, that\n> > was indeed so. With the new implementation, SpinLockAcquire() will have\n> > \"acquire semantics\" and SpinLockRelease will have \"release semantics\".\n> > That's very sensible, and I don't believe it will break anything, but\n> > it's a change in semantics nevertheless.\n> \n> Yeah. It's interesting that our pg_atomic_clear_flag(f) is like\n> standard atomic_flag_clear_explicit(f, memory_order_release), not like\n> atomic_flag_clear(f) which is short for atomic_flag_clear_explicit(f,\n> memory_order_seq_cst). Example spinlock code I've seen written in\n> modern C or C++ therefore uses the _explicit variants, so it can get\n> acquire/release, which is what people usually want from a lock-like\n> thing. What's a good way to test the performance in PostgreSQL?\n\nI've used\n c=8;pgbench -n -Mprepared -c$c -j$c -P1 -T10 -f <(echo \"SELECT pg_logical_emit_message(false, \\:client_id::text, '1'), generate_series(1, 1000) OFFSET 1000;\")\nin the past. Because of NUM_XLOGINSERT_LOCKS = 8 this ends up with 8 backends\ndoing tiny xlog insertions and heavily contending on insertpos_lck.\n\nThe generate_series() is necessary as otherwise the context switch and\nexecutor startup overhead dominates.\n\n\n> In a naive loop that just test-and-sets and clears a flag a billion times in\n> a loop and does nothing else, I see 20-40% performance increase depending on\n> architecture when comparing _seq_cst with _acquire/_release.\n\nI'd expect the difference to be even bigger on concurrent workloads on x86-64\n- the added memory barrier during lock release really hurts. I have a test\nprogram to play around with this and the difference in isolation is like 0.4x\nthe throughput with a full barrier release on my older 2 socket workstation\n[1]. 
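\n\nFor concreteness, the shape of the lock/unlock pair I'm comparing is\nroughly this (a minimal sketch assuming C11 <stdatomic.h>, not the actual\ntest program):\n\n\t#include <stdatomic.h>\n\n\tstatic atomic_flag lck = ATOMIC_FLAG_INIT;\n\n\tstatic inline void\n\tlock(void)\n\t{\n\t\t/* acquire semantics are all a lock needs on the way in */\n\t\twhile (atomic_flag_test_and_set_explicit(&lck, memory_order_acquire))\n\t\t\t;\n\t}\n\n\tstatic inline void\n\tunlock(void)\n\t{\n\t\t/* swap memory_order_release for memory_order_seq_cst here to\n\t\t * measure the cost of a full barrier on lock release */\n\t\tatomic_flag_clear_explicit(&lck, memory_order_release);\n\t}\n\n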
Of course it's not trivial to hit \"pure enough\" cases in the real world.\n\n\nOn said workstation [1], with the above pgbench, I get ~1.95M inserts/sec\n(1959 TPS * 1000) on HEAD and 1.80M insert/sec after adding\n#define S_UNLOCK(lock) __atomic_store_n(lock, 0, __ATOMIC_SEQ_CST)\n\n\nIf I change NUM_XLOGINSERT_LOCKS = 40 and use 40 clients, I get\n1.03M inserts/sec with the current code and 0.86M inserts/sec with\n__ATOMIC_SEQ_CST.\n\nGreetings,\n\nAndres Freund\n\n[1] 2x Xeon Gold 5215\n\n\n", "msg_date": "Wed, 31 Jul 2024 12:45:15 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "On Thu, Aug 1, 2024 at 7:07 AM Andres Freund <[email protected]> wrote:\n> Note that I would like to add a user for S_LOCK_FREE(), to detect repeated\n> SpinLockRelease():\n> https://postgr.es/m/20240729182952.hua325647e2ggbsy%40awork3.anarazel.de\n\nWhat about adding a \"magic\" member in assertion builds? Here is my\nattempt at that, in 0002.\n\nI also realised that we might as well skip the trivial S_XXX macros\nand delete s_lock.h. In this version of 0001 we retain just spin.h,\nbut s_lock.c still exists to hold the slow path.", "msg_date": "Thu, 1 Aug 2024 10:09:07 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "Hi,\n\nOn 2024-08-01 10:09:07 +1200, Thomas Munro wrote:\n> On Thu, Aug 1, 2024 at 7:07 AM Andres Freund <[email protected]> wrote:\n> > Note that I would like to add a user for S_LOCK_FREE(), to detect repeated\n> > SpinLockRelease():\n> > https://postgr.es/m/20240729182952.hua325647e2ggbsy%40awork3.anarazel.de\n> \n> What about adding a \"magic\" member in assertion builds? Here is my\n> attempt at that, in 0002.\n\nThat changes the ABI, which we don't want, because it breaks using\nextensions against a differently built postgres.\n\nI don't really see a reason to avoid having S_LOCK_FREE(), am I missing\nsomething? Previously the semaphore fallback was a reason, but that's gone\nnow...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 31 Jul 2024 15:38:51 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "On Thu, Aug 1, 2024 at 10:38 AM Andres Freund <[email protected]> wrote:\n> On 2024-08-01 10:09:07 +1200, Thomas Munro wrote:\n> > On Thu, Aug 1, 2024 at 7:07 AM Andres Freund <[email protected]> wrote:\n> > > Note that I would like to add a user for S_LOCK_FREE(), to detect repeated\n> > > SpinLockRelease():\n> > > https://postgr.es/m/20240729182952.hua325647e2ggbsy%40awork3.anarazel.de\n> >\n> > What about adding a \"magic\" member in assertion builds? Here is my\n> > attempt at that, in 0002.\n>\n> That changes the ABI, which we don't want, because it breaks using\n> extensions against a differently built postgres.\n\nYeah, right, bad idea. Let me think about how to do something like\nwhat you showed, but with the atomics patch...\n\nHmm. One of the interesting things about the atomic_flag interface is\nthat it completely hides the contents of memory. (Guess: its weird\nminimal interface was designed to help weird architectures like\nPA-RISC, staying on topic for $SUBJECT; I doubt we'll see such a\nsystem again but it's useful for this trick). 
So I guess we could\npush the check down to that layer, and choose arbitrary non-zero\nvalues for the arch-x86.h implementation of pg_atomic_flag . See\nattached. Is this on the right track?\n\n(Looking ahead, if we eventually move to using <stdatomic.h>, we won't\nbe able to use atomic_flag due to lack of relaxed load anyway, so we\ncould generalise this to atomic_char (rather than atomic_bool), and\nkeep using non-zero values. Presumably at that point we could also\ndecree that zero-initialised memory is valid for initialising our\nspinlocks, but it seems useful as a defence against uninitialised\nobjects anyway.)\n\n> I don't really see a reason to avoid having S_LOCK_FREE(), am I missing\n> something? Previously the semaphore fallback was a reason, but that's gone\n> now...\n\nSure, but if it's just for assertions, we don't need it. Or any of\nthe S_XXX stuff.", "msg_date": "Thu, 1 Aug 2024 12:33:42 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" }, { "msg_contents": "On Tue, Jul 30, 2024 at 12:39 PM Thomas Munro <[email protected]> wrote:\n> On Tue, Jul 30, 2024 at 11:16 AM Heikki Linnakangas <[email protected]> wrote:\n> > I think we should do:\n> >\n> > #ifdef _M_AMD64\n> > #define __x86_64__\n> > #endif\n> >\n> > somewhere, perhaps in src/include/port/win32.h.\n\nI suppose we could define our own\nPG_ARCH_{ARM,MIPS,POWER,RISCV,S390,SPARC,X86}_{32,64} in one central\nplace, instead. Draft patch for illustration.", "msg_date": "Thu, 1 Aug 2024 17:32:42 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove last traces of HPPA support" } ]
[ { "msg_contents": "Hi,\n\nTwice now, I've had 'meson test' fail because it tried to start too\nmany copies of the server at the same time. In the server log, I get\nthe complaint about needing to raise SHMMNI. This is a macos machine,\nwith kern.sysv.shmmni=32. The obvious fix to this is to just tell\n'meson test' how many processes I'd like it to run. I thought maybe I\ncould just do 'meson -j8 test' but that does not work, because the\noption is --num-processes and has no short version. Even typing -j8\nevery time would be kind of annoying; typing --num-processes 8 every\ntime is ridiculously verbose.\n\nMy next thought was that there ought to be some environmental variable\nthat I could set to control this behavior. But I can't find a list of\nenvironment variables that affect meson behavior anywhere. I guess the\nauthors don't believe in environment variable as a control mechanism.\nOr, at the risk of sounding a bit testy, maybe their documentation\njust isn't quite up to par. It's not that hard to find lists of\noptions for particular subcommands, either from the tool itself or on\nthe web site. But unlike git, where you can do something like 'man\ngit-checkout' and actually get more information than the command help\nitself provides, there are no man pages for the main subcommands, and\nI can't really find any good documentation on the web site either.\nKnowing that a certain subcommand has a flag called\n--pkgconfig.relocatable or that some other command has a flag called\n--cross-file CROSS_FILE whose argument is, and I quote, a \"File\ndescribing cross compilation environment,\" is not good enough.\n\nSo my questions are:\n\n1. Is there some better way to control testing parallelism than\nspecifying --num-processes N every single time?\n\n2. Is there better documentation somewhere?\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Oct 2023 13:44:20 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "controlling meson's parallelism (and some whining)" }, { "msg_contents": "Hi,\n\nOn 2023-10-19 13:44:20 -0400, Robert Haas wrote:\n> Twice now, I've had 'meson test' fail because it tried to start too\n> many copies of the server at the same time. In the server log, I get\n> the complaint about needing to raise SHMMNI. This is a macos machine,\n> with kern.sysv.shmmni=32.\n\nHm. Did you not run into simmilar issues with make check-world? I found the\nconcurrency of that to be even more variable over a run.\n\n\nBut perhaps there's something else wrong here? Perhaps we should deal with\nthis in Cluster.pm to some degree? Controlling this from the level of meson\n(or make/prove for that matter) doesn't really work well, because different\ntests start differently many postgres instances.\n\nHow many cores does your machine have? I've run the tests in a loop on my m1\nmac mini in the past without running into this issue. It has \"only\" 8 cores\nthough, whereas I infer, from you mentioning -j8, that you have more cores?\n\n\n> The obvious fix to this is to just tell 'meson test' how many processes I'd\n> like it to run. I thought maybe I could just do 'meson -j8 test' but that\n> does not work, because the option is --num-processes and has no short\n> version. 
Even typing -j8 every time would be kind of annoying; typing\n> --num-processes 8 every time is ridiculously verbose.\n\nI've also wondered why there's no support for -j, maybe we should open an\nissue...\n\n\n> My next thought was that there ought to be some environmental variable\n> that I could set to control this behavior. But I can't find a list of\n> environment variables that affect meson behavior anywhere. I guess the\n> authors don't believe in environment variable as a control mechanism.\n\nThey indeed do not like them - but there is one in this\ncase: MESON_TESTTHREADS\n\nThere's even documentation for it: https://mesonbuild.com/Unit-tests.html#parallelism\n\n\n> Or, at the risk of sounding a bit testy, maybe their documentation\n> just isn't quite up to par. It's not that hard to find lists of\n> options for particular subcommands, either from the tool itself or on\n> the web site. But unlike git, where you can do something like 'man\n> git-checkout' and actually get more information than the command help\n> itself provides, there are no man pages for the main subcommands, and\n> I can't really find any good documentation on the web site either.\n> Knowing that a certain subcommand has a flag called\n> --pkgconfig.relocatable or that some other command has a flag called\n> --cross-file CROSS_FILE whose argument is, and I quote, a \"File\n> describing cross compilation environment,\" is not good enough.\n\nI agree that meson's documentation is of, let's say, varying quality. But\nhttps://mesonbuild.com/Commands.html#test does link to\nhttps://mesonbuild.com/Unit-tests.html which in turn has the bit about\nMESON_TESTTHREADS\n\nI do agree that it'd be nice if the online docs were converted to command\nspecific manpages...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 19 Oct 2023 15:09:00 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: controlling meson's parallelism (and some whining)" }, { "msg_contents": "On Thu Oct 19, 2023 at 12:44 PM CDT, Robert Haas wrote:\n> The obvious fix to this is to just tell 'meson test' how many \n> processes I'd like it to run. I thought maybe I could just do 'meson \n> -j8 test' but that does not work, because the option is \n> --num-processes and has no short version. Even typing -j8 every time \n> would be kind of annoying; typing --num-processes 8 every time is \n> ridiculously verbose.\n\nI submitted a patch[0] to Meson to add -j.\n\n[0]: https://github.com/mesonbuild/meson/pull/12403\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 20 Oct 2023 11:22:51 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: controlling meson's parallelism (and some whining)" }, { "msg_contents": "On Fri Oct 20, 2023 at 11:22 AM CDT, Tristan Partin wrote:\n> On Thu Oct 19, 2023 at 12:44 PM CDT, Robert Haas wrote:\n> > The obvious fix to this is to just tell 'meson test' how many \n> > processes I'd like it to run. I thought maybe I could just do 'meson \n> > -j8 test' but that does not work, because the option is \n> > --num-processes and has no short version. 
Even typing -j8 every time \n> > would be kind of annoying; typing --num-processes 8 every time is \n> > ridiculously verbose.\n>\n> I submitted a patch[0] to Meson to add -j.\n>\n> [0]: https://github.com/mesonbuild/meson/pull/12403\n\nYou will see this in the 1.3.0 release which will be happening soon™️.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 20 Oct 2023 12:08:19 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: controlling meson's parallelism (and some whining)" }, { "msg_contents": "On Thu, Oct 19, 2023 at 6:09 PM Andres Freund <[email protected]> wrote:\n> Hm. Did you not run into simmilar issues with make check-world? I found the\n> concurrency of that to be even more variable over a run.\n\nI did not, but I didn't generally run that in parallel, either, mostly\nfor fear of being unable to see failures properly in the output. I\nused -j8 when building, though.\n\n> But perhaps there's something else wrong here? Perhaps we should deal with\n> this in Cluster.pm to some degree? Controlling this from the level of meson\n> (or make/prove for that matter) doesn't really work well, because different\n> tests start differently many postgres instances.\n\nI'm not sure, but I'm open to however anybody would like to improve things.\n\n> How many cores does your machine have? I've run the tests in a loop on my m1\n> mac mini in the past without running into this issue. It has \"only\" 8 cores\n> though, whereas I infer, from you mentioning -j8, that you have more cores?\n\nSystem Information shows \"Total Number of Cores: 8\" but sysctl hw.ncpu\nreturns 16. No real idea what is fastest, I haven't been real\nscientific about choosing values for -j.\n\n> > My next thought was that there ought to be some environmental variable\n> > that I could set to control this behavior. But I can't find a list of\n> > environment variables that affect meson behavior anywhere. I guess the\n> > authors don't believe in environment variable as a control mechanism.\n>\n> They indeed do not like them - but there is one in this\n> case: MESON_TESTTHREADS\n>\n> There's even documentation for it: https://mesonbuild.com/Unit-tests.html#parallelism\n\nI mean, I probably glanced at that page at some point, but it's hardly\nobvious that there's a mention of an environment variable buried\nsomewhere in the middle of the page. Most of the code you see looking\nat the page is Python, and the other environment variables mentioned\nseem to be ones that it sets, rather than ones that you can set. They\nreally ought to have a documentation page somewhere that lists all of\nthe environment variables that the user can set, and maybe another one\nthat lists all the ones that the tool itself sets before running\nsubprocesses. 
You can't expect people to navigate through every page\nof the documentation and read every word on the page carefully to find\nstuff like this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Oct 2023 10:10:45 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: controlling meson's parallelism (and some whining)" }, { "msg_contents": "On Fri, Oct 20, 2023 at 1:08 PM Tristan Partin <[email protected]> wrote:\n> You will see this in the 1.3.0 release which will be happening soon™️.\n\nCool, thanks!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Oct 2023 10:11:14 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: controlling meson's parallelism (and some whining)" } ]
[ { "msg_contents": "Hi All,\n\nCurrently, BackgroundWorker connected to a database by calling\nBackgroundWorkerInitializeConnection with username as NULL can be\nterminated by non-superuser with pg_signal_backend privilege. When the\nusername is NULL the bgworker process runs as superuser (which is\nexpected as per the documentation -\nhttps://www.postgresql.org/docs/current/bgworker.html ), but can the\nnon-superuser (with pg_signal_backend) terminate this superuser owned\nprocess?\nWe (Mahendrakar and Myself) think that this is a bug and proposing a\nfix that sets MyProc->roleId to BOOTSTRAP_SUPERUSERID, similar to\nInitializeSessionUserId, to prevent non-superuser terminating it.\n\nPlease let us know your comments.\n\nThanks,\nHemanth Sandrana", "msg_date": "Thu, 19 Oct 2023 23:19:09 +0530", "msg_from": "Hemanth Sandrana <[email protected]>", "msg_from_op": true, "msg_subject": "prevent non-superuser terminate bgworker running as superuser" }, { "msg_contents": "This seems like it should even be considered a security honestly.\n\nOn Thu, 19 Oct 2023, 19:49 Hemanth Sandrana, <[email protected]>\nwrote:\n\n> Hi All,\n>\n> Currently, BackgroundWorker connected to a database by calling\n> BackgroundWorkerInitializeConnection with username as NULL can be\n> terminated by non-superuser with pg_signal_backend privilege. When the\n> username is NULL the bgworker process runs as superuser (which is\n> expected as per the documentation -\n> https://www.postgresql.org/docs/current/bgworker.html ), but can the\n> non-superuser (with pg_signal_backend) terminate this superuser owned\n> process?\n> We (Mahendrakar and Myself) think that this is a bug and proposing a\n> fix that sets MyProc->roleId to BOOTSTRAP_SUPERUSERID, similar to\n> InitializeSessionUserId, to prevent non-superuser terminating it.\n>\n> Please let us know your comments.\n>\n> Thanks,\n> Hemanth Sandrana\n>\n\nThis seems like it should even be considered a security honestly. On Thu, 19 Oct 2023, 19:49 Hemanth Sandrana, <[email protected]> wrote:Hi All,\n\nCurrently, BackgroundWorker connected to a database by calling\nBackgroundWorkerInitializeConnection with username as NULL can be\nterminated by non-superuser with pg_signal_backend privilege. When the\nusername is NULL the bgworker process runs as superuser (which is\nexpected as per the documentation -\nhttps://www.postgresql.org/docs/current/bgworker.html ), but can the\nnon-superuser (with pg_signal_backend) terminate this superuser owned\nprocess?\nWe (Mahendrakar and Myself) think that this is a bug and proposing a\nfix that sets MyProc->roleId to BOOTSTRAP_SUPERUSERID, similar to\nInitializeSessionUserId, to prevent non-superuser terminating it.\n\nPlease let us know your comments.\n\nThanks,\nHemanth Sandrana", "msg_date": "Thu, 19 Oct 2023 22:47:19 +0200", "msg_from": "Jelte Fennema <[email protected]>", "msg_from_op": false, "msg_subject": "Re: prevent non-superuser terminate bgworker running as superuser" } ]
[ { "msg_contents": "Hi all\n\nThis patch uses a parallel computing optimization algorithm to improve crc32c computing performance on ARM. The algorithm comes from Intel whitepaper: crc-iscsi-polynomial-crc32-instruction-paper. Input data is divided into three equal-sized blocks.Three parallel blocks (crc0, crc1, crc2) for 1024 Bytes.One Block: 42(BLK_LENGTH) * 8(step length: crc32c_u64) bytes\n\nCrc32c unitest: https://gist.github.com/gaoxyt/138fd53ca1eead8102eeb9204067f7e4\nCrc32c benchmark: https://gist.github.com/gaoxyt/4506c10fc06b3501445e32c4257113e9\nIt gets ~2x speedup compared to linear Arm crc32c instructions.\n\nI'll create a CommitFests ticket for this submission.\nAny comments or feedback are welcome.\n\nIMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.", "msg_date": "Fri, 20 Oct 2023 07:08:58 +0000", "msg_from": "Xiang Gao <[email protected]>", "msg_from_op": true, "msg_subject": "CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Fri, Oct 20, 2023 at 07:08:58AM +0000, Xiang Gao wrote:\n> This patch uses a parallel computing optimization algorithm to\n> improve crc32c computing performance on ARM. The algorithm comes\n> from Intel whitepaper:\n> crc-iscsi-polynomial-crc32-instruction-paper. Input data is divided\n> into three equal-sized blocks.Three parallel blocks (crc0, crc1,\n> crc2) for 1024 Bytes.One Block: 42(BLK_LENGTH) * 8(step length:\n> crc32c_u64) bytes \n> \n> Crc32c unitest: https://gist.github.com/gaoxyt/138fd53ca1eead8102eeb9204067f7e4\n> Crc32c benchmark: https://gist.github.com/gaoxyt/4506c10fc06b3501445e32c4257113e9\n> It gets ~2x speedup compared to linear Arm crc32c instructions.\n\nInteresting. Could you attached to this thread the test files you\nused and the results obtained please? If this data gets deleted from\ngithub, then it would not be possible to refer back to what you did at\nthe related benchmark results.\n\nNote that your patch is forgetting about meson; it just patches\n./configure.\n--\nMichael", "msg_date": "Fri, 20 Oct 2023 17:18:56 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Fri, Oct 20, 2023 at 05:18:56PM +0900, Michael Paquier wrote:\n> On Fri, Oct 20, 2023 at 07:08:58AM +0000, Xiang Gao wrote:\n>> This patch uses a parallel computing optimization algorithm to\n>> improve crc32c computing performance on ARM. The algorithm comes\n>> from Intel whitepaper:\n>> crc-iscsi-polynomial-crc32-instruction-paper. Input data is divided\n>> into three equal-sized blocks.Three parallel blocks (crc0, crc1,\n>> crc2) for 1024 Bytes.One Block: 42(BLK_LENGTH) * 8(step length:\n>> crc32c_u64) bytes \n>> \n>> Crc32c unitest: https://gist.github.com/gaoxyt/138fd53ca1eead8102eeb9204067f7e4\n>> Crc32c benchmark: https://gist.github.com/gaoxyt/4506c10fc06b3501445e32c4257113e9\n>> It gets ~2x speedup compared to linear Arm crc32c instructions.\n> \n> Interesting. Could you attached to this thread the test files you\n> used and the results obtained please? 
If this data gets deleted from\n> github, then it would not be possible to refer back to what you did at\n> the related benchmark results.\n> \n> Note that your patch is forgetting about meson; it just patches\n> ./configure.\n\nI'm able to reproduce the speedup with the provided benchmark on an Apple\nM1 Pro (which appears to have the required instructions). There was almost\nno change for the 512-byte case, but there was a ~60% speedup for the\n4096-byte case.\n\nHowever, I couldn't produce any noticeable speedup with Heikki's pg_waldump\nbenchmark [0]. I haven't had a chance to dig further, unfortunately.\nAssuming I'm not doing something wrong, I don't think such a result should\nnecessarily disqualify this optimization, though.\n\n[0] https://postgr.es/m/ec487192-f6aa-509a-cacb-6642dad14209%40iki.fi\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 24 Oct 2023 16:09:54 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Tue, Oct 24, 2023 at 04:09:54PM -0500, Nathan Bossart wrote:\n> I'm able to reproduce the speedup with the provided benchmark on an Apple\n> M1 Pro (which appears to have the required instructions). There was almost\n> no change for the 512-byte case, but there was a ~60% speedup for the\n> 4096-byte case.\n> \n> However, I couldn't produce any noticeable speedup with Heikki's pg_waldump\n> benchmark [0]. I haven't had a chance to dig further, unfortunately.\n> Assuming I'm not doing something wrong, I don't think such a result should\n> necessarily disqualify this optimization, though.\n\nActually, since the pg_waldump benchmark likely only involves very small\nWAL records, it would make sense that there isn't much difference.\n*facepalm*\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 24 Oct 2023 16:18:06 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On 25/10/2023 00:18, Nathan Bossart wrote:\n> On Tue, Oct 24, 2023 at 04:09:54PM -0500, Nathan Bossart wrote:\n>> I'm able to reproduce the speedup with the provided benchmark on an Apple\n>> M1 Pro (which appears to have the required instructions). There was almost\n>> no change for the 512-byte case, but there was a ~60% speedup for the\n>> 4096-byte case.\n>>\n>> However, I couldn't produce any noticeable speedup with Heikki's pg_waldump\n>> benchmark [0]. I haven't had a chance to dig further, unfortunately.\n>> Assuming I'm not doing something wrong, I don't think such a result should\n>> necessarily disqualify this optimization, though.\n> \n> Actually, since the pg_waldump benchmark likely only involves very small\n> WAL records, it would make sense that there isn't much difference.\n> *facepalm*\n\nNo need to guess, pg_waldump -z will tell you what the record size is. 
\nAnd you can vary it by changing the checkpoint interval and/or pgbench \nscale factor: if you checkpoint frequently or if the database is larger, \nyou get more full-page images which makes the records larger on average, \nand vice versa.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 25 Oct 2023 00:37:45 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Wed, Oct 25, 2023 at 12:37:45AM +0300, Heikki Linnakangas wrote:\n> On 25/10/2023 00:18, Nathan Bossart wrote:\n>> Actually, since the pg_waldump benchmark likely only involves very small\n>> WAL records, it would make sense that there isn't much difference.\n>> *facepalm*\n> \n> No need to guess, pg_waldump -z will tell you what the record size is. And\n> you can vary it by changing the checkpoint interval and/or pgbench scale\n> factor: if you checkpoint frequently or if the database is larger, you get\n> more full-page images which makes the records larger on average, and vice\n> versa.\n\nIf you are looking at computing the CRC of records with arbitrary\nsizes, why not just generate a series with\npg_logical_emit_message() before doing a comparison with pg_waldump or\na custom replay loop to go through the records? At least it would\nmake the results more predictable.\n--\nMichael", "msg_date": "Wed, 25 Oct 2023 07:17:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Wed, Oct 25, 2023 at 07:17:55AM +0900, Michael Paquier wrote:\n> If you are looking at computing the CRC of records with arbitrary\n> sizes, why not just generate a series with\n> pg_logical_emit_message() before doing a comparison with pg_waldump or\n> a custom replay loop to go through the records? At least it would\n> make the results more predictable.\n\nI tried this. pg_waldump on 2 million ~8kB records took around 8.1 seconds\nwithout the patch and around 7.4 seconds with it (an 8% improvement).\npg_waldump on 1 million ~16kB records took around 3.2 seconds without the\npatch and around 2.4 seconds with it (a 25% improvement).\n\nGiven the performance characteristics and relative simplicity of the patch,\nI think this could be worth doing. I suspect we'll want to do something\nsimilar for x86, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 24 Oct 2023 20:45:39 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "Thanks for your suggestion, this is the modified patch and two test files.\n\n-----Original Message-----\nFrom: Michael Paquier <[email protected]>\nSent: Friday, October 20, 2023 4:19 PM\nTo: Xiang Gao <[email protected]>\nCc: [email protected]\nSubject: Re: CRC32C Parallel Computation Optimization on ARM\n\nOn Fri, Oct 20, 2023 at 07:08:58AM +0000, Xiang Gao wrote:\n> This patch uses a parallel computing optimization algorithm to improve\n> crc32c computing performance on ARM. The algorithm comes from Intel\n> whitepaper:\n> crc-iscsi-polynomial-crc32-instruction-paper. 
Input data is divided\n> into three equal-sized blocks. Three parallel blocks (crc0, crc1,\n> crc2) for 1024 Bytes. One Block: 42(BLK_LENGTH) * 8(step length:\n> crc32c_u64) bytes\n>\n> Crc32c unit test:\n> https://gist.github.com/gaoxyt/138fd53ca1eead8102eeb9204067f7e4\n> Crc32c benchmark:\n> https://gist.github.com/gaoxyt/4506c10fc06b3501445e32c4257113e9\n> It gets ~2x speedup compared to linear Arm crc32c instructions.\n\nInteresting. Could you attach to this thread the test files you used and the results obtained please? If this data gets deleted from github, then it would not be possible to refer back to what you did at the related benchmark results.\n\nNote that your patch is forgetting about meson; it just patches ./configure.\n--\nMichael", "msg_date": "Wed, 25 Oct 2023 03:38:20 +0000", "msg_from": "Xiang Gao <[email protected]>", "msg_from_op": true, "msg_subject": "RE: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "+pg_crc32c\n+pg_comp_crc32c_with_vmull_armv8(pg_crc32c crc, const void *data, size_t len)\n\nIt looks like most of this function is duplicated from\npg_comp_crc32c_armv8(). I understand that we probably need a separate\nfunction because of the runtime check, but perhaps we could create a common\nstatic inline helper function with a branch for when vmull_p64() can be\nused. 
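Something like this, roughly (an untested sketch with placeholder names,\njust to show the shape; the intrinsics come from <arm_acle.h>):\n\n\tstatic inline pg_crc32c\n\tpg_comp_crc32c_armv8_internal(pg_crc32c crc, const void *data,\n\t\t\t\t\t\t\t\t  size_t len, bool use_vmull)\n\t{\n\t\tconst unsigned char *p = data;\n\n\t\tif (use_vmull)\n\t\t{\n\t\t\t/* three-way blocked computation using vmull_p64() */\n\t\t}\n\n\t\t/* linear tail, and the whole input in the non-vmull case\n\t\t * (alignment handling elided for brevity) */\n\t\twhile (len >= 8)\n\t\t{\n\t\t\tcrc = __crc32cd(crc, *(const uint64 *) p);\n\t\t\tp += 8;\n\t\t\tlen -= 8;\n\t\t}\n\t\twhile (len > 0)\n\t\t{\n\t\t\tcrc = __crc32cb(crc, *p++);\n\t\t\tlen--;\n\t\t}\n\t\treturn crc;\n\t}\n\n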
Its callers would then just provide a boolean to indicate which\nbranch to take.\n\n+# Use ARM VMULL if available and ARM CRC32C intrinsic is available too.\n+if test x\"$USE_ARMV8_VMULL\" = x\"\" && (test x\"$USE_ARMV8_CRC32C\" = x\"1\" || test x\"$USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK\" = x\"1\"); then\n+ if test x\"$pgac_armv8_vmull_intrinsics\" = x\"yes\"; then\n+ USE_ARMV8_VMULL=1\n+ fi\n+fi\n\nHm. I wonder if we need to switch to a runtime check in some cases. For\nexample, what happens if the ARMv8 intrinsics used today are found with the\ndefault compiler flags, but vmull_p64() is only available if\n-march=armv8-a+crypto is added? It looks like the precedent is to use a\nruntime check if we need extra CFLAGS to produce code that uses the\nintrinsics.\n\nSeparately, I wonder if we should just always do runtime checks for the CRC\nstuff whenever we can produce code with the intrinsics, regardless of\nwhether we need extra CFLAGS. The check doesn't look terribly expensive,\nand it might allow us to simplify the code a bit (especially now that we\nsupport a few different architectures).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 25 Oct 2023 10:43:25 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Wed, 25 Oct, 2023 at 10:43:25 -0500, Nathan Bossart wrote:\n>+pg_crc32c\n>+pg_comp_crc32c_with_vmull_armv8(pg_crc32c crc, const void *data, size_t len)\n\n>It looks like most of this function is duplicated from\n>pg_comp_crc32c_armv8(). I understand that we probably need a separate\n>function because of the runtime check, but perhaps we could create a common\n>static inline helper function with a branch for when vmull_p64() can be\n>used. Its callers would then just provide a boolean to indicate which\n>branch to take.\n\nI have modified and remade the patch.\n\n>+# Use ARM VMULL if available and ARM CRC32C intrinsic is available too.\n>+if test x\"$USE_ARMV8_VMULL\" = x\"\" && test x\"$USE_ARMV8_VMULL_WITH_RUNTIME_CHECK\" = x\"\" && (test x\"$USE_ARMV8_CRC32C\" = x\"1\" || test x\"$USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK\" = x\"1\"); then\n+ if test x\"$pgac_armv8_vmull_intrinsics\" = x\"yes\" && test x\"$CFLAGS_VMULL\" = x\"\"; then\n+ USE_ARMV8_VMULL=1\n+ else\n+ if test x\"$pgac_armv8_vmull_intrinsics\" = x\"yes\"; then\n+ USE_ARMV8_VMULL_WITH_RUNTIME_CHECK=1\n+ fi\n+ fi\n+fi\n\n>Hm. I wonder if we need to switch to a runtime check in some cases. For\n>example, what happens if the ARMv8 intrinsics used today are found with the\n>default compiler flags, but vmull_p64() is only available if\n>-march=armv8-a+crypto is added? It looks like the precedent is to use a\n>runtime check if we need extra CFLAGS to produce code that uses the\n>intrinsics.\n\nWe consider that a runtime check needs to be done in any scenario.\nHere we only confirm that the compilation can be successful.\nA runtime check will be done when choosing which algorithm.\nYou can think of us as merging USE_ARMV8_VMULL and USE_ARMV8_VMULL_WITH_RUNTIME_CHECK into USE_ARMV8_VMULL.\n\n>Separately, I wonder if we should just always do runtime checks for the CRC\n>stuff whenever we can produce code with the intrinsics, regardless of\n>whether we need extra CFLAGS. The check doesn't look terribly expensive,\n>and it might allow us to simplify the code a bit (especially now that we\n>support a few different architectures).\n\nYes, I think so. USE_ARMV8_CRC32C only means that the compilation is successful,\nand it does not guarantee that it can run correctly on the local machine.\nTherefore, a runtime check is required during actual operation.\nBased on the principle of minimal changes, we plan to fix it in the next patch.\nIf the community agrees, we will continue to improve it later, such as merging x86 and arm code, etc.", "msg_date": "Thu, 26 Oct 2023 07:28:35 +0000", "msg_from": "Xiang Gao <[email protected]>", "msg_from_op": true, "msg_subject": "RE: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Tue, 24 Oct, 2023 20:45:39PM -0500, Nathan Bossart wrote:\n>I tried this. 
pg_waldump on 2 million ~8kB records took around 8.1 seconds\n>without the patch and around 7.4 seconds with it (an 8% improvement).\n>pg_waldump on 1 million ~16kB records took around 3.2 seconds without the\n>patch and around 2.4 seconds with it (a 25% improvement).\n\nCould you please provide details on how to generate these 8kB size or 16kB size data? Thanks!\n\n\n", "msg_date": "Thu, 26 Oct 2023 08:53:31 +0000", "msg_from": "Xiang Gao <[email protected]>", "msg_from_op": true, "msg_subject": "RE: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Thu, Oct 26, 2023 at 2:23 PM Xiang Gao <[email protected]> wrote:\n>\n> On Tue, 24 Oct, 2023 20:45:39PM -0500, Nathan Bossart wrote:\n> >I tried this. pg_waldump on 2 million ~8kB records took around 8.1 seconds\n> >without the patch and around 7.4 seconds with it (an 8% improvement).\n> >pg_waldump on 1 million ~16kB records took around 3.2 seconds without the\n> >patch and around 2.4 seconds with it (a 25% improvement).\n>\n> Could you please provide details on how to generate these 8kB size or 16kB size data? Thanks!\n\nHere's a script that I use to generate WAL records of various sizes,\nchange it to taste if useful:\n\nfor m in 16 64 256 1024 4096 8192 16384\ndo\n    echo \"Start of run with WAL size \\$m bytes at:\"\n    date\n    echo \"SELECT pg_logical_emit_message(true, 'mymessage',\nrepeat('d', \\$m));\" >> $JUMBO/scripts/dumbo\\$m.sql\n    for c in 1 2 4 8 16 32 64 128 256 512 768 1024 2048 4096\n    do\n        $PGWORKSPACE/pgbench -n postgres -c\\$c -j\\$c -T60 -f\n$JUMBO/scripts/dumbo\\$m.sql > $JUMBO/results/dumbo\\$m:\\$c.out\n    done\n    echo \"End of run with WAL size \\$m bytes at:\"\n    date\n    echo \"\\n\"\ndone\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 26 Oct 2023 14:36:44 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Thu, Oct 26, 2023 at 07:28:35AM +0000, Xiang Gao wrote:\n> On Wed, 25 Oct, 2023 at 10:43:25 -0500, Nathan Bossart wrote:\n>>+# Use ARM VMULL if available and ARM CRC32C intrinsic is available too.\n>>+if test x\"$USE_ARMV8_VMULL\" = x\"\" && (test x\"$USE_ARMV8_CRC32C\" = x\"1\" || test x\"$USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK\" = x\"1\"); then\n>>+ if test x\"$pgac_armv8_vmull_intrinsics\" = x\"yes\"; then\n>>+ USE_ARMV8_VMULL=1\n>>+ fi\n>>+fi\n> \n>>Hm. I wonder if we need to switch to a runtime check in some cases. For\n>>example, what happens if the ARMv8 intrinsics used today are found with the\n>>default compiler flags, but vmull_p64() is only available if\n>>-march=armv8-a+crypto is added? It looks like the precedent is to use a\n>>runtime check if we need extra CFLAGS to produce code that uses the\n>>intrinsics.\n> \n> We consider that a runtime check needs to be done in any scenario.\n> Here we only confirm that the compilation can be successful.\n> A runtime check will be done when choosing which algorithm.\n> You can think of us as merging USE_ARMV8_VMULL and USE_ARMV8_VMULL_WITH_RUNTIME_CHECK into USE_ARMV8_VMULL.\n\nOh. Looking again, I see that we are using a runtime check for ARM in all\ncases with this patch. If so, maybe we should just remove\nUSE_ARMV8_CRC32C_WITH_RUNTIME_CHECK in a prerequisite patch (and have\nUSE_ARMV8_CRC32C always do the runtime check). I suspect there are other\nopportunities to simplify things, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 26 Oct 2023 11:37:52 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Thu, Oct 26, 2023 at 08:53:31AM +0000, Xiang Gao wrote:\n> On Tue, 24 Oct, 2023 20:45:39PM -0500, Nathan Bossart wrote:\n>>I tried this. 
pg_waldump on 2 million ~8kB records took around 8.1 seconds\n>>without the patch and around 7.4 seconds with it (an 8% improvement).\n>>pg_waldump on 1 million ~16kB records took around 3.2 seconds without the\n>>patch and around 2.4 seconds with it (a 25% improvement).\n> \n> Could you please provide details on how to generate these 8kB size or 16kB size data? Thanks!\n\nI did something like\n\n\tdo $$\n\tbegin\n\t\tfor i in 1..1000000\n\t\tloop\n\t\t\tperform pg_logical_emit_message(false, 'test', repeat('0123456789', 800));\n\t\tend loop;\n\tend;\n\t$$;\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 26 Oct 2023 15:18:13 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Thu, 26 Oct, 2023 11:37:52AM -0500, Nathan Bossart wrote:\n>> We consider that a runtime check needs to be done in any scenario.\n>> Here we only confirm that the compilation can be successful.\n> >A runtime check will be done when choosing which algorithm.\n> >You can think of us as merging USE_ARMV8_VMULL and USE_ARMV8_VMULL_WITH_RUNTIME_CHECK into USE_ARMV8_VMULL.\n\n>Oh. Looking again, I see that we are using a runtime check for ARM in all\n>cases with this patch. If so, maybe we should just remove\n>USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK in a prerequisite patch (and have\n>USE_ARMV8_CRC32C always do the runtime check). I suspect there are other\n>opportunities to simplify things, too.\n\nYes, I have removed USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK in this patch.", "msg_date": "Fri, 27 Oct 2023 07:01:10 +0000", "msg_from": "Xiang Gao <[email protected]>", "msg_from_op": true, "msg_subject": "RE: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Fri, Oct 27, 2023 at 07:01:10AM +0000, Xiang Gao wrote:\n> On Thu, 26 Oct, 2023 11:37:52AM -0500, Nathan Bossart wrote:\n>>> We consider that a runtime check needs to be done in any scenario.\n>>> Here we only confirm that the compilation can be successful.\n>> >A runtime check will be done when choosing which algorithm.\n>> >You can think of us as merging USE_ARMV8_VMULL and USE_ARMV8_VMULL_WITH_RUNTIME_CHECK into USE_ARMV8_VMULL.\n> \n>>Oh. Looking again, I see that we are using a runtime check for ARM in all\n>>cases with this patch. If so, maybe we should just remove\n>>USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK in a prerequisite patch (and have\n>>USE_ARMV8_CRC32C always do the runtime check). I suspect there are other\n>>opportunities to simplify things, too.\n> \n> Yes, I have removed USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK in this patch.\n\nThanks. I went ahead and split this prerequisite part out to a separate\nthread [0] since it's sort-of unrelated to your proposal here. 
It's not\nreally a prerequisite, but I do think it will simplify things a bit.\n\n[0] https://postgr.es/m/20231030161706.GA3011%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 30 Oct 2023 11:21:43 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Mon, Oct 30, 2023 at 11:21:43AM -0500, Nathan Bossart wrote:\n> On Fri, Oct 27, 2023 at 07:01:10AM +0000, Xiang Gao wrote:\n>> On Thu, 26 Oct, 2023 11:37:52AM -0500, Nathan Bossart wrote:\n>>>> We consider that a runtime check needs to be done in any scenario.\n>>>> Here we only confirm that the compilation can be successful.\n>>> >A runtime check will be done when choosing which algorithm.\n>>> >You can think of us as merging USE_ARMV8_VMULL and USE_ARMV8_VMULL_WITH_RUNTIME_CHECK into USE_ARMV8_VMULL.\n>> \n>>>Oh. Looking again, I see that we are using a runtime check for ARM in all\n>>>cases with this patch. If so, maybe we should just remove\n>>>USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK in a prerequisite patch (and have\n>>>USE_ARMV8_CRC32C always do the runtime check). I suspect there are other\n>>>opportunities to simplify things, too.\n>> \n>> Yes, I have removed USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK in this patch.\n> \n> Thanks. I went ahead and split this prerequisite part out to a separate\n> thread [0] since it's sort-of unrelated to your proposal here. It's not\n> really a prerequisite, but I do think it will simplify things a bit.\n\nPer the other thread [0], we should try to avoid the runtime check when\npossible, as it seems to produce a small regression. This means that if\nthe ARMv8 CRC instructions are found with the default compiler flags, we\ncan only use vmull_p64() if it can also be used with the default flags.\nOtherwise, we can just do the runtime check.\n\n[0] https://postgr.es/m/2620794.1698783160%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 31 Oct 2023 15:48:21 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Tue, 31 Oct 2023 15:48:21PM -0500, Nathan Bossart wrote:\n>> Thanks. I went ahead and split this prerequisite part out to a separate\n>> thread [0] since it's sort-of unrelated to your proposal here. It's not\n>> really a prerequisite, but I do think it will simplify things a bit.\n\n>Per the other thread [0], we should try to avoid the runtime check when\n>possible, as it seems to produce a small regression. This means that if\n>the ARMv8 CRC instructions are found with the default compiler flags, we\n>can only use vmull_p64() if it can also be used with the default flags.\n>Otherwise, we can just do the runtime check.\n\n>[0] https://postgr.es/m/2620794.1698783160%40sss.pgh.pa.us\n\nAfter reading the discussion, I understand that in order to avoid performance\nregression in some instances, we need to try our best to avoid runtime checks.\nI don't know if I understand it correctly.\nIf so, we need to check whether to use the ARM CRC32C and VMULL instruction\ndirectly or with runtime check. 
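For instance, something like this is how I picture the combinations\n(hypothetical spellings, just to illustrate the dispatch cases):\n\n\t#if defined(USE_ARMV8_CRC32C) && defined(USE_ARMV8_VMULL)\n\t\t/* both direct: no dispatch needed at all */\n\t#elif defined(USE_ARMV8_CRC32C) && defined(USE_ARMV8_VMULL_WITH_RUNTIME_CHECK)\n\t\t/* CRC direct, vmull_p64() behind a runtime check */\n\t#elif defined(USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK)\n\t\t/* everything behind runtime checks */\n\t#endif\n\n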
There will be many scenarios here and the code\nwill be more complicated.\nCould you please give me some suggestions about how to refine this patch?\nThanks very much!\n\n\n", "msg_date": "Thu, 2 Nov 2023 06:17:20 +0000", "msg_from": "Xiang Gao <[email protected]>", "msg_from_op": true, "msg_subject": "RE: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Thu, Nov 02, 2023 at 06:17:20AM +0000, Xiang Gao wrote:\n> After reading the discussion, I understand that in order to avoid performance\n> regression in some instances, we need to try our best to avoid runtime checks.\n> I don't know if I understand it correctly.\n\nThe idea is that we don't want to start forcing runtime checks on builds\nwhere we aren't already doing runtime checks. IOW if the compiler can use\nthe ARMv8 CRC instructions with the default compiler flags, we should only\nuse vmull_p64() if it can also be used with the default compiler flags. I\nsuspect this limitation sounds worse than it actually is in practice. The\nvast majority of the buildfarm uses runtime checks, and at least some of\nthe platforms that don't, such as the Apple M-series machines, seem to\ninclude +crypto by default.\n\nOf course, if a compiler picks up +crc but not +crypto in its defaults, we\ncould lose the vmull_p64() optimization on that platform. But as noted in\nthe other thread, we can revisit if these kinds of hypothetical situations\nbecome reality.\n\n> Could you please give me some suggestions about how to refine this patch?\n\nOf course. I think we'll ultimately want to independently check for the\navailability of the new instruction like we do for the other sets of\nintrinsics:\n\n\tPGAC_ARMV8_VMULL_INTRINSICS([])\n\tif test x\"$pgac_armv8_vmull_intrinsics\" != x\"yes\"; then\n\t\tPGAC_ARMV8_VMULL_INTRINSICS([-march=armv8-a+crypto])\n\tfi\n\nMy current thinking is that we'll want to add USE_ARMV8_VMULL and\nUSE_ARMV8_VMULL_WITH_RUNTIME_CHECK and use those to decide exactly what to\ncompile. I'll admit I haven't fully thought through every detail yet, but\nI'm cautiously optimistic that we can avoid too much complexity in the\nautoconf/meson scripts.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 2 Nov 2023 09:35:50 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Date: Thu, 2 Nov 2023 09:35:50AM -0500, Nathan Bossart wrote:\n\n>On Thu, Nov 02, 2023 at 06:17:20AM +0000, Xiang Gao wrote:\n>> After reading the discussion, I understand that in order to avoid performance\n>> regression in some instances, we need to try our best to avoid runtime checks.\n> >I don't know if I understand it correctly.\n\n>The idea is that we don't want to start forcing runtime checks on builds\n>where we aren't already doing runtime checks. IOW if the compiler can use\n>the ARMv8 CRC instructions with the default compiler flags, we should only\n>use vmull_p64() if it can also be used with the default compiler flags. I\n>suspect this limitation sounds worse than it actually is in practice. 
The\n>vast majority of the buildfarm uses runtime checks, and at least some of\n>the platforms that don't, such as the Apple M-series machines, seem to\n>include +crypto by default.\n\n>Of course, if a compiler picks up +crc but not +crypto in its defaults, we\n>could lose the vmull_p64() optimization on that platform. But as noted in\n>the other thread, we can revisit if these kinds of hypothetical situations\n>become reality.\n\n>> Could you please give me some suggestions about how to refine this patch?\n\n>Of course. I think we'll ultimately want to independently check for the\n>availability of the new instruction like we do for the other sets of\n>intrinsics:\n>\n> PGAC_ARMV8_VMULL_INTRINSICS([])\n> if test x\"$pgac_armv8_vmull_intrinsics\" != x\"yes\"; then\n> PGAC_ARMV8_VMULL_INTRINSICS([-march=armv8-a+crypto])\n> fi\n>\n>My current thinking is that we'll want to add USE_ARMV8_VMULL and\n>USE_ARMV8_VMULL_WITH_RUNTIME_CHECK and use those to decide exactly what to\n>compile. I'll admit I haven't fully thought through every detail yet, but\n>I'm cautiously optimistic that we can avoid too much complexity in the\n>autoconf/meson scripts.\n\nThank you so much!\nThis is the newest patch. I think the code for which crc algorithm to choose is a bit complicated. Maybe we can just use USE_ARMV8_VMULL only, and do runtime checks on the vmull_p64 instruction at all times. This will not affect the existing builds, because this is a new instruction and new logic. In addition, it can also reduce the complexity of the code.\nVery much looking forward to receiving your suggestions, thank you!", "msg_date": "Fri, 3 Nov 2023 10:46:57 +0000", "msg_from": "Xiang Gao <[email protected]>", "msg_from_op": true, "msg_subject": "RE: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Fri, Nov 03, 2023 at 10:46:57AM +0000, Xiang Gao wrote:\n> On Date: Thu, 2 Nov 2023 09:35:50AM -0500, Nathan Bossart wrote:\n>> The idea is that we don't want to start forcing runtime checks on builds\n>> where we aren't already doing runtime checks. IOW if the compiler can use\n>> the ARMv8 CRC instructions with the default compiler flags, we should only\n>> use vmull_p64() if it can also be used with the default compiler flags.\n>\n> This is the newest patch. I think the code for which crc algorithm to\n> choose is a bit complicated. Maybe we can just use USE_ARMV8_VMULL only,\n> and do runtime checks on the vmull_p64 instruction at all times. This\n> will not affect the existing builds, because this is a new instruction\n> and new logic. In addition, it can also reduce the complexity of the\n> code.\n\nI don't think we can. AFAICT a runtime check necessitates a function\npointer or a branch, both of which incurred an impact on performance in my\ntests. 
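\n\nTo spell out the difference I'm worried about (sketch only -- the usual\nfunction-pointer shape from pg_crc32c.h versus a direct call):\n\n\t/* runtime dispatch: every caller pays for the indirect call */\n\textern pg_crc32c (*pg_comp_crc32c) (pg_crc32c crc, const void *data, size_t len);\n\t#define COMP_CRC32C(crc, data, len) \\\n\t\t((crc) = pg_comp_crc32c((crc), (data), (len)))\n\n\t/* versus no dispatch: the compiler can call, and inline, the\n\t * target directly */\n\t#define COMP_CRC32C(crc, data, len) \\\n\t\t((crc) = pg_comp_crc32c_armv8((crc), (data), (len)))\n\n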
It looks like this latest patch still does the runtime check even\nfor the USE_ARMV8_CRC32C case.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 6 Nov 2023 13:16:13 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Mon, 6 Nov 2023 13:16:13PM -0600, Nathan Bossart wrote:\n>>> The idea is that we don't want to start forcing runtime checks on builds\n>>>where we aren't already doing runtime checks. IOW if the compiler can use\n>>>the ARMv8 CRC instructions with the default compiler flags, we should only\n>>>use vmull_p64() if it can also be used with the default compiler flags.\n>>\n>>This is the newest patch, I think the code for which crc algorithm to\n>>choose is a bit complicated. Maybe we can just use USE_ARMV8_VMULL only,\n>>and do runtime checks on the vmull_p64 instruction at all times. This\n>>will not affect the existing builds, because this is a new instruction\n>>and new logic. In addition, it can also reduce the complexity of the\n>>code.\n\n>I don't think we can. AFAICT a runtime check necessitates a function\n>pointer or a branch, both of which incurred an impact on performance in my\n>tests. It looks like this latest patch still does the runtime check even\n>for the USE_ARMV8_CRC32C case.\n\nI think I understand what you mean, this is the latest patch. Thank you!", "msg_date": "Tue, 7 Nov 2023 08:05:45 +0000", "msg_from": "Xiang Gao <[email protected]>", "msg_from_op": true, "msg_subject": "RE: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Tue, Nov 07, 2023 at 08:05:45AM +0000, Xiang Gao wrote:\n> I think I understand what you mean, this is the latest patch. Thank you!\n\nThanks for the new patch.\n\n+# PGAC_ARMV8_VMULL_INTRINSICS\n+# ----------------------------\n+# Check if the compiler supports the vmull_p64\n+# intrinsic functions. These instructions\n+# were first introduced in ARMv8 crypto Extension.\n\nI wonder if it'd be better to call this PGAC_ARMV8_CRYPTO_INTRINSICS since\nthis check seems to indicate the presence of +crypto. Presumably there are\nother instructions in this extension that could be used elsewhere, in which\ncase we could reuse this.\n\n+# Use ARM VMULL if available and ARM CRC32C intrinsic is available too.\n+if test x\"$USE_ARMV8_VMULL\" = x\"\" && test x\"$USE_ARMV8_VMULL_WITH_RUNTIME_CHECK\" = x\"\" && (test x\"$USE_ARMV8_CRC32C\" = x\"1\" || test x\"$USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK\" = x\"1\"); then\n+ if test x\"$pgac_armv8_vmull_intrinsics\" = x\"yes\" && test x\"$CFLAGS_VMULL\" = x\"\"; then\n+ USE_ARMV8_VMULL=1\n+ else\n+ if test x\"$pgac_armv8_vmull_intrinsics\" = x\"yes\"; then\n+ USE_ARMV8_VMULL_WITH_RUNTIME_CHECK=1\n+ fi\n+ fi\n+fi\n\nI'm not sure I see the need to check USE_ARMV8_CRC32C* when setting these.\nCouldn't we set them solely on the results of our\nPGAC_ARMV8_VMULL_INTRINSICS check? 
It looks like this is what you are\ndoing in meson.build already.\n\n+extern pg_crc32c pg_comp_crc32c_with_vmull_armv8(pg_crc32c crc, const void *data, size_t len);\n\nnitpick: Maybe pg_comp_crc32c_armv8_parallel()?\n\n-# all versions of pg_crc32c_armv8.o need CFLAGS_CRC\n-pg_crc32c_armv8.o: CFLAGS+=$(CFLAGS_CRC)\n-pg_crc32c_armv8_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n-pg_crc32c_armv8_srv.o: CFLAGS+=$(CFLAGS_CRC)\n\nWhy are these lines deleted?\n\n- ['pg_crc32c_armv8', 'USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK', 'crc'],\n+ ['pg_crc32c_armv8', 'USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK'],\n\nWhat is the purpose of this change?\n\n+__attribute__((target(\"+crc+crypto\")))\n\nI'm not sure we can assume that all compilers will understand this, and I'm\nnot sure we need it.\n\n+\tif (use_vmull)\n+\t{\n+/*\n+ * Crc32c parallel computation Input data is divided into three\n+ * equal-sized blocks. Block length : 42 words(42 * 8 bytes).\n+ * CRC0: 0 ~ 41 * 8,\n+ * CRC1: 42 * 8 ~ (42 * 2 - 1) * 8,\n+ * CRC2: 42 * 2 * 8 ~ (42 * 3 - 1) * 8.\n+ */\n\nShouldn't we surround this with #ifdefs for USE_ARMV8_VMULL*?\n\n \tif (pg_crc32c_armv8_available())\n+\t{\n+#if defined(USE_ARMV8_VMULL)\n+\t\tpg_comp_crc32c = pg_comp_crc32c_with_vmull_armv8;\n+#elif defined(USE_ARMV8_VMULL_WITH_RUNTIME_CHECK)\n+\t\tif (pg_vmull_armv8_available())\n+\t\t{\n+\t\t\tpg_comp_crc32c = pg_comp_crc32c_with_vmull_armv8;\n+\t\t}\n+\t\telse\n+\t\t{\n+\t\t\tpg_comp_crc32c = pg_comp_crc32c_armv8;\n+\t\t}\n+#else\n \t\tpg_comp_crc32c = pg_comp_crc32c_armv8;\n+#endif\n+\t}\n\nIMO it'd be better to move the #ifdefs into the functions so that we can\nsimplify this to something like\n\n\tif (pg_crc32c_armv8_available())\n\t{\n\t\tif (pg_crc32c_armv8_crypto_available())\n\t\t\tpg_comp_crc32c = pg_comp_crc32c_armv8_parallel;\n\t\telse\n\t\t\tpg_comp_crc32c = pg_comp_crc32c_armv8;\n\t}\n\telse\n\t\tpg_comp_crc32c = pg_comp_crc32c_sb8;\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 10 Nov 2023 10:36:08 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Date: Fri, 10 Nov 2023 10:36:08AM -0600, Nathan Bossart wrote:\n\n>-# all versions of pg_crc32c_armv8.o need CFLAGS_CRC\n>-pg_crc32c_armv8.o: CFLAGS+=$(CFLAGS_CRC)\n>-pg_crc32c_armv8_shlib.o: CFLAGS+=$(CFLAGS_CRC)\n>-pg_crc32c_armv8_srv.o: CFLAGS+=$(CFLAGS_CRC)\n>\n>Why are these lines deleted?\n>\n>- ['pg_crc32c_armv8', 'USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK', 'crc'],\n>+ ['pg_crc32c_armv8', 'USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK'],\n>\n>What is the purpose of this change?\n\nI added `__attribute__((target(\"+crc+crypto\")))` before the functions that require the crc and crypto extensions, so those lines are removed here.\n\n>+__attribute__((target(\"+crc+crypto\")))\n>\n>I'm not sure we can assume that all compilers will understand this, and I'm\n>not sure we need it.\n\nCFLAGS_CRC is \"-march=armv8-a+crc\". Generally, if -march is supported, __attribute__ is also supported.\nIn addition, I am not sure about the source file pg_crc32c_armv8.c, if CFLAGS_CRC and CFLAGS_CRYPTO are needed at the same time, how should it be expressed in the makefile?\n
", "msg_date": "Wed, 22 Nov 2023 10:16:44 +0000", "msg_from": "Xiang Gao <[email protected]>", "msg_from_op": true, "msg_subject": "RE: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Wed, Nov 22, 2023 at 10:16:44AM +0000, Xiang Gao wrote:\n> On Date: Fri, 10 Nov 2023 10:36:08AM -0600, Nathan Bossart wrote:\n>>+__attribute__((target(\"+crc+crypto\")))\n>>\n>>I'm not sure we can assume that all compilers will understand this, and I'm\n>>not sure we need it.\n> \n> CFLAGS_CRC is \"-march=armv8-a+crc\". Generally, if -march is supported,\n> __attribute__ is also supported.\n\nIMHO we should stick with CFLAGS_CRC for now. If we want to switch to\nusing __attribute__((target(\"...\"))), I think we should do so in a separate\npatch. We are cautious about checking the availability of an attribute\nbefore using it (see c.h), and IIUC we'd need to verify that this works for\nall supported compilers that can target ARM before removing CFLAGS_CRC\nhere.\n\n> In addition, I am not sure about the source file pg_crc32c_armv8.c, if\n> CFLAGS_CRC and CFLAGS_CRYPTO are needed at the same time, how should it\n> be expressed in the makefile?\n\npg_crc32c_armv8.o: CFLAGS += ${CFLAGS_CRC} ${CFLAGS_CRYPTO}\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 22 Nov 2023 15:06:18 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Date: Wed, 22 Nov 2023 15:06:18PM -0600, Nathan Bossart wrote:\n\n>> On Date: Fri, 10 Nov 2023 10:36:08AM -0600, Nathan Bossart wrote:\n>>>+__attribute__((target(\"+crc+crypto\")))\n>>>\n>>>I'm not sure we can assume that all compilers will understand this, and I'm\n>>>not sure we need it.\n>>\n>> CFLAGS_CRC is \"-march=armv8-a+crc\". Generally, if -march is supported,\n>> __attribute__ is also supported.\n\n>IMHO we should stick with CFLAGS_CRC for now. If we want to switch to\n>using __attribute__((target(\"...\"))), I think we should do so in a separate\n>patch. We are cautious about checking the availability of an attribute\n>before using it (see c.h), and IIUC we'd need to verify that this works for\n>all supported compilers that can target ARM before removing CFLAGS_CRC\n>here.\n\nI agree.\n\n>> In addition, I am not sure about the source file pg_crc32c_armv8.c, if\n>> CFLAGS_CRC and CFLAGS_CRYPTO are needed at the same time, how should it\n>> be expressed in the makefile?\n>\n>pg_crc32c_armv8.o: CFLAGS += ${CFLAGS_CRC} ${CFLAGS_CRYPTO}\n\nIt does not work correctly. CFLAGS ='-march=armv8-a+crc, -march=armv8-a+crypto', what actually works is '-march=armv8-a+crypto'.\n\nWe set a new variable CFLAGS_CRC_CRYPTO in configure.ac,\n\nif test x\"$CFLAGS_CRC\" != x\"\" || test x\"$CFLAGS_CRYPTO\" != x\"\"; then\n CFLAGS_CRC_CRYPTO='-march=armv8-a+crc+crypto'\nfi\n\nthen in makefile,\npg_crc32c_armv8.o: CFLAGS += ${CFLAGS_CRC_CRYPTO}\n\nAnd same thing in meson.build. 
In src/port/meson.build,\n\nreplace_funcs_pos = [\n # arm / aarch64\n ['pg_crc32c_armv8', 'USE_ARMV8_CRC32C', 'crc_crypto'],\n ['pg_crc32c_armv8', 'USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK', 'crc_crypto'],\n ['pg_crc32c_armv8_choose', 'USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK', 'crc_crypto'],\n ['pg_crc32c_sb8', 'USE_ARMV8_CRC32C_WITH_RUNTIME_CHECK'],\n]\n'pg_crc32c_armv8' also needs 'crc_crypto' when 'USE_ARMV8_CRC32C'.\n\nLooking forward to your feedback, thanks!\n\n\n", "msg_date": "Thu, 23 Nov 2023 08:05:26 +0000", "msg_from": "Xiang Gao <[email protected]>", "msg_from_op": true, "msg_subject": "RE: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Thu, Nov 23, 2023 at 08:05:26AM +0000, Xiang Gao wrote:\n> On Date: Wed, 22 Nov 2023 15:06:18PM -0600, Nathan Bossart wrote:\n>>pg_crc32c_armv8.o: CFLAGS += ${CFLAGS_CRC} ${CFLAGS_CRYPTO}\n> \n> It does not work correctly. CFLAGS ='-march=armv8-a+crc,\n> -march=armv8-a+crypto', what actually works is '-march=armv8-a+crypto'.\n> \n> We set a new variable CFLAGS_CRC_CRYPTO in configure.ac,\n> \n> if test x\"$CFLAGS_CRC\" != x\"\" || test x\"$CFLAGS_CRYPTO\" != x\"\"; then\n> CFLAGS_CRC_CRYPTO='-march=armv8-a+crc+crypto'\n> fi\n> \n> then in makefile,\n> pg_crc32c_armv8.o: CFLAGS += ${CFLAGS_CRC_CRYPTO}\n\nAh, I see. We need to append +crc and/or +crypto based on what the\ncompiler understands. That seems fine to me...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 30 Nov 2023 14:54:26 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Date: Thu, 30 Nov 2023 14:54:26PM -0600, Nathan Bossart wrote:\n>>pg_crc32c_armv8.o: CFLAGS += ${CFLAGS_CRC} ${CFLAGS_CRYPTO}\n>>\n>> It does not work correctly. CFLAGS ='-march=armv8-a+crc,\n>> -march=armv8-a+crypto', what actually works is '-march=armv8-a+crypto'.\n>>\n>> We set a new variable CFLAGS_CRC_CRYPTO in configure.ac,\n>>\n>> if test x\"$CFLAGS_CRC\" != x\"\" || test x\"$CFLAGS_CRYPTO\" != x\"\"; then\n>> CFLAGS_CRC_CRYPTO='-march=armv8-a+crc+crypto'\n>> fi\n>>\n>> then in makefile,\n>> pg_crc32c_armv8.o: CFLAGS += ${CFLAGS_CRC_CRYPTO}\n>\n>Ah, I see. We need to append +crc and/or +crypto based on what the\n>compiler understands. That seems fine to me...\n\nThis is the latest patch. Looking forward to your feedback, thanks!", "msg_date": "Mon, 4 Dec 2023 07:27:01 +0000", "msg_from": "Xiang Gao <[email protected]>", "msg_from_op": true, "msg_subject": "RE: CRC32C Parallel Computation Optimization on ARM" }, { "msg_contents": "On Mon, Dec 04, 2023 at 07:27:01AM +0000, Xiang Gao wrote:\n> This is the latest patch. Looking forward to your feedback, thanks!\n\nThanks for the new patch. 
I am hoping to spend much more time on this in\nthe near future...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 4 Dec 2023 22:18:09 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CRC32C Parallel Computation Optimization on ARM" } ]
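A closing illustration for this thread: when vmull_p64() support is compiled in but must be verified at runtime, the probe itself is cheap. On aarch64 Linux the kernel reports the crypto (PMULL) extension through the hwcap bits, which is one way a pg_vmull_armv8_available()-style check can be implemented. This is a sketch under the assumption of a Linux target; FreeBSD (elf_aux_info()) and macOS (sysctlbyname()) would need their own paths, and the function name follows the patch discussed above rather than any committed code.

	#include <stdbool.h>
	#if defined(__linux__)
	#include <sys/auxv.h>			/* getauxval(), AT_HWCAP */
	#include <asm/hwcap.h>			/* HWCAP_PMULL on aarch64 */
	#endif

	/*
	 * Report whether the polynomial-multiply (PMULL) instructions that
	 * vmull_p64() compiles to are available on this CPU.
	 */
	static bool
	pg_vmull_armv8_available(void)
	{
	#if defined(__linux__) && defined(HWCAP_PMULL)
		return (getauxval(AT_HWCAP) & HWCAP_PMULL) != 0;
	#else
		return false;
	#endif
	}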
[ { "msg_contents": "I noticed $subject with the query below.\n\nset enable_memoize to off;\n\nexplain (analyze, costs off)\nselect * from tenk1 t1 left join lateral\n (select t1.two as t1two, * from tenk1 t2 offset 0) s\non t1.two = s.two;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Nested Loop Left Join (actual time=0.050..59578.053 rows=50000000 loops=1)\n -> Seq Scan on tenk1 t1 (actual time=0.027..2.703 rows=10000 loops=1)\n -> Subquery Scan on s (actual time=0.004..4.819 rows=5000 loops=10000)\n Filter: (t1.two = s.two)\n Rows Removed by Filter: 5000\n -> Seq Scan on tenk1 t2 (actual time=0.002..3.834 rows=10000\nloops=10000)\n Planning Time: 0.666 ms\n Execution Time: 60937.899 ms\n(8 rows)\n\nset enable_memoize to on;\n\nexplain (analyze, costs off)\nselect * from tenk1 t1 left join lateral\n (select t1.two as t1two, * from tenk1 t2 offset 0) s\non t1.two = s.two;\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Nested Loop Left Join (actual time=0.061..122684.607 rows=50000000 loops=1)\n -> Seq Scan on tenk1 t1 (actual time=0.026..3.367 rows=10000 loops=1)\n -> Memoize (actual time=0.011..9.821 rows=5000 loops=10000)\n Cache Key: t1.two, t1.two\n Cache Mode: binary\n Hits: 0 Misses: 10000 Evictions: 9999 Overflows: 0 Memory\nUsage: 1368kB\n -> Subquery Scan on s (actual time=0.008..5.188 rows=5000\nloops=10000)\n Filter: (t1.two = s.two)\n Rows Removed by Filter: 5000\n -> Seq Scan on tenk1 t2 (actual time=0.004..4.081\nrows=10000 loops=10000)\n Planning Time: 0.607 ms\n Execution Time: 124431.388 ms\n(12 rows)\n\nThe execution time (best of 3) is 124431.388 VS 60937.899 with and\nwithout memoize.\n\nThe Memoize runtime stats 'Hits: 0 Misses: 10000 Evictions: 9999'\nseems suspicious to me, so I've looked into it a little bit, and found\nthat the MemoizeState's keyparamids and its outerPlan's chgParam are\nalways different, and that makes us have to purge the entire cache each\ntime we rescan the memoize node.\n\nBut why are they always different? Well, for the query above, we have\ntwo NestLoopParam nodes, one (with paramno 1) is created when we replace\nouter-relation Vars in the scan qual 't1.two = s.two', the other one\n(with paramno 0) is added from the subquery's subplan_params, which was\ncreated when we replaced uplevel vars with Param nodes for the subquery.\nThat is to say, the chgParam would be {0, 1}.\n\nWhen it comes to replace outer-relation Vars in the memoize keys, the\ntwo 't1.two' Vars are both replaced with the NestLoopParam with paramno\n1, because it is the first NLP we see in root->curOuterParams that is\nequal to the Vars in memoize keys. That is to say, the memoize node's\nkeyparamids is {1}.\n\nI haven't thought thoroughly about the fix yet. But one way I'm\nthinking is that in create_subqueryscan_plan() we can first add the\nsubquery's subplan_params to root->curOuterParams, and then replace\nouter-relation Vars in scan_clauses afterwards. That can make us be\nable to share the same PARAM_EXEC slot for the same Var that both\nbelongs to the subquery's uplevel vars and to the NestLoop's\nouter-relation vars. 
To be concrete, something like attached.\n\nWith this change the same query runs much faster and the Memoize runtime\nstats looks more normal.\n\nexplain (analyze, costs off)\nselect * from tenk1 t1 left join lateral\n    (select t1.two as t1two, * from tenk1 t2 offset 0) s\non t1.two = s.two;\n                                      QUERY PLAN\n\n--------------------------------------------------------------------------------------\n Nested Loop Left Join (actual time=0.063..21177.476 rows=50000000 loops=1)\n   ->  Seq Scan on tenk1 t1 (actual time=0.025..5.415 rows=10000 loops=1)\n   ->  Memoize (actual time=0.001..0.234 rows=5000 loops=10000)\n         Cache Key: t1.two, t1.two\n         Cache Mode: binary\n         Hits: 9998  Misses: 2  Evictions: 0  Overflows: 0  Memory Usage:\n2735kB\n         ->  Subquery Scan on s (actual time=0.009..5.169 rows=5000 loops=2)\n               Filter: (t1.two = s.two)\n               Rows Removed by Filter: 5000\n               ->  Seq Scan on tenk1 t2 (actual time=0.006..4.050\nrows=10000 loops=2)\n Planning Time: 0.593 ms\n Execution Time: 22486.621 ms\n(12 rows)\n\nAny thoughts?\n\nThanks\nRichard", "msg_date": "Fri, 20 Oct 2023 18:40:38 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "A performance issue with Memoize" }, { "msg_contents": "On Fri, Oct 20, 2023 at 1:01 PM Richard Guo <[email protected]>\nwrote:\n\n> I noticed $subject with the query below.\n>\n> set enable_memoize to off;\n>\n> explain (analyze, costs off)\n> select * from tenk1 t1 left join lateral\n> (select t1.two as t1two, * from tenk1 t2 offset 0) s\n> on t1.two = s.two;\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------\n> Nested Loop Left Join (actual time=0.050..59578.053 rows=50000000 loops=1)\n> -> Seq Scan on tenk1 t1 (actual time=0.027..2.703 rows=10000 loops=1)\n> -> Subquery Scan on s (actual time=0.004..4.819 rows=5000 loops=10000)\n> Filter: (t1.two = s.two)\n> Rows Removed by Filter: 5000\n> -> Seq Scan on tenk1 t2 (actual time=0.002..3.834 rows=10000\n> loops=10000)\n> Planning Time: 0.666 ms\n> Execution Time: 60937.899 ms\n> (8 rows)\n>\n> set enable_memoize to on;\n>\n> explain (analyze, costs off)\n> select * from tenk1 t1 left join lateral\n> (select t1.two as t1two, * from tenk1 t2 offset 0) s\n> on t1.two = s.two;\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------\n> Nested Loop Left Join (actual time=0.061..122684.607 rows=50000000\n> loops=1)\n> -> Seq Scan on tenk1 t1 (actual time=0.026..3.367 rows=10000 loops=1)\n> -> Memoize (actual time=0.011..9.821 rows=5000 loops=10000)\n> Cache Key: t1.two, t1.two\n> Cache Mode: binary\n> Hits: 0 Misses: 10000 Evictions: 9999 Overflows: 0 Memory\n> Usage: 1368kB\n> -> Subquery Scan on s (actual time=0.008..5.188 rows=5000\n> loops=10000)\n> Filter: (t1.two = s.two)\n> Rows Removed by Filter: 5000\n> -> Seq Scan on tenk1 t2 (actual time=0.004..4.081\n> rows=10000 loops=10000)\n> Planning Time: 0.607 ms\n> Execution Time: 124431.388 ms\n> (12 rows)\n>\n> The execution time (best of 3) is 124431.388 VS 60937.899 with and\n> without memoize.\n>\n> The Memoize runtime stats 'Hits: 0  Misses: 10000  Evictions: 9999'\n> seems suspicious to me, so I've looked into it a little bit, and found\n> that the MemoizeState's keyparamids and its outerPlan's chgParam are\n> always different, and that makes us have to purge the entire cache each\n> time we rescan the memoize node.\n>\n> But why are they always different? 
Well, for the query above, we have\n> two NestLoopParam nodes, one (with paramno 1) is created when we replace\n> outer-relation Vars in the scan qual 't1.two = s.two', the other one\n> (with paramno 0) is added from the subquery's subplan_params, which was\n> created when we replaced uplevel vars with Param nodes for the subquery.\n> That is to say, the chgParam would be {0, 1}.\n>\n> When it comes to replace outer-relation Vars in the memoize keys, the\n> two 't1.two' Vars are both replaced with the NestLoopParam with paramno\n> 1, because it is the first NLP we see in root->curOuterParams that is\n> equal to the Vars in memoize keys.  That is to say, the memoize node's\n> keyparamids is {1}.\n>\n> I haven't thought thoroughly about the fix yet.  But one way I'm\n> thinking is that in create_subqueryscan_plan() we can first add the\n> subquery's subplan_params to root->curOuterParams, and then replace\n> outer-relation Vars in scan_clauses afterwards.  That can make us be\n> able to share the same PARAM_EXEC slot for the same Var that both\n> belongs to the subquery's uplevel vars and to the NestLoop's\n> outer-relation vars.  To be concrete, something like attached.\n>\n> With this change the same query runs much faster and the Memoize runtime\n> stats looks more normal.\n>\n> explain (analyze, costs off)\n> select * from tenk1 t1 left join lateral\n> (select t1.two as t1two, * from tenk1 t2 offset 0) s\n> on t1.two = s.two;\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------\n> Nested Loop Left Join (actual time=0.063..21177.476 rows=50000000 loops=1)\n> -> Seq Scan on tenk1 t1 (actual time=0.025..5.415 rows=10000 loops=1)\n> -> Memoize (actual time=0.001..0.234 rows=5000 loops=10000)\n> Cache Key: t1.two, t1.two\n> Cache Mode: binary\n> Hits: 9998 Misses: 2 Evictions: 0 Overflows: 0 Memory Usage:\n> 2735kB\n> -> Subquery Scan on s (actual time=0.009..5.169 rows=5000\n> loops=2)\n> Filter: (t1.two = s.two)\n> Rows Removed by Filter: 5000\n> -> Seq Scan on tenk1 t2 (actual time=0.006..4.050\n> rows=10000 loops=2)\n> Planning Time: 0.593 ms\n> Execution Time: 22486.621 ms\n> (12 rows)\n>\n> Any thoughts?\n>\n\n+1\n\nit would be great to fix this problem - I've seen this issue a few times.\n\nRegards\n\nPavel\n\n\n\n> Thanks\n> Richard\n>\n\n
", "msg_date": "Fri, 20 Oct 2023 13:43:02 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "On Fri, Oct 20, 2023 at 6:40 PM Richard Guo <[email protected]> wrote:\n\n> I haven't thought thoroughly about the fix yet.  But one way I'm\n> thinking is that in create_subqueryscan_plan() we can first add the\n> subquery's subplan_params to root->curOuterParams, and then replace\n> outer-relation Vars in scan_clauses afterwards.  That can make us be\n> able to share the same PARAM_EXEC slot for the same Var that both\n> belongs to the subquery's uplevel vars and to the NestLoop's\n> outer-relation vars.  To be concrete, something like attached.\n>\n\nAfter some more thought, I think this is the right way to fix this\nissue.  The idea here is to make sure that the same NLP Var shares the\nsame PARAM_EXEC slot.  This change can also help to save PARAM_EXEC\nslots (which is trivial though since slots are very cheap).\n\nThanks\nRichard\n\n", "msg_date": "Wed, 25 Oct 2023 14:40:58 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "On Fri, Oct 20, 2023 at 7:43 PM Pavel Stehule <[email protected]>\nwrote:\n\n> +1\n>\n> it would be great to fix this problem - I've seen this issue a few times.\n>\n\nThanks for the input.  I guess this is not rare in the real world. 
Well, for the query above, we have\n> two NestLoopParam nodes, one (with paramno 1) is created when we replace\n> outer-relation Vars in the scan qual 't1.two = s.two', the other one\n> (with paramno 0) is added from the subquery's subplan_params, which was\n> created when we replaced uplevel vars with Param nodes for the subquery.\n> That is to say, the chgParam would be {0, 1}.\n> \n> When it comes to replace outer-relation Vars in the memoize keys, the\n> two 't1.two' Vars are both replaced with the NestLoopParam with paramno\n> 1, because it is the first NLP we see in root->curOuterParams that is\n> equal to the Vars in memoize keys.  That is to say, the memoize node's\n> keyparamids is {1}.\n> ...\n> Any thoughts?\n\nDo you've thought about the case, fixed with the commit 1db5667? As I \nsee, that bugfix still isn't covered by regression tests. Could your \napproach of a PARAM_EXEC slot reusing break that case?\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Thu, 26 Oct 2023 11:07:35 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "On Thu, Oct 26, 2023 at 12:07 PM Andrei Lepikhov <[email protected]>\nwrote:\n\n> Do you've thought about the case, fixed with the commit 1db5667? As I\n> see, that bugfix still isn't covered by regression tests. Could your\n> approach of a PARAM_EXEC slot reusing break that case?\n\n\nHm, I don't think so. The issue fixed by commit 1db5667 was caused by\nsharing PARAM_EXEC slots between different levels of NestLoop. AFAICS\nit's safe to share PARAM_EXEC slots within the same level of NestLoop.\n\nThe change here is about sharing PARAM_EXEC slots between subquery's\nsubplan_params and outer-relation variables, which happens within the\nsame level of NestLoop.\n\nActually, even without this change, we'd still share PARAM_EXEC slots\nbetween subquery's subplan_params and outer-relation variables in some\ncases. As an example, consider\n\nexplain (costs off)\nselect * from t t1 left join\n (t t2 left join\n lateral (select t1.a as t1a, t2.a as t2a, * from t t3) s\n on t2.b = s.b)\non t1.b = s.b and t1.a = t2.a;\n QUERY PLAN\n-------------------------------------------------------\n Nested Loop Left Join\n -> Seq Scan on t t1\n -> Nested Loop\n Join Filter: (t1.a = t2.a)\n -> Seq Scan on t t2\n -> Subquery Scan on s\n Filter: ((t1.b = s.b) AND (t2.b = s.b))\n -> Seq Scan on t t3\n(8 rows)\n\nFor outer-relation Var 't1.a' from qual 't1.a = t2.a', it shares\nPARAM_EXEC slot 0 with the PlannerParamItem for 't1.a' within the\nsubquery (from its targetlist).\n\nDid you notice a case that the change here breaks?\n\nHi Tom, could you share your insights on this issue and the proposed\nfix?\n\nThanks\nRichard\n\nOn Thu, Oct 26, 2023 at 12:07 PM Andrei Lepikhov <[email protected]> wrote:\nDo you've thought about the case, fixed with the commit 1db5667? As I \nsee, that bugfix still isn't covered by regression tests. Could your \napproach of a PARAM_EXEC slot reusing break that case?Hm, I don't think so.  The issue fixed by commit 1db5667 was caused bysharing PARAM_EXEC slots between different levels of NestLoop.  
AFAICSit's safe to share PARAM_EXEC slots within the same level of NestLoop.The change here is about sharing PARAM_EXEC slots between subquery'ssubplan_params and outer-relation variables, which happens within thesame level of NestLoop.Actually, even without this change, we'd still share PARAM_EXEC slotsbetween subquery's subplan_params and outer-relation variables in somecases.  As an example, considerexplain (costs off)select * from t t1 left join        (t t2 left join                lateral (select t1.a as t1a, t2.a as t2a, * from t t3) s        on t2.b = s.b)on t1.b = s.b and t1.a = t2.a;                      QUERY PLAN------------------------------------------------------- Nested Loop Left Join   ->  Seq Scan on t t1   ->  Nested Loop         Join Filter: (t1.a = t2.a)         ->  Seq Scan on t t2         ->  Subquery Scan on s               Filter: ((t1.b = s.b) AND (t2.b = s.b))               ->  Seq Scan on t t3(8 rows)For outer-relation Var 't1.a' from qual 't1.a = t2.a', it sharesPARAM_EXEC slot 0 with the PlannerParamItem for 't1.a' within thesubquery (from its targetlist).Did you notice a case that the change here breaks?Hi Tom, could you share your insights on this issue and the proposedfix?ThanksRichard", "msg_date": "Mon, 30 Oct 2023 15:55:58 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "On 30/10/2023 14:55, Richard Guo wrote:\n> \n> On Thu, Oct 26, 2023 at 12:07 PM Andrei Lepikhov \n> <[email protected] <mailto:[email protected]>> wrote:\n> \n> Do you've thought about the case, fixed with the commit 1db5667? As I\n> see, that bugfix still isn't covered by regression tests. Could your\n> approach of a PARAM_EXEC slot reusing break that case?\n> \n> \n> Hm, I don't think so.  The issue fixed by commit 1db5667 was caused by\n> sharing PARAM_EXEC slots between different levels of NestLoop.  AFAICS\n> it's safe to share PARAM_EXEC slots within the same level of NestLoop.\n> \n> The change here is about sharing PARAM_EXEC slots between subquery's\n> subplan_params and outer-relation variables, which happens within the\n> same level of NestLoop.\n> ...\n> Did you notice a case that the change here breaks?\n> \n> Hi Tom, could you share your insights on this issue and the proposed\n> fix?\n\nI think your patch works correctly so far. I mentioned the commit \n1db5667 because, as I see, the origin of the problem was parallel \nworkers. I have thought about pushing Memoize down to a parallel worker \nand couldn't imagine whether such a solution would be correct.\nSorry if I disturbed you in vain.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Tue, 31 Oct 2023 12:36:42 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "On Tue, Oct 31, 2023 at 1:36 PM Andrei Lepikhov <[email protected]>\nwrote:\n\n> On 30/10/2023 14:55, Richard Guo wrote:\n> >\n> > On Thu, Oct 26, 2023 at 12:07 PM Andrei Lepikhov\n> > <[email protected] <mailto:[email protected]>> wrote:\n> >\n> > Do you've thought about the case, fixed with the commit 1db5667? As I\n> > see, that bugfix still isn't covered by regression tests. Could your\n> > approach of a PARAM_EXEC slot reusing break that case?\n> >\n> >\n> > Hm, I don't think so. The issue fixed by commit 1db5667 was caused by\n> > sharing PARAM_EXEC slots between different levels of NestLoop. 
AFAICS\n> > it's safe to share PARAM_EXEC slots within the same level of NestLoop.\n> >\n> > The change here is about sharing PARAM_EXEC slots between subquery's\n> > subplan_params and outer-relation variables, which happens within the\n> > same level of NestLoop.\n> > ...\n> > Did you notice a case that the change here breaks?\n> >\n> > Hi Tom, could you share your insights on this issue and the proposed\n> > fix?\n>\n> I think your patch works correctly so far. I mentioned the commit\n> 1db5667 because, as I see, the origin of the problem was parallel\n> workers. I have thought about pushing Memoize down to a parallel worker\n> and couldn't imagine whether such a solution would be correct.\n> Sorry if I disturbed you in vain.\n\n\nThanks for mentioning commit 1db5667, which brings my attention to more\naspects about the PARAM_EXEC mechanism. I don't think the discussion is\nin vain. It helps a lot.\n\nThanks for looking into this patch.\n\nThanks\nRichard\n\nOn Tue, Oct 31, 2023 at 1:36 PM Andrei Lepikhov <[email protected]> wrote:On 30/10/2023 14:55, Richard Guo wrote:\n> \n> On Thu, Oct 26, 2023 at 12:07 PM Andrei Lepikhov \n> <[email protected] <mailto:[email protected]>> wrote:\n> \n>     Do you've thought about the case, fixed with the commit 1db5667? As I\n>     see, that bugfix still isn't covered by regression tests. Could your\n>     approach of a PARAM_EXEC slot reusing break that case?\n> \n> \n> Hm, I don't think so.  The issue fixed by commit 1db5667 was caused by\n> sharing PARAM_EXEC slots between different levels of NestLoop.  AFAICS\n> it's safe to share PARAM_EXEC slots within the same level of NestLoop.\n> \n> The change here is about sharing PARAM_EXEC slots between subquery's\n> subplan_params and outer-relation variables, which happens within the\n> same level of NestLoop.\n> ...\n> Did you notice a case that the change here breaks?\n> \n> Hi Tom, could you share your insights on this issue and the proposed\n> fix?\n\nI think your patch works correctly so far. I mentioned the commit \n1db5667 because, as I see, the origin of the problem was parallel \nworkers. I have thought about pushing Memoize down to a parallel worker \nand couldn't imagine whether such a solution would be correct.\nSorry if I disturbed you in vain.Thanks for mentioning commit 1db5667, which brings my attention to moreaspects about the PARAM_EXEC mechanism.  I don't think the discussion isin vain.  It helps a lot.Thanks for looking into this patch.ThanksRichard", "msg_date": "Tue, 31 Oct 2023 14:19:31 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "Hi Richard,\n\nI can tell this a real world problem. I have seen this multiple times in \nproduction.\n\nThe fix seems surprisingly simple.\n\nI hope my questions here aren't completely off. I still struggle to \nthink about the implications.\n\nI wonder, if there is any stuff we are breaking by bluntly forgetting \nabout the subplan params. Maybe some table valued function scan within a \nsubquery scan? Or something about casts on a join condition, that could \nbe performed differently?\n\nI wasn't able to construct a problem case. I might be just missing \ncontext here. 
But I am not yet fully convinced whether this is safe to \ndo in all cases.\n\nRegards\nArne\n\n\n\n\n", "msg_date": "Thu, 18 Jan 2024 07:39:09 +0100", "msg_from": "Arne Roland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "On Fri, 20 Oct 2023 at 23:40, Richard Guo <[email protected]> wrote:\n> The Memoize runtime stats 'Hits: 0 Misses: 10000 Evictions: 9999'\n> seems suspicious to me, so I've looked into it a little bit, and found\n> that the MemoizeState's keyparamids and its outerPlan's chgParam are\n> always different, and that makes us have to purge the entire cache each\n> time we rescan the memoize node.\n>\n> But why are they always different? Well, for the query above, we have\n> two NestLoopParam nodes, one (with paramno 1) is created when we replace\n> outer-relation Vars in the scan qual 't1.two = s.two', the other one\n> (with paramno 0) is added from the subquery's subplan_params, which was\n> created when we replaced uplevel vars with Param nodes for the subquery.\n> That is to say, the chgParam would be {0, 1}.\n>\n> When it comes to replace outer-relation Vars in the memoize keys, the\n> two 't1.two' Vars are both replaced with the NestLoopParam with paramno\n> 1, because it is the first NLP we see in root->curOuterParams that is\n> equal to the Vars in memoize keys. That is to say, the memoize node's\n> keyparamids is {1}.\n\nI see the function calls were put this way around in 5ebaaa494\n(Implement SQL-standard LATERAL subqueries.), per:\n\n@ -1640,6 +1641,7 @@ create_subqueryscan_plan(PlannerInfo *root, Path\n*best_path,\n {\n scan_clauses = (List *)\n replace_nestloop_params(root, (Node *) scan_clauses);\n+ identify_nestloop_extparams(root, best_path->parent->subplan);\n }\n\n(identify_nestloop_extparams was later renamed to\nprocess_subquery_nestloop_params in 46c508fbc.)\n\nI think fixing it your way makes sense. I don't really see any reason\nwhy we should have two. However...\n\nAnother way it *could* be fixed would be to get rid of pull_paramids()\nand change create_memoize_plan() to set keyparamids to all the param\nIDs that match are equal() to each param_exprs. That way nodeMemoize\nwouldn't purge the cache as we'd know the changing param is accounted\nfor in the cache. For the record, I don't think that's better, but it\nscares me a bit less as I don't know what other repercussions there\nare of applying your patch to get rid of the duplicate\nNestLoopParam.paramval.\n\nI'd feel better about doing it your way if Tom could comment on if\nthere was a reason he put the function calls that way around in\n5ebaaa494.\n\nI also feel like we might be getting a bit close to the minor version\nreleases to be adjusting this stuff in back branches.\n\nDavid\n\n\n", "msg_date": "Thu, 25 Jan 2024 13:13:41 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> I'd feel better about doing it your way if Tom could comment on if\n> there was a reason he put the function calls that way around in\n> 5ebaaa494.\n\nApologies for not having noticed this thread before. I'm taking\na look at it now. However, while sniffing around this I found\nwhat seems like an oversight in paramassign.c's\nassign_param_for_var(): it says it should compare all the same\nfields as _equalVar except for varlevelsup, but it's failing to\ncompare varnullingrels. Is that a bug? 
It's conceivable that\nit's not possible to get here with varnullingrels different and\nall else the same, but I don't feel good about that proposition.\n\nI tried adding\n\n@@ -91,7 +91,10 @@ assign_param_for_var(PlannerInfo *root, Var *var)\n pvar->vartype == var->vartype &&\n pvar->vartypmod == var->vartypmod &&\n pvar->varcollid == var->varcollid)\n+ {\n+ Assert(bms_equal(pvar->varnullingrels, var->varnullingrels));\n return pitem->paramId;\n+ }\n }\n }\n\nThis triggers no failures in the regression tests, but we know\nhow little that proves.\n\nAnyway, that's just a side observation unrelated to the problem\nat hand. More later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Jan 2024 12:22:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> I think fixing it your way makes sense. I don't really see any reason\n> why we should have two. However...\n\n> Another way it *could* be fixed would be to get rid of pull_paramids()\n> and change create_memoize_plan() to set keyparamids to all the param\n> IDs that match are equal() to each param_exprs. That way nodeMemoize\n> wouldn't purge the cache as we'd know the changing param is accounted\n> for in the cache. For the record, I don't think that's better, but it\n> scares me a bit less as I don't know what other repercussions there\n> are of applying your patch to get rid of the duplicate\n> NestLoopParam.paramval.\n\n> I'd feel better about doing it your way if Tom could comment on if\n> there was a reason he put the function calls that way around in\n> 5ebaaa494.\n\nI'm fairly sure I thought it wouldn't matter because of the Param\nde-duplication done in paramassign.c. However, Richard's example\nshows that's not so, because process_subquery_nestloop_params is\npicky about the param ID assigned to a particular Var while\nreplace_nestloop_params is not. So flipping the order makes sense.\nI'd change the comment though, maybe to\n\n /*\n * Replace any outer-relation variables with nestloop params.\n *\n * We must provide nestloop params for both lateral references of\n * the subquery and outer vars in the scan_clauses. It's better\n * to assign the former first, because that code path requires\n * specific param IDs, while replace_nestloop_params can adapt\n * to the IDs assigned by process_subquery_nestloop_params.\n * This avoids possibly duplicating nestloop params when the\n * same Var is needed for both reasons.\n */\n\nHowever ... it seems like we're not out of the woods yet. Why\nis Richard's proposed test case still showing\n\n+ -> Memoize (actual rows=5000 loops=N)\n+ Cache Key: t1.two, t1.two\n\nSeems like there is missing de-duplication logic, or something.\n\n> I also feel like we might be getting a bit close to the minor version\n> releases to be adjusting this stuff in back branches.\n\nYeah, I'm not sure I would change this in the back branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Jan 2024 13:32:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "On Fri, Jan 26, 2024 at 2:32 AM Tom Lane <[email protected]> wrote:\n\n> I'm fairly sure I thought it wouldn't matter because of the Param\n> de-duplication done in paramassign.c. 
However, Richard's example\n> shows that's not so, because process_subquery_nestloop_params is\n> picky about the param ID assigned to a particular Var while\n> replace_nestloop_params is not. So flipping the order makes sense.\n> I'd change the comment though, maybe to\n>\n> /*\n> * Replace any outer-relation variables with nestloop params.\n> *\n> * We must provide nestloop params for both lateral references of\n> * the subquery and outer vars in the scan_clauses. It's better\n> * to assign the former first, because that code path requires\n> * specific param IDs, while replace_nestloop_params can adapt\n> * to the IDs assigned by process_subquery_nestloop_params.\n> * This avoids possibly duplicating nestloop params when the\n> * same Var is needed for both reasons.\n> */\n\n\n+1. It's much better.\n\n\n> However ... it seems like we're not out of the woods yet. Why\n> is Richard's proposed test case still showing\n>\n> + -> Memoize (actual rows=5000 loops=N)\n> + Cache Key: t1.two, t1.two\n>\n> Seems like there is missing de-duplication logic, or something.\n\n\nWhen we collect the cache keys in paraminfo_get_equal_hashops() we\nsearch param_info's ppi_clauses as well as innerrel's lateral_vars for\nouter expressions. We do not perform de-duplication on the collected\nouter expressions there. In my proposed test case, the same Var\n't1.two' appears both in the param_info->ppi_clauses and in the\ninnerrel->lateral_vars, so we see two identical cache keys in the plan.\nI noticed this before and wondered if we should do de-duplication on the\ncache keys, but somehow I did not chase this to the ground.\n\nThanks\nRichard\n\nOn Fri, Jan 26, 2024 at 2:32 AM Tom Lane <[email protected]> wrote:\nI'm fairly sure I thought it wouldn't matter because of the Param\nde-duplication done in paramassign.c.  However, Richard's example\nshows that's not so, because process_subquery_nestloop_params is\npicky about the param ID assigned to a particular Var while\nreplace_nestloop_params is not.  So flipping the order makes sense.\nI'd change the comment though, maybe to\n\n    /*\n     * Replace any outer-relation variables with nestloop params.\n     *\n     * We must provide nestloop params for both lateral references of\n     * the subquery and outer vars in the scan_clauses.  It's better\n     * to assign the former first, because that code path requires\n     * specific param IDs, while replace_nestloop_params can adapt\n     * to the IDs assigned by process_subquery_nestloop_params.\n     * This avoids possibly duplicating nestloop params when the\n     * same Var is needed for both reasons.\n     */+1.  It's much better. \nHowever ... it seems like we're not out of the woods yet.  Why\nis Richard's proposed test case still showing\n\n+         ->  Memoize (actual rows=5000 loops=N)\n+               Cache Key: t1.two, t1.two\n\nSeems like there is missing de-duplication logic, or something.When we collect the cache keys in paraminfo_get_equal_hashops() wesearch param_info's ppi_clauses as well as innerrel's lateral_vars forouter expressions.  We do not perform de-duplication on the collectedouter expressions there.  
In my proposed test case, the same Var't1.two' appears both in the param_info->ppi_clauses and in theinnerrel->lateral_vars, so we see two identical cache keys in the plan.I noticed this before and wondered if we should do de-duplication on thecache keys, but somehow I did not chase this to the ground.ThanksRichard", "msg_date": "Fri, 26 Jan 2024 10:57:58 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "On Fri, 26 Jan 2024 at 07:32, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > I'd feel better about doing it your way if Tom could comment on if\n> > there was a reason he put the function calls that way around in\n> > 5ebaaa494.\n>\n> I'm fairly sure I thought it wouldn't matter because of the Param\n> de-duplication done in paramassign.c. However, Richard's example\n> shows that's not so, because process_subquery_nestloop_params is\n> picky about the param ID assigned to a particular Var while\n> replace_nestloop_params is not. So flipping the order makes sense.\n\nMakes sense.\n\nI've adjusted the comments to what you mentioned and also leaned out\nthe pretty expensive test case to something that'll run much faster\nand pushed the result.\n\n> However ... it seems like we're not out of the woods yet. Why\n> is Richard's proposed test case still showing\n>\n> + -> Memoize (actual rows=5000 loops=N)\n> + Cache Key: t1.two, t1.two\n>\n> Seems like there is missing de-duplication logic, or something.\n\nThis seems separate and isn't quite causing the same problems as what\nRichard wants to fix so I didn't touch this for now.\n\nDavid\n\n\n", "msg_date": "Fri, 26 Jan 2024 16:23:46 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> I've adjusted the comments to what you mentioned and also leaned out\n> the pretty expensive test case to something that'll run much faster\n> and pushed the result.\n\n+1, I was wondering if the test could be cheaper. It wasn't horrid\nas Richard had it, but core regression tests add up over time.\n\n>> However ... it seems like we're not out of the woods yet. Why\n>> is Richard's proposed test case still showing\n>> + -> Memoize (actual rows=5000 loops=N)\n>> + Cache Key: t1.two, t1.two\n>> Seems like there is missing de-duplication logic, or something.\n\n> This seems separate and isn't quite causing the same problems as what\n> Richard wants to fix so I didn't touch this for now.\n\nFair enough, but I think it might be worth pursuing later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Jan 2024 22:51:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "On Fri, 26 Jan 2024 at 16:51, Tom Lane <[email protected]> wrote:\n> >> However ... it seems like we're not out of the woods yet. 
Why\n> >> is Richard's proposed test case still showing\n> >> + -> Memoize (actual rows=5000 loops=N)\n> >> + Cache Key: t1.two, t1.two\n> >> Seems like there is missing de-duplication logic, or something.\n>\n> > This seems separate and isn't quite causing the same problems as what\n> > Richard wants to fix so I didn't touch this for now.\n>\n> Fair enough, but I think it might be worth pursuing later.\n\nHere's a patch for that.\n\nDavid", "msg_date": "Fri, 26 Jan 2024 17:18:17 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "On Fri, Jan 26, 2024 at 1:22 AM Tom Lane <[email protected]> wrote:\n\n> Apologies for not having noticed this thread before. I'm taking\n> a look at it now. However, while sniffing around this I found\n> what seems like an oversight in paramassign.c's\n> assign_param_for_var(): it says it should compare all the same\n> fields as _equalVar except for varlevelsup, but it's failing to\n> compare varnullingrels. Is that a bug? It's conceivable that\n> it's not possible to get here with varnullingrels different and\n> all else the same, but I don't feel good about that proposition.\n>\n> I tried adding\n>\n> @@ -91,7 +91,10 @@ assign_param_for_var(PlannerInfo *root, Var *var)\n> pvar->vartype == var->vartype &&\n> pvar->vartypmod == var->vartypmod &&\n> pvar->varcollid == var->varcollid)\n> + {\n> + Assert(bms_equal(pvar->varnullingrels,\n> var->varnullingrels));\n> return pitem->paramId;\n> + }\n> }\n> }\n\n\nYeah, I think it should be safe to assert that the varnullingrels is\nequal here. The Var is supposed to be an upper-level Var, and two same\nsuch Vars should not have different varnullingrels at this point,\nalthough the varnullingrels might be adjusted later in\nidentify_current_nestloop_params according to which form of identity 3\nwe end up applying.\n\nThanks\nRichard", "msg_date": "Fri, 26 Jan 2024 13:38:53 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "On Fri, Jan 26, 2024 at 12:18 PM David Rowley <[email protected]> wrote:\n\n> On Fri, 26 Jan 2024 at 16:51, Tom Lane <[email protected]> wrote:\n> > >> However ... it seems like we're not out of the woods yet. Why\n> > >> is Richard's proposed test case still showing\n> > >> + -> Memoize (actual rows=5000 loops=N)\n> > >> + Cache Key: t1.two, t1.two\n> > >> Seems like there is missing de-duplication logic, or something.\n> >\n> > > This seems separate and isn't quite causing the same problems as what\n> > > Richard wants to fix so I didn't touch this for now.\n> >\n> > Fair enough, but I think it might be worth pursuing later.\n>\n> Here's a patch for that.\n\n\nAt first I wondered if we should assume that the same param expr must\nhave the same equality operator. If not, we should also check the\noperator to tell if the cache key is a duplicate, like\n\n- if (!list_member(*param_exprs, expr))\n+ if (!list_member(*param_exprs, expr) ||\n+ !list_member_oid(*operators, hasheqoperator))\n\nBut after looking at how rinfo->left_hasheqoperator/right_hasheqoperator\nis set, it seems we can assume that: the operator is from the type cache\nentry which is fetched according to the expr datatype.\n\nSo I think the patch makes sense. +1.\n\nThanks\nRichard", "msg_date": "Fri, 26 Jan 2024 14:02:54 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "On Fri, 26 Jan 2024 at 19:03, Richard Guo <[email protected]> wrote:\n> At first I wondered if we should assume that the same param expr must\n> have the same equality operator. 
If not, we should also check the\n> operator to tell if the cache key is a duplicate, like\n>\n> - if (!list_member(*param_exprs, expr))\n> + if (!list_member(*param_exprs, expr) ||\n> + !list_member_oid(*operators, hasheqoperator))\n\nhmm, if that were the case you wouldn't do it that way. You'd need to\nforboth() and look for an item in both lists matching the search.\n\n> But after looking at how rinfo->left_hasheqoperator/right_hasheqoperator\n> is set, it seems we can assume that: the operator is from the type cache\n> entry which is fetched according to the expr datatype.\n\nYip.\n\n> So I think the patch makes sense. +1.\n\nThanks for reviewing. I've pushed the patch.\n\nDavid\n\n\n", "msg_date": "Fri, 26 Jan 2024 20:54:45 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> I've adjusted the comments to what you mentioned and also leaned out\n> the pretty expensive test case to something that'll run much faster\n> and pushed the result.\n\ndrongo and fairywren are consistently failing the test case added\nby this commit. I'm not quite sure why the behavior of Memoize\nwould be platform-specific when we're dealing with integers,\nbut ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Jan 2024 15:41:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "On Sat, 27 Jan 2024 at 09:41, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > I've adjusted the comments to what you mentioned and also leaned out\n> > the pretty expensive test case to something that'll run much faster\n> > and pushed the result.\n>\n> drongo and fairywren are consistently failing the test case added\n> by this commit. I'm not quite sure why the behavior of Memoize\n> would be platform-specific when we're dealing with integers,\n> but ...\n\nMaybe snprintf(buf, \"%.*f\", 0, 5.0 / 2.0); results in \"3\" on those\nrather than \"2\"?\n\nLooking at the code in fmtfloat(), we fallback on the built-in snprintf.\n\nI can try changing the unique1 < 5 to unique1 < 4 to see that's more stable.\n\nDavid\n\n\n", "msg_date": "Sat, 27 Jan 2024 10:02:51 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Sat, 27 Jan 2024 at 09:41, Tom Lane <[email protected]> wrote:\n>> drongo and fairywren are consistently failing the test case added\n>> by this commit. I'm not quite sure why the behavior of Memoize\n>> would be platform-specific when we're dealing with integers,\n>> but ...\n\n> Maybe snprintf(buf, \"%.*f\", 0, 5.0 / 2.0); results in \"3\" on those\n> rather than \"2\"?\n> Looking at the code in fmtfloat(), we fallback on the built-in snprintf.\n\nMaybe ... 
I don't have a better theory.\n\n> I can try changing the unique1 < 5 to unique1 < 4 to see that's more stable.\n\nWorth a try.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Jan 2024 16:09:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "Hello,\n\n27.01.2024 00:09, Tom Lane wrote:\n> David Rowley <[email protected]> writes:\n>> On Sat, 27 Jan 2024 at 09:41, Tom Lane <[email protected]> wrote:\n>>> drongo and fairywren are consistently failing the test case added\n>>> by this commit. I'm not quite sure why the behavior of Memoize\n>>> would be platform-specific when we're dealing with integers,\n>>> but ...\n>> Maybe snprintf(buf, \"%.*f\", 0, 5.0 / 2.0); results in \"3\" on those\n>> rather than \"2\"?\n>> Looking at the code in fmtfloat(), we fallback on the built-in snprintf.\n> Maybe ... I don't have a better theory.\n\nFWIW, I've found where this behaviour is documented:\nhttps://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/sprintf-sprintf-l-swprintf-swprintf-l-swprintf-l?view=msvc-170\n\n(I've remembered a case with test/sql/partition_prune from 2020, where\nsprintf on Windows worked the other way.)\n\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 27 Jan 2024 07:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "Hi,\n\nI've seen a similar issue with the following query (tested on the current head):\n\nEXPLAIN ANALYZE SELECT * FROM tenk1 t1\nLEFT JOIN LATERAL (SELECT t1.two, tenk2.hundred, tenk2.two FROM tenk2) t2\nON t1.hundred = t2.hundred WHERE t1.hundred < 5;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=8.46..1495.10 rows=50000 width=256)\n(actual time=0.860..111.013 rows=50000 loops=1)\n -> Bitmap Heap Scan on tenk1 t1 (cost=8.16..376.77 rows=500\nwidth=244) (actual time=0.798..1.418 rows=500 loops=1)\n Recheck Cond: (hundred < 5)\n Heap Blocks: exact=263\n -> Bitmap Index Scan on tenk1_hundred (cost=0.00..8.04\nrows=500 width=0) (actual time=0.230..0.230 rows=500 loops=1)\n Index Cond: (hundred < 5)\n -> Memoize (cost=0.30..4.89 rows=100 width=12) (actual\ntime=0.009..0.180 rows=100 loops=500)\n Cache Key: t1.hundred\n Cache Mode: logical\n Hits: 0 Misses: 500 Evictions: 499 Overflows: 0 Memory Usage: 5kB\n -> Index Scan using tenk2_hundred on tenk2 (cost=0.29..4.88\nrows=100 width=12) (actual time=0.007..0.124 rows=100 loops=500)\n Index Cond: (hundred = t1.hundred)\n Planning Time: 0.661 ms\n Execution Time: 113.076 ms\n(14 rows)\n\nThe memoize's cache key only uses t1.hundred while the nested loop has\ntwo changed parameters: the lateral var t1.two and t1.hundred. This\nleads to a chgParam that is always different and the cache is purged\non each rescan.\n\nRegards,\nAnthonin\n\nOn Sat, Jan 27, 2024 at 5:00 AM Alexander Lakhin <[email protected]> wrote:\n>\n> Hello,\n>\n> 27.01.2024 00:09, Tom Lane wrote:\n> > David Rowley <[email protected]> writes:\n> >> On Sat, 27 Jan 2024 at 09:41, Tom Lane <[email protected]> wrote:\n> >>> drongo and fairywren are consistently failing the test case added\n> >>> by this commit. 
I'm not quite sure why the behavior of Memoize\n> >>> would be platform-specific when we're dealing with integers,\n> >>> but ...\n> >> Maybe snprintf(buf, \"%.*f\", 0, 5.0 / 2.0); results in \"3\" on those\n> >> rather than \"2\"?\n> >> Looking at the code in fmtfloat(), we fallback on the built-in snprintf.\n> > Maybe ... I don't have a better theory.\n>\n> FWIW, I've found where this behaviour is documented:\n> https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/sprintf-sprintf-l-swprintf-swprintf-l-swprintf-l?view=msvc-170\n>\n> (I've remembered a case with test/sql/partition_prune from 2020, where\n> sprintf on Windows worked the other way.)\n>\n>\n> Best regards,\n> Alexander\n>\n>\n\n\n", "msg_date": "Thu, 1 Feb 2024 08:43:14 +0100", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A performance issue with Memoize" }, { "msg_contents": "On Thu, Feb 1, 2024 at 3:43 PM Anthonin Bonnefoy <\[email protected]> wrote:\n\n> Hi,\n>\n> I've seen a similar issue with the following query (tested on the current\n> head):\n>\n> EXPLAIN ANALYZE SELECT * FROM tenk1 t1\n> LEFT JOIN LATERAL (SELECT t1.two, tenk2.hundred, tenk2.two FROM tenk2) t2\n> ON t1.hundred = t2.hundred WHERE t1.hundred < 5;\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop Left Join (cost=8.46..1495.10 rows=50000 width=256)\n> (actual time=0.860..111.013 rows=50000 loops=1)\n> -> Bitmap Heap Scan on tenk1 t1 (cost=8.16..376.77 rows=500\n> width=244) (actual time=0.798..1.418 rows=500 loops=1)\n> Recheck Cond: (hundred < 5)\n> Heap Blocks: exact=263\n> -> Bitmap Index Scan on tenk1_hundred (cost=0.00..8.04\n> rows=500 width=0) (actual time=0.230..0.230 rows=500 loops=1)\n> Index Cond: (hundred < 5)\n> -> Memoize (cost=0.30..4.89 rows=100 width=12) (actual\n> time=0.009..0.180 rows=100 loops=500)\n> Cache Key: t1.hundred\n> Cache Mode: logical\n> Hits: 0 Misses: 500 Evictions: 499 Overflows: 0 Memory Usage:\n> 5kB\n> -> Index Scan using tenk2_hundred on tenk2 (cost=0.29..4.88\n> rows=100 width=12) (actual time=0.007..0.124 rows=100 loops=500)\n> Index Cond: (hundred = t1.hundred)\n> Planning Time: 0.661 ms\n> Execution Time: 113.076 ms\n> (14 rows)\n>\n> The memoize's cache key only uses t1.hundred while the nested loop has\n> two changed parameters: the lateral var t1.two and t1.hundred. This\n> leads to a chgParam that is always different and the cache is purged\n> on each rescan.\n\n\nThanks for the report! 
This issue is caused by that we fail to check\nPHVs for lateral references, and hence cannot know that t1.two should\nalso be included in the cache keys.\n\nI reported exactly the same issue in [1], and verified that it can be\nfixed by the patch in [2] (which seems to require a rebase).\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs4_imG5C8rXt7xdU7zf6whUDc2rdDun%2BVtrowcmxb41CzA%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CAMbWs49%2BCjoy0S0xkCRDcHXGHvsYLOdvr9jq9OTONOBnsgzXOg%40mail.gmail.com\n\nThanks\nRichard", "msg_date": "Thu, 1 Feb 2024 16:39:33 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A performance issue with Memoize" } ]
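The de-duplication fix discussed in this thread boils down to a membership test before a new cache key is appended. A minimal sketch of the pattern follows, assuming PostgreSQL's List API and a type-cache lookup for the equality operator; variable names follow the quoted diff, and the surrounding function body is not shown in the thread:

```c
/*
 * Sketch only, not the committed patch.  Skip a parameter expression
 * that has already been collected as a Memoize cache key.  Because the
 * equality operator comes from the expression's type cache entry, a
 * duplicate expr implies a duplicate operator, so testing the
 * expression alone suffices (per Richard's reasoning above).
 */
TypeCacheEntry *typentry = lookup_type_cache(exprType((Node *) expr),
                                             TYPECACHE_HASH_PROC |
                                             TYPECACHE_EQ_OPR);

if (!list_member(*param_exprs, expr))
{
    *operators = lappend_oid(*operators, typentry->eq_opr);
    *param_exprs = lappend(*param_exprs, expr);
}
```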
[ { "msg_contents": "This patch resolves false sharing observed when running Postgres in a Docker container against sysbench.\nFalse sharing was observed in freelist.c in the BufferStrategyControl struct.\nAs the size slock_t is platform dependent, and testing was done on an Intel Xeon scalable platform, the changes\nare made conditionally similar to how slock_t is defined.\n\nThis patch is against master.\n\nAs the patch involves only data structure padding, I do not believe any new regression tests are required.\n\nFalse sharing has a negative impact on performance, this patch resolves false sharing in the BufferStrategyControl struct.\n\nTo verify this, I ran sysbench on the host against a running postgres docker instance like so:\n\nsysbench --db-driver=pgsql --report-interval=2 --table-size=100000 --tables=25 --threads=70 --time=60 --pgsql-host=127.0.0.1 --pgsql-port=5432 --pgsql-user=postgres --pgsql-password=password --pgsql-db=postgres /usr/share/sysbench/oltp_read_only.lua prepare\n\nsysbench --db-driver=pgsql --report-interval=2 --table-size=100000 --tables=25 --threads=70 --time=60 --pgsql-host=127.0.0.1 --pgsql-port=5432 --pgsql-user=postgres --pgsql-password=password --pgsql-db=postgres /usr/share/sysbench/oltp_read_only.lua run\n\nDuring the run phase, I ran perf like so:\n\nsudo perf c2c record -a -u --ldlat 50 -- sleep 30\n\nAfter the runs I post processed the perf data like this:\n\nsudo perf c2c report -NN -g --call-graph --full-symbols -c pid,iaddr --stdio >perf_report.txt\n\nYou can observe the changes in the attached perf_report files. Note the strategygetbuffer function in cacheline 1 in the base report, it goes to cacheline 4 in the fix report. In the report prior to the fix (base) we can observe multiple offsets within the same cacheline and after the fix there is a single offset.\n\n---\nNitin Tekchandani\[email protected]", "msg_date": "Fri, 20 Oct 2023 23:39:40 +0000", "msg_from": "\"Tekchandani, Nitin\" <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] Address false sharing on x86_64 and i386 in\n BufferStrategyControl" } ]
[ { "msg_contents": "Hi,\n\nThere exists an extraneous break condition in\npg_logical_replication_slot_advance(). When the end of WAL or moveto\nLSN is reached, the main while condition helps to exit the loop, so no\nseparate break condition is needed. Attached patch removes it.\n\nThoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 21 Oct 2023 08:00:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Remove extraneous break condition in logical slot advance function" }, { "msg_contents": "On Fri, Oct 20, 2023 at 7:30 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Hi,\n>\n> There exists an extraneous break condition in\n> pg_logical_replication_slot_advance(). When the end of WAL or moveto\n> LSN is reached, the main while condition helps to exit the loop, so no\n> separate break condition is needed. Attached patch removes it.\n>\n> Thoughts?\n\n+1 for the patch.\n\nThe only advantage I see of the code as it stands right now is that it\navoids one last call to CHECK_FOR_INTERRUPTS() by break'ing early. I\ndon't think we'd lose much in terms of performance by making one (very\ncheap, in common case) extra call of this macro.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Sat, 21 Oct 2023 09:56:15 -0700", "msg_from": "Gurjeet Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove extraneous break condition in logical slot advance\n function" }, { "msg_contents": "Gurjeet Singh <[email protected]> writes:\n> On Fri, Oct 20, 2023 at 7:30 PM Bharath Rupireddy\n> <[email protected]> wrote:\n>> There exists an extraneous break condition in\n>> pg_logical_replication_slot_advance(). When the end of WAL or moveto\n>> LSN is reached, the main while condition helps to exit the loop, so no\n>> separate break condition is needed. Attached patch removes it.\n\n> The only advantage I see of the code as it stands right now is that it\n> avoids one last call to CHECK_FOR_INTERRUPTS() by break'ing early. I\n> don't think we'd lose much in terms of performance by making one (very\n> cheap, in common case) extra call of this macro.\n\nAgreed, bypassing the last CHECK_FOR_INTERRUPTS() shouldn't save\nanything noticeable. Could there be a correctness argument for it\nthough? Can't see what. We should assume that CFIs might happen\ndown inside LogicalDecodingProcessRecord.\n\nI wondered why the code looks like this, and whether there used\nto be more of a reason for it. \"git blame\" reveals the probable\nanswer: when this code was added, in 9c7d06d60, the loop\ncondition was different so the break was necessary.\n38a957316 simplified the loop condition to what we see today,\nbut didn't notice that the break was thereby made pointless.\n\nWhile we're here ... the comment above the loop seems wrong\nalready, and this makes it more so. 
I suggest something like\n\n-\t\t/* Decode at least one record, until we run out of records */\n+\t\t/* Decode records until we reach the requested target */\n\t\twhile (ctx->reader->EndRecPtr < moveto)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Oct 2023 14:10:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove extraneous break condition in logical slot advance\n function" }, { "msg_contents": "On Sat, Oct 21, 2023 at 11:40 PM Tom Lane <[email protected]> wrote:\n>\n> Gurjeet Singh <[email protected]> writes:\n> > On Fri, Oct 20, 2023 at 7:30 PM Bharath Rupireddy\n> > <[email protected]> wrote:\n> >> There exists an extraneous break condition in\n> >> pg_logical_replication_slot_advance(). When the end of WAL or moveto\n> >> LSN is reached, the main while condition helps to exit the loop, so no\n> >> separate break condition is needed. Attached patch removes it.\n>\n> > The only advantage I see of the code as it stands right now is that it\n> > avoids one last call to CHECK_FOR_INTERRUPTS() by break'ing early. I\n> > don't think we'd lose much in terms of performance by making one (very\n> > cheap, in common case) extra call of this macro.\n>\n> Agreed, bypassing the last CHECK_FOR_INTERRUPTS() shouldn't save\n> anything noticeable. Could there be a correctness argument for it\n> though? Can't see what. We should assume that CFIs might happen\n> down inside LogicalDecodingProcessRecord.\n\nAFAICS, there's no correctness argument for breaking before CFI. As\nrightly said, CFIs can happen before the break condition either down\ninside LogicalDecodingProcessRecord or XLogReadRecord (page_read\ncallbacks for instance).\n\nHaving said that, what may happen if CFI happens and interrupts are\nprocessed before the break condition is that the decoding occurs again\nwhich IMV is not a big problem.\n\nAn idea to keep all of XLogReadRecord() -\nLogicalDecodingProcessRecord() loops consistent is by having CFI at\nthe start of the loops before the XLogReadRecord().\n\n> I wondered why the code looks like this, and whether there used\n> to be more of a reason for it. \"git blame\" reveals the probable\n> answer: when this code was added, in 9c7d06d60, the loop\n> condition was different so the break was necessary.\n> 38a957316 simplified the loop condition to what we see today,\n> but didn't notice that the break was thereby made pointless.\n\nRight. Thanks for these references.\n\n> While we're here ... the comment above the loop seems wrong\n> already, and this makes it more so. I suggest something like\n>\n> - /* Decode at least one record, until we run out of records */\n> + /* Decode records until we reach the requested target */\n> while (ctx->reader->EndRecPtr < moveto)\n\n+1 and done so in the attached v2 patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 22 Oct 2023 23:59:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove extraneous break condition in logical slot advance\n function" }, { "msg_contents": "On Sun, Oct 22, 2023 at 11:59:00PM +0530, Bharath Rupireddy wrote:\n> AFAICS, there's no correctness argument for breaking before CFI. 
As\n> rightly said, CFIs can happen before the break condition either down\n> inside LogicalDecodingProcessRecord or XLogReadRecord (page_read\n> callbacks for instance).\n> \n> Having said that, what may happen if CFI happens and interrupts are\n> processed before the break condition is that the decoding occurs again\n> which IMV is not a big problem.\n> \n> An idea to keep all of XLogReadRecord() -\n> LogicalDecodingProcessRecord() loops consistent is by having CFI at\n> the start of the loops before the XLogReadRecord().\n\nPassing by.. All that just looks like an oversight of 38a957316d7e\nthat simplified the main while loop, so I've just applied your v2.\n--\nMichael", "msg_date": "Mon, 23 Oct 2023 10:24:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove extraneous break condition in logical slot advance\n function" } ]
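For context, after the cleanup applied above the main loop of pg_logical_replication_slot_advance() has roughly the following shape (simplified; slot bookkeeping and the exact error message are omitted). The while condition alone terminates the loop, and CHECK_FOR_INTERRUPTS() runs once per decoded record:

```c
/* Decode records until we reach the requested target */
while (ctx->reader->EndRecPtr < moveto)
{
    char       *errm = NULL;
    XLogRecord *record;

    record = XLogReadRecord(ctx->reader, &errm);
    if (errm)
        elog(ERROR, "could not advance replication slot: %s", errm);

    /*
     * Process the record through the logical decoding machinery; the
     * advance path runs the decoding context in fast-forward mode, so
     * no output plugin callbacks fire here.
     */
    if (record != NULL)
        LogicalDecodingProcessRecord(ctx, ctx->reader);

    CHECK_FOR_INTERRUPTS();
}
```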
[ { "msg_contents": "Hi hackers,\n\nEXPLAIN statement has a list of options (i.e. ANALYZE, BUFFERS, \nCOST,...) which help to provide useful details of query execution.\nIn Neon we have added PREFETCH option which shows information about page \nprefetching during query execution (prefetching is more critical for Neon\narchitecture because of separation of compute and storage, so it is \nimplemented not only for bitmap heap scan as in Vanilla Postgres, but \nalso for seqscan, indexscan and indexonly scan). Another possible \ncandidate  for explain options is local file cache (extra caching layer \nabove shared buffers which is used to somehow replace file system cache \nin standalone Postgres).\n\nI think that it will be nice to have a generic mechanism which allows \nextensions to add its own options to EXPLAIN.\nI have attached the patch with implementation of such mechanism (also \navailable as PR: https://github.com/knizhnik/postgres/pull/1 )\n\nI have demonstrated this mechanism using Bloom extension - just to \nreport number of Bloom matches.\nNot sure that it is really useful information but it is used mostly as \nexample:\n\nexplain (analyze,bloom) select * from t where pk=2000;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on t (cost=15348.00..15352.01 rows=1 width=4) (actual time=25.244..25.939 rows=1 loops=1)\n Recheck Cond: (pk = 2000)\n Rows Removed by Index Recheck: 292\n Heap Blocks: exact=283\n Bloom: matches=293\n -> Bitmap Index Scan on t_pk_idx (cost=0.00..15348.00 rows=1 width=0) (actual time=25.147..25.147 rows=293 loops=1)\n Index Cond: (pk = 2000)\n Bloom: matches=293\n Planning:\n Bloom: matches=0\n Planning Time: 0.387 ms\n Execution Time: 26.053 ms\n(12 rows)\n\nThere are two known issues with this proposal:\n\n1. I have to limit total size of all custom metrics - right now it is \nlimited by 128 bytes. It is done to keep|Instrumentation|and some other \ndata structures fixes size. Otherwise maintaining varying parts of this \nstructure is ugly, especially in shared memory\n\n2. Custom extension is added by means \nof|RegisterCustomInsrumentation|function which is called from|_PG_init|\nBut|_PG_init|is called when extension is loaded and it is loaded on \ndemand when some of extension functions is called (except when extension \nis included\nin shared_preload_libraries list), Bloom extension doesn't require it. \nSo if your first statement executed in your session is:\n\n explain (analyze,bloom) select * from t where pk=2000;\n\n...you will get error:\n\nERROR: unrecognized EXPLAIN option \"bloom\"\nLINE 1: explain (analyze,bloom) select * from t where pk=2000;\n\nIt happens because at the moment when explain statement parses options, \nBloom index is not yet selected and so bloom extension is not loaded \nand|RegisterCustomInsrumentation|is not yet called. If we repeat the \nquery, then proper result will be displayed (see above).", "msg_date": "Sat, 21 Oct 2023 15:16:33 +0300", "msg_from": "Konstantin Knizhnik <[email protected]>", "msg_from_op": true, "msg_subject": "Custom explain options" }, { "msg_contents": "Hi\n\nso 25. 11. 2023 v 8:23 odesílatel Konstantin Knizhnik <[email protected]>\nnapsal:\n\n> Hi hackers,\n>\n> EXPLAIN statement has a list of options (i.e. 
ANALYZE, BUFFERS, COST,...)\n> which help to provide useful details of query execution.\n> In Neon we have added PREFETCH option which shows information about page\n> prefetching during query execution (prefetching is more critical for Neon\n> architecture because of separation of compute and storage, so it is\n> implemented not only for bitmap heap scan as in Vanilla Postgres, but also\n> for seqscan, indexscan and indexonly scan). Another possible candidate for\n> explain options is local file cache (extra caching layer above shared\n> buffers which is used to somehow replace file system cache in standalone\n> Postgres).\n>\n> I think that it will be nice to have a generic mechanism which allows\n> extensions to add its own options to EXPLAIN.\n> I have attached the patch with implementation of such mechanism (also\n> available as PR: https://github.com/knizhnik/postgres/pull/1 )\n>\n> I have demonstrated this mechanism using Bloom extension - just to report\n> number of Bloom matches.\n> Not sure that it is really useful information but it is used mostly as\n> example:\n>\n> explain (analyze,bloom) select * from t where pk=2000;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on t (cost=15348.00..15352.01 rows=1 width=4) (actual time=25.244..25.939 rows=1 loops=1)\n> Recheck Cond: (pk = 2000)\n> Rows Removed by Index Recheck: 292\n> Heap Blocks: exact=283\n> Bloom: matches=293\n> -> Bitmap Index Scan on t_pk_idx (cost=0.00..15348.00 rows=1 width=0) (actual time=25.147..25.147 rows=293 loops=1)\n> Index Cond: (pk = 2000)\n> Bloom: matches=293\n> Planning:\n> Bloom: matches=0\n> Planning Time: 0.387 ms\n> Execution Time: 26.053 ms\n> (12 rows)\n>\n> There are two known issues with this proposal:\n>\n> 1. I have to limit total size of all custom metrics - right now it is\n> limited by 128 bytes. It is done to keep Instrumentation and some other\n> data structures fixes size. Otherwise maintaining varying parts of this\n> structure is ugly, especially in shared memory\n>\n> 2. Custom extension is added by means of RegisterCustomInsrumentation function\n> which is called from _PG_init\n> But _PG_init is called when extension is loaded and it is loaded on\n> demand when some of extension functions is called (except when extension is\n> included\n> in shared_preload_libraries list), Bloom extension doesn't require it. So\n> if your first statement executed in your session is:\n>\n> explain (analyze,bloom) select * from t where pk=2000;\n>\n> ...you will get error:\n>\n> ERROR: unrecognized EXPLAIN option \"bloom\"\n> LINE 1: explain (analyze,bloom) select * from t where pk=2000;\n>\n> It happens because at the moment when explain statement parses options,\n> Bloom index is not yet selected and so bloom extension is not loaded and\n> RegisterCustomInsrumentation is not yet called. If we repeat the query,\n> then proper result will be displayed (see above).\n>\n>\nThis patch has a lot of whitespaces and formatting issues. I fixed some\n\nI don't understand how selecting some custom instrumentation can be safe.\n\nList *pgCustInstr is a global variable. The attribute selected is set by\nNewExplainState routine\n\n+ /* Reset custom instrumentations selection flag */\n+ foreach (lc, pgCustInstr)\n+ {\n+ CustomInstrumentation *ci = (CustomInstrumentation*) lfirst(lc);\n+\n+ ci->selected = false;\n+ }\n\nand this attribute is used more times. 
But the queries can be nested.\nTheoretically EXPLAIN ANALYZE can run another EXPLAIN ANALYZE, and then\nthis attribute of the global list can be rewritten. The list of selected\ncustom instrumentations should be part of explain state, I think.\n\nRegards\n\nPavel", "msg_date": "Wed, 29 Nov 2023 21:03:21 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "On 21/10/2023 19:16, Konstantin Knizhnik wrote:\n> EXPLAIN statement has a list of options (i.e. ANALYZE, BUFFERS, \n> COST,...) which help to provide useful details of query execution.\n> In Neon we have added PREFETCH option which shows information about page \n> prefetching during query execution (prefetching is more critical for Neon\n> architecture because of separation of compute and storage, so it is \n> implemented not only for bitmap heap scan as in Vanilla Postgres, but \n> also for seqscan, indexscan and indexonly scan). Another possible \n> candidate  for explain options is local file cache (extra caching layer \n> above shared buffers which is used to somehow replace file system cache \n> in standalone Postgres).\n> \n> I think that it will be nice to have a generic mechanism which allows \n> extensions to add its own options to EXPLAIN.\n\nGenerally, I welcome this idea: Extensions can already do a lot of work, \nand they should have a tool to report their state, not only into the log.\nBut I think your approach needs to be elaborated. At first, it would be \nbetter to allow registering extended instruments for specific node types \nto avoid unneeded calls.\nSecondly, looking into the Instrumentation usage, I don't see the reason \nto limit the size: as I see everywhere it exists locally or in the DSA \nwhere its size is calculated on the fly. So, by registering an extended \ninstrument, we can reserve a slot for the extension. The actual size of \nunderlying data can be provided by the extension routine.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Thu, 30 Nov 2023 11:59:32 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "\nOn 30/11/2023 5:59 am, Andrei Lepikhov wrote:\n> On 21/10/2023 19:16, Konstantin Knizhnik wrote:\n>> EXPLAIN statement has a list of options (i.e. ANALYZE, BUFFERS, \n>> COST,...) which help to provide useful details of query execution.\n>> In Neon we have added PREFETCH option which shows information about \n>> page prefetching during query execution (prefetching is more critical \n>> for Neon\n>> architecture because of separation of compute and storage, so it is \n>> implemented not only for bitmap heap scan as in Vanilla Postgres, but \n>> also for seqscan, indexscan and indexonly scan). Another possible \n>> candidate  for explain options is local file cache (extra caching \n>> layer above shared buffers which is used to somehow replace file \n>> system cache in standalone Postgres).\n>>\n>> I think that it will be nice to have a generic mechanism which allows \n>> extensions to add its own options to EXPLAIN.\n>\n> Generally, I welcome this idea: Extensions can already do a lot of \n> work, and they should have a tool to report their state, not only into \n> the log.\n> But I think your approach needs to be elaborated. 
At first, it would \n> be better to allow registering extended instruments for specific node \n> types to avoid unneeded calls.\n> Secondly, looking into the Instrumentation usage, I don't see the \n> reason to limit the size: as I see everywhere it exists locally or in \n> the DSA where its size is calculated on the fly. So, by registering an \n> extended instrument, we can reserve a slot for the extension. The \n> actual size of underlying data can be provided by the extension routine.\n>\nThank you for review.\n\nI agree that support of extended instruments is desired. I just tried to \nminimize number of changes to make this patch smaller.\n\nConcerning limiting instrumentation size, maybe I missed something, but \nI do not see any good way to handle this:\n\n```\n\n./src/backend/executor/nodeMemoize.c:1106:        si = \n&node->shared_info->sinstrument[ParallelWorkerNumber];\n./src/backend/executor/nodeAgg.c:4322:        si = \n&node->shared_info->sinstrument[ParallelWorkerNumber];\n./src/backend/executor/nodeIncrementalSort.c:107: \ninstrumentSortedGroup(&(node)->shared_info->sinfo[ParallelWorkerNumber].groupName##GroupInfo, \n\\\n./src/backend/executor/execParallel.c:808: InstrInit(&instrument[i], \nestate->es_instrument);\n./src/backend/executor/execParallel.c:1052: \nInstrAggNode(planstate->instrument, &instrument[n]);\n./src/backend/executor/execParallel.c:1306: \nInstrAggNode(&instrument[ParallelWorkerNumber], planstate->instrument);\n./src/backend/commands/explain.c:1763:            Instrumentation \n*instrument = &w->instrument[n];\n./src/backend/commands/explain.c:2168:            Instrumentation \n*instrument = &w->instrument[n];\n```\n\nIn all this cases we are using array of `Instrumentation` and if it \ncontains varying part, then it is not clear where to place it.\nYes, there is also code which serialize and sends instrumentations \nbetween worker processes  and I have updated this code in my PR to send \nactual amount of custom instrumentation data. But it can not help with \nthe cases above.\n\n\n\n\n", "msg_date": "Thu, 30 Nov 2023 17:40:15 +0200", "msg_from": "Konstantin Knizhnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "On 30/11/2023 22:40, Konstantin Knizhnik wrote:\n> \n> On 30/11/2023 5:59 am, Andrei Lepikhov wrote:\n>> On 21/10/2023 19:16, Konstantin Knizhnik wrote:\n>>> EXPLAIN statement has a list of options (i.e. ANALYZE, BUFFERS, \n>>> COST,...) which help to provide useful details of query execution.\n>>> In Neon we have added PREFETCH option which shows information about \n>>> page prefetching during query execution (prefetching is more critical \n>>> for Neon\n>>> architecture because of separation of compute and storage, so it is \n>>> implemented not only for bitmap heap scan as in Vanilla Postgres, but \n>>> also for seqscan, indexscan and indexonly scan). Another possible \n>>> candidate  for explain options is local file cache (extra caching \n>>> layer above shared buffers which is used to somehow replace file \n>>> system cache in standalone Postgres).\n>>>\n>>> I think that it will be nice to have a generic mechanism which allows \n>>> extensions to add its own options to EXPLAIN.\n>>\n>> Generally, I welcome this idea: Extensions can already do a lot of \n>> work, and they should have a tool to report their state, not only into \n>> the log.\n>> But I think your approach needs to be elaborated. 
At first, it would \n>> be better to allow registering extended instruments for specific node \n>> types to avoid unneeded calls.\n>> Secondly, looking into the Instrumentation usage, I don't see the \n>> reason to limit the size: as I see everywhere it exists locally or in \n>> the DSA where its size is calculated on the fly. So, by registering an \n>> extended instrument, we can reserve a slot for the extension. The \n>> actual size of underlying data can be provided by the extension routine.\n>>\n> Thank you for review.\n> \n> I agree that support of extended instruments is desired. I just tried to \n> minimize number of changes to make this patch smaller.\n\nI got it. But having a substantial number of extensions in support, I \nthink the extension part of instrumentation could have advantages and be \nworth elaborating on.\n\n> In all this cases we are using array of `Instrumentation` and if it \n> contains varying part, then it is not clear where to place it.\n> Yes, there is also code which serialize and sends instrumentations \n> between worker processes  and I have updated this code in my PR to send \n> actual amount of custom instrumentation data. But it can not help with \n> the cases above.\nI see next basic instruments in the code:\n- Instrumentation (which should be named NodeInstrumentation)\n- MemoizeInstrumentation\n- JitInstrumentation\n- AggregateInstrumentation\n- HashInstrumentation\n- TuplesortInstrumentation\n\nAs a variant, extensibility can be designed with parent \n'AbstractInstrumentation' node, containing node type and link to \nextensible part. sizeof(Instr) calls should be replaced with the \ngetInstrSize() call - not so much places in the code; memcpy() also can \nbe replaced with the copy_instr() routine.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Fri, 1 Dec 2023 10:57:51 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "On Sat, 21 Oct 2023 at 18:34, Konstantin Knizhnik <[email protected]> wrote:\n>\n> Hi hackers,\n>\n> EXPLAIN statement has a list of options (i.e. ANALYZE, BUFFERS, COST,...) which help to provide useful details of query execution.\n> In Neon we have added PREFETCH option which shows information about page prefetching during query execution (prefetching is more critical for Neon\n> architecture because of separation of compute and storage, so it is implemented not only for bitmap heap scan as in Vanilla Postgres, but also for seqscan, indexscan and indexonly scan). 
Another possible candidate for explain options is local file cache (extra caching layer above shared buffers which is used to somehow replace file system cache in standalone Postgres).\n>\n> I think that it will be nice to have a generic mechanism which allows extensions to add its own options to EXPLAIN.\n> I have attached the patch with implementation of such mechanism (also available as PR: https://github.com/knizhnik/postgres/pull/1 )\n>\n> I have demonstrated this mechanism using Bloom extension - just to report number of Bloom matches.\n> Not sure that it is really useful information but it is used mostly as example:\n>\n> explain (analyze,bloom) select * from t where pk=2000;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on t (cost=15348.00..15352.01 rows=1 width=4) (actual time=25.244..25.939 rows=1 loops=1)\n> Recheck Cond: (pk = 2000)\n> Rows Removed by Index Recheck: 292\n> Heap Blocks: exact=283\n> Bloom: matches=293\n> -> Bitmap Index Scan on t_pk_idx (cost=0.00..15348.00 rows=1 width=0) (actual time=25.147..25.147 rows=293 loops=1)\n> Index Cond: (pk = 2000)\n> Bloom: matches=293\n> Planning:\n> Bloom: matches=0\n> Planning Time: 0.387 ms\n> Execution Time: 26.053 ms\n> (12 rows)\n>\n> There are two known issues with this proposal:\n\nThere are few compilation errors reported by CFBot at [1] with:\n[05:00:40.452] ../src/backend/access/brin/brin.c: In function\n‘_brin_end_parallel’:\n[05:00:40.452] ../src/backend/access/brin/brin.c:2675:3: error: too\nfew arguments to function ‘InstrAccumParallelQuery’\n[05:00:40.452] 2675 |\nInstrAccumParallelQuery(&brinleader->bufferusage[i],\n&brinleader->walusage[i]);\n[05:00:40.452] | ^~~~~~~~~~~~~~~~~~~~~~~\n[05:00:40.452] In file included from ../src/include/nodes/execnodes.h:33,\n[05:00:40.452] from ../src/include/access/brin.h:13,\n[05:00:40.452] from ../src/backend/access/brin/brin.c:18:\n[05:00:40.452] ../src/include/executor/instrument.h:151:13: note: declared here\n[05:00:40.452] 151 | extern void InstrAccumParallelQuery(BufferUsage\n*bufusage, WalUsage *walusage, char* custusage);\n[05:00:40.452] | ^~~~~~~~~~~~~~~~~~~~~~~\n[05:00:40.452] ../src/backend/access/brin/brin.c: In function\n‘_brin_parallel_build_main’:\n[05:00:40.452] ../src/backend/access/brin/brin.c:2873:2: error: too\nfew arguments to function ‘InstrEndParallelQuery’\n[05:00:40.452] 2873 |\nInstrEndParallelQuery(&bufferusage[ParallelWorkerNumber],\n[05:00:40.452] | ^~~~~~~~~~~~~~~~~~~~~\n[05:00:40.452] In file included from ../src/include/nodes/execnodes.h:33,\n[05:00:40.452] from ../src/include/access/brin.h:13,\n[05:00:40.452] from ../src/backend/access/brin/brin.c:18:\n[05:00:40.452] ../src/include/executor/instrument.h:150:13: note: declared here\n[05:00:40.452] 150 | extern void InstrEndParallelQuery(BufferUsage\n*bufusage, WalUsage *walusage, char* custusage);\n\n[1] - https://cirrus-ci.com/task/5452124486631424?logs=build#L374\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 9 Jan 2024 14:03:34 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "On 30/11/2023 22:40, Konstantin Knizhnik wrote:\n> In all this cases we are using array of `Instrumentation` and if it \n> contains varying part, then it is not clear where to place it.\n> Yes, there is also code which serialize and sends instrumentations \n> between worker processes  and I have updated this code in my PR 
to send \n> actual amount of custom instrumentation data. But it can not help with \n> the cases above.\nWhat do you think about this really useful feature? Do you wish to \ndevelop it further?\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Wed, 10 Jan 2024 13:29:30 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "On Wed, Jan 10, 2024 at 01:29:30PM +0700, Andrei Lepikhov wrote:\n> What do you think about this really useful feature? Do you wish to develop\n> it further?\n\nI am biased here. This seems like a lot of code for something we've\nbeen delegating to the explain hook for ages. Even if I can see the\nappeal of pushing that more into explain.c to get more data on a\nper-node basis depending on the custom options given by the caller of\nan EXPLAIN entry point, I cannot get really excited about the extra\nmaintenance this facility would involve compared to the potential\ngains, knowing that there's a hook.\n--\nMichael", "msg_date": "Wed, 10 Jan 2024 15:46:44 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "\nOn 10/01/2024 8:46 am, Michael Paquier wrote:\n> On Wed, Jan 10, 2024 at 01:29:30PM +0700, Andrei Lepikhov wrote:\n>> What do you think about this really useful feature? Do you wish to develop\n>> it further?\n> I am biased here. This seems like a lot of code for something we've\n> been delegating to the explain hook for ages. Even if I can see the\n> appeal of pushing that more into explain.c to get more data on a\n> per-node basis depending on the custom options given by the caller of\n> an EXPLAIN entry point, I cannot get really excited about the extra\n> maintenance this facility would involve compared to the potential\n> gains, knowing that there's a hook.\n> --\n> Michael\n\n\nWell, I am not sure that proposed patch is flexible enough to handle all \npossible scenarios.\nI just wanted to make it as simple as possible to leave some chances for \nit to me merged.\nBut it is easy to answer the question why existed explain hook is not \nenough:\n\n1. It doesn't allow to add some extra options to EXPLAIN. My intention \nwas to be able to do something like this \"explain \n(analyze,buffers,prefetch) ...\". It is completely not possible with \nexplain hook.\n2. May be I wrong, but it is impossible now to collect and combine \ninstrumentation from all parallel workers without changing Postgres core\n\nExplain hook can be useful if you add some custom node to query \nexecution plan and want to provide information about this node.\nBut if you are implementing some alternative storage mechanism or some \noptimization for existed plan nodes, then it is very difficult to do it \nusing existed explain hook.\n\n\n\n\n", "msg_date": "Wed, 10 Jan 2024 15:27:06 +0200", "msg_from": "Konstantin Knizhnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "\nOn 10/01/2024 8:29 am, Andrei Lepikhov wrote:\n> On 30/11/2023 22:40, Konstantin Knizhnik wrote:\n>> In all this cases we are using array of `Instrumentation` and if it \n>> contains varying part, then it is not clear where to place it.\n>> Yes, there is also code which serialize and sends instrumentations \n>> between worker processes  and I have updated this code in my PR to \n>> send actual amount of custom instrumentation data. 
But it can not \n>> help with the cases above.\n> What do you think about this really useful feature? Do you wish to \n> develop it further?\n>\nIn Neon (cloud Postgres) we have changed Postgres core to include in \nexplain information about prefetch and local file cache.\nEXPLAIN seems to be most convenient way for users to get this \ninformation which can be very useful for investigation of query \nexecution speed.\nSo my intention was to make it possible to add extra information to \nexplain without patching Postgres core.\nExisted explain hook is not enough for it.\n\nI am not sure that the suggested approach is flexible enough. First of \nall I tried to make it is simple as possible, minimizing changes in \nPostgres core.\n\n\n\n", "msg_date": "Wed, 10 Jan 2024 15:32:50 +0200", "msg_from": "Konstantin Knizhnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "On 09/01/2024 10:33 am, vignesh C wrote:\n> On Sat, 21 Oct 2023 at 18:34, Konstantin Knizhnik <[email protected]> wrote:\n>> Hi hackers,\n>>\n>> EXPLAIN statement has a list of options (i.e. ANALYZE, BUFFERS, COST,...) which help to provide useful details of query execution.\n>> In Neon we have added PREFETCH option which shows information about page prefetching during query execution (prefetching is more critical for Neon\n>> architecture because of separation of compute and storage, so it is implemented not only for bitmap heap scan as in Vanilla Postgres, but also for seqscan, indexscan and indexonly scan). Another possible candidate for explain options is local file cache (extra caching layer above shared buffers which is used to somehow replace file system cache in standalone Postgres).\n>>\n>> I think that it will be nice to have a generic mechanism which allows extensions to add its own options to EXPLAIN.\n>> I have attached the patch with implementation of such mechanism (also available as PR: https://github.com/knizhnik/postgres/pull/1 )\n>>\n>> I have demonstrated this mechanism using Bloom extension - just to report number of Bloom matches.\n>> Not sure that it is really useful information but it is used mostly as example:\n>>\n>> explain (analyze,bloom) select * from t where pk=2000;\n>> QUERY PLAN\n>> -------------------------------------------------------------------------------------------------------------------------\n>> Bitmap Heap Scan on t (cost=15348.00..15352.01 rows=1 width=4) (actual time=25.244..25.939 rows=1 loops=1)\n>> Recheck Cond: (pk = 2000)\n>> Rows Removed by Index Recheck: 292\n>> Heap Blocks: exact=283\n>> Bloom: matches=293\n>> -> Bitmap Index Scan on t_pk_idx (cost=0.00..15348.00 rows=1 width=0) (actual time=25.147..25.147 rows=293 loops=1)\n>> Index Cond: (pk = 2000)\n>> Bloom: matches=293\n>> Planning:\n>> Bloom: matches=0\n>> Planning Time: 0.387 ms\n>> Execution Time: 26.053 ms\n>> (12 rows)\n>>\n>> There are two known issues with this proposal:\n> There are few compilation errors reported by CFBot at [1] with:\n> [05:00:40.452] ../src/backend/access/brin/brin.c: In function\n> ‘_brin_end_parallel’:\n> [05:00:40.452] ../src/backend/access/brin/brin.c:2675:3: error: too\n> few arguments to function ‘InstrAccumParallelQuery’\n> [05:00:40.452] 2675 |\n> InstrAccumParallelQuery(&brinleader->bufferusage[i],\n> &brinleader->walusage[i]);\n> [05:00:40.452] | ^~~~~~~~~~~~~~~~~~~~~~~\n> [05:00:40.452] In file included from ../src/include/nodes/execnodes.h:33,\n> [05:00:40.452] from ../src/include/access/brin.h:13,\n> [05:00:40.452] from 
../src/backend/access/brin/brin.c:18:\n> [05:00:40.452] ../src/include/executor/instrument.h:151:13: note: declared here\n> [05:00:40.452] 151 | extern void InstrAccumParallelQuery(BufferUsage\n> *bufusage, WalUsage *walusage, char* custusage);\n> [05:00:40.452] | ^~~~~~~~~~~~~~~~~~~~~~~\n> [05:00:40.452] ../src/backend/access/brin/brin.c: In function\n> ‘_brin_parallel_build_main’:\n> [05:00:40.452] ../src/backend/access/brin/brin.c:2873:2: error: too\n> few arguments to function ‘InstrEndParallelQuery’\n> [05:00:40.452] 2873 |\n> InstrEndParallelQuery(&bufferusage[ParallelWorkerNumber],\n> [05:00:40.452] | ^~~~~~~~~~~~~~~~~~~~~\n> [05:00:40.452] In file included from ../src/include/nodes/execnodes.h:33,\n> [05:00:40.452] from ../src/include/access/brin.h:13,\n> [05:00:40.452] from ../src/backend/access/brin/brin.c:18:\n> [05:00:40.452] ../src/include/executor/instrument.h:150:13: note: declared here\n> [05:00:40.452] 150 | extern void InstrEndParallelQuery(BufferUsage\n> *bufusage, WalUsage *walusage, char* custusage);\n>\n> [1] - https://cirrus-ci.com/task/5452124486631424?logs=build#L374\n>\n> Regards,\n> Vignesh\n\n\nThank you for reporting the problem.\nRebased version of the patch is attached.", "msg_date": "Wed, 10 Jan 2024 15:56:53 +0200", "msg_from": "Konstantin Knizhnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "On 10/1/2024 20:27, Konstantin Knizhnik wrote:\n> \n> On 10/01/2024 8:46 am, Michael Paquier wrote:\n>> On Wed, Jan 10, 2024 at 01:29:30PM +0700, Andrei Lepikhov wrote:\n>>> What do you think about this really useful feature? Do you wish to \n>>> develop\n>>> it further?\n>> I am biased here.  This seems like a lot of code for something we've\n>> been delegating to the explain hook for ages.  Even if I can see the\n>> appeal of pushing that more into explain.c to get more data on a\n>> per-node basis depending on the custom options given by the caller of\n>> an EXPLAIN entry point, I cannot get really excited about the extra\n>> maintenance this facility would involve compared to the potential\n>> gains, knowing that there's a hook.\n>> -- \n>> Michael\n> \n> \n> Well, I am not sure that proposed patch is flexible enough to handle all \n> possible scenarios.\n> I just wanted to make it as simple as possible to leave some chances for \n> it to me merged.\n> But it is easy to answer the question why existed explain hook is not \n> enough:\n> \n> 1. It doesn't allow to add some extra options to EXPLAIN. My intention \n> was to be able to do something like this \"explain \n> (analyze,buffers,prefetch) ...\". It is completely not possible with \n> explain hook.\nI agree. Designing mostly planner-related extensions, I also wanted to \nadd some information to the explain of nodes. For example, \npg_query_state could add the status of the node at the time of \ninterruption of execution: started, stopped, or loop closed.\nMaybe we should gather some statistics on how developers of extensions \ndeal with that issue ...\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Wed, 10 Jan 2024 22:59:02 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "On 10/21/23 14:16, Konstantin Knizhnik wrote:\n> Hi hackers,\n> \n> EXPLAIN statement has a list of options (i.e. ANALYZE, BUFFERS,\n> COST,...) 
which help to provide useful details of query execution.\n> In Neon we have added PREFETCH option which shows information about page\n> prefetching during query execution (prefetching is more critical for Neon\n> architecture because of separation of compute and storage, so it is\n> implemented not only for bitmap heap scan as in Vanilla Postgres, but\n> also for seqscan, indexscan and indexonly scan). Another possible\n> candidate  for explain options is local file cache (extra caching layer\n> above shared buffers which is used to somehow replace file system cache\n> in standalone Postgres).\n\nNot quite related to this patch about EXPLAIN options, but can you share\nsome details how you implemented prefetching for the other nodes?\n\nI'm asking because I've been working on prefetching for index scans, so\nI'm wondering if there's a better way to do this, or how to do it in a\nway that would allow neon to maybe leverage that too.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 12 Jan 2024 18:03:15 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "\nOn 12/01/2024 7:03 pm, Tomas Vondra wrote:\n> On 10/21/23 14:16, Konstantin Knizhnik wrote:\n>> Hi hackers,\n>>\n>> EXPLAIN statement has a list of options (i.e. ANALYZE, BUFFERS,\n>> COST,...) which help to provide useful details of query execution.\n>> In Neon we have added PREFETCH option which shows information about page\n>> prefetching during query execution (prefetching is more critical for Neon\n>> architecture because of separation of compute and storage, so it is\n>> implemented not only for bitmap heap scan as in Vanilla Postgres, but\n>> also for seqscan, indexscan and indexonly scan). Another possible\n>> candidate  for explain options is local file cache (extra caching layer\n>> above shared buffers which is used to somehow replace file system cache\n>> in standalone Postgres).\n> Not quite related to this patch about EXPLAIN options, but can you share\n> some details how you implemented prefetching for the other nodes?\n>\n> I'm asking because I've been working on prefetching for index scans, so\n> I'm wondering if there's a better way to do this, or how to do it in a\n> way that would allow neon to maybe leverage that too.\n>\n> regards\n>\nYes, I am looking at your PR. What we have implemented in Neon is more \nspecific to Neon architecture where storage is separated from compute.\nSo each page not found in shared buffers has to be downloaded from page \nserver. It adds quite noticeable latency, because of network roundtrip.\nWhile vanilla Postgres can rely on OS file system cache when page is not \nfound in shared buffer (access to OS file cache is certainly slower than \nto shared buffers\nbecause of syscall and copying of page, but performance penaly is not \nvery large - less than 15%), Neon has no local files and so has to send \nrequest to the socket.\n\nThis is why we have to perform aggressive prefetching whenever it is \npossible (when it it is possible to predict order of subsequent pages).\nUnlike vanilla Postgres which implements prefetch only for bitmap heap \nscan, we have implemented it for seqscan, index scan, indexonly scan, \nbitmap heap scan, vacuum, pg_prewarm.\nThe main difference between Neon prefetch and vanilla Postgres prefetch \nis that first one is backend specific. 
So each backend prefetches only \nthe pages which it needs.\nThis is why we have to rewrite prefetch for bitmap heap scan, which \nuses `fadvise` and assumes that pages prefetched by one backend into the file \ncache can be used by any other backend.\n\n\nConcerning index scan we have implemented two different approaches: for \nindex only scan we try to prefetch leaf pages and for index scan we \nprefetch referenced heap pages.\nIn both cases we start from prefetch distance 0 and increase it until it \nreaches `effective_io_concurrency` for this relation (see the sketch at the \nend of this mail). Doing so we try to \navoid prefetching of useless pages and slowing down of \"point\" lookups \nreturning one or few records.\n\nIf you are interested, you can look at our implementation in neon repo: \nall sources are available. But briefly speaking, each backend has its own \nprefetch ring (prefetch requests which are waiting for response). The \nkey idea is that we can send several prefetch requests to page server \nand then receive multiple replies. It allows increasing the speed of OLAP \nqueries up to 10 times.\n\nHeikki thinks that prefetch can be somehow combined with the async-io \nproposal (based on io_uring). But right now they have nothing in common.
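\n\nP.S. To illustrate the ramp-up logic mentioned above, here is a minimal\nsketch of the kind of code involved (all names and the structure are\nillustrative only, not the actual Neon implementation):\n\n#include \"postgres.h\"\n#include \"storage/bufmgr.h\"\n\ntypedef struct PrefetchWindow /* hypothetical per-scan state */\n{\n    int distance;      /* current prefetch distance */\n    int max_distance;  /* effective_io_concurrency for the relation */\n} PrefetchWindow;\n\n/* Called once per page actually read by the scan. */\nstatic void\nprefetch_ramp_up(PrefetchWindow *pw, Relation rel, BlockNumber current)\n{\n    /*\n     * Grow the window incrementally: a \"point\" lookup finishes before\n     * the distance becomes large, so almost nothing is prefetched for\n     * it, while a long scan quickly reaches max_distance.\n     */\n    if (pw->distance < pw->max_distance)\n        pw->distance++;\n\n    /* Issue a prefetch request for the page at the edge of the window. */\n    PrefetchBuffer(rel, MAIN_FORKNUM, current + pw->distance);\n}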
", "msg_date": "Fri, 12 Jan 2024 21:30:21 +0200", "msg_from": "Konstantin Knizhnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "\n\nOn 1/12/24 20:30, Konstantin Knizhnik wrote:\n> \n> On 12/01/2024 7:03 pm, Tomas Vondra wrote:\n>> On 10/21/23 14:16, Konstantin Knizhnik wrote:\n>>> Hi hackers,\n>>>\n>>> EXPLAIN statement has a list of options (i.e. ANALYZE, BUFFERS,\n>>> COST,...) which help to provide useful details of query execution.\n>>> In Neon we have added PREFETCH option which shows information about page\n>>> prefetching during query execution (prefetching is more critical for Neon\n>>> architecture because of separation of compute and storage, so it is\n>>> implemented not only for bitmap heap scan as in Vanilla Postgres, but\n>>> also for seqscan, indexscan and indexonly scan). Another possible\n>>> candidate for explain options is local file cache (extra caching layer\n>>> above shared buffers which is used to somehow replace file system cache\n>>> in standalone Postgres).\n>> Not quite related to this patch about EXPLAIN options, but can you share\n>> some details on how you implemented prefetching for the other nodes?\n>>\n>> I'm asking because I've been working on prefetching for index scans, so\n>> I'm wondering if there's a better way to do this, or how to do it in a\n>> way that would allow neon to maybe leverage that too.\n>>\n>> regards\n>>\n> Yes, I am looking at your PR. What we have implemented in Neon is more\n> specific to Neon architecture where storage is separated from compute.\n> So each page not found in shared buffers has to be downloaded from page\n> server. It adds quite noticeable latency, because of network roundtrip.\n> While vanilla Postgres can rely on OS file system cache when page is not\n> found in shared buffer (access to OS file cache is certainly slower than\n> to shared buffers\n> because of syscall and copying of page, but performance penalty is not\n> very large - less than 15%), Neon has no local files and so has to send\n> request to the socket.\n>\n> This is why we have to perform aggressive prefetching whenever it is\n> possible (when it is possible to predict order of subsequent pages).\n> Unlike vanilla Postgres which implements prefetch only for bitmap heap\n> scan, we have implemented it for seqscan, index scan, indexonly scan,\n> bitmap heap scan, vacuum, pg_prewarm.\n> The main difference between Neon prefetch and vanilla Postgres prefetch\n> is that the first one is backend-specific. So each backend prefetches only\n> the pages which it needs.\n> This is why we have to rewrite prefetch for bitmap heap scan, which\n> uses `fadvise` and assumes that pages prefetched by one backend into the file\n> cache can be used by any other backend.\n>\nI do understand why prefetching is important in neon (likely more than\nfor core postgres). I'm interested in how it's actually implemented,\nwhether it's somehow similar to how my patch does things or in some\ndifferent (perhaps neon-specific way), and if the approaches are\ndifferent then what are the pros/cons. And so on.\n\nSo is it implemented in the neon-specific storage, somehow, or where/how\ndoes neon issue the prefetch requests?\n\n> \n> Concerning index scan we have implemented two different approaches: for\n> index only scan we try to prefetch leaf pages and for index scan we\n> prefetch referenced heap pages.\nIn my experience the IOS handling (only prefetching leaf pages) is very\nlimiting, and may easily lead to index-only scans being way slower than\nregular index scans. Which is super surprising for users. It's why I\nended up improving the prefetcher to optionally look at the VM etc.\n(there is a sketch of that idea at the end of this mail).\n\n> In both cases we start from prefetch distance 0 and increase it until it\n> reaches `effective_io_concurrency` for this relation. Doing so we try to\n> avoid prefetching of useless pages and slowing down of \"point\" lookups\n> returning one or few records.\n> \n\nRight, the regular prefetch ramp-up. My patch does the same thing.\n\n> If you are interested, you can look at our implementation in neon repo:\n> all sources are available. But briefly speaking, each backend has its own\n> prefetch ring (prefetch requests which are waiting for response). The\n> key idea is that we can send several prefetch requests to page server\n> and then receive multiple replies. It allows increasing the speed of OLAP\n> queries up to 10 times.\n> \n\nCan you point me to the actual code / branch where it happens? I did\ncheck the github repo, but I don't see anything relevant in the default\nbranch (REL_15_STABLE_neon). There are some \"prefetch\" branches, but\nthose seem abandoned.\n\n> Heikki thinks that prefetch can be somehow combined with the async-io\n> proposal (based on io_uring). But right now they have nothing in common.\n> \n\nI can imagine async I/O being useful here, but I find the flow of I/O\nrequests is quite complex / goes through multiple layers. Or maybe I\njust don't understand how it should work.
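\n\nFWIW the \"look at the VM\" idea I mentioned above is conceptually just\nthis (a simplified sketch, not the actual patch code):\n\n#include \"postgres.h\"\n#include \"access/visibilitymap.h\"\n#include \"storage/bufmgr.h\"\n#include \"storage/itemptr.h\"\n\n/*\n * For index-only scans, prefetch a heap page only when it is not\n * all-visible - all-visible pages are never fetched by the IOS, so\n * prefetching them would be wasted work.\n */\nstatic void\nios_maybe_prefetch(Relation heapRel, ItemPointer tid, Buffer *vmbuf)\n{\n    BlockNumber blkno = ItemPointerGetBlockNumber(tid);\n\n    if (!VM_ALL_VISIBLE(heapRel, blkno, vmbuf))\n        PrefetchBuffer(heapRel, MAIN_FORKNUM, blkno);\n}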
\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 13 Jan 2024 15:51:20 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "On 13/01/2024 4:51 pm, Tomas Vondra wrote:\n>\n> On 1/12/24 20:30, Konstantin Knizhnik wrote:\n>> On 12/01/2024 7:03 pm, Tomas Vondra wrote:\n>>> On 10/21/23 14:16, Konstantin Knizhnik wrote:\n>>>> Hi hackers,\n>>>>\n>>>> EXPLAIN statement has a list of options (i.e. ANALYZE, BUFFERS,\n>>>> COST,...) which help to provide useful details of query execution.\n>>>> In Neon we have added PREFETCH option which shows information about page\n>>>> prefetching during query execution (prefetching is more critical for Neon\n>>>> architecture because of separation of compute and storage, so it is\n>>>> implemented not only for bitmap heap scan as in Vanilla Postgres, but\n>>>> also for seqscan, indexscan and indexonly scan). Another possible\n>>>> candidate for explain options is local file cache (extra caching layer\n>>>> above shared buffers which is used to somehow replace file system cache\n>>>> in standalone Postgres).\n>>> Not quite related to this patch about EXPLAIN options, but can you share\n>>> some details on how you implemented prefetching for the other nodes?\n>>>\n>>> I'm asking because I've been working on prefetching for index scans, so\n>>> I'm wondering if there's a better way to do this, or how to do it in a\n>>> way that would allow neon to maybe leverage that too.\n>>>\n>>> regards\n>>>\n>> Yes, I am looking at your PR. What we have implemented in Neon is more\n>> specific to Neon architecture where storage is separated from compute.\n>> So each page not found in shared buffers has to be downloaded from page\n>> server. It adds quite noticeable latency, because of network roundtrip.\n>> While vanilla Postgres can rely on OS file system cache when page is not\n>> found in shared buffer (access to OS file cache is certainly slower than\n>> to shared buffers\n>> because of syscall and copying of page, but performance penalty is not\n>> very large - less than 15%), Neon has no local files and so has to send\n>> request to the socket.\n>>\n>> This is why we have to perform aggressive prefetching whenever it is\n>> possible (when it is possible to predict order of subsequent pages).\n>> Unlike vanilla Postgres which implements prefetch only for bitmap heap\n>> scan, we have implemented it for seqscan, index scan, indexonly scan,\n>> bitmap heap scan, vacuum, pg_prewarm.\n>> The main difference between Neon prefetch and vanilla Postgres prefetch\n>> is that the first one is backend-specific. So each backend prefetches only\n>> the pages which it needs.\n>> This is why we have to rewrite prefetch for bitmap heap scan, which\n>> uses `fadvise` and assumes that pages prefetched by one backend into the file\n>> cache can be used by any other backend.\n>>\n> I do understand why prefetching is important in neon (likely more than\n> for core postgres). I'm interested in how it's actually implemented,\n> whether it's somehow similar to how my patch does things or in some\n> different (perhaps neon-specific way), and if the approaches are\n> different then what are the pros/cons. 
And so on.\n>\n> So is it implemented in the neon-specific storage, somehow, or where/how\n> does neon issue the prefetch requests?\n\nNeon mostly preserves the Postgres prefetch mechanism, so we are using \nPrefetchBuffer which checks if the page is present in shared buffers\nand if not - calls smgrprefetch. We are using our own storage manager \nimplementation which instead of reading pages from local disk, downloads \nthem from page server.\nAnd prefetch implementation in Neon storage manager is obviously also \ndifferent from the one in vanilla Postgres which uses posix_fadvise.\nNeon prefetch implementation inserts the prefetch request in a ring buffer and \nsends it to the server. When a read operation is performed we check if \nthere is a corresponding prefetch request in the ring buffer and if so - wait \nfor its completion (see the sketch at the end of this mail).\n\nAs I already wrote - prefetch is done locally for each backend. And each \nbackend has its own connection with page server. It can be changed in \nfuture when we implement multiplexing of page server connections. But \nright now prefetch is local. And certainly prefetch can improve \nperformance only if we correctly predict subsequent page requests.\nIf not - then page server does useless jobs and backend has to wait and \nconsume all issued prefetch requests. This is why in prefetch \nimplementation for most of nodes we start with minimal prefetch \ndistance and then increase it. It allows performing prefetch only for \nsuch queries where it is really efficient (OLAP) and doesn't degrade \nperformance of simple OLTP queries.\n\nOur prefetch implementation is also compatible with parallel plans, but \nhere we need to preserve some range of pages for each parallel worker \ninstead of picking page from some shared queue on demand. It is one of \nthe major differences with Postgres prefetch using posix_fadvise: each \nbackend should prefetch only those pages which it is going to read.\n\n>> Concerning index scan we have implemented two different approaches: for\n>> index only scan we try to prefetch leaf pages and for index scan we\n>> prefetch referenced heap pages.\n> In my experience the IOS handling (only prefetching leaf pages) is very\n> limiting, and may easily lead to index-only scans being way slower than\n> regular index scans. Which is super surprising for users. It's why I\n> ended up improving the prefetcher to optionally look at the VM etc.\n\nWell, my assumption was the following: prefetch is most efficient for \nOLAP queries.\nAlthough HTAP (hybrid transactional/analytical processing) is a popular \ntrend now,\nthe classical model is that analytic queries are performed on \"historical\" \ndata, which was already processed by vacuum and all-visible bits were \nset in VM.\nMaybe this assumption is wrong but it seems to me that if most heap \npages are not marked as all-visible, then the optimizer should prefer \nbitmap scan to index-only scan.\nAnd for combination of index and heap bitmap scans we can efficiently \nprefetch both index and heap pages.\n\n>> In both cases we start from prefetch distance 0 and increase it until it\n>> reaches `effective_io_concurrency` for this relation. Doing so we try to\n>> avoid prefetching of useless pages and slowing down of \"point\" lookups\n>> returning one or few records.\n>>\n> Right, the regular prefetch ramp-up. My patch does the same thing.\n>\n>> If you are interested, you can look at our implementation in neon repo:\n>> all sources are available. 
But briefly speaking, each backend has its own\n>> prefetch ring (prefetch requests which are waiting for response). The\n>> key idea is that we can send several prefetch requests to page server\n>> and then receive multiple replies. It allows increasing the speed of OLAP\n>> queries up to 10 times.\n>>\n> Can you point me to the actual code / branch where it happens? I did\n> check the github repo, but I don't see anything relevant in the default\n> branch (REL_15_STABLE_neon). There are some \"prefetch\" branches, but\n> those seem abandoned.\n\nImplementation of the prefetch mechanism is in the Neon extension:\nhttps://github.com/neondatabase/neon/blob/60ced06586a6811470c16c6386daba79ffaeda13/pgxn/neon/pagestore_smgr.c#L205\n\nBut the concrete implementation of prefetch for particular nodes is \ncertainly inside Postgres.\nFor example, if you are interested in how it is implemented for index scan, \nthen please look at:\nhttps://github.com/neondatabase/postgres/blob/c1c2272f436ed9231f6172f49de219fe71a9280d/src/backend/access/nbtree/nbtsearch.c#L844\nhttps://github.com/neondatabase/postgres/blob/c1c2272f436ed9231f6172f49de219fe71a9280d/src/backend/access/nbtree/nbtsearch.c#L1166\nhttps://github.com/neondatabase/postgres/blob/c1c2272f436ed9231f6172f49de219fe71a9280d/src/backend/access/nbtree/nbtsearch.c#L1467\nhttps://github.com/neondatabase/postgres/blob/c1c2272f436ed9231f6172f49de219fe71a9280d/src/backend/access/nbtree/nbtsearch.c#L1625\nhttps://github.com/neondatabase/postgres/blob/c1c2272f436ed9231f6172f49de219fe71a9280d/src/backend/access/nbtree/nbtsearch.c#L2629\n\n\n>\n>> Heikki thinks that prefetch can be somehow combined with the async-io\n>> proposal (based on io_uring). But right now they have nothing in common.\n>>\n> I can imagine async I/O being useful here, but I find the flow of I/O\n> requests is quite complex / goes through multiple layers. Or maybe I\n> just don't understand how it should work.\nI also do not think that it will be possible to marry these two approaches.
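\n\nP.S. To make the prefetch ring idea above concrete, here is a minimal\nsketch of the mechanism (all names here are illustrative and\nhypothetical - this is not the actual Neon code, just the shape of the\nlogic):\n\n#include \"postgres.h\"\n#include \"storage/block.h\"\n#include \"storage/bufpage.h\"\n\n#define RING_SIZE 128\n\n/* hypothetical per-backend request pipe to the page server */\nextern void send_getpage_request(BlockNumber blkno);\nextern Page receive_getpage_response(void);\n\ntypedef struct PrefetchRing\n{\n    BlockNumber slots[RING_SIZE];\n    uint64      head;   /* oldest request still in flight */\n    uint64      tail;   /* next free slot */\n} PrefetchRing;\n\n/* Remember the request and send it without waiting for the reply. */\nstatic void\nring_prefetch(PrefetchRing *ring, BlockNumber blkno)\n{\n    ring->slots[ring->tail++ % RING_SIZE] = blkno;\n    send_getpage_request(blkno);\n}\n\n/*\n * Actual read: consume replies in order until we reach the one we need.\n * If reads match the prefetch order, every reply is a hit; if not, the\n * prefetched replies are useless work which still must be consumed.\n */\nstatic Page\nring_read(PrefetchRing *ring, BlockNumber blkno)\n{\n    while (ring->head < ring->tail)\n    {\n        BlockNumber inflight = ring->slots[ring->head++ % RING_SIZE];\n        Page        page = receive_getpage_response();\n\n        if (inflight == blkno)\n            return page;    /* pipeline preserved */\n        /* else: wasted prefetch, reply discarded */\n    }\n    send_getpage_request(blkno);    /* miss: synchronous round trip */\n    return receive_getpage_response();\n}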
", "msg_date": "Sat, 13 Jan 2024 18:13:03 +0200", "msg_from": "Konstantin Knizhnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "\n\nOn 1/13/24 17:13, Konstantin Knizhnik wrote:\n> \n> On 13/01/2024 4:51 pm, Tomas Vondra wrote:\n>>\n>> On 1/12/24 20:30, Konstantin Knizhnik wrote:\n>>> On 12/01/2024 7:03 pm, Tomas Vondra wrote:\n>>>> On 10/21/23 14:16, Konstantin Knizhnik wrote:\n>>>>> Hi hackers,\n>>>>>\n>>>>> EXPLAIN statement has a list of options (i.e. ANALYZE, BUFFERS,\n>>>>> COST,...) which help to provide useful details of query execution.\n>>>>> In Neon we have added PREFETCH option which shows information about\n>>>>> page\n>>>>> prefetching during query execution (prefetching is more critical for\n>>>>> Neon\n>>>>> architecture because of separation of compute and storage, so it is\n>>>>> implemented not only for bitmap heap scan as in Vanilla Postgres, but\n>>>>> also for seqscan, indexscan and indexonly scan). Another possible\n>>>>> candidate for explain options is local file cache (extra caching\n>>>>> layer\n>>>>> above shared buffers which is used to somehow replace file system\n>>>>> cache\n>>>>> in standalone Postgres).\n>>>> Not quite related to this patch about EXPLAIN options, but can you\n>>>> share\n>>>> some details on how you implemented prefetching for the other nodes?\n>>>>\n>>>> I'm asking because I've been working on prefetching for index scans, so\n>>>> I'm wondering if there's a better way to do this, or how to do it in a\n>>>> way that would allow neon to maybe leverage that too.\n>>>>\n>>>> regards\n>>>>\n>>> Yes, I am looking at your PR. What we have implemented in Neon is more\n>>> specific to Neon architecture where storage is separated from compute.\n>>> So each page not found in shared buffers has to be downloaded from page\n>>> server. It adds quite noticeable latency, because of network roundtrip.\n>>> While vanilla Postgres can rely on OS file system cache when page is not\n>>> found in shared buffer (access to OS file cache is certainly slower than\n>>> to shared buffers\n>>> because of syscall and copying of page, but performance penalty is not\n>>> very large - less than 15%), Neon has no local files and so has to send\n>>> request to the socket.\n>>>\n>>> This is why we have to perform aggressive prefetching whenever it is\n>>> possible (when it is possible to predict order of subsequent pages).\n>>> Unlike vanilla Postgres which implements prefetch only for bitmap heap\n>>> scan, we have implemented it for seqscan, index scan, indexonly scan,\n>>> bitmap heap scan, vacuum, pg_prewarm.\n>>> The main difference between Neon prefetch and vanilla Postgres prefetch\n>>> is that the first one is backend-specific. So each backend prefetches only\n>>> the pages which it needs.\n>>> This is why we have to rewrite prefetch for bitmap heap scan, which\n>>> uses `fadvise` and assumes that pages prefetched by one backend into the file\n>>> cache can be used by any other backend.\n>>>\n>> I do understand why prefetching is important in neon (likely more than\n>> for core postgres). I'm interested in how it's actually implemented,\n>> whether it's somehow similar to how my patch does things or in some\n>> different (perhaps neon-specific way), and if the approaches are\n>> different then what are the pros/cons. 
And so on.\n>>\n>> So is it implemented in the neon-specific storage, somehow, or where/how\n>> does neon issue the prefetch requests?\n> \n> Neon mostly preserves the Postgres prefetch mechanism, so we are using\n> PrefetchBuffer which checks if the page is present in shared buffers\n> and if not - calls smgrprefetch. We are using our own storage manager\n> implementation which instead of reading pages from local disk, downloads\n> them from page server.\n> And prefetch implementation in Neon storage manager is obviously also\n> different from the one in vanilla Postgres which uses posix_fadvise.\n> Neon prefetch implementation inserts the prefetch request in a ring buffer and\n> sends it to the server. When a read operation is performed we check if\n> there is a corresponding prefetch request in the ring buffer and if so - wait\n> for its completion.\n> \n\nThanks. Sure, neon has to use some custom prefetch implementation -\ncertainly not posix_fadvise, considering there's no local page cache\nin the architecture.\n\nThe thing that was not clear to me is who decides what to prefetch,\nwhich code issues the prefetch requests etc. In the github links you\nshared I see it happens in the index AM code (in nbtsearch.c).\n\nThat's interesting, because that's what my first prefetching patch did\ntoo - not the same way, ofc, but in the same layer. Simply because it\nseemed like the simplest way to do that. But the feedback was that's the\nwrong layer, and that it should happen in the executor. And I agree with\nthat - the reasons are somewhere in the other thread.\n\nBased on what I saw in the neon code, I think it should be possible for\nneon to use \"my\" approach too, but that only works for the index scans,\nofc. Not sure what to do about the other places.\n\n> As I already wrote - prefetch is done locally for each backend. And each\n> backend has its own connection with page server. It can be changed in\n> future when we implement multiplexing of page server connections. But\n> right now prefetch is local. And certainly prefetch can improve\n> performance only if we correctly predict subsequent page requests.\n> If not - then page server does useless jobs and backend has to wait and\n> consume all issued prefetch requests. This is why in prefetch\n> implementation for most of nodes we start with minimal prefetch\n> distance and then increase it. It allows performing prefetch only for\n> such queries where it is really efficient (OLAP) and doesn't degrade\n> performance of simple OLTP queries.\n> \n\nNot sure I understand what's so important about prefetches being \"local\"\nfor each backend. I mean even in postgres each backend prefetches its\nown buffers, no matter what the other backends do. Although, neon\nprobably doesn't have the cross-backend sharing through shared buffers\netc. right?\n\nFWIW I certainly agree with the goal to not harm queries that can't\nbenefit from prefetching. Ramping-up the prefetch distance is something\nmy patch does too, for exactly this reason.\n\n> Our prefetch implementation is also compatible with parallel plans, but\n> here we need to preserve some range of pages for each parallel worker\n> instead of picking page from some shared queue on demand. It is one of\n> the major differences with Postgres prefetch using posix_fadvise: each\n> backend should prefetch only those pages which it is going to read.\n> \n\nUnderstood. 
I have no opinion on this, though.\n\n>>> Concerning index scan we have implemented two different approaches: for\n>>> index only scan we try to prefetch leaf pages and for index scan we\n>>> prefetch referenced heap pages.\n>> In my experience the IOS handling (only prefetching leaf pages) is very\n>> limiting, and may easily lead to index-only scans being way slower than\n>> regular index scans. Which is super surprising for users. It's why I\n>> ended up improving the prefetcher to optionally look at the VM etc.\n> \n> Well, my assumption was the following: prefetch is most efficient for\n> OLAP queries.\n> Although HTAP (hybrid transactional/analytical processing) is a popular\n> trend now,\n> the classical model is that analytic queries are performed on \"historical\"\n> data, which was already processed by vacuum and all-visible bits were\n> set in VM.\n> Maybe this assumption is wrong but it seems to me that if most heap\n> pages are not marked as all-visible, then the optimizer should prefer\n> bitmap scan to index-only scan.\n\nI think this assumption is generally reasonable, but it hinges on the\nassumption that OLAP queries have most indexes recently vacuumed and\nall-visible. I'm not sure it's wise to rely on that.\n\nWithout prefetching it's not that important - the worst thing that would\nhappen is that the IOS degrades into regular index-scan. But with\nprefetching these plans can \"invert\" with respect to cost.\n\nI'm not saying it's terrible or that IOS must have prefetching, but I\nthink it's something users may run into fairly often. And it led me to\nrework the prefetching so that IOS can prefetch too ...\n\n> And for combination of index and heap bitmap scans we can efficiently\n> prefetch both index and heap pages.\n> \n>>> In both cases we start from prefetch distance 0 and increase it until it\n>>> reaches `effective_io_concurrency` for this relation. Doing so we try to\n>>> avoid prefetching of useless pages and slowing down of \"point\" lookups\n>>> returning one or few records.\n>>>\n>> Right, the regular prefetch ramp-up. My patch does the same thing.\n>>\n>>> If you are interested, you can look at our implementation in neon repo:\n>>> all sources are available. But briefly speaking, each backend has its own\n>>> prefetch ring (prefetch requests which are waiting for response). The\n>>> key idea is that we can send several prefetch requests to page server\n>>> and then receive multiple replies. It allows increasing the speed of OLAP\n>>> queries up to 10 times.\n>>>\n>> Can you point me to the actual code / branch where it happens? I did\n>> check the github repo, but I don't see anything relevant in the default\n>> branch (REL_15_STABLE_neon). 
There are some \"prefetch\" branches, but\n>> those seem abandoned.\n> \n> Implementation of prefetch mecnahism is in Neon extension:\n> https://github.com/neondatabase/neon/blob/60ced06586a6811470c16c6386daba79ffaeda13/pgxn/neon/pagestore_smgr.c#L205\n> \n> But concrete implementation of prefetch for particular nodes is\n> certainly inside Postgres.\n> For example, if you are interested how it is implemented for index scan,\n> then please look at:\n> https://github.com/neondatabase/postgres/blob/c1c2272f436ed9231f6172f49de219fe71a9280d/src/backend/access/nbtree/nbtsearch.c#L844\n> https://github.com/neondatabase/postgres/blob/c1c2272f436ed9231f6172f49de219fe71a9280d/src/backend/access/nbtree/nbtsearch.c#L1166\n> https://github.com/neondatabase/postgres/blob/c1c2272f436ed9231f6172f49de219fe71a9280d/src/backend/access/nbtree/nbtsearch.c#L1467\n> https://github.com/neondatabase/postgres/blob/c1c2272f436ed9231f6172f49de219fe71a9280d/src/backend/access/nbtree/nbtsearch.c#L1625\n> https://github.com/neondatabase/postgres/blob/c1c2272f436ed9231f6172f49de219fe71a9280d/src/backend/access/nbtree/nbtsearch.c#L2629\n> \n\nThanks! Very helpful. As I said, I ended up moving the prefetching to\nthe executor. For indexscans I think it should be possible for neon to\nbenefit from that (in a way, it doesn't need to do anything except for\noverriding what PrefetchBuffer does). Not sure about the other places\nwhere neon needs to prefetch, I don't have ambition to rework those.\n\n> \n>>\n>>> Heikki thinks that prefetch can be somehow combined with async-io\n>>> proposal (based on io_uring). But right now they have nothing in common.\n>>>\n>> I can imagine async I/O being useful here, but I find the flow of I/O\n>> requests is quite complex / goes through multiple layers. Or maybe I\n>> just don't understand how it should work.\n> I also do not think that it will be possible to marry this two approaches.\n\nI didn't actually say it would be impossible - I think it seems like a\nuse case where async I/O should be a natural fit. But I'm not sure to do\nthat in a way that would not be super confusing and/or fragile when\nsomething unexpected happens (like a rescan, or maybe some change to the\nindex structure - page split, etc.)\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 14 Jan 2024 22:47:45 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "On 14/01/2024 11:47 pm, Tomas Vondra wrote:\n> The thing that was not clear to me is who decides what to prefetch,\n> which code issues the prefetch requests etc. In the github links you\n> shared I see it happens in the index AM code (in nbtsearch.c).\n\n\nIt is up to the particular plan node (seqscan, indexscan,...) which \npages to prefetch.\n\n\n>\n> That's interesting, because that's what my first prefetching patch did\n> too - not the same way, ofc, but in the same layer. Simply because it\n> seemed like the simplest way to do that. But the feedback was that's the\n> wrong layer, and that it should happen in the executor. And I agree with\n> that - the reasons are somewhere in the other thread.\n>\nI read the arguments in\n\nhttps://www.postgresql.org/message-id/flat/8c86c3a6-074e-6c88-3e7e-9452b6a37b9b%40enterprisedb.com#fc792f8d013215ace7971535a5744c83\n\nSeparating prefetch info in index scan descriptor is really good idea. 
\nIt will be amazing to have a generic prefetch mechanism for all indexes.\nBut unfortunately I do not understand how it is possible. The logic of \nindex traversal is implemented inside the AM. The executor doesn't know it.\nFor example for B-Tree scan we can prefetch:\n\n- intermediate pages\n- leaf pages\n- heap pages referenced by TIDs\n\nBefore we load the next intermediate page, we do not know the next leaf pages.\nAnd before we load the next leaf page, we cannot find out the TIDs from this page.\n\nAnother challenge is how far we should prefetch (as far as I \nunderstand both your and our approaches use a dynamically extended \nprefetch window)\n\n> Based on what I saw in the neon code, I think it should be possible for\n> neon to use \"my\" approach too, but that only works for the index scans,\n> ofc. Not sure what to do about the other places.\nWe definitely need prefetch for heap scan (it gives the most advantages \nin performance), for vacuum and also for pg_prewarm. Also I tried to \nimplement it for custom indexes such as pg_vector. I am still not sure \nwhether it is possible to create some generic solution which will work \nfor all indexes.\n\nI have also tried to implement an alternative approach for prefetch based \non access statistics.\nIt comes from a use case of seqscan of a table with large toasted records. \nSo for each record we have to extract its TOAST data.\nIt is done using a standard index scan, but unfortunately index prefetch \ndoesn't help much here: there is usually just one TOAST segment and so \nprefetch just has no chance to do something useful. But as far as heap \nrecords are accessed sequentially, there is a good chance that the toast table \nwill also be accessed mostly sequentially. So we can just count the number \nof sequential requests to each relation and if the ratio of seq/rand \naccesses is above some threshold we can prefetch next pages of this \nrelation (see the sketch at the end of this mail). This is a really universal \napproach but ... working mostly for the TOAST table.\n\n\n>> As I already wrote - prefetch is done locally for each backend. And each\n>> backend has its own connection with page server. It can be changed in\n>> future when we implement multiplexing of page server connections. But\n>> right now prefetch is local. And certainly prefetch can improve\n>> performance only if we correctly predict subsequent page requests.\n>> If not - then page server does useless jobs and backend has to wait and\n>> consume all issued prefetch requests. This is why in prefetch\n>> implementation for most of nodes we start with minimal prefetch\n>> distance and then increase it. It allows performing prefetch only for\n>> such queries where it is really efficient (OLAP) and doesn't degrade\n>> performance of simple OLTP queries.\n>>\n> Not sure I understand what's so important about prefetches being \"local\"\n> for each backend. I mean even in postgres each backend prefetches its\n> own buffers, no matter what the other backends do. Although, neon\n> probably doesn't have the cross-backend sharing through shared buffers\n> etc. right?\n\n\nSorry if my explanation was not clear:(\n\n> I mean even in postgres each backend prefetches its own buffers, no matter what the other backends do.\n\nThis is exactly the difference. In Neon such approach doesn't work.\nEach backend maintains its own prefetch ring. And if a prefetched page was not actually received, then the whole pipe is lost.\nI.e. the backend prefetched pages 1,5,10. Then it needs to read page 2. 
So it has to consume responses for 1,5,10 and issue another request for page 2.\nInstead of improving speed we are just doing extra work.\nSo each backend should prefetch only those pages which it is actually going to read.\nThis is why the prefetch approach used in Postgres, for example for parallel bitmap heap scan, doesn't work for Neon.\nIf you do `posix_fadvise` then the prefetched page is placed in the OS cache and can be used by any parallel worker.\nBut in Neon each parallel worker should be given its own range of pages to scan and prefetch only those pages.\n\n>\n>> Well, my assumption was the following: prefetch is most efficient for OLAP queries.\n>> Although HTAP (hybrid transactional/analytical processing) is a popular\n>> trend now,\n>> the classical model is that analytic queries are performed on \"historical\"\n>> data, which was already processed by vacuum and all-visible bits were\n>> set in VM.\n>> Maybe this assumption is wrong but it seems to me that if most heap\n>> pages are not marked as all-visible, then the optimizer should prefer\n>> bitmap scan to index-only scan.\n> I think this assumption is generally reasonable, but it hinges on the\n> assumption that OLAP queries have most indexes recently vacuumed and\n> all-visible. I'm not sure it's wise to rely on that.\n>\n> Without prefetching it's not that important - the worst thing that would\n> happen is that the IOS degrades into regular index-scan.\n>\nI think that it is also a problem without prefetch. There are cases where \nseqscan or bitmap heap scan are really much faster than IOS because the latter \nhas to perform a lot of visibility checks. Yes, certainly the optimizer \ntakes into account the percentage of all-visible pages. But even with that it is \nnot trivial to adjust optimizer parameters so that it can really choose the fastest plan.\n> But with prefetching these plans can \"invert\" with respect to cost.\n>\n> I'm not saying it's terrible or that IOS must have prefetching, but I\n> think it's something users may run into fairly often. And it led me to\n> rework the prefetching so that IOS can prefetch too ...\n>\n\nI think that inspecting the VM for prefetch is a really good idea.\n\n> Thanks! Very helpful. As I said, I ended up moving the prefetching to\n> the executor. For indexscans I think it should be possible for neon to\n> benefit from that (in a way, it doesn't need to do anything except for\n> overriding what PrefetchBuffer does). Not sure about the other places\n> where neon needs to prefetch, I don't have ambition to rework those.\n>\nOnce your PR is merged, I will rewrite the Neon prefetch implementation \nfor indexes using your approach.
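\n\nP.S. The access-statistics idea mentioned above is roughly the following\n(a sketch with illustrative names and thresholds, not the actual code):\n\n#include \"postgres.h\"\n#include \"storage/bufmgr.h\"\n\ntypedef struct RelAccessStats\n{\n    BlockNumber last_block;  /* previously requested block */\n    uint64      seq_count;   /* requests for last_block + 1 */\n    uint64      rand_count;  /* all other requests */\n} RelAccessStats;\n\n#define SEQ_RATIO_THRESHOLD 0.8  /* illustrative values */\n#define SEQ_PREFETCH_WINDOW 8\n\nstatic void\ntrack_access(RelAccessStats *st, Relation rel, BlockNumber blkno)\n{\n    if (blkno == st->last_block + 1)\n        st->seq_count++;\n    else\n        st->rand_count++;\n    st->last_block = blkno;\n\n    /*\n     * Once the accesses to this relation look mostly sequential,\n     * prefetch the following pages. A real implementation would also\n     * remember what was already prefetched to avoid duplicate requests.\n     */\n    if ((double) st->seq_count >\n        SEQ_RATIO_THRESHOLD * (double) (st->seq_count + st->rand_count))\n    {\n        for (int i = 1; i <= SEQ_PREFETCH_WINDOW; i++)\n            PrefetchBuffer(rel, MAIN_FORKNUM, blkno + i);\n    }\n}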
", "msg_date": "Mon, 15 Jan 2024 16:22:18 +0200", "msg_from": "Konstantin Knizhnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "\n\nOn 1/15/24 15:22, Konstantin Knizhnik wrote:\n> \n> On 14/01/2024 11:47 pm, Tomas Vondra wrote:\n>> The thing that was not clear to me is who decides what to prefetch,\n>> which code issues the prefetch requests etc. In the github links you\n>> shared I see it happens in the index AM code (in nbtsearch.c).\n> \n> \n> It is up to the particular plan node (seqscan, indexscan,...) 
which\n> pages to prefetch.\n> \n> \n>>\n>> That's interesting, because that's what my first prefetching patch did\n>> too - not the same way, ofc, but in the same layer. Simply because it\n>> seemed like the simplest way to do that. But the feedback was that's the\n>> wrong layer, and that it should happen in the executor. And I agree with\n>> that - the reasons are somewhere in the other thread.\n>>\n> I read the arguments in\n> \n> https://www.postgresql.org/message-id/flat/8c86c3a6-074e-6c88-3e7e-9452b6a37b9b%40enterprisedb.com#fc792f8d013215ace7971535a5744c83\n> \n> Separating prefetch info in the index scan descriptor is a really good idea.\n> It will be amazing to have a generic prefetch mechanism for all indexes.\n> But unfortunately I do not understand how it is possible. The logic of\n> index traversal is implemented inside the AM. The executor doesn't know it.\n> For example for B-Tree scan we can prefetch:\n> \n> - intermediate pages\n> - leaf pages\n> - heap pages referenced by TIDs\n> \n\nMy patch does not care about prefetching internal index pages. Yes, it's\na limitation, but my assumption is the internal pages are maybe 0.1% of\nthe index, and typically very hot / cached. Yes, if the index is not\nused very often, this may be untrue. But I consider it a possible future\nimprovement, for some other patch. FWIW there's a prefetching patch for\ninserts into indexes (which only prefetches just the index leaf pages).\n\n> Before we load the next intermediate page, we do not know the next leaf pages.\n> And before we load the next leaf page, we cannot find out the TIDs from this\n> page.\n> \n\nNot sure I understand what this is about. The patch simply calls the\nindex AM function index_getnext_tid() enough times to fill the prefetch\nqueue (there is a sketch of this at the end of this mail). It does not\nprefetch the next index leaf page, it however does\nprefetch the heap pages. It does not \"stall\" at the boundary of the\nindex leaf page, or something.\n\n> Another challenge is how far we should prefetch (as far as I\n> understand both your and our approaches use a dynamically extended\n> prefetch window)\n> \n\nBy dynamic extension of prefetch window you mean the incremental growth\nof the prefetch distance from 0 to effective_io_concurrency? I don't\nthink there's a better solution.\n\nThere might be additional information that we could consider (e.g.\nexpected number of rows for the plan, earlier executions of the scan,\n...) but each of these has a failure mode.\n\n>> Based on what I saw in the neon code, I think it should be possible for\n>> neon to use \"my\" approach too, but that only works for the index scans,\n>> ofc. Not sure what to do about the other places.\n> We definitely need prefetch for heap scan (it gives the most advantages\n> in performance), for vacuum and also for pg_prewarm. Also I tried to\n> implement it for custom indexes such as pg_vector. I am still not sure\n> whether it is possible to create some generic solution which will work\n> for all indexes.\n> \n\nI haven't tried with pgvector, but I don't see why my patch would not\nwork for all index AMs that can return TID.\n\n> I have also tried to implement an alternative approach for prefetch based\n> on access statistics.\n> It comes from a use case of seqscan of a table with large toasted records.\n> So for each record we have to extract its TOAST data.\n> It is done using a standard index scan, but unfortunately index prefetch\n> doesn't help much here: there is usually just one TOAST segment and so\n> prefetch just has no chance to do something useful. But as far as heap
But as far as heap\n> records are accessed sequentially, there is good chance that toast table\n> will also be accessed mostly sequentially. So we just can count number\n> of sequential requests to each relation and if ratio or seq/rand \n> accesses is above some threshold we can prefetch next pages of this\n> relation. This is really universal approach but ... working mostly for\n> TOAST table.\n> \n\nAre you're talking about what works / doesn't work in neon, or about\npostgres in general?\n\nI'm not sure what you mean by \"one TOAST segment\" and I'd also guess\nthat if both tables are accessed mostly sequentially, the read-ahead\nwill do most of the work (in postgres).\n\nIt's probably true that as we do a separate index scan for each TOAST-ed\nvalue, that can't really ramp-up the prefetch distance fast enough.\nMaybe we could have a mode where we start with the full distance?\n\n> \n>>> As I already wrote - prefetch is done locally for each backend. And each\n>>> backend has its own connection with page server. It  can be changed in\n>>> future when we implement multiplexing of page server connections. But\n>>> right now prefetch is local. And certainly prefetch can improve\n>>> performance only if we correctly predict subsequent page requests.\n>>> If not - then page server does useless jobs and backend has to waity and\n>>> consume all issues prefetch requests. This is why in prefetch\n>>> implementation for most of nodes we  start with minimal prefetch\n>>> distance and then increase it. It allows to perform prefetch only for\n>>> such queries where it is really efficient (OLAP) and doesn't degrade\n>>> performance of simple OLTP queries.\n>>>\n>> Not sure I understand what's so important about prefetches being \"local\"\n>> for each backend. I mean even in postgres each backend prefetches it's\n>> own buffers, no matter what the other backends do. Although, neon\n>> probably doesn't have the cross-backend sharing through shared buffers\n>> etc. right?\n> \n> \n> Sorry if my explanation was not clear:(\n> \n>> I mean even in postgres each backend prefetches it's own buffers, no\n>> matter what the other backends do.\n> \n> This is exactly the difference. In Neon such approach doesn't work.\n> Each backend maintains it's own prefetch ring. And if prefetched page\n> was not actually received, then the whole pipe is lost.\n> I.e. backend prefetched pages 1,5,10. Then it need to read page 2. So it\n> has to consume responses for 1,5,10 and issue another request for page 2.\n> Instead of improving speed we are just doing extra job.\n> So each backend should prefetch only those pages which it is actually\n> going to read.\n> This is why prefetch approach used in Postgres for example for parallel\n> bitmap heap scan doesn't work for Neon.\n> If you do `posic_fadvise` then prefetched page is placed in OS cache and\n> can be used by any parallel worker.\n> But in Neon each parallel worker should be given its own range of pages\n> to scan and prefetch only this pages.\n> \n\nI still don't quite see/understand the difference. I mean, even in\npostgres each backend does it's own prefetches, using it's own prefetch\nring. 
But I'm not entirely sure about the neon architecture differences.\n\nDoes this mean neon can do prefetching from the executor in principle?\n\nCould you perhaps describe a situation where the bitmap scan prefetching\n(as implemented in Postgres) does not work for neon?\n\n>>\n>>> Well, my assumption was the following: prefetch is most efficient\n>>> for OLAP queries.\n>>> Although HTAP (hybrid transactional/analytical processing) is a popular\n>>> trend now,\n>>> the classical model is that analytic queries are performed on \"historical\"\n>>> data, which was already processed by vacuum and all-visible bits were\n>>> set in VM.\n>>> Maybe this assumption is wrong but it seems to me that if most heap\n>>> pages are not marked as all-visible, then the optimizer should prefer\n>>> bitmap scan to index-only scan.\n>> I think this assumption is generally reasonable, but it hinges on the\n>> assumption that OLAP queries have most indexes recently vacuumed and\n>> all-visible. I'm not sure it's wise to rely on that.\n>>\n>> Without prefetching it's not that important - the worst thing that would\n>> happen is that the IOS degrades into regular index-scan.\n>>\n> I think that it is also a problem without prefetch. There are cases where\n> seqscan or bitmap heap scan are really much faster than IOS because the latter\n> has to perform a lot of visibility checks. Yes, certainly the optimizer\n> takes into account the percentage of all-visible pages. But even with that it is\n> not trivial to adjust optimizer parameters so that it can really choose the fastest plan.\n\nTrue. There are more cases where it can happen, no doubt about it. But I\nthink those cases are somewhat less likely.\n\n>> But with prefetching these plans can \"invert\" with respect to cost.\n>>\n>> I'm not saying it's terrible or that IOS must have prefetching, but I\n>> think it's something users may run into fairly often. And it led me to\n>> rework the prefetching so that IOS can prefetch too ...\n>>\n>>\n> \n> I think that inspecting the VM for prefetch is a really good idea.\n> \n>> Thanks! Very helpful. As I said, I ended up moving the prefetching to\n>> the executor. For indexscans I think it should be possible for neon to\n>> benefit from that (in a way, it doesn't need to do anything except for\n>> overriding what PrefetchBuffer does). Not sure about the other places\n>> where neon needs to prefetch, I don't have ambition to rework those.\n>>\n> Once your PR is merged, I will rewrite the Neon prefetch implementation\n> for indexes using your approach.\n> \n\nWell, maybe you could try rewriting it now, so that you can give\nsome feedback to the patch. I'd appreciate that.
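\n\nTo be clear about what \"filling the prefetch queue\" means, the core of\nit is roughly this (a simplified sketch, not the actual patch code -\nqueue bookkeeping, rescans and error handling are omitted):\n\n#include \"postgres.h\"\n#include \"access/genam.h\"\n#include \"storage/bufmgr.h\"\n#include \"storage/itemptr.h\"\n\nstatic void\nindex_prefetch_fill(IndexScanDesc scan, Relation heapRel,\n                    ItemPointerData *queue, int *nqueued, int distance)\n{\n    /* Read TIDs ahead through the regular index AM interface. */\n    while (*nqueued < distance)\n    {\n        ItemPointer tid = index_getnext_tid(scan, ForwardScanDirection);\n\n        if (tid == NULL)\n            break;              /* end of the index scan */\n\n        /* Queue the TID for the executor and prefetch its heap page. */\n        queue[(*nqueued)++] = *tid;\n        PrefetchBuffer(heapRel, MAIN_FORKNUM,\n                       ItemPointerGetBlockNumber(tid));\n    }\n}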
FWIW there's a prefetching patch for\n> inserts into indexes (which only prefetches just the index leaf pages).\n\nWe have to prefetch pages at height-1 level (parents of leaf pages) for \nIOS because otherwise prefetch pipeline is broken at each transition to \nnext leaf page.\nWhen we start with a new leaf page we have to fill prefetch ring from \nscratch which certainly has negative impact on performance.\n\n\n> Not sure I understand what this is about. The patch simply calls the\n> index AM function index_getnext_tid() enough times to fill the prefetch\n> queue. It does not prefetch the next index leaf page, it however does\n> prefetch the heap pages. It does not \"stall\" at the boundary of the\n> index leaf page, or something.\n\nOk, now I fully understand your approach. Looks really elegant and works \nfor all indexes.\nThere is still an issue with IOS and seqscan.\n\n\n\n>\n>> Another challenge - is how far we should prefetch (as far as I\n>> understand both your and our approach using dynamically extended\n>> prefetch window)\n>>\n> By dynamic extension of prefetch window you mean the incremental growth\n> of the prefetch distance from 0 to effective_io_concurrency?\n\nYes\n\n> I don't\n> think there's a better solution.\n\nI tried one more solution: propagate information about expected number \nof fetched rows to AM. Based on this information it is possible to \nchoose proper prefetch distance.\nCertainly it is not quite precise: we can scan large number rows but \nfilter only few of them. This is why this approach was not committed in \nNeon.\nBut I still think that using statistics for determining prefetch window \nis not so bad idea. May be it needs better thinking.\n\n\n>\n> There might be additional information that we could consider (e.g.\n> expected number of rows for the plan, earlier executions of the scan,\n> ...) but each of these has a failure mode.\n\nI wrote reply above before reading next fragment:)\nSo I have already tried it.\n\n> I haven't tried with pgvector, but I don't see why my patch would not\n> work for all index AMs that can return TID.\n\n\nYes, I agree. But it will be efficient only if getting the next TID is \ncheap - it is located on the same leaf page.\n\n\n>\n>> I have also tried to implement alternative approach for prefetch based\n>> on access statistic.\n>> It comes from use case of seqscan of table with larger toasted records.\n>> So for each record we have to extract its TOAST data.\n>> It is done using standard index scan, but unfortunately index prefetch\n>> doesn't help much here: there is usually just one TOAST segment and so\n>> prefetch just have no chance to do something useful. But as far as heap\n>> records are accessed sequentially, there is a good chance that toast table\n>> will also be accessed mostly sequentially. So we just can count number\n>> of sequential requests to each relation and if ratio of seq/rand\n>> accesses is above some threshold we can prefetch next pages of this\n>> relation. This is really universal approach but ... working mostly for\n>> TOAST table.\n>>\n> Are you talking about what works / doesn't work in neon, or about\n> postgres in general?\n>\n> I'm not sure what you mean by \"one TOAST segment\" and I'd also guess\n> that if both tables are accessed mostly sequentially, the read-ahead\n> will do most of the work (in postgres).\n\nYes, I agree: in case of vanilla Postgres OS will do read-ahead. 
But not \nin Neon.\nBy one TOAST segment I mean \"one TOAST record\" - 2kb.\n\n\n> It's probably true that as we do a separate index scan for each TOAST-ed\n> value, that can't really ramp-up the prefetch distance fast enough.\n> Maybe we could have a mode where we start with the full distance?\n\nSorry, I do not understand. Especially in this case large prefetch \nwindow is undesired.\nMost records fit in 2kb, so we need to fetch only one head (TOAST) \nrecord per TOAST index search.\n\n\n>> This is exactly the difference. In Neon such approach doesn't work.\n>> Each backend maintains its own prefetch ring. And if prefetched page\n>> was not actually received, then the whole pipe is lost.\n>> I.e. backend prefetched pages 1,5,10. Then it needs to read page 2. So it\n>> has to consume responses for 1,5,10 and issue another request for page 2.\n>> Instead of improving speed we are just doing extra job.\n>> So each backend should prefetch only those pages which it is actually\n>> going to read.\n>> This is why prefetch approach used in Postgres for example for parallel\n>> bitmap heap scan doesn't work for Neon.\n>> If you do `posix_fadvise` then prefetched page is placed in OS cache and\n>> can be used by any parallel worker.\n>> But in Neon each parallel worker should be given its own range of pages\n>> to scan and prefetch only these pages.\n>>\n> I still don't quite see/understand the difference. I mean, even in\n> postgres each backend does its own prefetches, using its own prefetch\n> ring. But I'm not entirely sure about the neon architecture differences\n>\nI am not speaking about your approach. It will work with Neon as well.\nI am describing why implementation of prefetch for heap bitmap scan \ndoesn't work for Neon:\nit issues prefetch requests for pages which are never accessed by this \nparallel worker.\n\n> Does this mean neon can do prefetching from the executor in principle?\n>\n> Could you perhaps describe a situation where the bitmap scan prefetching\n> (as implemented in Postgres) does not work for neon?\n>\n\nI am speaking about prefetch implementation in nodeBitmapHeapScan. \nPrefetch iterator is not synced with normal iterator, i.e. they can \nreturn different pages.
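\n\nJust to illustrate, the pattern in nodeBitmapHeapscan.c is roughly this \n(simplified, variable names are not exact):\n\n    /* page this worker will actually read */\n    tbmres = tbm_shared_iterate(iterator);\n    ...\n    /* page taken from the separate, shared prefetch iterator */\n    tbmpre = tbm_shared_iterate(prefetch_iterator);\n    PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);\n\nEach tbm_shared_iterate() call returns the next page globally, so page \nprefetched by one worker is usually read by some other worker. With \nposix_fadvise it is fine (OS cache is shared by all workers), but with \nper-backend prefetch ring it is not.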
\n\n>\n> Well, maybe you could try rewriting it now, so that you can give\n> some feedback to the patch. I'd appreciate that.\n\nI will try.\n\n\nBest regards,\nKonstantin\n", "msg_date": "Mon, 15 Jan 2024 22:42:10 +0200", "msg_from": "Konstantin Knizhnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "\n\nOn 1/15/24 21:42, Konstantin Knizhnik wrote:\n> \n> On 15/01/2024 5:08 pm, Tomas Vondra wrote:\n>>\n>> My patch does not care about prefetching internal index pages. Yes, it's\n>> a limitation, but my assumption is the internal pages are maybe 0.1% of\n>> the index, and typically very hot / cached. Yes, if the index is not\n>> used very often, this may be untrue. But I consider it a possible future\n>> improvement, for some other patch. FWIW there's a prefetching patch for\n>> inserts into indexes (which only prefetches just the index leaf pages).\n> \n> We have to prefetch pages at height-1 level (parents of leaf pages) for\n> IOS because otherwise prefetch pipeline is broken at each transition to\n> next leaf page.\n> When we start with a new leaf page we have to fill prefetch ring from\n> scratch which certainly has negative impact on performance.\n> \n\nBy \"broken\" you mean that you prefetch items only from a single leaf\npage, so immediately after reading the next one nothing is prefetched.\nCorrect? Yeah, I had this problem initially too, when I did the\nprefetching in the index AM code. One of the reasons why it got moved to\nthe executor.\n\n> \n>> Not sure I understand what this is about. The patch simply calls the\n>> index AM function index_getnext_tid() enough times to fill the prefetch\n>> queue. It does not prefetch the next index leaf page, it however does\n>> prefetch the heap pages. It does not \"stall\" at the boundary of the\n>> index leaf page, or something.\n> \n> Ok, now I fully understand your approach. Looks really elegant and works\n> for all indexes.\n> There is still an issue with IOS and seqscan.\n> \n\nNot sure. For seqscan, I think this has nothing to do with it. 
Postgres\nrelies on read-ahead to do the work - of course, if that doesn't work\n(e.g. for async/direct I/O that'd be the case), an improvement will be\nneeded. But it's unrelated to this patch, and I'm certainly not saying\nthis patch does that. I think Thomas/Andres did some work on that.\n\nFor IOS, I think the limitation that this does not prefetch any index\npages (especially the leafs) is there, and it'd be nice to do something\nabout it. But I see it as a separate thing, which I think does need to\nhappen in the index AM layer (not in the executor).\n\n> \n> \n>>\n>>> Another challenge - is how far we should prefetch (as far as I\n>>> understand both your and our approach using dynamically extended\n>>> prefetch window)\n>>>\n>> By dynamic extension of prefetch window you mean the incremental growth\n>> of the prefetch distance from 0 to effective_io_concurrency?\n> \n> Yes\n> \n>> I don't\n>> think there's a better solution.\n> \n> I tried one more solution: propagate information about expected number\n> of fetched rows to AM. Based on this information it is possible to\n> choose proper prefetch distance.\n> Certainly it is not quite precise: we can scan large number rows but\n> filter only few of them. This is why this approach was not committed in\n> Neon.\n> But I still think that using statistics for determining prefetch window\n> is not so bad idea. May be it needs better thinking.\n> \n\nI don't think we should rely on this information too much. It's far too\nunreliable - especially the planner estimates. The run-time data may be\nmore accurate, but I'm worried it may be quite variable (e.g. for\ndifferent runs of the scan).\n\nMy position is to keep this as simple as possible, and prefer to be more\nconservative when possible - that is, shorter prefetch distances. In my\nexperience the benefit of prefetching is subject to diminishing returns,\ni.e. going from 0 => 16 is way bigger difference than 16 => 32. So\nbetter to stick with lower value instead of wasting resources.\n\n> \n>>\n>> There might be additional information that we could consider (e.g.\n>> expected number of rows for the plan, earlier executions of the scan,\n>> ...) but each of these has a failure mode.\n> \n> I wrote reply above before reading next fragment:)\n> So I have already tried it.\n> \n>> I haven't tried with pgvector, but I don't see why my patch would not\n>> work for all index AMs that can return TID.\n> \n> \n> Yes, I agree. But it will be efficient only if getting the next TID is\n> cheap - it is located on the same leaf page.\n> \n\nMaybe. I haven't tried/thought about it, but yes - if it requires doing\na lot of work in between the prefetches, the benefits of prefetching\nwill diminish naturally. Might be worth doing some experiments.\n\n> \n>>\n>>> I have also tried to implement alternative approach for prefetch based\n>>> on access statistic.\n>>> It comes from use case of seqscan of table with larger toasted records.\n>>> So for each record we have to extract its TOAST data.\n>>> It is done using standard index scan, but unfortunately index prefetch\n>>> doesn't help much here: there is usually just one TOAST segment and so\n>>> prefetch just have no chance to do something useful. But as far as heap\n>>> records are accessed sequentially, there is a good chance that toast table\n>>> will also be accessed mostly sequentially. So we just can count number\n>>> of sequential requests to each relation and if ratio of seq/rand\n>>> accesses is above some threshold we can prefetch next pages of this\n>>> relation. This is really universal approach but ... working mostly for\n>>> TOAST table.\n>>>\n>> Are you talking about what works / doesn't work in neon, or about\n>> postgres in general?\n>>\n>> I'm not sure what you mean by \"one TOAST segment\" and I'd also guess\n>> that if both tables are accessed mostly sequentially, the read-ahead\n>> will do most of the work (in postgres).\n> \n> Yes, I agree: in case of vanilla Postgres OS will do read-ahead. But not\n> in Neon.\n> By one TOAST segment I mean \"one TOAST record\" - 2kb.\n> \n\nAh, you mean \"TOAST chunk\". Yes, if a record fits into a single TOAST\nchunk, my prefetch won't work. Not sure what to do for neon ...\n\n> \n>> It's probably true that as we do a separate index scan for each TOAST-ed\n>> value, that can't really ramp-up the prefetch distance fast enough.\n>> Maybe we could have a mode where we start with the full distance?\n> \n> Sorry, I do not understand. Especially in this case large prefetch\n> window is undesired.\n> Most records fit in 2kb, so we need to fetch only one head (TOAST)\n> record per TOAST index search.\n> \n\nYeah, I was confused what you mean by \"segment\". My point was that if a\nvalue is TOAST-ed into multiple chunks, maybe we should allow more\naggressive prefetching instead of the slow ramp-up ...\n\nBut yeah, if there's just one TOAST chunk, that does not help.\n\n> \n>>> This is exactly the difference. In Neon such approach doesn't work.\n>>> Each backend maintains its own prefetch ring. And if prefetched page\n>>> was not actually received, then the whole pipe is lost.\n>>> I.e. backend prefetched pages 1,5,10. Then it needs to read page 2. So it\n>>> has to consume responses for 1,5,10 and issue another request for\n>>> page 2.\n>>> Instead of improving speed we are just doing extra job.\n>>> So each backend should prefetch only those pages which it is actually\n>>> going to read.\n>>> This is why prefetch approach used in Postgres for example for parallel\n>>> bitmap heap scan doesn't work for Neon.\n>>> If you do `posix_fadvise` then prefetched page is placed in OS cache and\n>>> can be used by any parallel worker.\n>>> But in Neon each parallel worker should be given its own range of pages\n>>> to scan and prefetch only these pages.\n>>>\n>> I still don't quite see/understand the difference. I mean, even in\n>> postgres each backend does its own prefetches, using its own prefetch\n>> ring. But I'm not entirely sure about the neon architecture differences\n>>\n> I am not speaking about your approach. It will work with Neon as well.\n> I am describing why implementation of prefetch for heap bitmap scan\n> doesn't work for Neon:\n> it issues prefetch requests for pages which are never accessed by this\n> parallel worker.\n> \n>> Does this mean neon can do prefetching from the executor in principle?\n>>\n>> Could you perhaps describe a situation where the bitmap scan prefetching\n>> (as implemented in Postgres) does not work for neon?\n>>\n> \n> I am speaking about prefetch implementation in nodeBitmapHeapScan.\n> Prefetch iterator is not synced with normal iterator, i.e. they can\n> return different pages.\n> \n\nAh, now I think I understand. 
The workers don't share memory, so the\npages prefetched by one worker are wasted if some other worker ends up\nprocessing them.\n\n>>\n>> Well, maybe you could try rewriting it now, so that you can give\n>> some feedback to the patch. I'd appreciate that.\n> \n> I will try.\n> \n\nThanks!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 16 Jan 2024 16:38:12 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "On 16/01/2024 5:38 pm, Tomas Vondra wrote:\n> By \"broken\" you mean that you prefetch items only from a single leaf\n> page, so immediately after reading the next one nothing is prefetched.\n> Correct?\n\n\nYes, exactly. It means that reading first heap page from next leaf page \nwill be done without prefetch which in case of Neon means roundtrip with \npage server (~0.2msec within one data center).\n\n\n> Yeah, I had this problem initially too, when I did the\n> prefetching in the index AM code. One of the reasons why it got moved to\n> the executor.\n\nYeah, it works nicely for vanilla Postgres. You call index_getnext_tid() \nand when it reaches end of leaf page it reads the next leaf page. Because of \nOS read-ahead this read is expected to be fast even without prefetch. \nBut not in Neon case - we have to download this page from page server \n(see above). So ideal solution for Neon will be to prefetch both leaf \npages and referenced heap pages. And prefetch of the latter should be \ninitiated as soon as leaf page is loaded. Unfortunately it is \nnon-trivial to implement and current index scan prefetch implementation \nfor Neon is not doing it.\n", "msg_date": "Tue, 16 Jan 2024 18:07:16 +0200", "msg_from": "Konstantin Knizhnik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "Came across this while looking for patches to review. IMO this thread has\nbeen hijacked to the point of being not useful for the subject. I suggest\nthis discussion regarding prefetch move to its own thread and this thread\nand commitfest entry be ended/returned with feedback.\n\nAlso IMO, the commitfest is not for early stage idea patches. 
The stuff on\nthere that is ready for review should at least be thought of by the\noriginal author as something they would be willing to commit. I suggest\nyou post the most recent patch and a summary of the discussion to a new\nthread that hopefully won't be hijacked. Consistent and on-topic replies\nwill keep the topic front-and-center on the lists until a patch is ready\nfor consideration.\n\nDavid J.\n", "msg_date": "Tue, 23 Jan 2024 18:54:57 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom explain options" }, { "msg_contents": "Hi\n\nOn Mon, Jul 22, 2024 at 17:08, Konstantin Knizhnik <[email protected]>\nwrote:\n\n>\n> On 16/01/2024 5:38 pm, Tomas Vondra wrote:\n>\n> By \"broken\" you mean that you prefetch items only from a single leaf\n>\n> page, so immediately after reading the next one nothing is prefetched.\n> Correct?\n>\n>\n> Yes, exactly. It means that reading first heap page from next leaf page\n> will be done without prefetch which in case of Neon means roundtrip with\n> page server (~0.2msec within one data center).\n>\n>\n> Yeah, I had this problem initially too, when I did the\n> prefetching in the index AM code. One of the reasons why it got moved to\n> the executor.\n>\n> Yeah, it works nicely for vanilla Postgres. You call index_getnext_tid() and\n> when it reaches end of leaf page it reads the next leaf page. Because of OS\n> read-ahead this read is expected to be fast even without prefetch. But not\n> in Neon case - we have to download this page from page server (see above).\n> So ideal solution for Neon will be to prefetch both leaf pages and\n> referenced heap pages. And prefetch of the latter should be initiated as soon\n> as leaf page is loaded. Unfortunately it is non-trivial to implement and\n> current index scan prefetch implementation for Neon is not doing it.\n>\n\nWhat is the current state of this patch - is it abandoned? It needs a\nrebase.\n\nRegards\n\nPavel\n", "msg_date": "Mon, 22 Jul 2024 17:10:39 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Custom explain options" } ]
[ { "msg_contents": "Hi Hackers,\n\nIs there any specific reason hot_standby_feedback default is set to off? I\nsee some explanation in the thread [1] about recovery_min_apply_delay value\n> 0 causing table bloat. However, recovery_min_apply_delay is set to 0 by\ndefault. So, if a server admin wants to change this value, they can change\nhot_standby_feedback as well if needed, right?\n\nThanks!\n\n[1]: https://www.postgresql.org/message-id/[email protected]\n", "msg_date": "Sun, 22 Oct 2023 00:50:39 -0700", "msg_from": "sirisha chamarthi <[email protected]>", "msg_from_op": true, "msg_subject": "Why is hot_standby_feedback off by default?" }, { "msg_contents": "On 10/22/23 09:50, sirisha chamarthi wrote:\n> Is there any specific reason hot_standby_feedback default is set to off?\n\n\nYes. No one wants a rogue standby to ruin production.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Sun, 22 Oct 2023 13:56:15 +0200", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is hot_standby_feedback off by default?" }, { "msg_contents": "Hi,\n\nOn October 22, 2023 4:56:15 AM PDT, Vik Fearing <[email protected]> wrote:\n>On 10/22/23 09:50, sirisha chamarthi wrote:\n>> Is there any specific reason hot_standby_feedback default is set to off?\n>\n>\n>Yes. No one wants a rogue standby to ruin production.\n\nMedium term, I think we need an approximate xid->\"time of assignment\" mapping that's continually maintained on the primary. One of the things that'd allow us to do is introduce a GUC to control the maximum effect of hs_feedback on the primary, in a useful unit. Numbers of xids are not a useful unit (100k xids is forever on some systems, a few minutes at best on others, the rate is not necessarily that steady when plpgsql exception handlers are used, ...)\n\nIt'd be useful to have such a mapping for other features too. E.g.\n\n - making it visible in pg_stat_activity how problematic a longrunning xact is - a 3 day old xact that doesn't have an xid assigned and has a recent xmin is fine, it won't prevent vacuum from doing things. But a somewhat recent xact that still has a snapshot from before an old xact was cancelled could be problematic.\n\n- turn pg_class.relfrozenxid into an understandable timeframe. It's a fair bit of mental effort to classify \"370M xids old\" into problem/fine (it's e.g. not a problem on a system with a high xid rate, on a big table that takes a while to vacuum).\n\n- using the mapping to compute an xid consumption rate IMO would be one building block for smarter AV scheduling. Together with historical vacuum runtimes it'd allow us to start vacuuming early enough to prevent hitting thresholds, adapt pacing, prioritize between tables etc.
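\n\nTo be a bit more concrete, something very roughly like this might be enough (just a sketch, all the names are made up):\n\n/* sampled periodically, say once a second, in shared memory */\ntypedef struct XidTimeMapEntry\n{\n    FullTransactionId next_fxid;   /* nextXid at sample time */\n    TimestampTz sample_time;\n} XidTimeMapEntry;\n\n/* fixed size ring, oldest samples get overwritten */\nXidTimeMapEntry entries[XID_TIME_MAP_SIZE];\n\nMapping an xid to an approximate assignment time is then just a binary search over the ring, and an approximate consumption rate falls out of comparing two neighboring samples.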
\n\nGreetings,\n\nAndres \n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n", "msg_date": "Sun, 22 Oct 2023 12:07:59 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is hot_standby_feedback off by default?" }, { "msg_contents": "Hi Andres,\n\nOn Sun, Oct 22, 2023 at 12:08 PM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On October 22, 2023 4:56:15 AM PDT, Vik Fearing <[email protected]>\n> wrote:\n> >On 10/22/23 09:50, sirisha chamarthi wrote:\n> >> Is there any specific reason hot_standby_feedback default is set to off?\n> >\n> >\n> >Yes. No one wants a rogue standby to ruin production.\n>\n> Medium term, I think we need an approximate xid->\"time of assignment\"\n> mapping that's continually maintained on the primary. One of the things\n> that'd allow us to do is introduce a GUC to control the maximum effect of\n> hs_feedback on the primary, in a useful unit. Numbers of xids are not a\n> useful unit (100k xids is forever on some systems, a few minutes at best on\n> others, the rate is not necessarily that steady when plpgsql exception\n> handlers are used, ...)\n>\n\n+1 on this idea. Please let me give this a try.\n\nThanks,\nSirisha\n", "msg_date": "Sun, 22 Oct 2023 18:26:23 -0700", "msg_from": "sirisha chamarthi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is hot_standby_feedback off by default?" }, { "msg_contents": "On Sun, Oct 22, 2023 at 4:56 AM Vik Fearing <[email protected]> wrote:\n\n> On 10/22/23 09:50, sirisha chamarthi wrote:\n> > Is there any specific reason hot_standby_feedback default is set to off?\n>\n>\n> Yes. No one wants a rogue standby to ruin production.\n>\n\nAgreed. I believe that any reasonable use of a standby server for queries\nrequires hot_standby_feedback to be turned on. Otherwise, we can\npotentially see query cancellations, increased replication lag because of\nconflicts (while replaying vacuum cleanup records) on standby (resulting in\nlonger failover times if the server is configured for disaster recovery +\nread scaling). Recent logical decoding on standby as well requires\nhot_standby_feedback to be turned on to avoid slot invalidation [1]. If\nthere is no requirement to query the standby, admins can always set\nhot_standby to off. My goal here is to minimize the amount of configuration\ntuning required to use these features.\n\n[1]:\nhttps://www.postgresql.org/docs/current/logicaldecoding-explanation.html\n\nThanks,\nSirisha\n", "msg_date": "Sun, 22 Oct 2023 19:02:39 -0700", "msg_from": "sirisha chamarthi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is hot_standby_feedback off by default?" }, { "msg_contents": "On 10/23/23 04:02, sirisha chamarthi wrote:\n> On Sun, Oct 22, 2023 at 4:56 AM Vik Fearing <[email protected]> wrote:\n> \n>> On 10/22/23 09:50, sirisha chamarthi wrote:\n>>> Is there any specific reason hot_standby_feedback default is set to off?\n>>\n>>\n>> Yes. No one wants a rogue standby to ruin production.\n>>\n> \n> Agreed.\n\n\nOkay...\n\n\n> I believe that any reasonable use of a standby server for queries\n> requires hot_standby_feedback to be turned on. Otherwise, we can\n> potentially see query cancellations, increased replication lag because of\n> conflicts (while replaying vacuum cleanup records) on standby (resulting in\n> longer failover times if the server is configured for disaster recovery +\n> read scaling). Recent logical decoding on standby as well requires\n> hot_standby_feedback to be turned on to avoid slot invalidation [1]. If\n> there is no requirement to query the standby, admins can always set\n> hot_standby to off. 
My goal here is to minimize the amount of configuration\n> tuning required to use these features.\n> \n> [1]:\n> https://www.postgresql.org/docs/current/logicaldecoding-explanation.html\n\n\nThis does not sound like you agree.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Mon, 23 Oct 2023 04:23:32 +0200", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is hot_standby_feedback off by default?" }, { "msg_contents": "sirisha chamarthi <[email protected]> writes:\n> I believe that any reasonable use of a standby server for queries\n> requires hot_standby_feedback to be turned on.\n\nThe fact that it's not the default should suggest to you that that's\nnot the majority opinion.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 22 Oct 2023 23:35:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is hot_standby_feedback off by default?" }, { "msg_contents": "On Sun, Oct 22, 2023 at 12:07:59PM -0700, Andres Freund wrote:\n> Medium term, I think we need an approximate xid->\"time of assignment\" mapping that's continually maintained on the primary. One of the things that'd allow us to do is introduce a GUC to control the maximum effect of hs_feedback on the primary, in a useful unit. Numbers of xids are not a useful unit (100k xids is forever on some systems, a few minutes at best on others, the rate is not necessarily that steady when plpgsql exception handlers are used, ...)\n> \n> It'd be useful to have such a mapping for other features too. E.g.\n> \n> - making it visible in pg_stat_activity how problematic a longrunning xact is - a 3 day old xact that doesn't have an xid assigned and has a recent xmin is fine, it won't prevent vacuum from doing things. But a somewhat recent xact that still has a snapshot from before an old xact was cancelled could be problematic.\n> \n> - turn pg_class.relfrozenxid into an understandable timeframe. It's a fair bit of mental effort to classify \"370M xids old\" into problem/fine (it's e.g. not a problem on a system with a high xid rate, on a big table that takes a while to vacuum).\n> \n> - using the mapping to compute an xid consumption rate IMO would be one building block for smarter AV scheduling. Together with historical vacuum runtimes it'd allow us to start vacuuming early enough to prevent hitting thresholds, adapt pacing, prioritize between tables etc. \n\nBig +1 to all of this.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 23 Oct 2023 15:39:56 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is hot_standby_feedback off by default?" }, { "msg_contents": "On Tue, Oct 24, 2023 at 3:42 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Sun, Oct 22, 2023 at 12:07:59PM -0700, Andres Freund wrote:\n> > Medium term, I think we need an approximate xid->\"time of assignment\" mapping that's continually maintained on the primary. One of the things that'd allow us to do is introduce a GUC to control the maximum effect of hs_feedback on the primary, in a useful unit. Numbers of xids are not a useful unit (100k xids is forever on some systems, a few minutes at best on others, the rate is not necessarily that steady when plpgsql exception handlers are used, ...)\n> >\n> > It'd be useful to have such a mapping for other features too. 
E.g.\n> >\n> > - making it visible in pg_stat_activity how problematic a longrunning xact is - a 3 day old xact that doesn't have an xid assigned and has a recent xmin is fine, it won't prevent vacuum from doing things. But a somewhat recent xact that still has a snapshot from before an old xact was cancelled could be problematic.\n> >\n> > - turn pg_class.relfrozenxid into an understandable timeframe. It's a fair bit of mental effort to classify \"370M xids old\" into problem/fine (it's e.g. not a problem on a system with a high xid rate, on a big table that takes a while to vacuum).\n> >\n> > - using the mapping to compute an xid consumption rate IMO would be one building block for smarter AV scheduling. Together with historical vacuum runtimes it'd allow us to start vacuuming early enough to prevent hitting thresholds, adapt pacing, prioritize between tables etc.\n>\n> Big +1 to all of this.\n\nSounds like a TODO?\n\n\n", "msg_date": "Mon, 20 Nov 2023 16:34:47 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is hot_standby_feedback off by default?" }, { "msg_contents": "On 2023-11-20 16:34:47 +0700, John Naylor wrote:\n> Sounds like a TODO?\n\nWFM. I don't personally use or update TODO, as I have my doubts about its\nusefulness or state of maintenance. But please feel free to add this as a TODO\nfrom my end...\n", "msg_date": "Mon, 20 Nov 2023 15:49:50 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is hot_standby_feedback off by default?" }, { "msg_contents": "On Tue, Nov 21, 2023 at 6:49 AM Andres Freund <[email protected]> wrote:\n>\n> On 2023-11-20 16:34:47 +0700, John Naylor wrote:\n> > Sounds like a TODO?\n>\n> WFM. I don't personally use or update TODO, as I have my doubts about its\n> usefulness or state of maintenance. But please feel free to add this as a TODO\n> from my end...\n\nYeah, I was hoping to change that, but it's been a long row to hoe.\nAnyway, the above idea was added under \"administration\".\n", "msg_date": "Tue, 21 Nov 2023 13:31:45 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is hot_standby_feedback off by default?" } ]
[ { "msg_contents": "\nmissing dependencies PostgreSQL 16 OpenSUSE Tumbleweed (SLES 15.5 \npackages)\n\n---\n\n#### YaST2 conflicts list - generated 2023-10-22 10:30:07 ####\n\nthere is no package providing 'libldap_r-2.4.so.2())(64bit)' required by \ninstalling postgresql16-server-16.0-1PGDG.sles15.x86_64\n\n [ ] break 'postgresql16-server-16.0-1PGDG.sles15.x86_64' by \nignoring some dependencies\n\n [ ] do not install postgresql16-server-16.0-1PGDG.sles15.x86_64\n\n\n#### YaST2 conflicts list END ###\n\ndependencies needed:\n\nlibpq.so.5() (64bit)\nlibpthread.so.0() (64bit)\nlibpthread.so.0(GLIBC_2.2.5)(64bit)\nlibm.so.6() (64bit)\nlibm.so.6(GLIBC_2.2.5)(64bit)\nlibm.so.6(GLIBC_2.29)(64bit)\nlibcrypto.so.1.1() (64bit)\nlibcrypto.so.1.1(OPENSSL_1_1_0)(64bit)\nlibssl.so.1.1() (64bit)\nliblz4.so.1() (64bit)\nlibssl.so.1.1(OPENSSL_1_1_0)(64bit)\nlibz.so.1() (64bit)\nlibxml2.so.2() (64bit)\nlibxml2.so.2(LIBXML2_2.4.30)(64bit)\nlibzstd.so.1() (64bit)\nlibxml2.so.2(LIBXML2_2.6.0)(64bit)\nlibc.so.6(GLIBC_2.25)(64bit)\nlibcrypto.so.1.1(OPENSSL_1_1_1)(64bit)\nlibdl.so.2() (64bit)\nlibgssapi_krb5.so.2() (64bit)\nlibgssapi_krb5.so.2(gssapi_krb5_2_MIT)(64bit)\nlibldap_r-2.4.so.2() (64bit)\nlibpam.so.0() (64bit)\nlibpam.so.0(LIBPAM_1.0)(64bit)\nlibsystemd.so.0() (64bit)\nlibsystemd.so.0(LIBSYSTEMD_209)(64bit)\nlibdl.so.2(GLIBC_2.2.5)(64bit)\nlibicudata.so.suse65.1() (64bit)\nlibicui18n.so.suse65.1() (64bit)\nlibicuuc.so.suse65.1() (64bit)\nlibrt.so.1() (64bit)\nlibrt.so.1(GLIBC_2.2.5)(64bit)\nlibxml2.so.2(LIBXML2_2.6.23)(64bit)\nlibxml2.so.2(LIBXML2_2.6.8)(64bit)\nutil-linux\npostgresql16(x86-64) = 16.0-1PGDG.sles15\npostgresql16-libs(x86-64) = 16.0-1PGDG.sles15\n/bin/sh\n/usr/sbin/useradd\nsystemd\n/usr/sbin/groupadd\nglibc\n\n\nused repos:\n\nhttps://download.postgresql.org/pub/repos/zypp/16/suse/sles-15.5-x86_64/\nhttps://download.postgresql.org/pub/repos/zypp/srpms/16/suse/sles-15.5-x86_64/\nhttp://download.opensuse.org/repositories/server:database:postgresql/openSUSE_Tumbleweed/\n\n\nHow can I resolve these dependencies?\n\n\n\n", "msg_date": "Sun, 22 Oct 2023 11:00:07 +0200", "msg_from": "André Verwijs <[email protected]>", "msg_from_op": true, "msg_subject": "missing dependencies PostgreSQL 16 OpenSUSE Tumbleweed (SLES 15.5\n packages)" }, { "msg_contents": "Hi,\n\nOn Sun, Oct 22, 2023 at 11:00:07AM +0200, André Verwijs wrote:\n> missing dependencies PostgreSQL 16 OpenSUSE Tumbleweed (SLES 15.5\n> packages)\n> \n> ---\n> \n> #### YaST2 conflicts list - generated 2023-10-22 10:30:07 ####\n> \n> there is no package providing 'libldap_r-2.4.so.2())(64bit)' required by\n> installing postgresql16-server-16.0-1PGDG.sles15.x86_64\n\nThose are the packages from zypp.postgresql.org, right? There is a link\nto the issue tracker at https://redmine.postgresql.org/projects/pgrpms/\nfrom the home page, I think it would be best to report the problem there.\n\n\nMichael\n\n\n", "msg_date": "Sun, 22 Oct 2023 11:53:59 +0200", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: missing dependencies PostgreSQL 16 OpenSUSE Tumbleweed (SLES\n 15.5 packages)" } ]
[ { "msg_contents": "Hi,\n\nSome time ago I started a thread about prefetching heap pages during\nindex scans [1]. That however only helps when reading rows, not when\ninserting them.\n\nImagine inserting a row into a table with many indexes - we insert the\nrow into the table, and then into the indexes one by one, synchronously.\nWe walk the index, determine the appropriate leaf page, read it from\ndisk into memory, insert the index tuple, and then do the same thing for\nthe next index.\n\nIf there are many indexes, and there's not much correlation with the\ntable, this may easily result in I/O happening synchronously with queue\ndepth 1. Hard to make it even slower ...\n\nThis can be a problem even with a modest number of indexes - imagine\nbulk-loading data into a table using COPY (say, in 100-row batches).\nInserting the rows into heap happens in bulk, but the indexes are\nstill modified in a loop, as if for single-row inserts. Not great.\n\nWith multiple connections the concurrent I/O may be generated that\nway, but for low-concurrency workloads (e.g. batch jobs) that may not\nreally work.\n\nI had an idea what we might do about this - we can walk the index,\nalmost as if we're inserting the index tuple, but only the \"inner\"\nnon-leaf pages. And instead of descending to the leaf page, we just\nprefetch it. The non-leaf pages are typically <1% of the index, and hot,\nso likely already cached, so not worth prefetching those.\n\nThe attached patch does a PoC of this. It adds a new AM function\n\"amprefetch\", with an implementation for btree indexes, mimicking the\nindex lookup, except that it only prefetches the leaf page as explained\na bit earlier.\n\nIn the executor, this is wrapped in ExecInsertPrefetchIndexes() which\ngets called in various places right before ExecInsertIndexTuples().\nI thought about doing that in ExecInsertIndexTuples() directly, but\nthat would not work for COPY, where we want to issue the prefetches for\nthe whole batch, not for individual tuples.\n\nThis may need various improvements - the prefetch duplicates a couple\nsteps that could be expensive (e.g. evaluation of index predicates,\nforming index tuples, and so on). Would be nice to improve this, but\ngood enough for PoC I think.\n\nAnother gap is lack of incremental prefetch (ramp-up). We just prefetch\nall the indexes, for all tuples. But I think that's OK. We know we'll\nneed those pages, and the number is fairly limited.\n\nThere's a GUC enable_insert_prefetch that can be used to enable this\ninsert prefetching.\n\nI did a simple test on two machines - one with SATA SSD RAID, one with\nNVMe SSD. In both cases the data (table+indexes) are an order of\nmagnitude larger than RAM. The indexes are on UUID, so pretty random and\nthere's no correlation. Then batches of 100, 1000 and 10000 rows are\ninserted, with/without the prefetching.\n\nWith 5 indexes, the results look like this:\n\nSATA SSD RAID\n-------------\n\n rows no prefetch prefetch\n 100 176.872 ms 70.910 ms\n 1000 1035.056 ms 590.495 ms\n10000 8494.836 ms 3216.206 ms\n\n\nNVMe\n----\n\n rows no prefetch prefetch\n 100 133.365 ms 72.899 ms\n 1000 1572.379 ms 829.298 ms\n10000 11889.143 ms 3621.981 ms\n\n\nNot bad, I guess. 
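\n\nTo illustrate, the call sites look about like this (simplified - the\nactual patch has a few more checks, and COPY issues the prefetches for\nthe whole batch at once):\n\n    /* issue prefetches for leaf pages of all indexes first ... */\n    if (enable_insert_prefetch)\n        ExecInsertPrefetchIndexes(resultRelInfo, slot, estate);\n\n    /* ... and only then do the usual synchronous index inserts */\n    recheckIndexes = ExecInsertIndexTuples(resultRelInfo, slot, estate,\n                                           false, false, NULL, NIL, false);\n\nAnyway, back to the results. 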
Cutting the time to ~30% is nice.\n\nThe fewer the indexes, the smaller the difference (with 1 index there is\nalmost no difference), of course.\n\n\nregards\n\n\n[1] https://commitfest.postgresql.org/45/4351/\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sun, 22 Oct 2023 16:46:51 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "PoC: prefetching index leaf pages (for inserts)" }, { "msg_contents": "Hi,\n\nI had a chance to discuss this patch with Andres off-list a couple days\nago, and he suggested he tried sorting the (index) tuples before\ninserting them, and that yielded significant benefits, possibly\ncomparable to the prefetching.\n\nI've been somewhat skeptical the sorting might be very beneficial, but I\ndecided to do some more tests comparing the benefits.\n\nThe sorting only really works for bulk inserts (e.g. from COPY), when we\nhave multiple index tuples for each index. But I didn't have time to\nrework the code like that, so I took a simpler approach - do the sort in\nthe INSERT, so that the insert tuples are sorted too. So, something like\n\n WITH data AS (SELECT md5(random()::text)::uuid\n FROM generate_series(1,100) ORDER BY 1)\n INSERT INTO t SELECT * FROM data;\n\nObviously, this can only sort the rows in one way - if there are\nmultiple indexes, then only one of them will be sorted, limiting the\npossible benefit of the optimization. In the COPY code we could do a\nseparate sort for each index, so the tuples would be sorted for all\nindexes (which also has a cost, of course).\n\nBut as I said, I decided to do the simple SQL-level sort. There are\nmultiple indexes on the same column, so it's a bit as if we sorted the\ntuples for each index independently.\n\nAnyway, the test inserts batches of 100, ..., 100k rows into tables of\ndifferent sizes (10M - 1B rows), with/without prefetching, and possibly\nsorts the batches before the insert. See the attached index.sql script.\n\nThe PDF shows the results, and also compares the different combinations.\nFor example the first 5 lines are results without and with prefetching,\nfollowed by speedup, where green=faster and red=slower. For example 50%\nmeans prefetching makes it 2x as fast.\n\nSimilarly, first column has timings without / with sorting, with speedup\nright under it. Diagonally, we have speedup for enabling both sorting\nand prefetch.\n\nI did that with different table sizes, where 10M easily fits into RAM\nwhile 1B certainly exceeds it. And I did that on the usual two machines\nwith different amounts of RAM and storage (SATA SSD vs. NVME).\n\nThe results are mostly similar on both, I think:\n\n* On 10M tables (fits into RAM including indexes), prefetching doesn't\nreally help (assuming the data is not evicted from memory for other\nreasons), and actually hurts a bit (~20%). Sorting does help, depending\non the number of indexes - can be 10-40% faster.\n\n* On 1B tables (exceeds RAM), prefetching is a clear winner. Sorting\ndoes not make any difference except for a couple rare \"blips\".\n\n* On 100M tables it's a mix/transition of those two cases.\n\n\nSo maybe we should try doing both, perhaps with some heuristics to only\ndo the prefetching on sufficiently large/random indexes, and sorting\nonly on smaller ones?\n\nAnother option would be to make the prefetching smarter so that we don't\nprefetch data that is already in memory (either in shared buffers or in\npage cache). 
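\n\nFor the page cache that might be doable with preadv2() and the\nRWF_NOWAIT flag - a rough sketch (not in the patch, Linux-only, and the\nbuffer/fd handling is hypothetical):\n\n    struct iovec iov = {.iov_base = buf, .iov_len = BLCKSZ};\n    ssize_t ret = preadv2(fd, &iov, 1, offset, RWF_NOWAIT);\n\n    if (ret == BLCKSZ)\n        ;   /* already in page cache, no need to prefetch */\n    else if (ret < 0 && errno == EAGAIN)\n        posix_fadvise(fd, offset, BLCKSZ, POSIX_FADV_WILLNEED);\n\nThe annoying bit is that RWF_NOWAIT actually copies the data when it is\ncached, so it's not entirely free either.\n\n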
That would reduce the overhead/slowdown visible in results\non the 10M table.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 1 Nov 2023 15:10:03 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PoC: prefetching index leaf pages (for inserts)" }, { "msg_contents": "Seems cfbot was not entirely happy about the patch, for two reasons:\n\n1) enable_insert_prefetching definition was inconsistent (different\nboot/default values, missing in .conf and so on)\n\n2) stupid bug in execReplication, inserting index entries twice\n\nThe attached v3 should fix all of that, I believe.\n\n\nAs for the path forward, I think the prefetching is demonstrably\nbeneficial. There are cases where it can't help or even harms\nperformance. I think the success depends on three areas:\n\n(a) reducing the costs of the prefetching - For example right now we\nbuild the index tuples twice (once for prefetch, once for the insert),\nbut maybe there's a way to do that only once? There are also predicate\nindexes, and so on.\n\n(b) being smarter about when to prefetch - For example if we only have\none \"prefetchable\" index, it's somewhat pointless to prefetch (for\nsingle-row cases). And so on.\n\n(c) not prefetching when already cached - This is somewhat related to\nthe previous case, but perhaps it'd be cheaper to first check if the\ndata is already cached. For shared buffers it should not be difficult,\nfor page cache we could use preadv2 with RWF_NOWAIT flag. The question\nis if this is cheap enough to be cheaper than just doing posix_fadvise\n(which however only deals with the page cache).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 6 Nov 2023 18:05:56 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PoC: prefetching index leaf pages (for inserts)" }, { "msg_contents": "On 06/11/2023 19:05, Tomas Vondra wrote:\n> As for the path forward, I think the prefetching is demonstrably\n> beneficial. There are cases where it can't help or even harms\n> performance. I think the success depends on three areas:\n> \n> (a) reducing the costs of the prefetching - For example right now we\n> build the index tuples twice (once for prefetch, once for the insert),\n> but maybe there's a way to do that only once? There are also predicate\n> indexes, and so on.\n> \n> (b) being smarter about when to prefetch - For example if we only have\n> one \"prefetchable\" index, it's somewhat pointless to prefetch (for\n> single-row cases). And so on.\n> \n> (c) not prefetching when already cached - This is somewhat related to\n> the previous case, but perhaps it'd be cheaper to first check if the\n> data is already cached. For shared buffers it should not be difficult,\n> for page cache we could use preadv2 with RWF_NOWAIT flag. The question\n> is if this is cheap enough to be cheaper than just doing posix_fadvise\n> (which however only deals with the page cache).\n\nI don't like this approach. It duplicates the tree-descend code, and it \nalso duplicates the work of descending the tree at runtime. And it only \naddresses index insertion; there are a lot of places that could benefit \nfrom prefetching or async execution like this.\n\nI think we should think of this as async execution rather than \nprefetching. 
We don't have the general infrastructure for writing async \ncode, but if we did, this would be much simpler. In an async programming \nmodel, like you have in many other languages like Rust, python or \njavascript, there would be no separate prefetching function. Instead, \naminsert() would return a future that can pause execution if it needs to \ndo I/O. Something like this:\n\naminsert_futures = NIL;\n/* create a future for each index insert */\nfor (<all indexes>)\n{\n aminsert_futures = lappend(aminsert_futures, aminsert(...));\n}\n/* wait for all the futures to finish */\nawait aminsert_futures;\n\nThe async-aware aminsert function would run to completion quickly if all \nthe pages are already in cache. If you get a cache miss, it would start \nan async I/O read for the page, and yield to the other insertions until \nthe I/O completes.\n\nWe already support async execution of FDWs now, with the \nForeignAsyncRequest() and ForeignAsyncConfigureWait() callbacks. Can we \ngeneralize that?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 23 Nov 2023 15:26:23 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PoC: prefetching index leaf pages (for inserts)" }, { "msg_contents": "\n\nOn 11/23/23 14:26, Heikki Linnakangas wrote:\n> On 06/11/2023 19:05, Tomas Vondra wrote:\n>> As for the path forward, I think the prefetching is demonstrably\n>> beneficial. There are cases where it can't help or even harms\n>> performance. I think the success depends on three areas:\n>>\n>> (a) reducing the costs of the prefetching - For example right now we\n>> build the index tuples twice (once for prefetch, once for the insert),\n>> but maybe there's a way to do that only once? There are also predicate\n>> indexes, and so on.\n>>\n>> (b) being smarter about when to prefetch - For example if we only have\n>> one \"prefetchable\" index, it's somewhat pointless to prefetch (for\n>> single-row cases). And so on.\n>>\n>> (c) not prefetching when already cached - This is somewhat related to\n>> the previous case, but perhaps it'd be cheaper to first check if the\n>> data is already cached. For shared buffers it should not be difficult,\n>> for page cache we could use preadv2 with RWF_NOWAIT flag. The question\n>> is if this is cheap enough to be cheaper than just doing posix_fadvise\n>> (which however only deals with shared buffers).\n> \n> I don't like this approach. It duplicates the tree-descend code, and it\n> also duplicates the work of descending the tree at runtime. And it only\n> addresses index insertion; there are a lot of places that could benefit\n> from prefetching or async execution like this.\n> \n\nYeah, I think that's a fair assessment, although I think the amount of\nduplicate code is pretty small (and perhaps it could be refactored to a\ncommon function, which I chose not to do in the PoC patch).\n\n> I think we should think of this as async execution rather than\n> prefetching. We don't have the general infrastructure for writing async\n> code, but if we did, this would be much simpler. In an async programming\n> model, like you have in many other languages like Rust, python or\n> javascript, there would be no separate prefetching function. Instead,\n> aminsert() would return a future that can pause execution if it needs to\n> do I/O. 
Something like this:\n> \n> aminsert_futures = NIL;\n> /* create a future for each index insert */\n> for (<all indexes>)\n> {\n>     aminsert_futures = lappend(aminsert_futures, aminsert(...));\n> }\n> /* wait for all the futures to finish */\n> await aminsert_futures;\n> \n> The async-aware aminsert function would run to completion quickly if all\n> the pages are already in cache. If you get a cache miss, it would start\n> an async I/O read for the page, and yield to the other insertions until\n> the I/O completes.\n> \n> We already support async execution of FDWs now, with the\n> ForeignAsyncRequest() and ForeignAsyncConfigureWait() callbacks. Can we\n> generalize that?\n> \n\nInteresting idea. I kinda like it in principle, however I'm not very\nfamiliar with how our async execution works (and perhaps even with async\nimplementations in general), so I can't quite say how difficult would it\nbe to do something like that in an AM (instead of an executor).\n\nWhere exactly would be the boundary between who \"creates\" and \"executes\"\nthe requests, what would be the flow of execution? For the FDW it seems\nfairly straightforward, because the boundary is local/remote, and the\nasync request is executed in a separate process.\n\nBut here everything would happen locally, so how would that work?\n\nImagine we're inserting a tuple into two indexes. There's a bunch of\nindex pages we may need to read for each index - first some internal\npages, then some leafs. Presumably we'd want to do each page read as\nasynchronous, and allow transfer of control to the other index.\n\nIIUC the async-aware aminsert() would execute an async request for the\nfirst page it needs to read, with a callback that essentially does the\nnext step of index descent - reads the page, determines the next page to\nread, and then do another async request. Then it'd sleep, which would\nallow transfer of control to the aminsert() on the other index. And then\nwe'd do a similar thing for the leaf pages.\n\nOr do I imagine things wrong?\n\nThe thing I like about this async approach is that it would allow\nprefetching all index pages, while my PoC patch simply assumes all\ninternal pages are in cache and prefetches only the first leaf page.\nThat's much simpler in terms of control flow, but has clear limits.\n\nI however wonder if there are concurrency issues. Imagine there's a\nCOPY, i.e. we're inserting a batch of tuples. Can you run the aminsert()\nfor all the tuples concurrently? Won't that have issues with the\ndifferent \"async threads\" modifying the index for the other threads?\n\nIf those concurrent \"insert threads\" would be an issue, maybe we could\nmake amprefetch() async-aware ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 23 Nov 2023 15:46:31 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PoC: prefetching index leaf pages (for inserts)" }, { "msg_contents": "On Mon, 6 Nov 2023 at 22:36, Tomas Vondra <[email protected]> wrote:\n>\n> Seems cfbot was not entirely happy about the patch, for two reasons:\n>\n> 1) enable_insert_prefetching definition was inconsistent (different\n> boot/default values, missing in .conf and so on)\n>\n> 2) stupid bug in execReplication, inserting index entries twice\n>\n> The attached v3 should fix all of that, I believe.\n>\n>\n> As for the path forward, I think the prefetching is demonstrably\n> beneficial. 
There are cases where it can't help or even harms\n> performance. I think the success depends on three areas:\n>\n> (a) reducing the costs of the prefetching - For example right now we\n> build the index tuples twice (once for prefetch, once for the insert),\n> but maybe there's a way to do that only once? There are also predicate\n> indexes, and so on.\n>\n> (b) being smarter about when to prefetch - For example if we only have\n> one \"prefetchable\" index, it's somewhat pointless to prefetch (for\n> single-row cases). And so on.\n>\n> (c) not prefetching when already cached - This is somewhat related to\n> the previous case, but perhaps it'd be cheaper to first check if the\n> data is already cached. For shared buffers it should not be difficult,\n> for page cache we could use preadv2 with RWF_NOWAIT flag. The question\n> is if this is cheap enough to be cheaper than just doing posix_fadvise\n> (which however only deals with shared buffers).\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== Applying patches on top of PostgreSQL commit ID\n7014c9a4bba2d1b67d60687afb5b2091c1d07f73 ===\n=== applying patch ./0001-insert-prefetch-v3.patch\npatching file src/backend/access/brin/brin.c\nHunk #1 FAILED at 117.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/backend/access/brin/brin.c.rej\npatching file src/backend/access/gin/ginutil.c\nHunk #1 FAILED at 64.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/backend/access/gin/ginutil.c.rej\npatching file src/backend/access/gist/gist.c\nHunk #1 FAILED at 86.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/backend/access/gist/gist.c.rej\npatching file src/backend/access/hash/hash.c\nHunk #1 FAILED at 83.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/backend/access/hash/hash.c.rej\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_4622.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 26 Jan 2024 20:15:52 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PoC: prefetching index leaf pages (for inserts)" } ]
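For reference, a minimal standalone sketch of the page-cache probe mentioned in point (c) above: preadv2() with RWF_NOWAIT fails with EAGAIN instead of blocking when the data would have to come from disk, so a prefetch is only worth issuing when the probe fails. This is an illustration rather than part of the posted patch; the function name block_in_page_cache() is invented here, and it assumes Linux 4.14 or newer with _GNU_SOURCE defined.

    #define _GNU_SOURCE
    #include <stdbool.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    /*
     * Probe whether [offset, offset + len) of the file is already resident
     * in the kernel page cache.  With RWF_NOWAIT, preadv2() completes only
     * if it can do so without waiting for I/O.
     */
    static bool
    block_in_page_cache(int fd, off_t offset, void *buf, size_t len)
    {
        struct iovec iov;

        iov.iov_base = buf;
        iov.iov_len = len;

        if (preadv2(fd, &iov, 1, offset, RWF_NOWAIT) >= 0)
            return true;        /* read completed without blocking */

        /*
         * EAGAIN is the "not in page cache" answer; this sketch treats any
         * other errno as "not cached" as well and lets the caller prefetch.
         */
        return false;
    }

Whether such a probe is cheap enough to beat just issuing posix_fadvise() unconditionally is exactly the open question raised above.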
[ { "msg_contents": "Hi all,\n\n5e4dacb9878c has reminded me that we don't show the version of OpenSSL\nin the output of ./configure. This would be useful to know when\nlooking at issues within the buildfarm, and I've wanted that a few\ntimes.\n\nHow about the attached to use the openssl command, if available, to\ndisplay this information? Libraries may be installed while the\ncommand is not available, but in most cases I'd like to think that it\nis around, and it is less complex than using something like\nSSLeay_version() from libcrypto.\n\nmeson already shows this information, so no additions are required\nthere. Also, LibreSSL uses `openssl`, right?\n\nThoughts or comments?\n--\nMichael", "msg_date": "Mon, 23 Oct 2023 09:26:48 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Show version of OpenSSL in ./configure output" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> 5e4dacb9878c has reminded me that we don't show the version of OpenSSL\n> in the output of ./configure. This would be useful to know when\n> looking at issues within the buildfarm, and I've wanted that a few\n> times.\n\n+1, I've wished for that too. It's not 100% clear that whatever\nopenssl is in your PATH matches the libraries we select, but this\nwill get it right in most cases and it seems like about the right\nlevel of effort.\n\n+ pgac_openssl_version=\"$($OPENSSL version 2> /dev/null || echo no)\"\n\nMaybe \"echo 'openssl not found'\" would be better.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 22 Oct 2023 20:34:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Show version of OpenSSL in ./configure output" }, { "msg_contents": "On Sun, Oct 22, 2023 at 08:34:40PM -0400, Tom Lane wrote:\n> +1, I've wished for that too. It's not 100% clear that whatever\n> openssl is in your PATH matches the libraries we select, but this\n> will get it right in most cases and it seems like about the right\n> level of effort.\n\nYes, I don't reallt want to add more macros for the sake of this\ninformation.\n\n> + pgac_openssl_version=\"$($OPENSSL version 2> /dev/null || echo no)\"\n> \n> Maybe \"echo 'openssl not found'\" would be better.\n\nSure.\n--\nMichael", "msg_date": "Mon, 23 Oct 2023 10:00:43 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Show version of OpenSSL in ./configure output" }, { "msg_contents": "On 23.10.23 02:26, Michael Paquier wrote:\n> 5e4dacb9878c has reminded me that we don't show the version of OpenSSL\n> in the output of ./configure. This would be useful to know when\n> looking at issues within the buildfarm, and I've wanted that a few\n> times.\n> \n> How about the attached to use the openssl command, if available, to\n> display this information? Libraries may be installed while the\n> command is not available, but in most cases I'd like to think that it\n> is around, and it is less complex than using something like\n> SSLeay_version() from libcrypto.\n> \n> meson already shows this information, so no additions are required\n> there. Also, LibreSSL uses `openssl`, right?\n\nThe problem is that the binary might not match the library, so this \ncould be very misleading. Also, meson gets the version via pkg-config, \nso the result would also be inconsistent with meson. 
I am afraid this \napproach would be unreliable in the really interesting cases.\n\n > + # Print version of OpenSSL, if command is available.\n > + AC_ARG_VAR(OPENSSL, [path to openssl command])\n > + PGAC_PATH_PROGS(OPENSSL, openssl)\n\nThere is already a call like this in configure.ac, so (if this approach \nis taken) you should rearrange things to make use of that one.\n\n > + pgac_openssl_version=\"$($OPENSSL version 2> /dev/null || echo no)\"\n > + AC_MSG_NOTICE([using openssl $pgac_openssl_version])\n\n\n\n", "msg_date": "Mon, 23 Oct 2023 08:22:25 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Show version of OpenSSL in ./configure output" }, { "msg_contents": "> On 23 Oct 2023, at 08:22, Peter Eisentraut <[email protected]> wrote:\n> \n> On 23.10.23 02:26, Michael Paquier wrote:\n>> 5e4dacb9878c has reminded me that we don't show the version of OpenSSL\n>> in the output of ./configure. This would be useful to know when\n>> looking at issues within the buildfarm, and I've wanted that a few\n>> times.\n\nMany +1's, this has been on my TODO for some time but has never bubbled to the\ntop. Thanks for working on this.\n\n>> How about the attached to use the openssl command, if available, to\n>> display this information? Libraries may be installed while the\n>> command is not available, but in most cases I'd like to think that it\n>> is around, and it is less complex than using something like\n>> SSLeay_version() from libcrypto.\n>> meson already shows this information, so no additions are required\n>> there. Also, LibreSSL uses `openssl`, right?\n> \n> The problem is that the binary might not match the library, so this could be very misleading. Also, meson gets the version via pkg-config, so the result would also be inconsistent with meson. I am afraid this approach would be unreliable in the really interesting cases.\n\nI tend to agree with this, it would be preferrable to be consistent with meson\nif possible/feasible.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 23 Oct 2023 09:18:58 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Show version of OpenSSL in ./configure output" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 23 Oct 2023, at 08:22, Peter Eisentraut <[email protected]> wrote:\n>> The problem is that the binary might not match the library, so this could be very misleading. Also, meson gets the version via pkg-config, so the result would also be inconsistent with meson. I am afraid this approach would be unreliable in the really interesting cases.\n\n> I tend to agree with this, it would be preferrable to be consistent with meson\n> if possible/feasible.\n\nThe configure script doesn't use pkg-config to find OpenSSL, so that\nwould also risk a misleading result. I'm inclined to guess that\nbelieving \"openssl version\" would be less likely to give a wrong\nanswer than believing \"pkg-config --modversion openssl\". The former\namounts to assuming that your PATH is consistent with whatever you\nset as the include and library search paths. The latter, well,\nhasn't got any principle at all, because we aren't consulting\npkg-config for this.\n\nAlso, since \"PGAC_PATH_PROGS(OPENSSL, openssl)\" prints the full path\nto what it found, you can at least tell after the fact that you\nare being misled, because you can cross-check that path against\nthe -L switches being used for libraries. 
I don't think there's\nany equivalent sanity check available for what pkg-config tells us,\nif we're not consulting that for the library location.\n\nmeson.build may be fine here --- I suppose the reason that gets\nthe version via pkg-config is that it also gets other build details\nfrom there. It's slightly worrisome that the autoconf and meson\nbuild systems might choose different openssl installations, but\nthat's possibly true of lots of our dependencies.\n\nAnother angle worth considering is that \"openssl version\" provides\nmore information. On my RHEL8 box:\n\n$ pkg-config --modversion openssl\n1.1.1k\n$ openssl version\nOpenSSL 1.1.1k FIPS 25 Mar 2021\n\nOn my laptop using MacPorts:\n\n$ pkg-config --modversion openssl\n3.1.3\n$ openssl version\nOpenSSL 3.1.3 19 Sep 2023 (Library: OpenSSL 3.1.3 19 Sep 2023)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Oct 2023 10:26:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Show version of OpenSSL in ./configure output" }, { "msg_contents": "On 23.10.23 16:26, Tom Lane wrote:\n> Also, since \"PGAC_PATH_PROGS(OPENSSL, openssl)\" prints the full path to \n> what it found, you can at least tell after the fact that you are being \n> misled, because you can cross-check that path against the -L switches \n> being used for libraries.\n\nYeah, that seems ok.\n\n\n\n", "msg_date": "Mon, 23 Oct 2023 18:06:02 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Show version of OpenSSL in ./configure output" }, { "msg_contents": "> On 23 Oct 2023, at 18:06, Peter Eisentraut <[email protected]> wrote:\n> \n> On 23.10.23 16:26, Tom Lane wrote:\n>> Also, since \"PGAC_PATH_PROGS(OPENSSL, openssl)\" prints the full path to what it found, you can at least tell after the fact that you are being misled, because you can cross-check that path against the -L switches being used for libraries.\n> \n> Yeah, that seems ok.\n\n+1, all good points raised, thanks!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 23 Oct 2023 19:58:43 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Show version of OpenSSL in ./configure output" }, { "msg_contents": "On Mon, Oct 23, 2023 at 06:06:02PM +0200, Peter Eisentraut wrote:\n> On 23.10.23 16:26, Tom Lane wrote:\n>> Also, since \"PGAC_PATH_PROGS(OPENSSL, openssl)\" prints the full path to\n>> what it found, you can at least tell after the fact that you are being\n>> misled, because you can cross-check that path against the -L switches\n>> being used for libraries.\n>\n> Yeah, that seems ok.\n\nFWIW, I was also contemplating this one yesterday:\n+PKG_CHECK_MODULES(OPENSSL, openssl)\n\nStill, when I link my builds to a custom OpenSSL one, I force PATH to\npoint to a command of openssl related to the libs used so\nPGAC_PATH_PROGS is more useful. I guess that everybody here does the\nsame. It could be of course possible to show both the command from\nPATH and from pkg-config, but that's just confusing IMO.\n\nThere may be a point in doing the same for other commands like LZ4 and\nZstandard but these have been less of a pain in the buildfarm, even if\nwe don't use them for that long, so I cannot get excited about\nspending more ./configure cycles for these.\n\nPlease find attached a patch to move the version call close to the\nexisting PGAC_PATH_PROGS. 
And of course, I'd like to do a backpatch.\nIs that OK?\n--\nMichael", "msg_date": "Tue, 24 Oct 2023 08:33:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Show version of OpenSSL in ./configure output" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> Please find attached a patch to move the version call close to the\n> existing PGAC_PATH_PROGS. And of course, I'd like to do a backpatch.\n> Is that OK?\n\nOK by me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Oct 2023 20:44:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Show version of OpenSSL in ./configure output" }, { "msg_contents": "On Mon, Oct 23, 2023 at 08:44:01PM -0400, Tom Lane wrote:\n> OK by me.\n\nCool, done down to 16 as this depends on c8e4030d1bdd.\n--\nMichael", "msg_date": "Wed, 25 Oct 2023 09:28:51 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Show version of OpenSSL in ./configure output" } ]
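For comparison, here is roughly what the libcrypto route mentioned at the start of the thread could have looked like, as a hypothetical standalone program rather than anything that was committed. OpenSSL_version() is the OpenSSL 1.1.0-and-later spelling of the older SSLeay_version(), and OPENSSL_VERSION_TEXT comes from the headers, so disagreement between the two lines exposes exactly the binary-versus-library mismatch the thread worries about.

    #include <stdio.h>
    #include <openssl/crypto.h>
    #include <openssl/opensslv.h>

    int
    main(void)
    {
        /* version of the OpenSSL headers this was compiled against */
        printf("compiled with: %s\n", OPENSSL_VERSION_TEXT);
        /* version of the libcrypto actually linked at run time */
        printf("linked with:   %s\n", OpenSSL_version(OPENSSL_VERSION));
        return 0;
    }

The committed approach instead trusts the openssl binary found by PGAC_PATH_PROGS, whose full path can at least be cross-checked against the -L switches, as noted above.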
[ { "msg_contents": "Since C99, there can be a trailing comma after the last value in an enum \ndefinition. A lot of new code has been introducing this style on the \nfly. I have noticed that some new patches are now taking an \ninconsistent approach to this. Some add the last comma on the fly if \nthey add a new last value, some are trying to preserve the existing \nstyle in each place, some are even dropping the last comma if there was \none. I figured we could nudge this all in a consistent direction if we \njust add the trailing commas everywhere once. See attached patch; it \nwasn't actually that much.\n\nI omitted a few places where there was a fixed \"last\" value that will \nalways stay last. I also skipped the header files of libpq and ecpg, in \ncase people want to use those with older compilers. There were also a \nsmall number of cases where the enum type wasn't used anywhere (but the \nenum values were), which ended up confusing pgindent a bit.", "msg_date": "Mon, 23 Oct 2023 08:30:28 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Add trailing commas to enum definitions" }, { "msg_contents": "On Mon, Oct 23, 2023 at 2:37 PM Peter Eisentraut <[email protected]> wrote:\n>\n> Since C99, there can be a trailing comma after the last value in an enum\n\nC99 allows us to do this doesn't mean we must do this, this is not\ninconsistent IMHO, and this will pollute the git log messages, people\nmay *git blame* the file and see the reason for the introduction of the\nline.\n\nThere are a lot of 'typedef struct' as well as 'struct', which is not\ninconsistent either just like the *enum* case.\n\n> definition. A lot of new code has been introducing this style on the\n> fly. I have noticed that some new patches are now taking an\n> inconsistent approach to this. Some add the last comma on the fly if\n> they add a new last value, some are trying to preserve the existing\n> style in each place, some are even dropping the last comma if there was\n> one. I figured we could nudge this all in a consistent direction if we\n> just add the trailing commas everywhere once. See attached patch; it\n> wasn't actually that much.\n>\n> I omitted a few places where there was a fixed \"last\" value that will\n> always stay last. I also skipped the header files of libpq and ecpg, in\n> case people want to use those with older compilers. There were also a\n> small number of cases where the enum type wasn't used anywhere (but the\n> enum values were), which ended up confusing pgindent a bit.\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Mon, 23 Oct 2023 17:55:32 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add trailing commas to enum definitions" }, { "msg_contents": "On Mon, Oct 23, 2023 at 05:55:32PM +0800, Junwang Zhao wrote:\n> On Mon, Oct 23, 2023 at 2:37 PM Peter Eisentraut <[email protected]> wrote:\n>> Since C99, there can be a trailing comma after the last value in an enum\n> \n> C99 allows us to do this doesn't mean we must do this, this is not\n> inconsistent IMHO, and this will pollute the git log messages, people\n> may *git blame* the file and see the reason for the introduction of the\n> line.\n\nI suspect that your concerns about git-blame could be resolved by adding\nthis commit to .git-blame-ignore-revs. 
From a long-term perspective, I\nthink standardizing on the trailing comma style will actually improve\ngit-blame because patches won't need to add a comma to the previous line\nwhen adding a value.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 23 Oct 2023 15:34:32 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add trailing commas to enum definitions" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> From a long-term perspective, I\n> think standardizing on the trailing comma style will actually improve\n> git-blame because patches won't need to add a comma to the previous line\n> when adding a value.\n\nYeah, that's a good point. I had been leaning towards \"this is\nunnecessary churn\", but with that idea I'm now +1.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Oct 2023 17:04:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add trailing commas to enum definitions" }, { "msg_contents": "On 10/23/23 17:04, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> From a long-term perspective, I\n>> think standardizing on the trailing comma style will actually improve\n>> git-blame because patches won't need to add a comma to the previous line\n>> when adding a value.\n> \n> Yeah, that's a good point. I had been leaning towards \"this is\n> unnecessary churn\", but with that idea I'm now +1.\n\n+1 from me.\n\n-David\n\n\n\n", "msg_date": "Mon, 23 Oct 2023 19:58:23 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add trailing commas to enum definitions" }, { "msg_contents": "On Tue, Oct 24, 2023 at 4:34 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Mon, Oct 23, 2023 at 05:55:32PM +0800, Junwang Zhao wrote:\n> > On Mon, Oct 23, 2023 at 2:37 PM Peter Eisentraut <[email protected]> wrote:\n> >> Since C99, there can be a trailing comma after the last value in an enum\n> >\n> > C99 allows us to do this doesn't mean we must do this, this is not\n> > inconsistent IMHO, and this will pollute the git log messages, people\n> > may *git blame* the file and see the reason for the introduction of the\n> > line.\n>\n> I suspect that your concerns about git-blame could be resolved by adding\n> this commit to .git-blame-ignore-revs. From a long-term perspective, I\n> think standardizing on the trailing comma style will actually improve\n> git-blame because patches won't need to add a comma to the previous line\n> when adding a value.\n\nmake sense, +1 from me now.\n\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Tue, 24 Oct 2023 14:07:29 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add trailing commas to enum definitions" }, { "msg_contents": "\nOn 2023-10-23 Mo 17:04, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> From a long-term perspective, I\n>> think standardizing on the trailing comma style will actually improve\n>> git-blame because patches won't need to add a comma to the previous line\n>> when adding a value.\n> Yeah, that's a good point. I had been leaning towards \"this is\n> unnecessary churn\", but with that idea I'm now +1.\n>\n> \t\t\t\n\n\n+1. 
It's a fairly common practice in Perl code, too, and I often do it \nfor exactly this reason.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 24 Oct 2023 08:58:38 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add trailing commas to enum definitions" }, { "msg_contents": "On 23.10.23 22:34, Nathan Bossart wrote:\n> On Mon, Oct 23, 2023 at 05:55:32PM +0800, Junwang Zhao wrote:\n>> On Mon, Oct 23, 2023 at 2:37 PM Peter Eisentraut <[email protected]> wrote:\n>>> Since C99, there can be a trailing comma after the last value in an enum\n>>\n>> C99 allows us to do this doesn't mean we must do this, this is not\n>> inconsistent IMHO, and this will pollute the git log messages, people\n>> may *git blame* the file and see the reason for the introduction of the\n>> line.\n> \n> I suspect that your concerns about git-blame could be resolved by adding\n> this commit to .git-blame-ignore-revs. From a long-term perspective, I\n> think standardizing on the trailing comma style will actually improve\n> git-blame because patches won't need to add a comma to the previous line\n> when adding a value.\n\nCommitted that way.\n\n\n\n", "msg_date": "Thu, 26 Oct 2023 13:20:34 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add trailing commas to enum definitions" } ]
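To make the git-blame argument concrete with a made-up example (the enum names are hypothetical; C99 or later is assumed): without a trailing comma, appending a value forces an edit to the previously last line as well, and that extra comma is what shows up in git blame later.

    /* Without a trailing comma, appending a value touches two lines: */
    typedef enum OldStyle
    {
        OLD_STYLE_FOO,
        OLD_STYLE_BAR       /* a comma must be added here first */
    } OldStyle;

    /* With the trailing comma, an addition is a one-line diff: */
    typedef enum NewStyle
    {
        NEW_STYLE_FOO,
        NEW_STYLE_BAR,      /* the next value slots in below this line */
    } NewStyle;

After the one-time churn (which can be listed in .git-blame-ignore-revs, as suggested above), later additions stop touching their neighbors.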
[ { "msg_contents": "Hi,\n\nI investigated a crashing postgres instance. Turns out the original issue was\noperator error. But in the process I found a few postgres issues. The scenario\nis basically that redo LSN and checkpoint LSN are in seperate segments, and\nthat for whatever reason, the file containing the redo LSN is missing.\n\nI'd expect a PANIC in that situation. But, turns out that no such luck.\n\nI've looked at 15 and HEAD so far.\n\n1) For some reason I haven't yet debugged, the ReadRecord(PANIC) in\n PerformWalRecovery() doesn't PANIC, but instead just returns NULL\n\n We *do* output a DEBUG message, but well, that's insufficient.\n\n\n2) On HEAD, we then segfault, because the cross check for XLOG_CHECKPOINT_REDO\n causes null pointer dereference. Which I guess is good, we shouldn't have\n gotten there without a record.\n\n\n3) On 15, with or without assertions, we decide that \"redo is not\n required\". Gulp.\n\n\n4) On 15, with assertions enabled, we fail with an assertion in the startup\n process, in FinishWalRecovery()->XLogPrefetcherBeginRead()->XLogBeginRead()\n Assert(!XLogRecPtrIsInvalid(RecPtr))\n\n\n5) On 15, with optimizations enabled, we don't just crash, it gets scarier.\n\n First, the startup process actually creates a bogus WAL segment:\n\n#1 0x000055f53b2725ff in XLogFileInitInternal (path=0x7ffccb7b3360 \"pg_wal/000000010000000000000000\", added=0x7ffccb7b335f, logtli=1, logsegno=1099511627776)\n at /home/andres/src/postgresql-15/src/backend/access/transam/xlog.c:2936\n#2 PreallocXlogFiles (endptr=endptr@entry=0, tli=tli@entry=1) at /home/andres/src/postgresql-15/src/backend/access/transam/xlog.c:3433\n#3 0x000055f53b277e00 in PreallocXlogFiles (tli=1, endptr=0) at /home/andres/src/postgresql-15/src/backend/access/transam/xlog.c:3425\n#4 StartupXLOG () at /home/andres/src/postgresql-15/src/backend/access/transam/xlog.c:5517\n#5 0x000055f53b4a385e in StartupProcessMain () at /home/andres/src/postgresql-15/src/backend/postmaster/startup.c:282\n#6 0x000055f53b49860b in AuxiliaryProcessMain (auxtype=auxtype@entry=StartupProcess)\n at /home/andres/src/postgresql-15/src/backend/postmaster/auxprocess.c:141\n#7 0x000055f53b4a2eed in StartChildProcess (type=StartupProcess) at /home/andres/src/postgresql-15/src/backend/postmaster/postmaster.c:5432\n#8 PostmasterMain (argc=argc@entry=39, argv=argv@entry=0x55f53d5095d0) at /home/andres/src/postgresql-15/src/backend/postmaster/postmaster.c:1473\n#9 0x000055f53b1d1bff in main (argc=39, argv=0x55f53d5095d0) at /home/andres/src/postgresql-15/src/backend/main/main.c:202\n\n Note the endptr=0 and pg_wal/000000010000000000000000 path.\n\n With normal log level, one wouldn't learn anything about this.\n\n\n Then the *checkpointer* segfaults, trying to write the end-of-recovery\n checkpoint:\n\n#0 __memcpy_evex_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:344\n#1 0x000056102ebe84a2 in memcpy (__len=26, __src=0x56102fbe7b48, __dest=<optimized out>) at /usr/include/x86_64-linux-gnu/bits/string_fortified.h:29\n#2 CopyXLogRecordToWAL (tli=1, EndPos=160, StartPos=40, rdata=0x56102f31ffb0 <hdr_rdt>, isLogSwitch=false, write_len=114)\n at /home/andres/src/postgresql-15/src/backend/access/transam/xlog.c:1229\n#3 XLogInsertRecord (rdata=<optimized out>, fpw_lsn=<optimized out>, flags=<optimized out>, num_fpi=0, topxid_included=<optimized out>)\n at /home/andres/src/postgresql-15/src/backend/access/transam/xlog.c:861\n#4 0x000056102ebf12cb in XLogInsert (rmid=rmid@entry=0 '\\000', info=info@entry=0 
'\\000')\n at /home/andres/src/postgresql-15/src/backend/access/transam/xloginsert.c:492\n#5 0x000056102ebea92e in CreateCheckPoint (flags=102) at /home/andres/src/postgresql-15/src/backend/access/transam/xlog.c:6583\n#6 0x000056102ee0e552 in CheckpointerMain () at /home/andres/src/postgresql-15/src/backend/postmaster/checkpointer.c:455\n#7 0x000056102ee0c5f9 in AuxiliaryProcessMain (auxtype=auxtype@entry=CheckpointerProcess)\n at /home/andres/src/postgresql-15/src/backend/postmaster/auxprocess.c:153\n#8 0x000056102ee12c38 in StartChildProcess (type=CheckpointerProcess) at /home/andres/src/postgresql-15/src/backend/postmaster/postmaster.c:5432\n#9 0x000056102ee16d54 in PostmasterMain (argc=argc@entry=35, argv=argv@entry=0x56102fb6e5d0)\n at /home/andres/src/postgresql-15/src/backend/postmaster/postmaster.c:1466\n#10 0x000056102eb45bff in main (argc=35, argv=0x56102fb6e5d0) at /home/andres/src/postgresql-15/src/backend/main/main.c:202\n\n The immediate cause of the crash is that GetXLogBuffer() is called with\n ptr=40, which makes GetXLogBuffer() think it can use the cached path,\n because cachedPage is still zero.\n\n Which in turn is because the startup process happily initialized\n XLogCtl->Insert.{CurrBytePos,PrevBytePos} with 0s. Even though it\n initialized RedoRecPtr with the valid redo pointer.\n\n The checkpointer actually ends up resetting the valid RedoRecPtr with\n bogus content as part of CreateCheckPoint(), due to the bogus CurrBytePos.\n\n\n6) On 15, with optimizations enabled, we don't just crash, we also\n can't even get to the prior crashes anymore, because *now* we PANIC:\n\n2023-10-23 16:15:07.208 PDT [2554457][startup][:0][] DEBUG: could not open file \"pg_wal/000000010000000000000095\": No such file or directory\n2023-10-23 16:15:07.208 PDT [2554457][startup][:0][] LOG: redo is not required\n2023-10-23 16:15:07.208 PDT [2554457][startup][:0][] PANIC: invalid magic number 0000 in log segment 000000010000000000000000, offset 0\n2023-10-23 16:15:07.211 PDT [2554453][postmaster][:0][] LOG: startup process (PID 2554457) was terminated by signal 6: Aborted\n\n(rr) bt\n#0 __pthread_kill_implementation (threadid=<optimized out>, signo=signo@entry=6, no_tid=no_tid@entry=0) at ./nptl/pthread_kill.c:44\n#1 0x00007f7f6abc915f in __pthread_kill_internal (signo=6, threadid=<optimized out>) at ./nptl/pthread_kill.c:78\n#2 0x00007f7f6ab7b472 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26\n#3 0x00007f7f6ab654b2 in __GI_abort () at ./stdlib/abort.c:79\n#4 0x00005566b5983339 in errfinish (filename=<optimized out>, lineno=<optimized out>, funcname=<optimized out>)\n at /home/andres/src/postgresql-15/src/backend/utils/error/elog.c:675\n#5 0x00005566b555630c in ReadRecord (xlogprefetcher=0x5566b60fd7c8, emode=emode@entry=23, fetching_ckpt=fetching_ckpt@entry=false, \n replayTLI=replayTLI@entry=1) at /home/andres/src/postgresql-15/src/backend/access/transam/xlogrecovery.c:3082\n#6 0x00005566b5557ea0 in FinishWalRecovery () at /home/andres/src/postgresql-15/src/backend/access/transam/xlogrecovery.c:1454\n#7 0x00005566b554ac1c in StartupXLOG () at /home/andres/src/postgresql-15/src/backend/access/transam/xlog.c:5309\n#8 0x00005566b577685e in StartupProcessMain () at /home/andres/src/postgresql-15/src/backend/postmaster/startup.c:282\n#9 0x00005566b576b60b in AuxiliaryProcessMain (auxtype=auxtype@entry=StartupProcess)\n at /home/andres/src/postgresql-15/src/backend/postmaster/auxprocess.c:141\n#10 0x00005566b5775eed in StartChildProcess (type=StartupProcess) at 
/home/andres/src/postgresql-15/src/backend/postmaster/postmaster.c:5432\n#11 PostmasterMain (argc=argc@entry=39, argv=argv@entry=0x5566b60fb5d0) at /home/andres/src/postgresql-15/src/backend/postmaster/postmaster.c:1473\n#12 0x00005566b54a4bff in main (argc=39, argv=0x5566b60fb5d0) at /home/andres/src/postgresql-15/src/backend/main/main.c:202\n\n Of course 000000010000000000000000 has bogus data, it's an invalid\n file. Here we *do* actually PANIC, presumably because the file actually\n exists.\n\n\nOf course most of this is downstream from the issue of not PANICing in 1). But\nit feels like it showcases a serious lack of error checking in StartupXLOG(),\nCreateCheckPoint() etc.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Oct 2023 16:21:45 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Various bugs if segment containing redo pointer does not exist" }, { "msg_contents": "Hi,\n\nOn 2023-10-23 16:21:45 -0700, Andres Freund wrote:\n> 1) For some reason I haven't yet debugged, the ReadRecord(PANIC) in\n> PerformWalRecovery() doesn't PANIC, but instead just returns NULL\n>\n> We *do* output a DEBUG message, but well, that's insufficient.\n\nThe debug is from this backtrace:\n\n#0 XLogFileReadAnyTLI (segno=6476, emode=13, source=XLOG_FROM_PG_WAL) at /home/andres/src/postgresql/src/backend/access/transam/xlogrecovery.c:4291\n#1 0x000055d7b3949db0 in WaitForWALToBecomeAvailable (RecPtr=108649259008, randAccess=true, fetching_ckpt=false, tliRecPtr=108665421680, replayTLI=1,\n replayLSN=108665421680, nonblocking=false) at /home/andres/src/postgresql/src/backend/access/transam/xlogrecovery.c:3697\n#2 0x000055d7b39494ff in XLogPageRead (xlogreader=0x55d7b472c470, targetPagePtr=108649250816, reqLen=8192, targetRecPtr=108665421680,\n readBuf=0x55d7b47ba5d8 \"\\024\\321\\005\") at /home/andres/src/postgresql/src/backend/access/transam/xlogrecovery.c:3278\n#3 0x000055d7b3941bb1 in ReadPageInternal (state=0x55d7b472c470, pageptr=108665413632, reqLen=8072)\n at /home/andres/src/postgresql/src/backend/access/transam/xlogreader.c:1014\n#4 0x000055d7b3940f43 in XLogDecodeNextRecord (state=0x55d7b472c470, nonblocking=false)\n at /home/andres/src/postgresql/src/backend/access/transam/xlogreader.c:571\n#5 0x000055d7b3941a41 in XLogReadAhead (state=0x55d7b472c470, nonblocking=false) at /home/andres/src/postgresql/src/backend/access/transam/xlogreader.c:947\n#6 0x000055d7b393f5fa in XLogPrefetcherNextBlock (pgsr_private=94384934340072, lsn=0x55d7b47cfeb8)\n at /home/andres/src/postgresql/src/backend/access/transam/xlogprefetcher.c:496\n#7 0x000055d7b393efcd in lrq_prefetch (lrq=0x55d7b47cfe88) at /home/andres/src/postgresql/src/backend/access/transam/xlogprefetcher.c:256\n#8 0x000055d7b393f190 in lrq_complete_lsn (lrq=0x55d7b47cfe88, lsn=0) at /home/andres/src/postgresql/src/backend/access/transam/xlogprefetcher.c:294\n#9 0x000055d7b39401ba in XLogPrefetcherReadRecord (prefetcher=0x55d7b47bc5e8, errmsg=0x7ffc23505920)\n at /home/andres/src/postgresql/src/backend/access/transam/xlogprefetcher.c:1041\n#10 0x000055d7b3948ff8 in ReadRecord (xlogprefetcher=0x55d7b47bc5e8, emode=23, fetching_ckpt=false, replayTLI=1)\n at /home/andres/src/postgresql/src/backend/access/transam/xlogrecovery.c:3078\n#11 0x000055d7b3946749 in PerformWalRecovery () at /home/andres/src/postgresql/src/backend/access/transam/xlogrecovery.c:1640\n\n\nThe source of the emode=13=DEBUG2 is that that's hardcoded in\nWaitForWALToBecomeAvailable(). 
I guess the error ought to come from\nXLogPageRead(), but all that happens is this:\n\n\t\t\tcase XLREAD_FAIL:\n\t\t\t\tif (readFile >= 0)\n\t\t\t\t\tclose(readFile);\n\t\t\t\treadFile = -1;\n\t\t\t\treadLen = 0;\n\t\t\t\treadSource = XLOG_FROM_ANY;\n\t\t\t\treturn XLREAD_FAIL;\n\nwhich *does* error out for some other failures:\n\t\t\terrno = save_errno;\n\t\t\tereport(emode_for_corrupt_record(emode, targetPagePtr + reqLen),\n\t\t\t\t\t(errcode_for_file_access(),\n\t\t\t\t\t errmsg(\"could not read from WAL segment %s, LSN %X/%X, offset %u: %m\",\n\t\t\t\t\t\t\tfname, LSN_FORMAT_ARGS(targetPagePtr),\n\t\t\t\t\t\t\treadOff)));\n\nbut not for a file that couldn't be opened. Which wouldn't have to be due to\nENOENT, could also be EACCES...\n\n\nxlogreader has undergone a fair number of changes in the last few releases. As of\nnow, I can't really understand who is responsible for reporting what kind of\nerror.\n\n\t/*\n\t * Data input callback\n\t *\n\t * This callback shall read at least reqLen valid bytes of the xlog page\n\t * starting at targetPagePtr, and store them in readBuf. The callback\n\t * shall return the number of bytes read (never more than XLOG_BLCKSZ), or\n\t * -1 on failure. The callback shall sleep, if necessary, to wait for the\n\t * requested bytes to become available. The callback will not be invoked\n\t * again for the same page unless more than the returned number of bytes\n\t * are needed.\n\t *\n\t * targetRecPtr is the position of the WAL record we're reading. Usually\n\t * it is equal to targetPagePtr + reqLen, but sometimes xlogreader needs\n\t * to read and verify the page or segment header, before it reads the\n\t * actual WAL record it's interested in. In that case, targetRecPtr can\n\t * be used to determine which timeline to read the page from.\n\t *\n\t * The callback shall set ->seg.ws_tli to the TLI of the file the page was\n\t * read from.\n\t */\n\tXLogPageReadCB page_read;\n\n\t/*\n\t * Callback to open the specified WAL segment for reading. ->seg.ws_file\n\t * shall be set to the file descriptor of the opened segment. In case of\n\t * failure, an error shall be raised by the callback and it shall not\n\t * return.\n\t *\n\t * \"nextSegNo\" is the number of the segment to be opened.\n\t *\n\t * \"tli_p\" is an input/output argument. WALRead() uses it to pass the\n\t * timeline in which the new segment should be found, but the callback can\n\t * use it to return the TLI that it actually opened.\n\t */\n\tWALSegmentOpenCB segment_open;\n\nMy reading of this is that page_read isn't ever supposed to error out - yet we\ndo it all over (e.g. WALDumpReadPage(), XLogPageRead()) and that segment_open\nis supposed to error out - but we don't even use segment_open for\nxlogrecovery.c. Who is supposed to error out then? And why is it ok for the\nread callbacks to error out directly, if they're supposed to return -1 in case\nof failure?\n\nSomewhat of an aside: It also seems \"The callback shall sleep, if necessary,\nto wait for the requested bytes to become available.\" is outdated, given that\nwe *explicitly* don't do so in some cases and support that via\nXLREAD_WOULDBLOCK?\n\n\nI dug through recent changes, expecting to find the problem. But uh, no. 
I\nreproduced this in 9.4, and I think the behaviour might have been introduced\nin 9.3 (didn't have a build of that around, hence didn't test that), as part\nof:\n\ncommit abf5c5c9a4f\nAuthor: Heikki Linnakangas <[email protected]>\nDate: 2013-02-22 11:43:04 +0200\n\n If recovery.conf is created after \"pg_ctl stop -m i\", do crash recovery.\n\nBefore that, XLogPageRead() called XLogFileReadAnyTLI() with the \"incoming\"\nemode, when in crash recovery -> PANIC in this case. After that commit, the\ncall to XLogFileReadAnyTLI() was moved to WaitForWALToBecomeAvailable(), but\nthe emode was changed to unconditionally be DEBUG2.\n\n\t\t\t\t/*\n\t\t\t\t * Try to restore the file from archive, or read an existing\n\t\t\t\t * file from pg_xlog.\n\t\t\t\t */\n\t\t\t\treadFile = XLogFileReadAnyTLI(readSegNo, DEBUG2,\n\t\t\t\t\t\tcurrentSource == XLOG_FROM_ARCHIVE ? XLOG_FROM_ANY :\n\t\t\t\t\t\t\t\t\t\t currentSource);\n\nIn 9.4 the server actually came up ok after encountering the problem\n(destroying data in the process!), not sure where we started to bogusly\ninitialize shared memory...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Oct 2023 17:43:52 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Various bugs if segment containing redo pointer does not exist" }, { "msg_contents": "On Mon, Oct 23, 2023 at 8:43 PM Andres Freund <[email protected]> wrote:\n> The source of the emode=13=DEBUG2 is that that's hardcoded in\n> WaitForWALToBecomeAvailable(). I guess the error ought to come from\n> XLogPageRead(), but all that happens is this:\n>\n> case XLREAD_FAIL:\n> if (readFile >= 0)\n> close(readFile);\n> readFile = -1;\n> readLen = 0;\n> readSource = XLOG_FROM_ANY;\n> return XLREAD_FAIL;\n\nI've been under the impression that the guiding principle here is that\nwe shouldn't error out upon hitting a condition that should only cause\nus to switch sources. I think WaitForWALToBecomeAvailable() is\nsupposed to set things up so that XLogPageRead()'s call to pg_pread()\nwill succeed. If it says it can't, then XLogPageRead() is only obliged\nto pass that information up to the caller, who can decide to wait\nlonger for the data to show up, or give up, or whatever it wants to\ndo. On the other hand, if WaitForWALToBecomeAvailable() says that it's\nfine to go ahead and call pg_pread() and pg_pread() then fails, then\nthat means that we've got a problem with the WAL file other than it\njust not being available yet, like it's the wrong length or there was\nan I/O error, and those are reportable errors. Said differently, in\nthe former case, the WAL is not broken, merely not currently\navailable; in the latter case, it's broken.\n\nThe legibility and maintainability of this code are certainly not\ngreat. The xlogreader mechanism is extremely useful, but maybe we\nshould have done more cleanup of the underlying mechanism first. It's\na giant ball of spaghetti code that is challenging to understand and\nalmost impossible to validate thoroughly (as you just discovered).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 24 Oct 2023 08:58:04 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Various bugs if segment containing redo pointer does not exist" } ]
[ { "msg_contents": "The commit message in the attached patch provides the reasoning, as follows:\n\nThe user does not benefit from knowing that libpq allocates some/all memory\nusing malloc(). Mentioning malloc() here has a few downsides, and almost no\nbenefits.\n\nAll the relevant mentions of malloc() are followed by an explicit\ninstruction to use PQfreemem() to free the resulting objects. So the\ndocs perform the sufficient job of educating the user on how to properly\nfree the memory. But these mentions of malloc() may still lead an\ninexperienced or careless user to (wrongly) believe that they may use\nfree() to free the resulting memory. They will be in a lot of pain until\nthey learn that (when linked statically) libpq's malloc()/free() cannot\nbe mixed with malloc()/free() of whichever malloc() library the client\napplication is being linked with.\n\nAnother downside of explicitly mentioning that objects returned by libpq\nfunctions are allocated with malloc(), is that it exposes the implementation\ndetails of libpq to the users. Such details are best left unmentioned so that\nthese can be freely changed in the future without having to worry about its\neffect on client applications.\n\nWhenever necessary, it is sufficient to tell the user that the objects/memory\nreturned by libpq functions is allocated on the heap. That is just enough\ndetail for the user to realize that the relevant object/memory needs to be\nfreed; and the instructions that follow mention to use PQfreemem() to free such\nmemory.\n\nOne mention of malloc is left intact, because that mention is unrelated to how\nthe memory is allocated, or how to free it.\n\nIn passing, slightly improve the language of PQencryptPasswordConn()\ndocumentation.\n\nBest regards,\nGurjeet\nhttp://Gurje.et", "msg_date": "Mon, 23 Oct 2023 22:13:57 -0700", "msg_from": "Gurjeet Singh <[email protected]>", "msg_from_op": true, "msg_subject": "Replace references to malloc() in libpq documentation with generic\n language" }, { "msg_contents": "> On 24 Oct 2023, at 07:13, Gurjeet Singh <[email protected]> wrote:\n\n> The user does not benefit from knowing that libpq allocates some/all memory\n> using malloc(). Mentioning malloc() here has a few downsides, and almost no\n> benefits.\n\nI'm not entirely convinced that replacing \"malloc\" with \"allocated on the heap\"\nimproves the documentation. I do agree with this proposed change though:\n\n- all the space that will be freed by <xref linkend=\"libpq-PQclear\"/>.\n+ all the memory that will be freed by <xref linkend=\"libpq-PQclear\"/>.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 24 Oct 2023 10:32:58 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replace references to malloc() in libpq documentation with\n generic language" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n>> On 24 Oct 2023, at 07:13, Gurjeet Singh <[email protected]> wrote:\n>> The user does not benefit from knowing that libpq allocates some/all memory\n>> using malloc(). Mentioning malloc() here has a few downsides, and almost no\n>> benefits.\n\n> I'm not entirely convinced that replacing \"malloc\" with \"allocated on the heap\"\n> improves the documentation.\n\nThat was my reaction too. The underlying storage allocator *is* malloc,\nand C programmers know what that is, and I don't see how obfuscating\nthat improves matters. 
It's true that on the miserable excuse for a\nplatform that is Windows, you have to use PQfreemem because of\nMicrosoft's inability to supply a standards-compliant implementation\nof malloc. But I'm not inclined to let that tail wag the dog.\n\n> I do agree with this proposed change though:\n\n> - all the space that will be freed by <xref linkend=\"libpq-PQclear\"/>.\n> + all the memory that will be freed by <xref linkend=\"libpq-PQclear\"/>.\n\n+1, seems harmless.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Oct 2023 11:07:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replace references to malloc() in libpq documentation with\n generic language" }, { "msg_contents": "> On 24 Oct 2023, at 17:07, Tom Lane <[email protected]> wrote:\n> Daniel Gustafsson <[email protected]> writes:\n\n>> I do agree with this proposed change though:\n> \n>> - all the space that will be freed by <xref linkend=\"libpq-PQclear\"/>.\n>> + all the memory that will be freed by <xref linkend=\"libpq-PQclear\"/>.\n> \n> +1, seems harmless.\n\nI've pushed this part, skipping the rest.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 24 Oct 2023 22:22:39 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replace references to malloc() in libpq documentation with\n generic language" } ]
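A minimal usage sketch of the contract being documented (the connection string is hypothetical; any working one will do): memory handed out by libpq, here via PQescapeLiteral(), must be released with PQfreemem() rather than free(), which matters especially on Windows where the client can be linked against a different C runtime than libpq.

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=postgres");
        char       *lit;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* PQescapeLiteral() returns memory allocated inside libpq */
        lit = PQescapeLiteral(conn, "O'Reilly", strlen("O'Reilly"));
        if (lit != NULL)
        {
            printf("escaped: %s\n", lit);
            PQfreemem(lit);     /* correct: matches libpq's allocator */
            /* a plain free(lit) could crash where the runtimes differ */
        }

        PQfinish(conn);
        return 0;
    }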
[ { "msg_contents": "Try this as a user with NOBYPASSRLS:\n\n\nCREATE TABLE rlsbug (deleted boolean);\n\nINSERT INTO rlsbug VALUES (FALSE);\n\nCREATE POLICY p_sel ON rlsbug FOR SELECT TO laurenz USING (NOT deleted);\n\nCREATE POLICY p_upd ON rlsbug FOR UPDATE TO laurenz USING (TRUE);\n\nALTER TABLE rlsbug ENABLE ROW LEVEL SECURITY; \nALTER TABLE rlsbug FORCE ROW LEVEL SECURITY;\n\nUPDATE rlsbug SET deleted = TRUE WHERE NOT deleted; \nERROR: new row violates row-level security policy for table \"rlsbug\"\n\n\nI'd say that this error is wrong. The FOR SELECT policy should be applied \nto the WHERE condition, but certainly not to check new rows.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 24 Oct 2023 10:35:32 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Bug: RLS policy FOR SELECT is used to check new rows" }, { "msg_contents": "On Tue, 24 Oct 2023 at 09:36, Laurenz Albe <[email protected]> wrote:\n>\n> I'd say that this error is wrong. The FOR SELECT policy should be applied\n> to the WHERE condition, but certainly not to check new rows.\n>\n\nYes, I had the same thought recently. I would say that the SELECT\npolicies should only be used to check new rows if the UPDATE has a\nRETURNING clause and SELECT permissions are required on the target\nrelation.\n\nIn other words, it should be OK to UPDATE a row to new values that are\nnot visible according to the table's SELECT policies, provided that\nthe UPDATE command does not attempt to return those new values. That\nwould be consistent with what we do for INSERT.\n\nNote, that the current behaviour goes back a long way, though it's not\nquite clear whether this was intentional [1].\n\n[1] https://github.com/postgres/postgres/commit/7d8db3e8f37aec9d252353904e77381a18a2fa9f\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 24 Oct 2023 11:59:05 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug: RLS policy FOR SELECT is used to check new rows" }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n> On Tue, 24 Oct 2023 at 09:36, Laurenz Albe <[email protected]> wrote:\n>> I'd say that this error is wrong. The FOR SELECT policy should be applied\n>> to the WHERE condition, but certainly not to check new rows.\n\n> Yes, I had the same thought recently. I would say that the SELECT\n> policies should only be used to check new rows if the UPDATE has a\n> RETURNING clause and SELECT permissions are required on the target\n> relation.\n\n> In other words, it should be OK to UPDATE a row to new values that are\n> not visible according to the table's SELECT policies, provided that\n> the UPDATE command does not attempt to return those new values. That\n> would be consistent with what we do for INSERT.\n\n> Note, that the current behaviour goes back a long way, though it's not\n> quite clear whether this was intentional [1].\n\nI'm fairly sure that it was intentional, but I don't recall the\nreasoning; perhaps Stephen does. In any case, I grasp your point\nthat maybe we should distinguish RETURNING from not-RETURNING cases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Oct 2023 11:59:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug: RLS policy FOR SELECT is used to check new rows" }, { "msg_contents": "On Tue, 2023-10-24 at 11:59 -0400, Tom Lane wrote:\n> I'm fairly sure that it was intentional, but I don't recall the\n> reasoning; perhaps Stephen does.  
In any case, I grasp your point\n> that maybe we should distinguish RETURNING from not-RETURNING cases.\n\nPerhaps the idea is that if there are constraints involved, the failure\nor success of an INSERT/UPDATE/DELETE could leak information that you\ndon't have privileges to read.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 24 Oct 2023 10:43:21 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug: RLS policy FOR SELECT is used to check new rows" }, { "msg_contents": "On Tue, Oct 24, 2023 at 1:46 PM Jeff Davis <[email protected]> wrote:\n> Perhaps the idea is that if there are constraints involved, the failure\n> or success of an INSERT/UPDATE/DELETE could leak information that you\n> don't have privileges to read.\n\nMy recollection of this topic is pretty hazy, but like Tom, I seem to\nremember it being intentional, and I think the reason had something to\ndo with wanting the slice of an RLS-protected table that you can see to\nfeel like a complete table. When you update a row in a table all of\nwhich is visible to you, the updated row can never vanish as a result\nof that update, so it was thought, if I remember correctly, that this\nshould also be true here. It's also similar to what happens if an\nupdatable view has WITH CHECK OPTION, and I think that was part of the\nprecedent as well. I don't know whether or not the constraint issue\nthat you mention here was also part of the concern, but it may have\nbeen. This was all quite a while ago...\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 24 Oct 2023 14:42:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug: RLS policy FOR SELECT is used to check new rows" }, { "msg_contents": "Greetings,\n\nOn Tue, Oct 24, 2023 at 14:42 Robert Haas <[email protected]> wrote:\n\n> On Tue, Oct 24, 2023 at 1:46 PM Jeff Davis <[email protected]> wrote:\n> > Perhaps the idea is that if there are constraints involved, the failure\n> > or success of an INSERT/UPDATE/DELETE could leak information that you\n> > don't have privileges to read.\n>\n> My recollection of this topic is pretty hazy, but like Tom, I seem to\n> remember it being intentional, and I think the reason had something to\n> do with wanting the slice of an RLS-protected table that you can see to\n> feel like a complete table. When you update a row in a table all of\n> which is visible to you, the updated row can never vanish as a result\n> of that update, so it was thought, if I remember correctly, that this\n> should also be true here. It's also similar to what happens if an\n> updatable view has WITH CHECK OPTION, and I think that was part of the\n> precedent as well. I don't know whether or not the constraint issue\n> that you mention here was also part of the concern, but it may have\n> been. This was all quite a while ago...\n\n\nYes, having it be similar to a view WITH CHECK OPTION was intentional, also\non not wishing for things to be able to disappear or to not get saved. 
The\nrisk of a constraint possibly causing the leak of information is better\nthan either having data just thrown away or having the constraint not\nprovide the guarantee it’s supposed to …\n\nThanks,\n\nStephen\n\n(On my phone at an event currently, sorry for not digging in deeper on\nthis..)", "msg_date": "Tue, 24 Oct 2023 15:05:50 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug: RLS policy FOR SELECT is used to check new rows" }, { "msg_contents": "On Tue, 2023-10-24 at 15:05 -0400, Stephen Frost wrote:\n> On Tue, Oct 24, 2023 at 14:42 Robert Haas <[email protected]> wrote:\n> > On Tue, Oct 24, 2023 at 1:46 PM Jeff Davis <[email protected]> wrote:\n> > > Perhaps the idea is that if there are constraints involved, the failure\n> > > or success of an INSERT/UPDATE/DELETE could leak information that you\n> > > don't have privileges to read.\n> > \n> > My recollection of this topic is pretty hazy, but like Tom, I seem to\n> > remember it being intentional, and I think the reason had something to\n> > do with wanting the slice of a RLS-protect table that you can see to\n> > feel like a complete table. When you update a row in a table all of\n> > which is visible to you, the updated row can never vanish as a result\n> > of that update, so it was thought, if I remember correctly, that this\n> > should also be true here. It's also similar to what happens if an\n> > updatable view has WITH CHECK OPTION, and I think that was part of the\n> > precedent as well. I don't know whether or not the constraint issue\n> > that you mention here was also part of the concern, but it may have\n> > been. 
This was all quite a while ago...\n> \n> Yes, having it be similar to a view WITH CHECK OPTION was intentional,\n> also on not wishing for things to be able to disappear or to not get saved.\n> The risk of a constraint possibly causing the leak of information is better\n> than either having data just thrown away or having the constraint not\n> provide the guarantee it’s supposed to …\n\nThanks everybody for looking and remembering.\n\nI can accept that the error is intentional, even though it violated the\nPOLA for me. I can buy into the argument that an UPDATE should not make\na row seem to vanish.\n\nI cannot buy into the constraint argument. If the table owner wanted to\nprevent you from causing a constraint violation error with a row you\ncannot see, she wouldn't have given you a FOR UPDATE policy that allows\nyou to perform such an UPDATE.\n\nAnyway, it is probably too late to change a behavior that has been like\nthat for a while and is not manifestly buggy.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 25 Oct 2023 09:45:53 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug: RLS policy FOR SELECT is used to check new rows" }, { "msg_contents": "On Wed, 2023-10-25 at 09:45 +0200, Laurenz Albe wrote:\n> I can accept that the error is intentional, even though it violated the\n> POLA for me. I can buy into the argument that an UPDATE should not make\n> a row seem to vanish.\n> \n> I cannot buy into the constraint argument. If the table owner wanted to\n> prevent you from causing a constraint violation error with a row you\n> cannot see, she wouldn't have given you a FOR UPDATE policy that allows\n> you to perform such an UPDATE.\n> \n> Anyway, it is probably too late to change a behavior that has been like\n> that for a while and is not manifestly buggy.\n\nI have thought some more about this, and I believe that if FOR SELECT\npolicies are used to check new rows, you should be allowed to specify\nWITH CHECK on FOR SELECT policies. Why not allow a user to specify\ndifferent conditions for fetching from a table and for new rows after\nan UPDATE?\n\nThe attached patch does that. What do you think?\n\nYours,\nLaurenz Albe", "msg_date": "Thu, 09 Nov 2023 16:16:33 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug: RLS policy FOR SELECT is used to check new rows" }, { "msg_contents": "On Thu, 9 Nov 2023 at 15:16, Laurenz Albe <[email protected]> wrote:\n>\n> I have thought some more about this, and I believe that if FOR SELECT\n> policies are used to check new rows, you should be allowed to specify\n> WITH CHECK on FOR SELECT policies.  Why not allow a user to specify\n> different conditions for fetching from a table and for new rows after\n> an UPDATE?\n>\n> The attached patch does that.  What do you think?\n>\n\nSo you'd be able to write policies that allowed you to do an\nINSERT/UPDATE ... RETURNING, where the WITH CHECK part of the SELECT\npolicy allowed you to see the new row, but then if you tried to SELECT it\nlater, the USING part of the policy might say no.\n\nThat seems pretty confusing. 
I would expect a row to either be visible\nor not, consistently across all commands.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 9 Nov 2023 15:59:20 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug: RLS policy FOR SELECT is used to check new rows" }, { "msg_contents": "On Thu, 2023-11-09 at 15:59 +0000, Dean Rasheed wrote:\n> On Thu, 9 Nov 2023 at 15:16, Laurenz Albe <[email protected]> wrote:\n> > I have thought some more about this, and I believe that if FOR SELECT\n> > policies are used to check new rows, you should be allowed to specify\n> > WITH CHECK on FOR SELECT policies.  Why not allow a user to specify\n> > different conditions for fetching from a table and for new rows after\n> > an UPDATE?\n> > \n> > The attached patch does that.  What so you think?\n> \n> So you'd be able to write policies that allowed you to do an\n> INSERT/UPDATE ... RETURNING, where the WITH CHECK part of the SELECT\n> policy allowed you see the new row, but then if you tried to SELECT it\n> later, the USING part of the policy might say no.\n> \n> That seems pretty confusing. I would expect a row to either be visible\n> or not, consistently across all commands.\n\nI think it can be useful to allow a user an UPDATE where the result\ndoes not satisfy the USING clause of the FOR SELECT policy.\n\nTrue, it could surprise that you cannot SELECT something you just saw\nwith UPDATE ... RETURNING, but I would argue that these are different\noperations.\n\nThe idea that an UPDATE should only produce rows you can SELECT is not\ntrue today: if you run an UPDATE without a WHERE clause, you can\ncreate rows you cannot see. The restriction is only on UPDATEs with\na WHERE clause. Weird, isn't it?\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 09 Nov 2023 19:55:09 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug: RLS policy FOR SELECT is used to check new rows" }, { "msg_contents": "On Thu, 9 Nov 2023 at 18:55, Laurenz Albe <[email protected]> wrote:\n>\n> I think it can be useful to allow a user an UPDATE where the result\n> does not satisfy the USING clause of the FOR SELECT policy.\n>\n> The idea that an UPDATE should only produce rows you can SELECT is not\n> true today: if you run an UPDATE without a WHERE clause, you can\n> create rows you cannot see. The restriction is only on UPDATEs with\n> a WHERE clause. Weird, isn't it?\n>\n\nThat's true, but only if the UPDATE also doesn't have a RETURNING\nclause. What I find weird about your proposal is that it would allow\nan UPDATE ... RETURNING command to return something that would be\nvisible just that once, but then subsequently disappear. That seems\nlike a cure that's worse than the original disease that kicked off\nthis discussion.\n\nAs mentioned by others, the intention was that RLS behave like WITH\nCHECK OPTION on an updatable view, so that new rows can't just\ndisappear. There are, however, 2 differences between the way it\ncurrently works for RLS, and an updatable view:\n\n1). RLS only does this for UPDATE commands. INSERT commands *can*\ninsert new rows that aren't visible, and so disappear.\n\n2). It can't be turned off. The WITH CHECK OPTION on an updatable view\nis an option that the user can choose to turn on or off. That's not\npossible with RLS.\n\nIn a green field, I would say that it would be better to fix (1), so\nthat INSERT and UPDATE are consistent. 
However, I fear that it may be\ntoo late for that, because any such change would risk breaking\nexisting RLS policy setups in subtle ways.\n\nIt might be possible to change (2) though, by adding a new table-level\noption (similar to a view's WITH CHECK OPTION) that enabled or\ndisabled the checking of new rows for that table, and whose default\nmatched the current behaviour.\n\nBefore going too far down that route though, it is perhaps worth\nasking whether this is something users really want. Is there a real\nuse-case for being able to UPDATE rows and have them disappear?\n\nRegards,\nDean\n\n\n", "msg_date": "Fri, 10 Nov 2023 09:39:27 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug: RLS policy FOR SELECT is used to check new rows" }, { "msg_contents": "On Fri, 2023-11-10 at 09:39 +0000, Dean Rasheed wrote:\n> On Thu, 9 Nov 2023 at 18:55, Laurenz Albe <[email protected]> wrote:\n> > I think it can be useful to allow a user an UPDATE where the result\n> > does not satisfy the USING clause of the FOR SELECT policy.\n> > \n> > The idea that an UPDATE should only produce rows you can SELECT is not\n> > true today: if you run an UPDATE without a WHERE clause, you can\n> > create rows you cannot see. The restriction is only on UPDATEs with\n> > a WHERE clause. Weird, isn't it?\n> \n> That's true, but only if the UPDATE also doesn't have a RETURNING\n> clause. What I find weird about your proposal is that it would allow\n> an UPDATE ... RETURNING command to return something that would be\n> visible just that once, but then subsequently disappear. That seems\n> like a cure that's worse than the original disease that kicked off\n> this discussion.\n\nWhat kicked off the discussion was my complaint that FOR SELECT\nrules mess with UPDATE, so that's exactly what I would have liked:\nan UPDATE that makes the rows vanish.\n\nMy naïve expectation was that FOR SELECT policies govern SELECT\nand FOR UPDATE policies govern UPDATE. After all, there is a\nWITH CHECK clause for FOR UPDATE policies that checks the result rows.\n\nSo, from my perspective, we should never have let FOR SELECT policies\nmess with an UPDATE. But I am too late for that; such a change would\nbe way too invasive now. So I'd like to introduce a \"back door\" by\ncreating a FOR SELECT policy with WITH CHECK (TRUE).\n\n> As mentioned by others, the intention was that RLS behave like WITH\n> CHECK OPTION on an updatable view, so that new rows can't just\n> disappear. There are, however, 2 differences between the way it\n> currently works for RLS, and an updatable view:\n> \n> 1). RLS only does this for UPDATE commands. INSERT commands *can*\n> insert new rows that aren't visible, and so disappear.\n> \n> 2). It can't be turned off. The WITH CHECK OPTION on an updatable view\n> is an option that the user can choose to turn on or off. That's not\n> possible with RLS.\n\nRight. 
Plus the above-mentioned fact that you can make rows vanish\nwith an UPDATE that has no WHERE.\n\n> It might be possible to change (2) though, by adding a new table-level\n> option (similar to a view's WITH CHECK OPTION) that enabled or\n> disabled the checking of new rows for that table, and whose default\n> matched the current behaviour.\n\nThat would be a viable solution.\n\nPro: it doesn't make the already hideously complicated RLS system\neven more complicated.\n\nCon: yet another storage option...\n\n> Before going too far down that route though, it is perhaps worth\n> asking whether this is something users really want. Is there a real\n> use-case for being able to UPDATE rows and have them disappear?\n\nWhat triggered my investigation was this question:\nhttps://stackoverflow.com/q/77346757/6464308\n\nI personally don't have any stake in this. I just wanted a way to\nmake RLS behave more like I think it should.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 10 Nov 2023 13:43:51 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug: RLS policy FOR SELECT is used to check new rows" }, { "msg_contents": "On Fri, Nov 10, 2023 at 7:43 AM Laurenz Albe <[email protected]> wrote:\n> So, from my perspective, we should never have let FOR SELECT policies\n> mess with an UPDATE. But I am too late for that; such a change would\n> be way too invasive now. So I'd like to introduce a \"back door\" by\n> creating a FOR SELECT policy with WITH CHECK (TRUE).\n\nIn principle I see no problem with some kind of back door here, but\nthat seems like it might not be the right way to do it. I don't think\nwe want constant true to behave arbitrarily differently than any other\nexpression. Maybe that's not what you had in mind and I'm just not\nseeing the full picture, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Nov 2023 12:57:31 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug: RLS policy FOR SELECT is used to check new rows" }, { "msg_contents": "On Mon, 2023-11-13 at 12:57 -0500, Robert Haas wrote:\n> On Fri, Nov 10, 2023 at 7:43 AM Laurenz Albe <[email protected]> wrote:\n> > So, from my perspective, we should never have let FOR SELECT policies\n> > mess with an UPDATE. But I am too late for that; such a change would\n> > be way too invasive now. So I'd like to introduce a \"back door\" by\n> > creating a FOR SELECT policy with WITH CHECK (TRUE).\n> \n> In principle I see no problem with some kind of back door here, but\n> that seems like it might not be the right way to do it. I don't think\n> we want constant true to behave arbitrarily differently than any other\n> expression. Maybe that's not what you had in mind and I'm just not\n> seeing the full picture, though.\n\nI experimented some more, and I think I see my mistake now.\n\nCurrently, the USING clause of FOR SELECT/ALL/UPDATE policies is\nan *additional* restriction to the WITH CHECK clause.\nSo my suggestion of using the WITH CHECK clause *instead of*\nthe USING clause in FOR SELECT policies would be unprincipled.\n\nSorry for the noise.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 13 Nov 2023 20:31:16 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug: RLS policy FOR SELECT is used to check new rows" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 18167\nLogged by: Marius Raicu\nEmail address: [email protected]\nPostgreSQL version: 16.0\nOperating system: RedHat 8\nDescription: \n\nHello all,\r\n\r\nI am encountering some problems when creating partitioned tables when\ndefault_tablespace parameter is set.\r\n\r\nI am not sure if it is a bug or maybe I don't understand the documentation\ncorrectly. In the doc, it is stated:\r\nhttps://www.postgresql.org/docs/16/sql-createtable.html\r\nTABLESPACE tablespace_name \r\nThe tablespace_name is the name of the tablespace in which the new table is\nto be created. If not specified, default_tablespace is consulted, or\ntemp_tablespaces if the table is temporary. For partitioned tables, since no\nstorage is required for the table itself, the tablespace specified overrides\ndefault_tablespace as the default tablespace to use for any newly created\npartitions when no other tablespace is explicitly specified.\r\n\r\nUSING INDEX TABLESPACE tablespace_name \r\nThis clause allows selection of the tablespace in which the index associated\nwith a UNIQUE, PRIMARY KEY, or EXCLUDE constraint will be created. If not\nspecified, default_tablespace is consulted, or temp_tablespaces if the table\nis temporary.\r\n\r\nhttps://www.postgresql.org/docs/16/runtime-config-client.html#GUC-DEFAULT-TABLESPACE\r\ndefault_tablespace (string) \r\nThis variable specifies the default tablespace in which to create objects\n(tables and indexes) when a CREATE command does not explicitly specify a\ntablespace.\r\n\r\nThe value is either the name of a tablespace, or an empty string to specify\nusing the default tablespace of the current database. If the value does not\nmatch the name of any existing tablespace, PostgreSQL will automatically use\nthe default tablespace of the current database. If a nondefault tablespace\nis specified, the user must have CREATE privilege for it, or creation\nattempts will fail.\r\n\r\nThis variable is not used for temporary tables; for them, temp_tablespaces\nis consulted instead.\r\n\r\nThis variable is also not used when creating databases. By default, a new\ndatabase inherits its tablespace setting from the template database it is\ncopied from.\r\n\r\nIf this parameter is set to a value other than the empty string when a\npartitioned table is created, the partitioned table's tablespace will be set\nto that value, which will be used as the default tablespace for partitions\ncreated in the future, even if default_tablespace has changed since then.\r\n\r\nSee the sequence below:\r\n\r\n[marius@mylaptop ~]$ psql\r\npsql (17devel)\r\nType \"help\" for help.\r\n\r\nmarius@[local]:5434/postgres=# show default_tablespace;\r\n default_tablespace \r\n--------------------\r\n \r\n(1 row)\r\n\r\nmarius@[local]:5434/postgres=# create table toto(id numeric) partition by\nlist(id);\r\nCREATE TABLE\r\nmarius@[local]:5434/postgres=# drop table toto;\r\nDROP TABLE\r\nmarius@[local]:5434/postgres=# \\! mkdir /home/marius/pgcode/tblspc1\r\nmarius@[local]:5434/postgres=# \\! ls /home/marius/pgcode\r\nbin pgdata postgresql tblspc1\r\nmarius@[local]:5434/postgres=# \\q\r\n[marius@mylaptop ~]$ vi $PGDATA/postgresql.conf\r\n[marius@mylaptop ~]$ \r\n[marius@mylaptop ~]$ pg_ctl restart\r\nwaiting for server to shut down.... 
done\r\nserver stopped\r\nwaiting for server to start....2023-10-24 11:14:21.636 CEST [5800] LOG: \nredirecting log output to logging collector process\r\n2023-10-24 11:14:21.636 CEST [5800] HINT: Future log output will appear in\ndirectory \"log\".\r\n done\r\nserver started\r\n[marius@mylaptop ~]$ psql\r\npsql (17devel)\r\nType \"help\" for help.\r\n\r\nmarius@[local]:5434/postgres=# show default_tablespace;\r\n default_tablespace \r\n--------------------\r\n tblspc1\r\n(1 row)\r\n\r\nmarius@[local]:5434/postgres=# create tablespace tblspc1 location\n'/home/marius/pgcode/tblspc1';\r\nCREATE TABLESPACE\r\nmarius@[local]:5434/postgres=# create database test tablespace tblspc1;\r\nCREATE DATABASE\r\nmarius@[local]:5434/postgres=# \\c test\r\nYou are now connected to database \"test\" as user \"marius\".\r\nmarius@[local]:5434/test=# create table toto(id numeric) partition by\nlist(id);\r\nERROR: cannot specify default tablespace for partitioned relations\r\nmarius@[local]:5434/test=# create table toto(id numeric, constraint pk_id\nprimary key(id) using index tablespace tblspc1) partition by list(id);\r\nERROR: cannot specify default tablespace for partitioned relations\r\n\r\n\r\nmarius@[local]:5434/postgres=# \\c test\r\nYou are now connected to database \"test\" as user \"marius\".\r\nmarius@[local]:5434/test=# create table toto2(id numeric, constraint pk_id\nprimary key(id) using index tablespace tblspc1) partition by list(id);\r\nERROR: cannot specify default tablespace for partitioned relations\r\nmarius@[local]:5434/test=# create table toto(id numeric) partition by\nlist(id) tablespace tblspc1;\r\nERROR: cannot specify default tablespace for partitioned relations\r\nmarius@[local]:5434/test=# create table toto(id numeric) partition by\nlist(id);\r\nERROR: cannot specify default tablespace for partitioned relations\r\nmarius@[local]:5434/test=# create table toto2(id numeric, constraint pk_id\nprimary key(id)) partition by list(id);\r\nERROR: cannot specify default tablespace for partitioned relations\r\n\r\nHowever, in another database, 'postgres' by example, which was created in\nthe default tablespace '' (no tablespace at all), it works:\r\n\r\nmarius@[local]:5434/postgres=# create table toto(id numeric) partition by\nlist(id) tablespace tblspc1;\r\nCREATE TABLE\r\nmarius@[local]:5434/postgres=# create table toto2(id numeric, constraint\npk_id primary key(id) using index tablespace tblspc1) partition by\nlist(id);\r\nCREATE TABLE\r\n\r\n\r\nI was able to reproduce this behavior on all versions starting to PG12.\r\nSo, when the default _tablespace is set, you have to specify the tablespace\nclause to CREATE TABLE, despite the fact that the database where you try to\nput the table is created into a tablespace.\r\n\r\nThanks,\r\nMarius Raicu", "msg_date": "Tue, 24 Oct 2023 09:42:28 +0000", "msg_from": "PG Bug reporting form <[email protected]>", "msg_from_op": true, "msg_subject": "BUG #18167: cannot create partitioned tables when default_tablespace\n is set" }, { "msg_contents": "On 2023-Oct-24, PG Bug reporting form wrote:\n\n> marius@[local]:5434/postgres=# show default_tablespace;\n> default_tablespace \n> --------------------\n> tblspc1\n> (1 row)\n> \n> marius@[local]:5434/postgres=# create tablespace tblspc1 location\n> '/home/marius/pgcode/tblspc1';\n> CREATE TABLESPACE\n> marius@[local]:5434/postgres=# create database test tablespace tblspc1;\n> CREATE DATABASE\n> marius@[local]:5434/postgres=# \\c test\n> You are now connected to database \"test\" as user \"marius\".\n> 
marius@[local]:5434/test=# create table toto(id numeric) partition by\n> list(id);\n> ERROR: cannot specify default tablespace for partitioned relations\n\nOh, so the problem here is that *both* default_tablespace and the\ndatabase's tablespace are set, and then a partitioned table creation\nfails when it doesn't specify any tablespace? That indeed sounds like a\nbug. I'll have a look, thanks. I'm surprised it took so long for this\nto be reported.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Someone said that it is at least an order of magnitude more work to do\nproduction software than a prototype. I think he is wrong by at least\nan order of magnitude.\" (Brian Kernighan)\n\n\n", "msg_date": "Wed, 25 Oct 2023 09:45:44 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #18167: cannot create partitioned tables when\n default_tablespace is set" }, { "msg_contents": "Alvaro Herrera <[email protected]> 于2023年10月25日周三 17:41写道:\n\n> On 2023-Oct-24, PG Bug reporting form wrote:\n>\n> > marius@[local]:5434/postgres=# show default_tablespace;\n> > default_tablespace\n> > --------------------\n> > tblspc1\n> > (1 row)\n> >\n> > marius@[local]:5434/postgres=# create tablespace tblspc1 location\n> > '/home/marius/pgcode/tblspc1';\n> > CREATE TABLESPACE\n> > marius@[local]:5434/postgres=# create database test tablespace tblspc1;\n> > CREATE DATABASE\n> > marius@[local]:5434/postgres=# \\c test\n> > You are now connected to database \"test\" as user \"marius\".\n> > marius@[local]:5434/test=# create table toto(id numeric) partition by\n> > list(id);\n> > ERROR: cannot specify default tablespace for partitioned relations\n>\n> Oh, so the problem here is that *both* default_tablespace and the\n> database's tablespace are set, and then a partitioned table creation\n> fails when it doesn't specify any tablespace? That indeed sounds like a\n> bug. I'll have a look, thanks. I'm surprised it took so long for this\n> to be reported.\n>\n\nOh, interesting issue!\nI found another two case:\nFirst: default_tablespace not set and create part rel failed\npostgres=# create tablespace tbsp3 location '/tender/pgsql/tbsp3';\nCREATE TABLESPACE\npostgres=# create database test3 tablespace tbsp3;\nCREATE DATABASE\npostgres=# \\c test3\nYou are now connected to database \"test3\" as user \"gpadmin\".\ntest3=# show default_tablespace ;\n default_tablespace\n--------------------\n\n(1 row)\n\ntest3=# create table part1(a int) partition by list(a) tablespace tbsp3;\nERROR: cannot specify default tablespace for partitioned relations\n\nSecond: default_tablespace and database's tablespace both set, but part rel\ncreated\ntest3=# set default_tablespace = tbsp2;\nSET\ntest3=# create table part1(a int) partition by list(a);\nCREATE TABLE\n\nI'm not sure if the above two cases are a bug. If the document could\nprovide detailed explanations, that would be great.\n\n\n\n> --\n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/\n> \"Someone said that it is at least an order of magnitude more work to do\n> production software than a prototype. 
I think he is wrong by at least\n> an order of magnitude.\" (Brian Kernighan)\n>\n>\n>\n>\n>\n\nAlvaro Herrera <[email protected]> 于2023年10月25日周三 17:41写道:On 2023-Oct-24, PG Bug reporting form wrote:\n\n> marius@[local]:5434/postgres=# show default_tablespace;\n>  default_tablespace \n> --------------------\n>  tblspc1\n> (1 row)\n> \n> marius@[local]:5434/postgres=# create tablespace tblspc1 location\n> '/home/marius/pgcode/tblspc1';\n> CREATE TABLESPACE\n> marius@[local]:5434/postgres=# create database test tablespace tblspc1;\n> CREATE DATABASE\n> marius@[local]:5434/postgres=# \\c test\n> You are now connected to database \"test\" as user \"marius\".\n> marius@[local]:5434/test=# create table toto(id numeric) partition by\n> list(id);\n> ERROR:  cannot specify default tablespace for partitioned relations\n\nOh, so the problem here is that *both* default_tablespace and the\ndatabase's tablespace are set, and then a partitioned table creation\nfails when it doesn't specify any tablespace?  That indeed sounds like a\nbug.  I'll have a look, thanks.  I'm surprised it took so long for this\nto be reported. Oh, interesting issue!I found another two case:First: default_tablespace not set and create part rel failedpostgres=# create tablespace tbsp3 location '/tender/pgsql/tbsp3';CREATE TABLESPACEpostgres=# create database test3 tablespace tbsp3;CREATE DATABASEpostgres=# \\c test3You are now connected to database \"test3\" as user \"gpadmin\".test3=# show default_tablespace ; default_tablespace -------------------- (1 row)test3=# create table part1(a int) partition by list(a) tablespace tbsp3;ERROR:  cannot specify default tablespace for partitioned relationsSecond: default_tablespace and database's tablespace both set, but part rel createdtest3=# set default_tablespace = tbsp2;SETtest3=# create table part1(a int) partition by list(a);CREATE TABLEI'm not sure if the above two cases are a bug. If the document could provide detailed explanations, that would be great. \n-- \nÁlvaro Herrera               48°01'N 7°57'E  —  https://www.EnterpriseDB.com/\n\"Someone said that it is at least an order of magnitude more work to do\nproduction software than a prototype. 
I think he is wrong by at least\n> an order of magnitude.\" (Brian Kernighan)\n>\n>\n>\n>", "msg_date": "Wed, 25 Oct 2023 17:58:21 +0800", "msg_from": "tender wang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #18167: cannot create partitioned tables when\n default_tablespace is set" }, { "msg_contents": "Hello Alvaro, all,\n\nI have done some research regarding this bug.\n\nBasically, we forbid the creation of partitioned tables and indexes if a \ndefault_tablespace is specified in postgresql.conf.\n\nIn tablespace.c, the comment says:\n\n\"Don't allow specifying that when creating a partitioned table, however, \nsince the result is confusing.\"\n\nI did not see why the result is confusing.\n\nI just disabled the checks in tablespace.c, tablecmds.c and indexcmds.c \nand now it works.\n\nI modified the expected result in tests, and the tests are passing too.\n\nSee the attached patch.\n\nRegards,\n\nMarius Raicu\n\nOn 10/25/23 11:58, tender wang wrote:\n>\n>\n> Alvaro Herrera <[email protected]> 于2023年10月25日周三 17:41写道:\n>\n> On 2023-Oct-24, PG Bug reporting form wrote:\n>\n> > marius@[local]:5434/postgres=# show default_tablespace;\n> >  default_tablespace\n> > --------------------\n> >  tblspc1\n> > (1 row)\n> >\n> > marius@[local]:5434/postgres=# create tablespace tblspc1 location\n> > '/home/marius/pgcode/tblspc1';\n> > CREATE TABLESPACE\n> > marius@[local]:5434/postgres=# create database test tablespace\n> tblspc1;\n> > CREATE DATABASE\n> > marius@[local]:5434/postgres=# \\c test\n> > You are now connected to database \"test\" as user \"marius\".\n> > marius@[local]:5434/test=# create table toto(id numeric)\n> partition by\n> > list(id);\n> > ERROR:  cannot specify default tablespace for partitioned relations\n>\n> Oh, so the problem here is that *both* default_tablespace and the\n> database's tablespace are set, and then a partitioned table creation\n> fails when it doesn't specify any tablespace?  That indeed sounds\n> like a\n> bug.  I'll have a look, thanks.  I'm surprised it took so long for\n> this\n> to be reported.\n>\n> Oh, interesting issue!\n> I found another two case:\n> First: default_tablespace not set and create part rel failed\n> postgres=# create tablespace tbsp3 location '/tender/pgsql/tbsp3';\n> CREATE TABLESPACE\n> postgres=# create database test3 tablespace tbsp3;\n> CREATE DATABASE\n> postgres=# \\c test3\n> You are now connected to database \"test3\" as user \"gpadmin\".\n> test3=# show default_tablespace ;\n>  default_tablespace\n> --------------------\n>\n> (1 row)\n>\n> test3=# create table part1(a int) partition by list(a) tablespace tbsp3;\n> ERROR:  cannot specify default tablespace for partitioned relations\n>\n> Second: default_tablespace and database's tablespace both set, but \n> part rel created\n> test3=# set default_tablespace = tbsp2;\n> SET\n> test3=# create table part1(a int) partition by list(a);\n> CREATE TABLE\n>\n> I'm not sure if the above two cases are a bug. If the document could \n> provide detailed explanations, that would be great.\n>\n> -- \n> Álvaro Herrera               48°01'N 7°57'E  —\n> https://www.EnterpriseDB.com/ <https://www.EnterpriseDB.com/>\n> \"Someone said that it is at least an order of magnitude more work\n> to do\n> production software than a prototype. 
I think he is wrong by at least\n> an order of magnitude.\"                              (Brian Kernighan)\n>\n>\n>\n>", "msg_date": "Thu, 2 Nov 2023 13:50:55 +0100", "msg_from": "Marius RAICU <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #18167: cannot create partitioned tables when\n default_tablespace is set" } ]
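Condensing the reports above into one reproduction script, a sketch reusing the names and paths from the thread (exact behavior may vary by version):

CREATE TABLESPACE tblspc1 LOCATION '/home/marius/pgcode/tblspc1';
CREATE DATABASE test TABLESPACE tblspc1;
\c test
SET default_tablespace = tblspc1;
CREATE TABLE toto (id numeric) PARTITION BY LIST (id);
-- ERROR:  cannot specify default tablespace for partitioned relations
RESET default_tablespace;
CREATE TABLE toto (id numeric) PARTITION BY LIST (id);  -- succeeds

The cases in the thread are consistent with the error being raised whenever the tablespace resolved for the partitioned table (whether from an explicit TABLESPACE clause or from default_tablespace) turns out to equal the database's own default tablespace, which would explain why the same statement works in a database created without an explicit tablespace.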
[ { "msg_contents": "Hi,\n\nSome time ago I’ve provided some details with the issues we face when trying to use GIST and partitioning at the same time in the postgresql-general mailing list:\nhttps://www.postgresql.org/message-id/3FA1E0A9-8393-41F6-88BD-62EEEA1EC21F%40kleczek.org\nGIST index and ORDER BY\npostgresql.org\n\nWe decided to go with the solution to partition our table by:\n\nRANGE (‘2100-01-01' <-> operation_date).\n\nWhile it (somewhat) solves partition pruning issues described above there is another problem:\nIt is impossible to create a unique constraint on the partitioned table.\n\nSo now we cannot use INSERT … ON CONFLICT (…) DO UPDATE\n\n\n\nMy question to hackers:\nWould it be feasible to implement ORDER BY column GIST index (only) scan for types with total order and sensible greatest and least values?\n\nThanks,\nMichal", "msg_date": "Tue, 24 Oct 2023 13:22:53 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "A case for GIST supporting ORDER BY" }, { "msg_contents": "Hi All,\n\nAttached is a first attempt to implement GIST index (only) scans for ORDER BY column clauses.\n\nThe idea is that it order by column for some datatypes is a special case of ordering by distance:\n\nORDER BY a == ORDER BY a <-> MIN_VALUE\nand\nORDER BY a DESC == ORDER BY a <-> MAX_VALUE\n\nThis allows implementing GIST ordered scans for btree_gist datatypes.\n\nThis in turn makes using GIST with partitioning feasible (I have described issues with such usage in my previous e-mails - see below).\n\nThe solution is not ideal as it requires registering “<“ and “>” operators as ordering operators in opfamily\n(which in turn makes it possible to issue somewhat meaningless “ORDER BY a < ‘constant’)\n\nThe problem is though that right now handling of ORDER BY column clauses is tightly coupled to BTree.\nIt would be good to refactor the code so that semantics of ORDER BY column could be more flexible.\n\nIt would be great if someone could take a look at it.\n\nThanks,\nMichal \n\n> On 24 Oct 2023, at 13:22, Michał Kłeczek <[email protected]> wrote:\n> \n> Hi,\n> \n> Some time ago I’ve provided some details with the issues we face when trying to use GIST and partitioning at the same time in the postgresql-general mailing list:\n> https://www.postgresql.org/message-id/3FA1E0A9-8393-41F6-88BD-62EEEA1EC21F%40kleczek.org\n> We decided to go with the solution to partition our table by:\n> \n> RANGE (‘2100-01-01' <-> operation_date).\n> \n> While it (somewhat) solves partition pruning issues described above there is another problem:\n> It is impossible to create a unique constraint on the partitioned table.\n> \n> So now we cannot use INSERT … ON CONFLICT (…) DO UPDATE\n> \n> \n> \n> My question to hackers:\n> Would it be feasible to implement ORDER BY column GIST index (only) scan for types with total order and sensible greatest and least values?\n> \n> Thanks,\n> Michal", "msg_date": "Mon, 30 Oct 2023 09:04:22 +0100", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "DRAFT GIST support for ORDER BY" }, { "msg_contents": "On Mon, 30 Oct 2023 at 09:04, Michał Kłeczek <[email protected]> wrote:\n>\n> Hi All,\n>\n> Attached is a first attempt to implement GIST index (only) scans for ORDER BY column clauses.\n\nCool!\n\n> The solution is not ideal as it requires registering “<“ and “>” operators as ordering operators in opfamily\n> (which in turn makes it possible to issue somewhat 
meaningless “ORDER BY a < ‘constant’)\n\nI don't quite understand why we need to register new \"<\" and \">\"\noperators. Can't we update the current ones?\n\n> The problem is though that right now handling of ORDER BY column clauses is tightly coupled to BTree.\n> It would be good to refactor the code so that semantics of ORDER BY column could be more flexible.\n\nThe existence of a BTREE operator class for the type is the indicator\nthat (and how) the type can be ordered - that is where PostgreSQL gets\nits methods for ordering most types. Although I agree that it's a\nquirk, I don't mind it that much as an indicator of how a type is\nordered.\nI do agree, though, that operator classes by themselves should be able\nto say \"hey, we support full ordered retrieval as well\". Right now,\nthat seems to be limited to btrees, but indeed a GiST index with\nbtree_gist columns should be able to support the same.\n\n> It would be great if someone could take a look at it.\n\nI've not looked in detail at the patch, but here's some comments:\n\n> --- a/contrib/btree_gist/btree_gist--1.6--1.7.sql\n> +++ b/contrib/btree_gist/btree_gist--1.6--1.7.sql\n\nYou seem to be modifying an existing migration of a released version\nof the btree_bist extension. I suggest you instead add a migration\nfrom 1.7 to a new version 1.8, and update the control file's default\ninstalled version.\n\n> ORDER BY a == ORDER BY a <-> MIN_VALUE\n> and\n> ORDER BY a DESC == ORDER BY a <-> MAX_VALUE\n>\n> This allows implementing GIST ordered scans for btree_gist datatypes.\n>\n> This in turn makes using GIST with partitioning feasible (I have described issues with such usage in my previous e-mails - see below).\n\nDid you take into account that GiST's internal distance function uses\nfloating point, and is thus only an approximation for values that\nrequire more than 2^54 significant bits in their distance function?\nFor example, GiST wouldn't be guaranteed to yield correct ordering of\nint8/bigint when you use `my_column <-> UINT64_MAX` because as far as\nthe floating point math is concerned, 0 is about as far away from\nINT64_MAX as (say) 20 and -21.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 30 Oct 2023 13:31:18 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRAFT GIST support for ORDER BY" }, { "msg_contents": "\n\n> On 30 Oct 2023, at 13:31, Matthias van de Meent <[email protected]> wrote:\n> \n>> The solution is not ideal as it requires registering “<“ and “>” operators as ordering operators in opfamily\n>> (which in turn makes it possible to issue somewhat meaningless “ORDER BY a < ‘constant’)\n> \n> I don't quite understand why we need to register new \"<\" and \">\"\n> operators. Can't we update the current ones?\n\nI wasn’t precise: what is needed is adding pg_amop entries with amoppurpose = ‘o’ for existing “<\" and “>\" operators.\n\n> \n>> The problem is though that right now handling of ORDER BY column clauses is tightly coupled to BTree.\n>> It would be good to refactor the code so that semantics of ORDER BY column could be more flexible.\n> \n> The existence of a BTREE operator class for the type is the indicator\n> that (and how) the type can be ordered - that is where PostgreSQL gets\n> its methods for ordering most types. 
Although I agree that it's a\n> quirk, I don't mind it that much as an indicator of how a type is\n> ordered.\n> I do agree, though, that operator classes by themselves should be able\n> to say \"hey, we support full ordered retrieval as well\". Right now,\n> that seems to be limited to btrees, but indeed a GiST index with\n> btree_gist columns should be able to support the same.\n\nRight now opfamily and strategy are set in PathKey before creating index scan paths.\n\nThe patch actually copies existing code from create_indexscan_plan\nthat finds an operator OID for (pk_opfamily, pk_strategy).\nThe operator is supposed to be binary with specific operand types.\n\nTo create a path:\n1) do the operator OID lookup as above\n2) look for sortfamily of pg_amop entry for (operator did, index opfamily)\nIf the sort family is the same as pk_opfamily we can create a path.\n\nThe side effect is that it is possible to “ORDER BY column < ‘constant’” as we have more ordering operators in pg_amop.\n\nIdeally we could look up _unary_ operator in pg_amop instead - that would make sense we are actually measuring some “absolute distance”.\nBut this would require more changes - createplan.c would need to decide when to lookup unary and when - binary operator.\n\n\n>> It would be great if someone could take a look at it.\n> \n> I've not looked in detail at the patch, but here's some comments:\n> \n>> --- a/contrib/btree_gist/btree_gist--1.6--1.7.sql\n>> +++ b/contrib/btree_gist/btree_gist--1.6--1.7.sql\n> \n> You seem to be modifying an existing migration of a released version\n> of the btree_bist extension. I suggest you instead add a migration\n> from 1.7 to a new version 1.8, and update the control file's default\n> installed version.\n\nThanks. I didn’t know how to register a new migration so did it that way.\nWill try to fix that.\n\n> \n>> ORDER BY a == ORDER BY a <-> MIN_VALUE\n>> and\n>> ORDER BY a DESC == ORDER BY a <-> MAX_VALUE\n>> \n>> This allows implementing GIST ordered scans for btree_gist datatypes.\n>> \n>> This in turn makes using GIST with partitioning feasible (I have described issues with such usage in my previous e-mails - see below).\n> \n> Did you take into account that GiST's internal distance function uses\n> floating point, and is thus only an approximation for values that\n> require more than 2^54 significant bits in their distance function?\n> For example, GiST wouldn't be guaranteed to yield correct ordering of\n> int8/bigint when you use `my_column <-> UINT64_MAX` because as far as\n> the floating point math is concerned, 0 is about as far away from\n> INT64_MAX as (say) 20 and -21.\n\nHmm… Good point but it means ORDER BY <-> is broken for these types then?\nThe patch assumes it works correctly and just uses it for ordered scans.\n\n—\nMichal\n\n", "msg_date": "Mon, 30 Oct 2023 14:38:52 +0100", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DRAFT GIST support for ORDER BY" }, { "msg_contents": "On Mon, 30 Oct 2023 at 14:39, Michał Kłeczek <[email protected]> wrote:\n>> On 30 Oct 2023, at 13:31, Matthias van de Meent <[email protected]> wrote:\n>>\n>>> The problem is though that right now handling of ORDER BY column clauses is tightly coupled to BTree.\n>>> It would be good to refactor the code so that semantics of ORDER BY column could be more flexible.\n>>\n>> The existence of a BTREE operator class for the type is the indicator\n>> that (and how) the type can be ordered - that is where PostgreSQL gets\n>> its 
methods for ordering most types. Although I agree that it's a\n>> quirk, I don't mind it that much as an indicator of how a type is\n>> ordered.\n>> I do agree, though, that operator classes by themselves should be able\n>> to say \"hey, we support full ordered retrieval as well\". Right now,\n>> that seems to be limited to btrees, but indeed a GiST index with\n>> btree_gist columns should be able to support the same.\n>\n> Right now opfamily and strategy are set in PathKey before creating index scan paths.\n>\n> The patch actually copies existing code from create_indexscan_plan\n> that finds an operator OID for (pk_opfamily, pk_strategy).\n> The operator is supposed to be binary with specific operand types.\n>\n> To create a path:\n> 1) do the operator OID lookup as above\n> 2) look for sortfamily of pg_amop entry for (operator did, index opfamily)\n> If the sort family is the same as pk_opfamily we can create a path.\n>\n> The side effect is that it is possible to “ORDER BY column < ‘constant’” as we have more ordering operators in pg_amop.\n>\n> Ideally we could look up _unary_ operator in pg_amop instead - that would make sense we are actually measuring some “absolute distance”.\n> But this would require more changes - createplan.c would need to decide when to lookup unary and when - binary operator.\n\nAfter researching this a bit more, I'm confused: If I register an opclass\n\nCREATE OPERATOR CLASS gist_mytype_btree\nDEFUALT FOR mytype USING gist\nAS\n OPERATOR 1 < (mytype, mytype) FOR ORDER BY mytype_ops, -- operator\n<(mytype, mytype) returns bool\n ...\n OPERATOR 15 <-> (mytype, mytype) FOR ORDER BY mytype_ops. --\noperator <->(mytype, mytype) returns mytype\n ...\n\nThen which order of values does the system expect the index to return\ntuples in when either of these operators is applied?\nIs that\n ORDER BY (index_column opr constant); but bool isn't the type\nsupported by the FOR ORDER BY opclass, or\n ORDER BY (index_column); but this makes no sense for distance operators.\n\nAfter looking at get_relation_info() in optimizer/util/plancat.c, I\nguess the difference is the difference between amhandler->amcanorder\nvs amhandler->amcanorderbyop? But still it's not quite clear what the\nimplication for this is. Does it mean an index AM can either provide\nnatural ordering, or operator ordering, but not both?\n\n>>> ORDER BY a == ORDER BY a <-> MIN_VALUE\n>>> and\n>>> ORDER BY a DESC == ORDER BY a <-> MAX_VALUE\n>>>\n>>> This allows implementing GIST ordered scans for btree_gist datatypes.\n>>>\n>>> This in turn makes using GIST with partitioning feasible (I have described issues with such usage in my previous e-mails - see below).\n>>\n>> Did you take into account that GiST's internal distance function uses\n>> floating point, and is thus only an approximation for values that\n>> require more than 2^54 significant bits in their distance function?\n>> For example, GiST wouldn't be guaranteed to yield correct ordering of\n>> int8/bigint when you use `my_column <-> UINT64_MAX` because as far as\n>> the floating point math is concerned, 0 is about as far away from\n>> INT64_MAX as (say) 20 and -21.\n>\n> Hmm… Good point but it means ORDER BY <-> is broken for these types then?\n> The patch assumes it works correctly and just uses it for ordered scans.\n\nHuh, I didn't know this before, but apparently values are pushed onto\na reorderqueue/pairingheap if the index scan is marked\nxs_recheckorderby (i.e. 
when the tuple order is not exact), which\nwould be used in this case.\n\nSo it seems like this wouldn't be much of an issue for the patch,\napart from the potential issue where this could use the pairingheap\nmuch more than the usual ordered scan operations, which could result\nin larger-than-normal memory usage. E.g. float btree ops wouldn't work\neffectively at all because every reasonable value is extremely distant\nfrom its max value.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Fri, 3 Nov 2023 19:53:43 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRAFT GIST support for ORDER BY" } ]
[ { "msg_contents": "Hi,\n\nI'm seeing an issue after upgrading from 12.13 to 15.4. This happens\nwhen we run a query against a foreign table (fdw on the same instance to\na different database) -- but does not appear when we get rid of\npostgres_fdw:\n\nERROR: cursor can only scan forward\nHINT: Declare it with SCROLL option to enable backward scan.\nCONTEXT: remote SQL command: MOVE BACKWARD ALL IN c1\n\nSQL state: 55000\n\nI attached the query. The name of the foreign table is\n\"foobar.sys_user\".\n\nLooks like the bug #17889, and this is the last email in that thread: \nhttps://www.postgresql.org/message-id/1852635.1682808624%40sss.pgh.pa.us\n\nOTOH, same query works (against the FDW) when we remove the following\nWHERE clause:\n\nWHERE\n tbl.table_status = 'A'\n AND tbl.table_id <> 1\n AND tbl.table_id <> - 2\n\nAny hints?\n\nRegards,\n\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Tue, 24 Oct 2023 12:46:40 +0100", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <[email protected]>", "msg_from_op": true, "msg_subject": "d844cd75a and postgres_fdw" }, { "msg_contents": "On Tue, Oct 24, 2023 at 8:48 PM Devrim Gündüz <[email protected]> wrote:\n> I'm seeing an issue after upgrading from 12.13 to 15.4. This happens\n> when we run a query against a foreign table (fdw on the same instance to\n> a different database) -- but does not appear when we get rid of\n> postgres_fdw:\n>\n> ERROR: cursor can only scan forward\n> HINT: Declare it with SCROLL option to enable backward scan.\n> CONTEXT: remote SQL command: MOVE BACKWARD ALL IN c1\n>\n> SQL state: 55000\n>\n> I attached the query. The name of the foreign table is\n> \"foobar.sys_user\".\n>\n> Looks like the bug #17889, and this is the last email in that thread:\n> https://www.postgresql.org/message-id/1852635.1682808624%40sss.pgh.pa.us\n>\n> OTOH, same query works (against the FDW) when we remove the following\n> WHERE clause:\n>\n> WHERE\n> tbl.table_status = 'A'\n> AND tbl.table_id <> 1\n> AND tbl.table_id <> - 2\n>\n> Any hints?\n\nThe error occurs when rescanning a postgres_fdw foreign relation, so I\nthink the reason why the query works would be that the planner chose a\njoin plan other than a nestloop join plan.\n\nI proposed a fix for this in [1].\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/CAPmGK149UubRQGLH6QaBkhJvas%2BGz%2BT6tx2MBX9MTJpxDRKPBA%40mail.gmail.com\n\n\n", "msg_date": "Fri, 5 Jul 2024 21:56:28 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": false, "msg_subject": "Re: d844cd75a and postgres_fdw" }, { "msg_contents": "On Fri, Jul 5, 2024 at 9:56 PM Etsuro Fujita <[email protected]> wrote:\n> On Tue, Oct 24, 2023 at 8:48 PM Devrim Gündüz <[email protected]> wrote:\n> > I'm seeing an issue after upgrading from 12.13 to 15.4. This happens\n> > when we run a query against a foreign table (fdw on the same instance to\n> > a different database) -- but does not appear when we get rid of\n> > postgres_fdw:\n> >\n> > ERROR: cursor can only scan forward\n> > HINT: Declare it with SCROLL option to enable backward scan.\n> > CONTEXT: remote SQL command: MOVE BACKWARD ALL IN c1\n> >\n> > SQL state: 55000\n> >\n> > I attached the query. 
The name of the foreign table is\n> > \"foobar.sys_user\".\n> >\n> > Looks like the bug #17889, and this is the last email in that thread:\n> > https://www.postgresql.org/message-id/1852635.1682808624%40sss.pgh.pa.us\n> >\n> > OTOH, same query works (against the FDW) when we remove the following\n> > WHERE clause:\n> >\n> > WHERE\n> > tbl.table_status = 'A'\n> > AND tbl.table_id <> 1\n> > AND tbl.table_id <> - 2\n> >\n> > Any hints?\n>\n> The error occurs when rescanning a postgres_fdw foreign relation, so I\n> think the reason why the query works would be that the planner chose a\n> join plan other than a nestloop join plan.\n>\n> I proposed a fix for this in [1].\n\nI pushed the fix and back-patched to v15. Thanks for the report!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Fri, 19 Jul 2024 14:01:54 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": false, "msg_subject": "Re: d844cd75a and postgres_fdw" }, { "msg_contents": "Hi,\n\nOn Fri, 2024-07-19 at 14:01 +0900, Etsuro Fujita wrote:\n> I pushed the fix and back-patched to v15.  Thanks for the report!\n\nThanks a lot!\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Fri, 19 Jul 2024 14:29:41 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: d844cd75a and postgres_fdw" } ]
[ { "msg_contents": "Hi,\n\nWhile reviewing the test_decoding code, I noticed that when skip_empty_xacts\noption is specified, it doesn't open the streaming block( e.g.\npg_output_stream_start) before streaming the transactional MESSAGE even if it's\nthe first change in a streaming block.\n\nIt looks inconsistent with what we do when streaming DML\nchanges(e.g. pg_decode_stream_change()).\n\nHere is a small patch to open the stream block in this case.\n\nBest Regards,\nHou Zhijie", "msg_date": "Tue, 24 Oct 2023 11:52:01 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Open a streamed block for transactional messages during decoding" }, { "msg_contents": "On Tue, Oct 24, 2023 at 5:27 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> While reviewing the test_decoding code, I noticed that when skip_empty_xacts\n> option is specified, it doesn't open the streaming block( e.g.\n> pg_output_stream_start) before streaming the transactional MESSAGE even if it's\n> the first change in a streaming block.\n>\n> It looks inconsistent with what we do when streaming DML\n> changes(e.g. pg_decode_stream_change()).\n>\n> Here is a small patch to open the stream block in this case.\n>\n\nThe change looks good to me though I haven't tested it yet. BTW, can\nwe change the comment: \"Output stream start if we haven't yet, but\nonly for the transactional case.\" to \"Output stream start if we\nhaven't yet for transactional messages\"?\n\nI think we should backpatch this fix. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 26 Oct 2023 10:12:12 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open a streamed block for transactional messages during decoding" }, { "msg_contents": "On Thursday, October 26, 2023 12:42 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Tue, Oct 24, 2023 at 5:27 PM Zhijie Hou (Fujitsu) <[email protected]>\r\n> wrote:\r\n> >\r\n> > While reviewing the test_decoding code, I noticed that when\r\n> > skip_empty_xacts option is specified, it doesn't open the streaming\r\n> block( e.g.\r\n> > pg_output_stream_start) before streaming the transactional MESSAGE\r\n> > even if it's the first change in a streaming block.\r\n> >\r\n> > It looks inconsistent with what we do when streaming DML changes(e.g.\r\n> > pg_decode_stream_change()).\r\n> >\r\n> > Here is a small patch to open the stream block in this case.\r\n> >\r\n> \r\n> The change looks good to me though I haven't tested it yet. BTW, can we\r\n> change the comment: \"Output stream start if we haven't yet, but only for the\r\n> transactional case.\" to \"Output stream start if we haven't yet for transactional\r\n> messages\"?\r\n\r\nThanks for the review and I changed this as suggested.\r\n\r\n> I think we should backpatch this fix. What do you think?\r\n\r\nI think maybe we can improve the code only for HEAD, as skip_empty_xacts is\r\nprimarily used to have consistent test results across different runs and this\r\npatch won't help with that. 
And I saw in 26dd028, we didn't backpatch for the\r\nsame reason.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Thu, 26 Oct 2023 08:31:48 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Open a streamed block for transactional messages during decoding" }, { "msg_contents": "On Thu, Oct 26, 2023 at 2:01 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Thursday, October 26, 2023 12:42 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Tue, Oct 24, 2023 at 5:27 PM Zhijie Hou (Fujitsu) <[email protected]>\n> > wrote:\n> > >\n> > > While reviewing the test_decoding code, I noticed that when\n> > > skip_empty_xacts option is specified, it doesn't open the streaming\n> > block( e.g.\n> > > pg_output_stream_start) before streaming the transactional MESSAGE\n> > > even if it's the first change in a streaming block.\n> > >\n> > > It looks inconsistent with what we do when streaming DML changes(e.g.\n> > > pg_decode_stream_change()).\n> > >\n> > > Here is a small patch to open the stream block in this case.\n> > >\n> >\n> > The change looks good to me though I haven't tested it yet. BTW, can we\n> > change the comment: \"Output stream start if we haven't yet, but only for the\n> > transactional case.\" to \"Output stream start if we haven't yet for transactional\n> > messages\"?\n>\n> Thanks for the review and I changed this as suggested.\n>\n\n--- a/contrib/test_decoding/expected/stream.out\n+++ b/contrib/test_decoding/expected/stream.out\n@@ -29,7 +29,10 @@ COMMIT;\n SELECT data FROM pg_logical_slot_get_changes('regression_slot',\nNULL,NULL, 'include-xids', '0', 'skip-empty-xacts', '1',\n'stream-changes', '1');\n data\n ----------------------------------------------------------\n+ opening a streamed block for transaction\n streaming message: transactional: 1 prefix: test, sz: 50\n+ closing a streamed block for transaction\n+ aborting streamed (sub)transaction\n\nI was analyzing the reason for the additional message: \"aborting\nstreamed (sub)transaction\" in the above test and it seems to be due to\nthe below check in the function pg_decode_stream_abort():\n\nif (data->skip_empty_xacts && !xact_wrote_changes)\nreturn;\n\nBefore the patch, we won't be setting the 'xact_wrote_changes' flag in\ntxndata which is fixed now. So, this looks okay to me. However, I have\nanother observation in this code which is that for aborts or\nsubtransactions, we are not checking the flag 'stream_wrote_changes',\nso we may end up emitting the abort message even when no actual change\nhas been streamed. 
I haven't tried to generate a test to verify this\nobservation, so I could be wrong as well but it is worth analyzing\nsuch cases.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 Oct 2023 09:49:48 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open a streamed block for transactional messages during decoding" }, { "msg_contents": "On Monday, October 30, 2023 12:20 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Thu, Oct 26, 2023 at 2:01 PM Zhijie Hou (Fujitsu) <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Thursday, October 26, 2023 12:42 PM Amit Kapila\r\n> <[email protected]> wrote:\r\n> > >\r\n> > > On Tue, Oct 24, 2023 at 5:27 PM Zhijie Hou (Fujitsu)\r\n> > > <[email protected]>\r\n> > > wrote:\r\n> > > >\r\n> > > > While reviewing the test_decoding code, I noticed that when\r\n> > > > skip_empty_xacts option is specified, it doesn't open the\r\n> > > > streaming\r\n> > > block( e.g.\r\n> > > > pg_output_stream_start) before streaming the transactional MESSAGE\r\n> > > > even if it's the first change in a streaming block.\r\n> > > >\r\n> > > > It looks inconsistent with what we do when streaming DML changes(e.g.\r\n> > > > pg_decode_stream_change()).\r\n> > > >\r\n> > > > Here is a small patch to open the stream block in this case.\r\n> > > >\r\n> > >\r\n> > > The change looks good to me though I haven't tested it yet. BTW, can\r\n> > > we change the comment: \"Output stream start if we haven't yet, but\r\n> > > only for the transactional case.\" to \"Output stream start if we\r\n> > > haven't yet for transactional messages\"?\r\n> >\r\n> > Thanks for the review and I changed this as suggested.\r\n> >\r\n> \r\n> --- a/contrib/test_decoding/expected/stream.out\r\n> +++ b/contrib/test_decoding/expected/stream.out\r\n> @@ -29,7 +29,10 @@ COMMIT;\r\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot',\r\n> NULL,NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'stream-changes', '1');\r\n> data\r\n> ----------------------------------------------------------\r\n> + opening a streamed block for transaction\r\n> streaming message: transactional: 1 prefix: test, sz: 50\r\n> + closing a streamed block for transaction aborting streamed\r\n> + (sub)transaction\r\n> \r\n> I was analyzing the reason for the additional message: \"aborting streamed\r\n> (sub)transaction\" in the above test and it seems to be due to the below check in\r\n> the function pg_decode_stream_abort():\r\n> \r\n> if (data->skip_empty_xacts && !xact_wrote_changes) return;\r\n> \r\n> Before the patch, we weren't setting the 'xact_wrote_changes' flag in txndata,\r\n> which is fixed now. So, this looks okay to me. However, I have another\r\n> observation in this code which is that for aborts or subtransactions, we are not\r\n> checking the flag 'stream_wrote_changes', so we may end up emitting the\r\n> abort message even when no actual change has been streamed. I haven't tried\r\n> to generate a test to verify this observation, so I could be wrong as well but it is\r\n> worth analyzing such cases.\r\n\r\nI have confirmed that the mentioned case is possible (steps [1]): the\r\nsub-transaction doesn't output any data, but the stream abort for this\r\nsub-transaction will still be sent.\r\n\r\nBut I think this may not be a problematic behavior, as even the pgoutput can\r\nbehave similarly, e.g. if all the changes are filtered by row filter or table\r\nfilter, then the stream abort will still be sent. 
The subscriber will skip\r\nhandling the STREAM ABORT if the aborted txn was not applied.\r\n\r\nAnd if we want to fix this, in the output plugin, we need to record if we have sent\r\nany changes for each sub-transaction so that we can decide whether to send the\r\nfollowing stream abort or not. We cannot use 'stream_wrote_changes' because\r\nit's a per streamed block flag and there could be several streamed blocks for one\r\nsub-txn. It looks a bit complicated to me.\r\n\r\n\r\n[1]\r\nSELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding');\r\nBEGIN;\r\nsavepoint p1;\r\nCREATE TABLE test(a int);\r\nINSERT INTO test VALUES(1);\r\nsavepoint p2;\r\nCREATE TABLE test2(a int);\r\nROLLBACK TO SAVEPOINT p2;\r\nCOMMIT;\r\n\r\nSELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '1', 'stream-changes', '1');\r\n\r\n data\r\n--------------------------------------------------\r\n opening a streamed block for transaction TXN 734\r\n streaming change for TXN 734\r\n closing a streamed block for transaction TXN 734\r\n aborting streamed (sub)transaction TXN 736\r\n committing streamed transaction TXN 734\r\n\r\nBest Regards,\r\nHou zj\r\n\r\n\r\n\r\n", "msg_date": "Mon, 30 Oct 2023 08:47:33 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Open a streamed block for transactional messages during decoding" }, { "msg_contents": "On Mon, Oct 30, 2023 at 2:17 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Monday, October 30, 2023 12:20 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Oct 26, 2023 at 2:01 PM Zhijie Hou (Fujitsu) <[email protected]>\n> > wrote:\n> > >\n> > > On Thursday, October 26, 2023 12:42 PM Amit Kapila\n> > <[email protected]> wrote:\n> > > >\n> > > > On Tue, Oct 24, 2023 at 5:27 PM Zhijie Hou (Fujitsu)\n> > > > <[email protected]>\n> > > > wrote:\n> > > > >\n> > > > > While reviewing the test_decoding code, I noticed that when\n> > > > > skip_empty_xacts option is specified, it doesn't open the\n> > > > > streaming\n> > > > block( e.g.\n> > > > > pg_output_stream_start) before streaming the transactional MESSAGE\n> > > > > even if it's the first change in a streaming block.\n> > > > >\n> > > > > It looks inconsistent with what we do when streaming DML changes(e.g.\n> > > > > pg_decode_stream_change()).\n> > > > >\n> > > > > Here is a small patch to open the stream block in this case.\n> > > > >\n> > > >\n> > > > The change looks good to me though I haven't tested it yet. 
BTW, can\n> > > > we change the comment: \"Output stream start if we haven't yet, but\n> > > > only for the transactional case.\" to \"Output stream start if we\n> > > > haven't yet for transactional messages\"?\n> > >\n> > > Thanks for the review and I changed this as suggested.\n> > >\n> >\n> > --- a/contrib/test_decoding/expected/stream.out\n> > +++ b/contrib/test_decoding/expected/stream.out\n> > @@ -29,7 +29,10 @@ COMMIT;\n> > SELECT data FROM pg_logical_slot_get_changes('regression_slot',\n> > NULL,NULL, 'include-xids', '0', 'skip-empty-xacts', '1', 'stream-changes', '1');\n> > data\n> > ----------------------------------------------------------\n> > + opening a streamed block for transaction\n> > streaming message: transactional: 1 prefix: test, sz: 50\n> > + closing a streamed block for transaction aborting streamed\n> > + (sub)transaction\n> >\n> > I was analyzing the reason for the additional message: \"aborting streamed\n> > (sub)transaction\" in the above test and it seems to be due to the below check in\n> > the function pg_decode_stream_abort():\n> >\n> > if (data->skip_empty_xacts && !xact_wrote_changes) return;\n> >\n> > Before the patch, we won't be setting the 'xact_wrote_changes' flag in txndata\n> > which is fixed now. So, this looks okay to me. However, I have another\n> > observation in this code which is that for aborts or subtransactions, we are not\n> > checking the flag 'stream_wrote_changes', so we may end up emitting the\n> > abort message even when no actual change has been streamed. I haven't tried\n> > to generate a test to verify this observation, so I could be wrong as well but it is\n> > worth analyzing such cases.\n>\n> I have confirmed that the mentioned case is possible(steps[1]): the\n> sub-transaction doesn't output any data, but the stream abort for this\n> sub-transaction will still be sent.\n>\n> But I think this may not be a problemic behavior, as even the pgoutput can\n> behave similarly, e.g. If all the changes are filtered by row filter or table\n> filter, then the stream abort will still be sent. The subscriber will skip\n> handling the STREAM ABORT if the aborted txn was not applied.\n>\n> And if we want to fix this, in output plugin, we need to record if we have sent\n> any changes for each sub-transaction so that we can decide whether to send the\n> following stream abort or not. We cannot use 'stream_wrote_changes' because\n> it's a per streamed block flag and there could be serval streamed blocks for one\n> sub-txn. It looks a bit complicate to me.\n>\n\nI agree with your analysis. So, pushed the existing patch. BTW, sorry,\nby mistake I used Peter's name as author.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 Oct 2023 17:17:36 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Open a streamed block for transactional messages during decoding" } ]
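For reference, a rough sketch of the shape of the fix discussed in this thread. This is simplified from the thread's description, so the committed code in contrib/test_decoding/test_decoding.c may differ in detail; the callback signature and the pg_output_stream_start()/xact_wrote_changes names come from the thread, while the struct type names are assumed from context:

static void
pg_decode_stream_message(LogicalDecodingContext *ctx,
                         ReorderBufferTXN *txn, XLogRecPtr lsn,
                         bool transactional, const char *prefix,
                         Size sz, const char *message)
{
    TestDecodingData *data = ctx->output_plugin_private;

    if (transactional)
    {
        TestDecodingTxnData *txndata = txn->output_plugin_private;

        /* remember that this xact produced output, for skip-empty-xacts */
        txndata->xact_wrote_changes = true;

        /* Output stream start if we haven't yet for transactional messages. */
        pg_output_stream_start(ctx, data, txn, false);
    }

    /* ... emit the "streaming message: ..." line as before ... */
}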
[ { "msg_contents": "I tried PITR recovery, and the 'recovery_target_action' guc is shutdown. I\ngot a failure, and it told me to check the log, finally I found the result\nwas due to guc. I think pg_ctl should print some information which told\nusers recovery had been done.\nI developed a commit in my workspace. The steps below:\n1. postmaster exits with code 3 if startup shutdowns because of recovery\ntarget action\n2. add enum POSTMASER_RECOVERY_SHUTDOWN in pg_ctl\n3. print information to stderr if the postmaster's exit code is 3\nI test, and it's ok.\nI think this information is very useful, especially for some beginners. A\ngood project not only needs performance, but also ease-of-use.\n\nI tried PITR recovery, and the 'recovery_target_action' guc is shutdown. I got a failure, and it told me to check the log, finally I found the result was due to guc. I think pg_ctl should print some information which told users recovery had been done.I developed a commit in my workspace. The steps below:1. postmaster exits with code 3 if startup shutdowns because of recovery target action2. add enum POSTMASER_RECOVERY_SHUTDOWN in pg_ctl3. print information to stderr if the postmaster's exit code is 3I test, and it's ok. I think this information is very useful, especially for some beginners. A good project not only needs performance, but also ease-of-use.", "msg_date": "Tue, 24 Oct 2023 20:33:38 +0800", "msg_from": "Crisp Lee <[email protected]>", "msg_from_op": true, "msg_subject": "make pg_ctl start more friendly" } ]
[ { "msg_contents": "Hi, all\n\nShall we show Parallel Hash node’s total rows of a Parallel-aware HashJoin?\n\nEx: a non-parallel plan, table simple has 20000 rows.\n\nzml=# explain select count(*) from simple r join simple s using (id);\n QUERY PLAN\n--------------------------------------------------------------------------------\n Aggregate (cost=1309.00..1309.01 rows=1 width=8)\n -> Hash Join (cost=617.00..1259.00 rows=20000 width=0)\n Hash Cond: (r.id <x-msg://2/r.id> = s.id <x-msg://2/s.id>)\n -> Seq Scan on simple r (cost=0.00..367.00 rows=20000 width=4)\n -> Hash (cost=367.00..367.00 rows=20000 width=4)\n -> Seq Scan on simple s (cost=0.00..367.00 rows=20000 width=4)\n(6 rows)\n\nWhile a parallel-aware plan:\n\nzml=# explain select count(*) from simple r join simple s using (id);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=691.85..691.86 rows=1 width=8)\n -> Gather (cost=691.63..691.84 rows=2 width=8)\n Workers Planned: 2\n -> Partial Aggregate (cost=691.63..691.64 rows=1 width=8)\n -> Parallel Hash Join (cost=354.50..670.80 rows=8333 width=0)\n Hash Cond: (r.id <x-msg://2/r.id> = s.id <x-msg://2/s.id>)\n -> Parallel Seq Scan on simple r (cost=0.00..250.33 rows=8333 width=4)\n -> Parallel Hash (cost=250.33..250.33 rows=8333 width=4)\n -> Parallel Seq Scan on simple s (cost=0.00..250.33 rows=8333 width=4)\n(9 rows)\n\nWhen initial_cost_hashjoin(), we undo the parallel division when parallel ware.\nIt’s reasonable because a shared hash table should have all the data.\nAnd we also take parallel into account for hash plan’s total rows if it’s parallel aware.\n```\n if (best_path->jpath.path.parallel_aware)\n{\n hash_plan->plan.parallel_aware = true;\n hash_plan->rows_total = best_path->inner_rows_total;\n}\n```\n\nBut the Parallel Hash node of plan shows the same rows with subplan, I’m wandering if it’s more reasonable to show rows_total instead of plan_rows for Parallel Hash nodes?\n\nFor this example,\n -> Parallel Hash (rows=20000)\n -> Parallel Seq Scan on simple s (rows=8333)\n\n\n\nZhang Mingli\nHashData https://www.hashdata.xyz\n\n\nHi, allShall we show Parallel Hash node’s total rows of a Parallel-aware HashJoin?Ex: a non-parallel plan,  table simple has 20000 rows.zml=# explain  select count(*) from simple r join simple s using (id);                                   QUERY PLAN-------------------------------------------------------------------------------- Aggregate  (cost=1309.00..1309.01 rows=1 width=8)   ->  Hash Join  (cost=617.00..1259.00 rows=20000 width=0)         Hash Cond: (r.id = s.id)         ->  Seq Scan on simple r  (cost=0.00..367.00 rows=20000 width=4)         ->  Hash  (cost=367.00..367.00 rows=20000 width=4)               ->  Seq Scan on simple s  (cost=0.00..367.00 rows=20000 width=4)(6 rows)While a parallel-aware plan:zml=# explain  select count(*) from simple r join simple s using (id);                                             QUERY PLAN---------------------------------------------------------------------------------------------------- Finalize Aggregate  (cost=691.85..691.86 rows=1 width=8)   ->  Gather  (cost=691.63..691.84 rows=2 width=8)         Workers Planned: 2         ->  Partial Aggregate  (cost=691.63..691.64 rows=1 width=8)               ->  Parallel Hash Join  (cost=354.50..670.80 rows=8333 width=0)                     Hash Cond: (r.id = s.id)                     ->  Parallel Seq Scan on simple r  (cost=0.00..250.33 rows=8333 width=4)              
       ->  Parallel Hash  (cost=250.33..250.33 rows=8333 width=4)                           ->  Parallel Seq Scan on simple s  (cost=0.00..250.33 rows=8333 width=4)(9 rows)When initial_cost_hashjoin(), we undo the parallel division when parallel ware.It’s reasonable because a shared hash table should have all the data.And we also take parallel into account for hash plan’s total rows if it’s parallel aware.``` if (best_path->jpath.path.parallel_aware){  hash_plan->plan.parallel_aware = true;  hash_plan->rows_total = best_path->inner_rows_total;}```But the Parallel Hash node of plan shows the same rows with subplan, I’m wandering if it’s more reasonable to show rows_total instead of plan_rows for Parallel Hash nodes?For this example,  -> Parallel Hash (rows=20000)    -> Parallel Seq Scan on simple s (rows=8333)\nZhang MingliHashData https://www.hashdata.xyz", "msg_date": "Tue, 24 Oct 2023 22:46:06 +0800", "msg_from": "Zhang Mingli <[email protected]>", "msg_from_op": true, "msg_subject": "=?utf-8?Q?Should_Explain_show_Parallel_Hash_node=E2=80=99s_total_?=\n =?utf-8?Q?rows=3F?=" } ]
[ { "msg_contents": "Many usages of the foreach macro in the Postgres codebase only use the\nListCell variable to then get its value. This adds macros that\nsimplify iteration code for that very common use case. Instead of\npassing a ListCell you can pass a variable of the type of its\ncontents. This IMHO improves readability of the code by reducing the\ntotal amount of code while also essentially forcing the use of useful\nvariable names.\n\nWhile this might seem like a small quality of life improvement, in\npractice it turns out to be very nice to use. At Microsoft we have\nbeen using macros very similar to these ones in the Citus codebase for\na long time now and we pretty much never use plain foreach anymore for\nnew code.\n\nFinally, I guess there needs to be some bikeshedding on the naming. In\nthe Citus codebase we call them foreach_xyz instead of the\nfor_each_xyz naming pattern that is used in this patchset. I'm not\nsure what the current stance is on if foreach should be written with\nor without an underscore between for and each. Currently pg_list.h\nuses both.\n\nP.S. Similar macros for forboth/forthree are also possible, but\nrequire an exponential macro count handle all different possibilities,\nwhich might not be worth the effort since forboth/forthree are used\nmuch less often than foreach. In Citus we do have 3 forboth macros\nthat don't require ListCell for the most common cases (foreach_ptr,\nforeach_ptr_oid, foreach_int_oid). But I did not want to clutter this\npatchset with that discussion.", "msg_date": "Tue, 24 Oct 2023 18:03:48 +0200", "msg_from": "Jelte Fennema <[email protected]>", "msg_from_op": true, "msg_subject": "Add new for_each macros for iterating over a List that do not require\n ListCell pointer" }, { "msg_contents": "On Tue, Oct 24, 2023 at 06:03:48PM +0200, Jelte Fennema wrote:\n> Many usages of the foreach macro in the Postgres codebase only use the\n> ListCell variable to then get its value. This adds macros that\n> simplify iteration code for that very common use case. Instead of\n> passing a ListCell you can pass a variable of the type of its\n> contents. This IMHO improves readability of the code by reducing the\n> total amount of code while also essentially forcing the use of useful\n> variable names.\n> \n> While this might seem like a small quality of life improvement, in\n> practice it turns out to be very nice to use. At Microsoft we have\n> been using macros very similar to these ones in the Citus codebase for\n> a long time now and we pretty much never use plain foreach anymore for\n> new code.\n\nThis seems reasonable to me.\n\n> Finally, I guess there needs to be some bikeshedding on the naming. In\n> the Citus codebase we call them foreach_xyz instead of the\n> for_each_xyz naming pattern that is used in this patchset. I'm not\n> sure what the current stance is on if foreach should be written with\n> or without an underscore between for and each. 
Currently pg_list.h\n> uses both.\n\nI don't have a strong opinion on the matter, but if I had to choose, I\nguess I'd pick foreach_*() because these macros are most closely related to\nforeach().\n\nBTW after applying your patches, initdb began failing with the following\nfor me:\n\n\tTRAP: failed Assert(\"n >= 0 && n < list->length\"), File: \"list.c\", Line: 770, PID: 902807\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 24 Oct 2023 11:47:15 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Tue, 24 Oct 2023 at 18:47, Nathan Bossart <[email protected]> wrote:\n> BTW after applying your patches, initdb began failing with the following\n> for me:\n>\n> TRAP: failed Assert(\"n >= 0 && n < list->length\"), File: \"list.c\", Line: 770, PID: 902807\n\nOh oops... That was an off by one error in the modified\nforeach_delete_current implementation.\nAttached is a fixed version.", "msg_date": "Tue, 24 Oct 2023 18:58:04 +0200", "msg_from": "Jelte Fennema <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Tue, Oct 24, 2023 at 06:58:04PM +0200, Jelte Fennema wrote:\n> On Tue, 24 Oct 2023 at 18:47, Nathan Bossart <[email protected]> wrote:\n>> BTW after applying your patches, initdb began failing with the following\n>> for me:\n>>\n>> TRAP: failed Assert(\"n >= 0 && n < list->length\"), File: \"list.c\", Line: 770, PID: 902807\n> \n> Oh oops... That was an off by one error in the modified\n> foreach_delete_current implementation.\n> Attached is a fixed version.\n\nThanks, that fixed it for me, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 24 Oct 2023 16:20:56 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Wed, 25 Oct 2023 at 06:00, Jelte Fennema <[email protected]> wrote:\n> Attached is a fixed version.\n\nWith foreach(), we commonly do \"if (lc == NULL)\" at the end of loops\nas a way of checking if we did \"break\" to terminate the loop early.\nDoing the equivalent with the new macros won't be safe as the list\nelement's value we broke on may be set to NULL. I think it might be a\ngood idea to document the fact that this wouldn't be safe with the new\nmacros, or better yet, document the correct way to determine if we\nbroke out the loop early. I imagine someone will want to do some\nconversion work at some future date and it would be good if we could\navoid introducing bugs during that process.\n\nI wonder if we should even bother setting the variable to NULL at the\nend of the loop. It feels like someone might just end up mistakenly\nchecking for NULLs even if we document that it's not safe. If we left\nthe variable pointing to the last list element then the user of the\nmacro is more likely to notice their broken code. 
It'd also save a bit\nof instruction space.\n\nDavid\n\n\n", "msg_date": "Wed, 25 Oct 2023 15:55:22 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On 2023-Oct-24, Jelte Fennema wrote:\n\n> Many usages of the foreach macro in the Postgres codebase only use the\n> ListCell variable to then get its value. This adds macros that\n> simplify iteration code for that very common use case. Instead of\n> passing a ListCell you can pass a variable of the type of its\n> contents. This IMHO improves readability of the code by reducing the\n> total amount of code while also essentially forcing the use of useful\n> variable names.\n\n+1 for getting rid of useless \"lc\" variables.\n\nLooking at for_each_ptr() I think it may be cleaner to follow\npalloc_object()'s precedent and make it foreach_object() instead (I have\nno love for the extra underscore, but I won't object to it either). And\nlike foreach_node, have it receive a type name to add a cast to.\n\nI'd imagine something like\n\n SubscriptionRelState *rstate;\n\n foreach_object(SubscriptionRelState *, rstate, table_states_not_ready)\n {\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 25 Oct 2023 10:51:55 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Wed, 25 Oct 2023 at 04:55, David Rowley <[email protected]> wrote:\n> With foreach(), we commonly do \"if (lc == NULL)\" at the end of loops\n> as a way of checking if we did \"break\" to terminate the loop early.\n\nAfaict it's done pretty infrequently. The following crude attempt at\nan estimate estimates it's only done about ~1.5% of the time a foreach\nis used:\n$ rg 'lc == NULL' | wc -l\n13\n$ rg '\\bforeach\\(lc,' -S | wc -l\n899\n\n> Doing the equivalent with the new macros won't be safe as the list\n> element's value we broke on may be set to NULL. I think it might be a\n> good idea to document the fact that this wouldn't be safe with the new\n> macros, or better yet, document the correct way to determine if we\n> broke out the loop early. I imagine someone will want to do some\n> conversion work at some future date and it would be good if we could\n> avoid introducing bugs during that process.\n>\n> I wonder if we should even bother setting the variable to NULL at the\n> end of the loop. It feels like someone might just end up mistakenly\n> checking for NULLs even if we document that it's not safe. If we left\n> the variable pointing to the last list element then the user of the\n> macro is more likely to notice their broken code. It'd also save a bit\n> of instruction space.\n\nMakes sense. Addressed this now by mentioning this limitation and\npossible workarounds in the comments of the new macros and by not\nsetting the loop variable to NULL/0. I don't think there's an easy way\nto add this feature to these new macros natively, it's a limitation of\nnot having a second variable. 
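For illustration, the kind of workaround those comments describe is to track an early exit with an explicit flag instead of testing the loop variable — matches() and handle_match() are just stand-ins here, and this uses the two-argument macro shape from the current patchset:\n\nbool found = false;\nchar *item;\n\nforeach_ptr(item, my_list)\n{\n    if (matches(item))\n    {\n        found = true;\n        break;\n    }\n}\n\nif (found)\n    handle_match(item);   /* item still points at the matching element */\n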
This seems fine to me, since these new\nmacros are meant as an addition to foreach() instead of a complete\nreplacement.", "msg_date": "Wed, 25 Oct 2023 12:05:41 +0200", "msg_from": "Jelte Fennema <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "Attached is a slightly updated version, with a bit simpler\nimplementation of foreach_delete_current.\nInstead of decrementing i and then adding 1 to it when indexing the\nlist, it now indexes the list using a postfix decrement.", "msg_date": "Wed, 25 Oct 2023 12:39:01 +0200", "msg_from": "Jelte Fennema <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Wed, 25 Oct 2023 at 13:52, Alvaro Herrera <[email protected]> wrote:\n> Looking at for_each_ptr() I think it may be cleaner to follow\n> palloc_object()'s precedent and make it foreach_object() instead (I have\n> no love for the extra underscore, but I won't object to it either). And\n> like foreach_node, have it receive a type name to add a cast to.\n>\n> I'd imagine something like\n>\n> SubscriptionRelState *rstate;\n>\n> foreach_object(SubscriptionRelState *, rstate, table_states_not_ready)\n> {\n\nCould you clarify why you think it may be cleaner? I don't see much\nbenefit to passing the type in there if all we use it for is adding a\ncast. It seems like extra things to type for little benefit.\npalloc_object uses the passed in type to not only do the cast, but\nalso to determine the size of the allocation.\n\nIf foreach_object would allow us to remove the declaration further up\nin the function I do see a benefit though.\n\nI attached a new patchset which includes a 3rd patch that does this\n(the other patches are equivalent to v4). I quite like that it moves\nthe type declaration to the loop itself, limiting its scope. But I'm\nnot fully convinced it's worth the hackiness of introducing a second\nfor loop that does a single iteration, just to be able to declare a\nvariable of a different type though. But I don't know another way of\nachieving this. If this hack/trick is deemed acceptable, we can do the\nsame for the other newly introduced macros. 
The type would not even\nneed to be specified for oid/xid/int because it's already known to be\nOid/TransactionId/int respectively.", "msg_date": "Wed, 25 Oct 2023 14:35:45 +0200", "msg_from": "Jelte Fennema <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Wed, Oct 25, 2023 at 12:39:01PM +0200, Jelte Fennema wrote:\n> Attached is a slightly updated version, with a bit simpler\n> implementation of foreach_delete_current.\n> Instead of decrementing i and then adding 1 to it when indexing the\n> list, it now indexes the list using a postfix decrement.\n\nBoth the macros and the comments in 0001 seem quite repetitive to me.\nCould we simplify it with something like the following?\n\n #define foreach_internal(var, lst, func) \\\n for (ForEachState var##__state = {(lst), 0}; \\\n (var##__state.l != NIL && \\\n var##__state.i < var##__state.l->length && \\\n (var = func(&var##__state.l->elements[var##__state.i]), true)); \\\n var##__state.i++)\n\n #define foreach_ptr(var, lst) foreach_internal(var, lst, lfirst)\n #define foreach_int(var, lst) foreach_internal(var, lst, lfirst_int)\n #define foreach_oid(var, lst) foreach_internal(var, lst, lfirst_oid)\n #define foreach_xid(var, lst) foreach_internal(var, lst, lfirst_xid)\n\n #define foreach_node(type, var, lst) \\\n for (ForEachState var##__state = {(lst), 0}; \\\n (var##__state.l != NIL && \\\n var##__state.i < var##__state.l->length && \\\n (var = lfirst_node(type, &var##__state.l->elements[var##__state.i]), true));\\\n var##__state.i++)\n\nThere might be a way to use foreach_internal for foreach_node, too, but\nthis is probably already too magical...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 30 Nov 2023 22:20:20 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Fri, 1 Dec 2023 at 05:20, Nathan Bossart <[email protected]> wrote:\n> Could we simplify it with something like the following?\n\nGreat suggestion! Updated the patchset accordingly.\n\nThis made it also easy to change the final patch to include the\nautomatic scoped declaration logic for all of the new macros. I quite\nlike how the calling code changes to not have to declare the variable.\nBut it's definitely a larger divergence from the status quo than\nwithout patch 0003. 
So I'm not sure if it's desired.\n\nFinally, I also renamed the functions to use foreach instead of\nfor_each, since based on this thread that seems to be the generally\npreferred naming.", "msg_date": "Thu, 14 Dec 2023 16:54:57 +0100", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "The more I think about it and look at the code, the more I like the\nusage of the loop style proposed in the previous 0003 patch (which\nautomatically declares a loop variable for the scope of the loop using\na second for loop).\n\nI did some testing on godbolt.org and both versions of the macros\nresult in the same assembly when compiling with -O2 (and even -O1)\nwhen compiling with ancient versions of gcc (5.1) and clang (3.0):\nhttps://godbolt.org/z/WqfTbhe4e\n\nSo attached is now an updated patchset that only includes these even\neasier to use foreach macros. I also updated some of the comments and\nmoved modifying foreach_delete_current and foreach_current_index to\ntheir own commit.\n\nOn Thu, 14 Dec 2023 at 16:54, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Fri, 1 Dec 2023 at 05:20, Nathan Bossart <[email protected]> wrote:\n> > Could we simplify it with something like the following?\n>\n> Great suggestion! Updated the patchset accordingly.\n>\n> This made it also easy to change the final patch to include the\n> automatic scoped declaration logic for all of the new macros. I quite\n> like how the calling code changes to not have to declare the variable.\n> But it's definitely a larger divergence from the status quo than\n> without patch 0003. So I'm not sure if it's desired.\n>\n> Finally, I also renamed the functions to use foreach instead of\n> for_each, since based on this thread that seems to be the generally\n> preferred naming.", "msg_date": "Mon, 18 Dec 2023 14:30:12 +0100", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Mon, 18 Dec 2023 at 19:00, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> The more I think about it and look at the code, the more I like the\n> usage of the loop style proposed in the previous 0003 patch (which\n> automatically declares a loop variable for the scope of the loop using\n> a second for loop).\n>\n> I did some testing on godbolt.org and both versions of the macros\n> result in the same assembly when compiling with -O2 (and even -O1)\n> when compiling with ancient versions of gcc (5.1) and clang (3.0):\n> https://godbolt.org/z/WqfTbhe4e\n>\n> So attached is now an updated patchset that only includes these even\n> easier to use foreach macros. I also updated some of the comments and\n> moved modifying foreach_delete_current and foreach_current_index to\n> their own commit.\n>\n> On Thu, 14 Dec 2023 at 16:54, Jelte Fennema-Nio <[email protected]> wrote:\n> >\n> > On Fri, 1 Dec 2023 at 05:20, Nathan Bossart <[email protected]> wrote:\n> > > Could we simplify it with something like the following?\n> >\n> > Great suggestion! Updated the patchset accordingly.\n> >\n> > This made it also easy to change the final patch to include the\n> > automatic scoped declaration logic for all of the new macros. 
I quite\n> like how the calling code changes to not have to declare the variable.\n> But it's definitely a larger divergence from the status quo than\n> without patch 0003. So I'm not sure if it's desired.\n> >\n> > Finally, I also renamed the functions to use foreach instead of\n> > for_each, since based on this thread that seems to be the generally\n> > preferred naming.\n\nThanks for working on this, this simplifies foreach further.\nI noticed that this change can be done in several other places too. I\nhave seen that the following parts of code from the logical replication\nfiles can be changed:\n1) The below in pa_detach_all_error_mq function can be changed to foreach_ptr\nforeach(lc, ParallelApplyWorkerPool)\n{\nshm_mq_result res;\nSize nbytes;\nvoid *data;\nParallelApplyWorkerInfo *winfo = (ParallelApplyWorkerInfo *) lfirst(lc);\n\n2) The below in logicalrep_worker_detach function can be changed to foreach_ptr\nforeach(lc, workers)\n{\nLogicalRepWorker *w = (LogicalRepWorker *) lfirst(lc);\n\nif (isParallelApplyWorker(w))\nlogicalrep_worker_stop_internal(w, SIGTERM);\n}\n\n3) The below in ApplyLauncherMain function can be changed to foreach_ptr\n/* Start any missing workers for enabled subscriptions. */\nsublist = get_subscription_list();\nforeach(lc, sublist)\n{\nSubscription *sub = (Subscription *) lfirst(lc);\nLogicalRepWorker *w;\nTimestampTz last_start;\nTimestampTz now;\nlong elapsed;\n\nif (!sub->enabled)\ncontinue;\n\n4) The below in pa_launch_parallel_worker function can be changed to\nforeach_ptr\nListCell *lc;\n\n/* Try to get an available parallel apply worker from the worker pool. */\nforeach(lc, ParallelApplyWorkerPool)\n{\nwinfo = (ParallelApplyWorkerInfo *) lfirst(lc);\n\nif (!winfo->in_use)\nreturn winfo;\n}\n\nShould we start doing these changes too now?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 19 Dec 2023 16:29:12 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Tue, 19 Dec 2023 at 11:59, vignesh C <[email protected]> wrote:\n> I noticed that this change can be done in several other places too.\n\nMy guess would be that ~90% of all existing foreach loops in the\ncodebase can be easily rewritten (and simplified) using these new\nmacros. So converting all of those would likely be quite a bit of\nwork. In patch 0003 I only converted a few of them to get some\ncoverage of the new macros and show how much simpler the usage of them\nis.\n\n> Should we start doing these changes too now?\n\nI think we should at least wait until this patchset is merged before\nwe start changing other places. If there's some feedback on the macros\nand we decide to change how they get called, then it would be a waste\nof time to have to change all the call sites.\n\nAnd even once these patches are merged to master, I think we should\nonly do any bulk changes if/when we backport these macros to all\nsupported PG versions. Backporting to PG12 is probably the hardest,\nsince List's internal layout was heavily changed in PG13. Probably\nnot too hard though, in Citus we've had similar macros work since\nPG11. 
I'm also not sure what the policy is\nfor backporting patches\nthat introduce new functions/macros in public headers.\n\nWe probably even want to consider some automatic rewriting script (for\nthe obvious cases) and/or timing the merge, to avoid having to do many\nrebases of the patch.\n\n\n", "msg_date": "Tue, 19 Dec 2023 15:44:43 +0100", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Tue, Dec 19, 2023 at 03:44:43PM +0100, Jelte Fennema-Nio wrote:\n> On Tue, 19 Dec 2023 at 11:59, vignesh C <[email protected]> wrote:\n> >> I noticed that this change can be done in several other places too.\n> \n> My guess would be that ~90% of all existing foreach loops in the\n> codebase can be easily rewritten (and simplified) using these new\n> macros. So converting all of those would likely be quite a bit of\n> work. In patch 0003 I only converted a few of them to get some\n> coverage of the new macros and show how much simpler the usage of them\n> is.\n\nI'm not sure we should proceed with rewriting most/all eligible foreach\nloops. I think it's fine to use the new macros in new code or to update\nexisting loops in passing when changing nearby code, but rewriting\neverything likely just introduces back-patching pain in return for little\ndiscernible gain.\n\n> And even once these patches are merged to master, I think we should\n> only do any bulk changes if/when we backport these macros to all\n> supported PG versions. Backporting to PG12 is probably the hardest,\n> since List's internal layout was heavily changed in PG13. Probably\n> not too hard though, in Citus we've had similar macros work since\n> PG11. I'm also not sure what the policy is for backporting patches\n> that introduce new functions/macros in public headers.\n\nUnless there's some way to argue this is a bug, security issue, or data\ncorruption problem [0], I seriously doubt we will back-patch this.\n\n[0] https://www.postgresql.org/support/versioning/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 19 Dec 2023 09:52:14 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Tue, 19 Dec 2023 at 16:52, Nathan Bossart <[email protected]> wrote:\n> I'm not sure we should proceed with rewriting most/all eligible foreach\n> loops. I think it's fine to use the new macros in new code or to update\n> existing loops in passing when changing nearby code, but rewriting\n> everything likely just introduces back-patching pain in return for little\n> discernible gain.\n\nTo clarify: I totally agree that if we're not backpatching this we\nshouldn't do bulk changes on existing loops to avoid pain when\nbackpatching other patches.\n\n> Unless there's some way to argue this is a bug, security issue, or data\n> corruption problem [0], I seriously doubt we will back-patch this.\n\nIn the past some tooling changes have been backpatched, e.g.\nisolationtester has received various updates over the years (I know\nbecause this broke Citus's isolationtester tests a few times because\nthe output files changed slightly). In some sense this patch could be\nconsidered tooling too. 
Again: not saying we should back-patch this,\nbut we could only realistically bulk update loops if we do.\n\n\n", "msg_date": "Tue, 19 Dec 2023 17:47:42 +0100", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Tue, 19 Dec 2023 at 21:22, Nathan Bossart <[email protected]> wrote:\n>\n> On Tue, Dec 19, 2023 at 03:44:43PM +0100, Jelte Fennema-Nio wrote:\n> > On Tue, 19 Dec 2023 at 11:59, vignesh C <[email protected]> wrote:\n> >> I noticed that this change can be done in several other places too.\n> >\n> > My guess would be that ~90% of all existing foreach loops in the\n> > codebase can be easily rewritten (and simplified) using these new\n> > macros. So converting all of those would likely be quite a bit of\n> > work. In patch 0003 I only converted a few of them to get some\n> > coverage of the new macros and show how much simpler the usage of them\n> > is.\n>\n> I'm not sure we should proceed with rewriting most/all eligible foreach\n> loops. I think it's fine to use the new macros in new code or to update\n> existing loops in passing when changing nearby code, but rewriting\n> everything likely just introduces back-patching pain in return for little\n> discernible gain.\n\n+1 for this. Let's just provide the for_each macros to be used for new code.\nThis means that the\n0003-Use-new-foreach_xyz-macros-in-a-few-places.patch will not be\npresent in the final patch right?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 20 Dec 2023 12:21:05 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Wed, Dec 20, 2023 at 12:21:05PM +0530, vignesh C wrote:\n> On Tue, 19 Dec 2023 at 21:22, Nathan Bossart <[email protected]> wrote:\n>> I'm not sure we should proceed with rewriting most/all eligible foreach\n>> loops. I think it's fine to use the new macros in new code or to update\n>> existing loops in passing when changing nearby code, but rewriting\n>> everything likely just introduces back-patching pain in return for little\n>> discernible gain.\n> \n> +1 for this. Let's just provide the for_each macros to be used for new code.\n> This means that the\n> 0003-Use-new-foreach_xyz-macros-in-a-few-places.patch will not be\n> present in the final patch right?\n\nIt might be worth changing at least one of each type to make sure the\nmacros compile, but yes, I don't think we need to proceed with any sort of\nbulk changes of existing loops for now.\n\nBTW I think v7-0001 and v7-0002 are in pretty good shape. I'm going to\nmark this as ready-for-committer and see if I can get those two committed\nsooner than later.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 20 Dec 2023 21:25:19 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "I spent some time preparing this for commit, which only amounted to some\nlight edits. 
I am posting a new version of the patch in order to get one\nmore round of cfbot coverage and to make sure there is no remaining\nfeedback.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 3 Jan 2024 13:55:19 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Wed, 3 Jan 2024 at 20:55, Nathan Bossart <[email protected]> wrote:\n>\n> I spent some time preparing this for commit, which only amounted to some\n> light edits. I am posting a new version of the patch in order to get one\n> more round of cfbot coverage and to make sure there is no remaining\n> feedback.\n\nOverall your light edits look good to me. The commit message is very\ndescriptive and I like the shortening of the comments. The only thing\nI feel is that this sentence lost some of my original intent:\n\n+ * different types. The outer loop only does a single iteration, so we expect\n+ * optimizing compilers will unroll it, thereby optimizing it away.\n\nThe \"we expect\" reads to me as if we're not very sure that compilers\ndo this optimization. Even though we are quite sure. Maybe some small\nchanges like this to clarify that.\n\nThe outer loop only does a single iteration, so we expect that **any**\noptimizing compilers will unroll it, thereby optimizing it away. **We\nknow for sure that gcc and clang do this optimization.**\n\n\n", "msg_date": "Wed, 3 Jan 2024 22:57:07 +0100", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Wed, Jan 03, 2024 at 10:57:07PM +0100, Jelte Fennema-Nio wrote:\n> Overall your light edits look good to me. The commit message is very\n> descriptive and I like the shortening of the comments. The only thing\n> I feel is that this sentence lost some of my original intent:\n> \n> + * different types. The outer loop only does a single iteration, so we expect\n> + * optimizing compilers will unroll it, thereby optimizing it away.\n> \n> The \"we expect\" reads to me as if we're not very sure that compilers\n> do this optimization. Even though we are quite sure. Maybe some small\n> changes like this to clarify that.\n> \n> The outer loop only does a single iteration, so we expect that **any**\n> optimizing compilers will unroll it, thereby optimizing it away. **We\n> know for sure that gcc and clang do this optimization.**\n\nWFM. Thanks for reviewing the edits.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 3 Jan 2024 16:13:11 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> The \"we expect\" reads to me as if we're not very sure that compilers\n> do this optimization. Even though we are quite sure. Maybe some small\n> changes like this to clarify that.\n\n> The outer loop only does a single iteration, so we expect that **any**\n> optimizing compilers will unroll it, thereby optimizing it away. **We\n> know for sure that gcc and clang do this optimization.**\n\nI like Nathan's wording. 
Your assertion is contradicted by cases as\nobvious as -O0, and I'm sure a lot of other holes could be poked in it\nas well (e.g, just how far back might gcc choose to do that unrolling?\nDoes the size of the loop body matter?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Jan 2024 17:13:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "On Wed, 3 Jan 2024 at 23:13, Tom Lane <[email protected]> wrote:\n> I like Nathan's wording.\n\nTo be clear, I don't want to block this patch on the wording of that\nsingle comment. So, if you feel Nathan's wording was better, I'm fine\nwith that too. But let me respond to your arguments anyway:\n\n> Your assertion is contradicted by cases as\n> obvious as -O0\n\nMy suggestion specifically mentions optimizing compilers, -O0 is by\ndefinition not an optimizing compiler.\n\n> just how far back might gcc choose to do that unrolling?\n\ngcc 5.1 and clang 3.0 (possibly earlier, but this is the oldest I was\nable to test the code with on godbolt). As seen upthread:\n\n> I did some testing on godbolt.org and both versions of the macros\n> result in the same assembly when compiling with -O2 (and even -O1)\n> when compiling with ancient versions of gcc (5.1) and clang (3.0):\n> https://godbolt.org/z/WqfTbhe4e\n\n> Does the size of the loop body matter?)\n\nI copy pasted a simple printf ~800 times and the answer seems to be\nno, it doesn't matter: https://godbolt.org/z/EahYPa8KM\n\n\n", "msg_date": "Wed, 3 Jan 2024 23:36:36 +0100", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" }, { "msg_contents": "Committed after some additional light edits. Thanks for the patch!\n\nOn Wed, Jan 03, 2024 at 11:36:36PM +0100, Jelte Fennema-Nio wrote:\n> On Wed, 3 Jan 2024 at 23:13, Tom Lane <[email protected]> wrote:\n>> I like Nathan's wording.\n> \n> To be clear, I don't want to block this patch on the wording of that\n> single comment. So, if you feel Nathan's wording was better, I'm fine\n> with that too. But let me respond to your arguments anyway:\n\nI decided to keep the v8 wording, if for no other reason than I didn't see\nthe need for lots of detail about how it compiles. IMHO even the vague\nmention of loop unrolling is probably more than is really necessary.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 4 Jan 2024 16:17:21 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new for_each macros for iterating over a List that do not\n require ListCell pointer" } ]
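For readers landing here later: with the committed version of these macros, the examples from this thread end up looking roughly like the sketch below. The variable is declared by the macro itself and is scoped to the loop; see pg_list.h in current sources for the authoritative definitions. process_string(), total, string_list and int_list are stand-ins here:

/* declares "char *str" itself, scoped to the loop */
foreach_ptr(char, str, string_list)
    process_string(str);

/* declares "int ival" */
foreach_int(ival, int_list)
    total += ival;

/* declares "SubscriptionRelState *rstate", with the node-type check of
 * lfirst_node() */
foreach_node(SubscriptionRelState, rstate, table_states_not_ready)
{
    /* rstate is a SubscriptionRelState *, no ListCell needed */
}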
[ { "msg_contents": "Hi hackers!\n\nWe need community feedback on previously discussed topic [1].\nThere are some long-live issues in Postgres related to the TOAST mechanics,\nlike [2].\nSome time ago we already proposed a set of patches with an API allowing to\nplug in\ndifferent TOAST implementations into a live database. The patch set\nintroduced a lot\nof code and was quite crude in some places, so after several\nimplementations we decided\nto try to implement it in the production environment for further check-up.\n\nThe main idea behind pluggable TOAST is make it possible to easily plug in\nand use different\nimplementations of large values storage, preserving existing mechanics to\nkeep backward\ncompatibilitну provide easy Postgres-way give users alternative mechanics\nfor storing large\ncolumn values in a more effective way - we already have custom and very\neffective (up to tens\nand even hundreds of times faster) TOAST implementations for bytea and\nJSONb data types.\n\nAs we see it - Pluggable TOAST proposes\n1) changes in TOAST pointer itself, extending it to store custom data -\ncurrent limitations\nof TOAST pointer were discussed in [1] and [4];\n2) API which allows calls of custom TOAST implementations for certain table\ncolumns and\n(a topic for discussion) certain datatypes.\n\nCustom TOAST could be also used in a not so trivial way - for example,\nlimited columnar storage could be easily implemented and plugged in without\nheavy core modifications\nof implementation of Pluggable Storage (Table Access Methods), preserving\nexisting data\nand database structure, be upgraded, replicated and so on.\n\nAny thoughts and proposals are welcome.\n\n[1] Pluggable TOAST\nhttps://www.postgresql.org/message-id/flat/224711f9-83b7-a307-b17f-4457ab73aa0a%40sigaev.ru\n\n[2] Infinite loop while acquiring new TOAST Oid\nhttps://www.postgresql.org/message-id/flat/CAN-LCVPRvRzxeUdYdDCZ7UwZQs1NmZpqBUCd%3D%2BRdMPFTyt-bRQ%40mail.gmail.com\n\n[3] JSONB Compression dictionaries\nhttps://www.postgresql.org/message-id/flat/CAJ7c6TOtAB0z1UrksvGTStNE-herK-43bj22%3D5xVBg7S4vr5rQ%40mail.gmail.com\n\n[4] Extending the TOAST pointer\nhttps://www.postgresql.org/message-id/flat/CAN-LCVMq2X%3Dfhx7KLxfeDyb3P%2BBXuCkHC0g%3D9GF%2BJD4izfVa0Q%40mail.gmail.com\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi hackers!We need community feedback on previously discussed topic [1].There are some long-live issues in Postgres related to the TOAST mechanics, like [2].Some time ago we already proposed a set of patches with an API allowing to plug indifferent TOAST implementations into a live database. 
The patch set introduced a lotof code and was quite crude in some places, so after several implementations we decidedto try to implement it in the production environment for further check-up.The main idea behind pluggable TOAST is make it possible to easily plug in and use differentimplementations of large values storage, preserving existing mechanics to keep backward compatibilitну provide easy Postgres-way  give users alternative mechanics for storing largecolumn values in a more effective way - we already have custom and very effective (up to tensand even hundreds of times faster) TOAST implementations for bytea and JSONb data types.As we see it - Pluggable TOAST proposes 1) changes in TOAST pointer itself, extending it to store custom data - current limitationsof TOAST pointer were discussed in [1] and [4];2) API which allows calls of custom TOAST implementations for certain table columns and(a topic for discussion) certain datatypes.Custom TOAST could be also used in a not so trivial way - for example, limited columnar storage could be easily implemented and plugged in without heavy core modificationsof implementation of Pluggable Storage (Table Access Methods), preserving existing dataand database structure, be upgraded, replicated and so on.Any thoughts and proposals are welcome.[1] Pluggable TOAST https://www.postgresql.org/message-id/flat/224711f9-83b7-a307-b17f-4457ab73aa0a%40sigaev.ru[2] Infinite loop while acquiring new TOAST Oid https://www.postgresql.org/message-id/flat/CAN-LCVPRvRzxeUdYdDCZ7UwZQs1NmZpqBUCd%3D%2BRdMPFTyt-bRQ%40mail.gmail.com[3] JSONB Compression dictionaries https://www.postgresql.org/message-id/flat/CAJ7c6TOtAB0z1UrksvGTStNE-herK-43bj22%3D5xVBg7S4vr5rQ%40mail.gmail.com[4] Extending the TOAST pointer https://www.postgresql.org/message-id/flat/CAN-LCVMq2X%3Dfhx7KLxfeDyb3P%2BBXuCkHC0g%3D9GF%2BJD4izfVa0Q%40mail.gmail.com-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/", "msg_date": "Tue, 24 Oct 2023 23:37:32 +0300", "msg_from": "Nikita Malakhov <[email protected]>", "msg_from_op": true, "msg_subject": "RFC: Pluggable TOAST" }, { "msg_contents": "Hi Nikita,\n\n> We need community feedback on previously discussed topic [1].\n> There are some long-live issues in Postgres related to the TOAST mechanics, like [2].\n> Some time ago we already proposed a set of patches with an API allowing to plug in\n> different TOAST implementations into a live database. 
The patch set\n> introduced a lot of code and was quite crude in some places, so after several\n> implementations we decided to try to implement it in the production\n> environment for further check-up.\n>\n> The main idea behind pluggable TOAST is to make it possible to easily plug in\n> and use different implementations of large values storage, preserving\n> existing mechanics to keep backward compatibility, and to provide an easy,\n> Postgres-way means of giving users alternative mechanics for storing large\n> column values in a more effective way - we already have custom and very\n> effective (up to tens and even hundreds of times faster) TOAST\n> implementations for bytea and JSONb data types.\n>\n> As we see it - Pluggable TOAST proposes\n> 1) changes in the TOAST pointer itself, extending it to store custom data -\n> the current limitations of the TOAST pointer were discussed in [1] and [4];\n> 2) an API which allows calls of custom TOAST implementations for certain\n> table columns and (a topic for discussion) certain datatypes.\n>\n> Custom TOAST could also be used in a not-so-trivial way - for example,\n> limited columnar storage could be easily implemented and plugged in without\n> the heavy core modifications required to implement Pluggable Storage (Table\n> Access Methods), preserving existing data and database structure, and could\n> be upgraded, replicated and so on.\n>\n> Any thoughts and proposals are welcome.\n\nIt seems to me that discarding the previous discussion and starting a\nnew thread where you ask the community for *another* round of feedback\nis not going to be productive. Pretty sure it's not going to change.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 25 Oct 2023 15:43:54 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Pluggable TOAST" }, { "msg_contents": "Hi,\n\nAleksander, the previous discussion was not really a discussion: we proposed\na set of big and complex core changes without any discussion preceding it.\nThat was not a very good approach, although the overall idea behind the patch\nset is very progressive and is ready to solve some old and painful issues\nin Postgres.\n\nAlso, the introduction of SQL/JSON will further boost usage of JSON in\ndatabases, so our improvements in JSON storage and performance would be\nvery useful. These improvements depend on Pluggable TOAST: without an API\nthat allows easily plugging in different TOAST implementations, they require\nheavy core modifications and are very unlikely to be accepted. To make the\ndiscussion more concrete, a purely illustrative sketch of what such an API\ncould look like is below (the names are hypothetical, not taken from the\npatch set). 
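\n/* Purely illustrative sketch - hypothetical names, not the actual patch set. */\ntypedef struct ToastRoutine\n{\n    /* toast a new value, returning a TOAST pointer datum */\n    Datum   (*toast) (Relation toastrel, Datum newval, int options);\n\n    /* fetch a whole toasted value, or just a slice of it */\n    Datum   (*detoast) (Datum toastptr, int32 sliceoffset, int32 slicelength);\n\n    /* remove the external value when the owning row goes away */\n    void    (*deltoast) (Relation toastrel, Datum toastptr);\n} ToastRoutine;\n\n/*\n * Similar in spirit to table access methods: a handler function would\n * return such a struct, and a column could be attached to an implementation\n * with something like (hypothetical syntax):\n *    ALTER TABLE t ALTER COLUMN payload SET TOASTER my_toaster;\n */\n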
On top of that, such changes\nrequire upgrades, restarts and so on.\n\nPluggable TOAST allows using advanced storage techniques on top of the\ndefault Postgres database engine, instead of implementing the complex\nPluggable Storage API, and allows plugging these advanced techniques in on\nthe fly - without even restarting the server, which is crucial for\nproduction systems.\n\nThe discussion on extending the TOAST pointer showed some interest in this\ntopic, so I hope this feature would draw some attention in the scope of\nwidely used large JSON objects.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/", "msg_date": "Thu, 26 Oct 2023 14:29:51 +0300", "msg_from": "Nikita Malakhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Pluggable TOAST" }, { "msg_contents": "Hi,\n\n> Aleksander, the previous discussion was not really a discussion: we proposed\n> a set of big and complex core changes without any discussion preceding it.\n> That was not a very good approach, although the overall idea behind the patch\n> set is very progressive and is ready to solve some old and painful issues in Postgres.\n\nNot true.\n\nThere *was* a discussion and you are aware of all the problems that\nwere pointed out. Most importantly [1][2]. 
Also you followed the\nthread [3] and are well aware that we want to implement TOAST\nimprovements in PostgreSQL core.\n\nDespite all this you are still insisting on the extendable design, as\nif starting a new thread every year or so will change something.\n\n[1]: https://www.postgresql.org/message-id/20230205223313.4dwhlddzg6uhaztg%40alap3.anarazel.de\n[2]: https://www.postgresql.org/message-id/CAJ7c6TOsHtGkup8AVnLTGGt-%2B7EzE2j-cFGr12U37pzGEsU6Fg%40mail.gmail.com\n[3]: https://www.postgresql.org/message-id/flat/CAJ7c6TOtAB0z1UrksvGTStNE-herK-43bj22%3D5xVBg7S4vr5rQ%40mail.gmail.com\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 26 Oct 2023 15:04:52 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Pluggable TOAST" }, { "msg_contents": "Hi,\n\nI meant discussion preceding the patch set - there was none.\n\nAnd the goal of *THIS* topic is to gather a picture of how the community\nsees improvements in TOAST mechanics if it doesn't want them the way we proposed\nbefore, to understand which way to go with JSON advanced storage and other\nenhancements we already have. The previous topic was not of any help here.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/", "msg_date": "Thu, 26 Oct 2023 15:54:27 +0300", "msg_from": "Nikita Malakhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Pluggable TOAST" }, { "msg_contents": "Hi,\n\n> And the goal of *THIS* topic is to gather a picture of how the community sees\n> improvements in TOAST mechanics if it doesn't want them the way we proposed\n> before, to understand which way to go with JSON advanced storage and other\n> enhancements we already have. The previous topic was not of any help here.\n\nPublish your code under an appropriate license first so that 1. anyone\ncan test/benchmark it and 2. it can be merged into PostgreSQL core if\nnecessary.\n\nOr better, consider participating in the [1] discussion where we\nreached a consensus on an RFC and are working on improving TOAST for JSON\nand other types. We try to be mindful of use cases you named before,\nlike 64-bit TOAST pointers, but we still could use your input.\n\nYou know all this.\n\n[1]: https://www.postgresql.org/message-id/flat/CAJ7c6TOtAB0z1UrksvGTStNE-herK-43bj22%3D5xVBg7S4vr5rQ%40mail.gmail.com\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 26 Oct 2023 16:18:04 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Pluggable TOAST" }, { "msg_contents": "On Tue, 24 Oct 2023 at 22:38, Nikita Malakhov <[email protected]> wrote:\n>\n> Hi hackers!\n>\n> We need community feedback on a previously discussed topic [1].\n> There are some long-lived issues in Postgres related to the TOAST mechanics, like [2].\n> Some time ago we already proposed a set of patches with an API allowing to plug in\n> different TOAST implementations into a live database. 
The patch set introduced a lot\n> of code and was quite crude in some places, so after several implementations we decided\n> to try to implement it in the production environment for further check-up.\n>\n> The main idea behind pluggable TOAST is to make it possible to easily plug in and use different\n> implementations of large values storage, preserving existing mechanics to keep backward\n> compatibility, and to provide an easy Postgres-way to give users alternative mechanics for storing large\n> column values in a more effective way - we already have custom and very effective (up to tens\n> and even hundreds of times faster) TOAST implementations for bytea and JSONb data types.\n>\n> As we see it - Pluggable TOAST proposes\n> 1) changes in TOAST pointer itself, extending it to store custom data - current limitations\n> of TOAST pointer were discussed in [1] and [4];\n> 2) API which allows calls of custom TOAST implementations for certain table columns and\n> (a topic for discussion) certain datatypes.\n>\n> Custom TOAST could also be used in a not so trivial way - for example, limited columnar storage could be easily implemented and plugged in without heavy core modifications\n> or an implementation of Pluggable Storage (Table Access Methods), preserving existing data\n> and database structure, and can be upgraded, replicated and so on.\n>\n> Any thoughts and proposals are welcome.\n\nTLDR of my thoughts below:\n1. I don't see much value in the \"Pluggable TOAST\" as proposed in [0],\nwhere toasters are decoupled from the type yet at the same time strongly\nbound to the type with tagged vtables.\n2. I do think we should allow *types* to provide their own toast\nslicing implementation (not just \"one blob, compressed then sliced\"),\nso that structured types don't have to read MBs of data to access only\na few of the structure's bytes. As this would be a different way of\nstoring the data, that would likely use a different tag for the\nvaratt_1b_e struct to differentiate the two stored formats.\n3. I do think that attributes shouldn't be required to be stored\neither on disk or in a single palloc-ed area of memory. It is very\nexpensive to copy such large chunks of memory; jsonb is one such\nexample. If the type is composite, allow it to be allocated in\nmultiple regions. This would require a new varatt_1b_e tag to discern\nthat the Datum isn't necessarily located in a single memory context,\nbut with good memory context management that should be fine.\n4. I do think that TOAST needs improvements to allow differential\nupdates, not just full rewrites of the value. 
I believe this would\nlikely be enabled through solutions for (2) and (3), even if it might\nalready be possible without implementing new vartag options.\n\nMy thoughts:\n\nIn my view, the main job of TOAST is:\n- To make sure a row with large attributes can still fit on a page by\nreducing the size of the representation of attributes in the row\n- To allow us to efficiently handle variable-length attribute values\n- To reduce the overhead of moving large values through query execution\n\nThis is currently implemented through tagged values that contain\nexactly one canonical representation of the type (be it inline, inline\ncompressed, or out of line with or without compression).\n\nOur current implementation assumes that users of the attribute will\nalways either use the decompressed canonical representation, or not\ncare about the representation at all (except decompression of only\nprefixes, which is a special case), but this is clearly not the case:\nComposite values like ROW types clearly benefit from careful\npartitioning and subdivision of values into self-contained compressed\nchunks: We don't TOAST a table's rows, but do TOAST per attribute.\nJSONB could also benefit if it could create its own on-disk format of\na value: benchmarks of the \"Pluggable Toaster\" patch have shown that\nJSONB operation performance improved significantly with custom toaster\ninfrastructure.\n\nSo, if composite types (like JSONB, ROW and ARRAY) would be able to\nmanually slice their values and create their own representation of\nthat toasted value, then that would probably benefit the system by\nallowing some data to be stored in a more accessible manner than\n\"everything inline, compressed, or out-of-line, detoast (a prefix of)\nall data, or none of it, no partial detoasting\".\n\n\nNow, returning to the table-level TOAST task of making sure the\ntuple's data fits on the page, compressing & out-of-line-ing the data\nuntil it fits:\n\nThings that it currently does: varlena values are compressed and\nout-of-lined with generic compression algorithms and a naive\nslice-and-dice algorithm, and reconstructed (fully, or just a prefix)\nwhen needed.\n\nThings that it could potentially do in the future: Interface with\ntypes to allow the type to slice&dice the tuple; use type-aware\ncompression (or encoding) algorithms to allow partial detoasting and\npartial updates of a single value.\n\nThis would presumably be implemented using a set of new varattrib_1b_e\npointer subtypes whose contents are mostly managed by the type;\nallowing for partial detoasting of the original datum, and allowing\nfor more efficient access to not just the prefix, but intermediate\nspans as well, if compression is applied per slice rather than across\nthe whole value.\n\nSo, the question would be: how do we expose such an API?\n\nI suspect that each type will have only one meaningful specialized\nmethod to toast its values. I don't see much value for registering\ncustom TOASTers when they only work with the types that have code\nto explicitly support that toaster. 
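\n\n(Schematically, the kind of coupling I mean looks like this - every name\nbelow is invented, but this is the shape of the problem: the registration\nlooks generic, while the type's code hard-wires one implementation anyway:)\n\n/* the \"pluggable\" side: a vtable the toaster registers */\ntypedef struct ToasterRoutine\n{\n\tDatum\t\t(*toast) (Relation rel, Datum value, int options);\n\tDatum\t\t(*detoast) (Datum toastptr, int32 offset, int32 length);\n} ToasterRoutine;\n\n/* ...but the type's own code then bypasses the vtable and calls one\n * specific implementation directly, so nothing is really pluggable: */\nstatic JsonbValue *\njsonb_fetch_key(Datum d, const char *key)\n{\n\treturn specific_toaster_fetch_key(d, key);\t/* direct call */\n}\n\n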
This was visible in the 'Pluggable\nToaster' patch that was provided earlier as well - both example\nimplementations of this pluggable toaster were specialized to the\nneeds of one type each, and the type had direct calls into those\n\"pluggable\" toasters' internals, showing no good reason to extend this\nsupport elsewhere outside the type.\n\nBecause there would be only one meaningful type-aware method of\nTOASTing a value, we could implement this as an optional type support\nfunction that would allow the type to specify how it wants to TOAST\nits values, with the default TOAST as backup in case of still\ntoo-large tuples or if the type does not implement these support\nfunctions. With this I'm thinking mostly towards \"new inout functions\nfor on-disk representations; which return/consume TOASTed slices to\nde/construct the original datum\", and less \"replacement of all of\ntoast's internals\".\n\nSo, in short, I don't think there is a need for a specific \"Pluggable\ntoast API\" like the one in the patchset at [0] that can be loaded\non-demand, but I think that updating our current TOAST system to a\nsystem for which types can provide support functions would likely be\nquite beneficial, for efficient extraction of data from composite\nvalues.\n\nExample support functions:\n\n/* TODO: bikeshedding on names, signatures, further support functions. */\nDatum typsup_roastsliceofbread(Datum ptr, int sizetarget, char cmethod)\nDatum typsup_unroastsliceofbread(Datum ptr)\nvoid typsup_releaseroastedsliceofbread(Datum ptr) /* in case of\nnon-unitary in-memory datums */\n\nWe would probably want at least 2 more subtypes of varattrib_1b_e -\none for on-disk pointers, and one for in-memory pointers - where the\npayload of those pointers is managed by the type's toast mechanism and\nconsidered opaque to the rest of PostgreSQL (and thus not compatible\nwith the binary transfer protocol). Types are currently already\nexpected to be able to handle their own binary representation, so\nallowing types to manage parts of the toast representation should IMHO\nnot be too dangerous, though we should make sure that BINARY COERCIBLE\ntypes share this toast support routine, or be returned to their\ncanonical binary version before they are cast to the coerced type, as\nusing different detoasting mechanisms could result in corrupted data\nand thus crashes.\n\nLastly, there is the compression part of TOAST. 
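\n\nTo make that concrete - and to show where the existing compression\nmachinery could plug into such support functions - here is a minimal\nsketch. toast_compress_datum() already exists in access/toast_internals.h;\neverything else here is invented for illustration:\n\n/* hypothetical type support function: compress one self-contained\n * chunk of the value before storing it in the type-owned format */\nstatic Datum\nmyjson_roast_chunk(Datum chunk, char cmethod)\n{\n\tDatum\t\tcompressed = toast_compress_datum(chunk, cmethod);\n\n\t/* toast_compress_datum() returns a NULL Datum when compression\n\t * does not shrink the data; store the chunk as-is in that case */\n\tif (compressed == PointerGetDatum(NULL))\n\t\treturn chunk;\n\treturn compressed;\n}\n\n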
I think it should be\nrelatively straightforward to expose the compression-related\ncomponents of TOAST through functions that can then be used by\ntype-specific toast support functions.\nNote that this would be opt-in for a type, thus all functions that use\nthat type's internals should be aware of the different on-disk format\nfor toasted values and should thus be able to handle it gracefully.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://www.postgresql.org/message-id/flat/224711f9-83b7-a307-b17f-4457ab73aa0a%40sigaev.ru\n\n\n", "msg_date": "Thu, 26 Oct 2023 15:40:02 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Pluggable TOAST" }, { "msg_contents": "On Thu, 26 Oct 2023 at 15:18, Aleksander Alekseev\n<[email protected]> wrote:\n>\n> Hi,\n>\n> > And the goal of *THIS* topic is to gather a picture of how the community sees\n> > improvements in TOAST mechanics if it doesn't want them the way we proposed\n> > before, to understand which way to go with JSON advanced storage and other\n> > enhancements we already have. The previous topic was not of any help here.\n>\n> Publish your code under an appropriate license first so that 1. anyone\n> can test/benchmark it and 2. it can be merged into PostgreSQL core if\n> necessary.\n>\n> Or better, consider participating in the [1] discussion where we\n> reached a consensus on an RFC and are working on improving TOAST for JSON\n> and other types. We try to be mindful of use cases you named before,\n> like 64-bit TOAST pointers, but we still could use your input.\n\nI feel that the no. 2 proposal is significantly different from the\ndiscussion over at [1] in that it concerns changes in the interface\nbetween types and toast, as opposed to the no. 1\nproposal's (and [1]'s) changes that stay mostly inside the current TOAST\nAPIs and abstractions.\n\nThe \"Compression dictionaries for JSONB\" thread that you linked went\nthe way of \"store and use compression dictionaries for TOAST\ncompression algorithms\", which is at a lower level than one of the\nother ideas, which was to \"allow JSONB to use a dictionary of common\nvalues to dictionary-encode some of the contained entries\". Naive\ncompression of the Datum's bytes makes the compressed datum\nunparseable without decompression, even when dictionaries are used to\ndecrease the compressed size, while a type's own compression\ndictionary substitutions could allow it to maintain its structure and\nwould thus allow for a lower memory and storage footprint of the\ncolumn's datums during query processing.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 26 Oct 2023 16:14:54 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Pluggable TOAST" }, { "msg_contents": "Hi!\n\nMatthias, thank you for your patience and explanation. I wish I had it\nmuch earlier; it would have saved a lot of time.\nYou've asked a lot of good questions, and the answers we have for some\nseem to be not very satisfactory, and you pointed out some topics that were not\nmentioned before. I have to rethink our approach to the TOAST enhancements\naccordingly.\n\nThanks a lot!\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/
", "msg_date": "Thu, 26 Oct 2023 23:56:12 +0300", "msg_from": "Nikita Malakhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Pluggable TOAST" }, { "msg_contents": "Hi,\n\nI've been thinking about Matthias' proposals for some time and have some\nquestions:\n\n>So, in short, I don't think there is a need for a specific \"Pluggable\n>toast API\" like the one in the patchset at [0] that can be loaded\n>on-demand, but I think that updating our current TOAST system to a\n>system for which types can provide support functions would likely be\n>quite beneficial, for efficient extraction of data from composite\n>values.\n\nAs I understand it, one of the reasons against Pluggable TOAST is that\ndifferences in plugged-in Toasters could result in incompatibility even\nbetween different versions of the same DB.\n\nThe importance of a correct TOAST update is out of the question; I feel like I have\nto prepare a patch for it. There are some questions though, I'd address them\nlater with a patch.\n\n>Example support functions:\n\n>/* TODO: bikeshedding on names, signatures, further support functions. */\n>Datum typsup_roastsliceofbread(Datum ptr, int sizetarget, char cmethod)\n>Datum typsup_unroastsliceofbread(Datum ptr)\n>void typsup_releaseroastedsliceofbread(Datum ptr) /* in case of\n>non-unitary in-memory datums */\n\nDo I correctly understand that you mean extending PG_TYPE and the type cache,\nby adding a new function set for toasting/detoasting a value in addition to\nin/out, etc.?\n\nI see several issues here:\n1) We could benefit from knowledge of the internals of the data being toasted (i.e.\nin case of a JSON value with key-value structure) only when EXTERNAL\nstorage mode is set, otherwise the value will be compressed before being toasted.\nSo we have to keep both TOAST mechanics depending on the storage mode\nbeing used. It's the same issue as in Pluggable TOAST. Is it OK?\n\n2) The TOAST pointer is very limited in terms of the data it keeps, we'd have to\nextend it anyway and keep both for backwards compatibility;\n\n3) There is no API, and such an approach would require implementing\ntoast and detoast in every data type we want to be custom toasted, resulting\nin modifications to multiple files. Maybe we have to consider introducing such\nan API?\n\n4) 1 toast relation per regular relation. With an update mechanics this will\nbe less limiting, but still a limiting factor because 1 entry in the base table\ncould have a lot of entries in the toast table. Are we doing something with\nthis?\n\n>We would probably want at least 2 more subtypes of varattrib_1b_e -\n>one for on-disk pointers, and one for in-memory pointers - where the\n>payload of those pointers is managed by the type's toast mechanism and\n>considered opaque to the rest of PostgreSQL (and thus not compatible\n>with the binary transfer protocol). 
Types are currently already\n>expected to be able to handle their own binary representation, so\n>allowing types to manage parts of the toast representation should IMHO\n>not be too dangerous, though we should make sure that BINARY COERCIBLE\n>types share this toast support routine, or be returned to their\n>canonical binary version before they are cast to the coerced type, as\n>using different detoasting mechanisms could result in corrupted data\n>and thus crashes.\n\n>Lastly, there is the compression part of TOAST. I think it should be\n>relatively straightforward to expose the compression-related\n>components of TOAST through functions that can then be used by\n>type-specific toast support functions.\n>Note that this would be opt-in for a type, thus all functions that use\n>that type's internals should be aware of the different on-disk format\n>for toasted values and should thus be able to handle it gracefully.\n\nThanks a lot for the answers!\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/
", "msg_date": "Tue, 7 Nov 2023 13:06:41 +0300", "msg_from": "Nikita Malakhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Pluggable TOAST" }, { "msg_contents": "On Tue, 7 Nov 2023 at 11:06, Nikita Malakhov <[email protected]> wrote:\n>\n> Hi,\n>\n> I've been thinking about Matthias' proposals for some time and have some\n> questions:\n>\n> >So, in short, I don't think there is a need for a specific \"Pluggable\n> >toast API\" like the one in the patchset at [0] that can be loaded\n> >on-demand, but I think that updating our current TOAST system to a\n> >system for which types can provide support functions would likely be\n> >quite beneficial, for efficient extraction of data from composite\n> >values.\n>\n> As I understand it, one of the reasons against Pluggable TOAST is that differences\n> in plugged-in Toasters could result in incompatibility even between different versions\n> of the same DB.\n\nThat could be part of it, but it definitely wasn't my primary concern.\nThe primary concern remains that the pluggable toaster patch made the\njsonb type expose an API for a pluggable toaster that for all intents\nand purposes only has one implementation due to its API being\nspecifically tailored for the jsonb internals use case, with similar\ntype-specific API bindings getting built for other types, each having\nstrict expectations about the details of the implementation. I agree\nthat it makes sense to specialize TOASTing for jsonb, but what I don't\nunderstand about it is why that would need to be achieved outside the\ncore jsonb code.\n\nI understand that the 'pluggable toaster' APIs originate from one of\nPostgresPRO's forks of PostgreSQL, and I think it shows. That's not to\nsay it's bad, but it seems to be built on different expectations:\nWhen maintaining a fork, you have different tradeoffs when compared to\nmaintaining the main product. 
A fork's changes need to be covered\nacross many versions with unknown changes, thus you would want the\nsmallest possible changes to enable the feature - pluggable toast makes\nsense here, as the changes are limited to a few jsonb internals, but\nmost of the complex code is in an extension.\nHowever, for core PostgreSQL, I think this separation makes very\nlittle sense: maintaining a toast API for each type\n(when there can be expected to be only one implementation) is much\nmore work than just building a good set of helper functions that do\nthat same job. The latter also allows for more flexibility, as there is no\nnoticeable black-box API implementation to keep track of.\n\n> The importance of a correct TOAST update is out of the question; I feel like I have\n> to prepare a patch for it. There are some questions though, I'd address them\n> later with a patch.\n>\n> >Example support functions:\n>\n> >/* TODO: bikeshedding on names, signatures, further support functions. */\n> >Datum typsup_roastsliceofbread(Datum ptr, int sizetarget, char cmethod)\n> >Datum typsup_unroastsliceofbread(Datum ptr)\n> >void typsup_releaseroastedsliceofbread(Datum ptr) /* in case of\n> >non-unitary in-memory datums */\n>\n> Do I correctly understand that you mean extending PG_TYPE and the type cache,\n> by adding a new function set for toasting/detoasting a value in addition to\n> in/out, etc.?\n\nYes.\n\n> I see several issues here:\n> 1) We could benefit from knowledge of the internals of the data being toasted (i.e.\n> in case of a JSON value with key-value structure) only when EXTERNAL\n> storage mode is set, otherwise the value will be compressed before being toasted.\n> So we have to keep both TOAST mechanics depending on the storage mode\n> being used. It's the same issue as in Pluggable TOAST. Is it OK?\n\nI think it is OK that the storage-related changes of this only start\nonce the toast mechanism is actually invoked for the value.\n\n> 2) The TOAST pointer is very limited in terms of the data it keeps, we'd have to\n> extend it anyway and keep both for backwards compatibility;\n\nYes. We already have to retain the current (de)toast infrastructure to\nmake sure current data files can still be read, given that we want to\nretain backward compatibility for currently toasted data.\n\n> 3) There is no API, and such an approach would require implementing\n> toast and detoast in every data type we want to be custom toasted, resulting\n> in modifications to multiple files. Maybe we have to consider introducing such\n> an API?\n\nNo. As I mentioned, we can retain the current toast mechanism for\ncurrent types that do not yet want to use these new toast APIs. If we\nuse one different varatt_1b_e tag for type-owned toast pointers, the\nsystem will be opt-in for types, and types that don't (yet) have\ntheir own toast slicing design will keep using the old all-or-nothing\nsingle-allocation data with the good old compress-then-slice\nout-of-line toast storage.\n\n> 4) 1 toast relation per regular relation. With an update mechanics this will\n> be less limiting, but still a limiting factor because 1 entry in the base table\n> could have a lot of entries in the toast table. Are we doing something with\n> this?\n\nI don't think that is relevant to the topic of type-aware toasting\noptimization. The toast storage relation growing too large is not\nunique to jsonb- or bytea-typed columns, so I believe this is better\nsolved in a different thread. 
Ideas like 'toast relation per column'\nalso don't really solve the issue when the main table only has one\nbigint and one jsonb column, so I think this needs a different\napproach, too. I think solutions could probably best be discussed in a\nseparate thread.\n\nKind regards,\n\nMatthias van de Meent.\n\n\n", "msg_date": "Tue, 7 Nov 2023 12:51:22 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Pluggable TOAST" }, { "msg_contents": "Hi!\n\nMatthias, regarding your message above, I have a question to ask.\nOn typed TOAST implementations - we thought that the TOAST method used\nfor storing data could depend not only on the data type, but on the flow or\nworkload, like our bytea appendable toaster which is much (hundreds of times) faster\non update compared to the regular procedure. That was one of the ideas behind the\nPluggable TOAST - we can choose the most suitable TOAST implementation\navailable.\n\nIf we have a single TOAST entry point for a data type - then we should have\nsome means to control it or choose a TOAST method suitable to our needs.\nOr should we not?\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/", "msg_date": "Tue, 14 Nov 2023 16:12:20 +0300", "msg_from": "Nikita Malakhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Pluggable TOAST" }, { "msg_contents": "On Tue, 14 Nov 2023, 14:12 Nikita Malakhov, <[email protected]> wrote:\n>\n> Hi!\n>\n> Matthias, regarding your message above, I have a question to ask.\n> On typed TOAST implementations - we thought that the TOAST method used\n> for storing data could depend not only on the data type, but on the flow or workload,\n> like our bytea appendable toaster which is much (hundreds of times) faster on\n> update compared to the regular procedure. That was one of the ideas behind the\n> Pluggable TOAST - we can choose the most suitable TOAST implementation\n> available.\n>\n> If we have a single TOAST entry point for a data type - then we should have\n> some means to control it or choose a TOAST method suitable to our needs.\n> Or should we not?\n\nI'm not sure my interpretation of the question is correct, but I'll\nassume it's \"would you want something like STORAGE\n[plain/external/...] for controlling type-specific toast operations?\".\n\nI don't see many reasons why we'd need a system to disable (some of)\nthose features, with the only one being \"the workload is mostly\nread-only of the full attributes, so any performance overhead of\ntype-aware detoasting is not worth the temporary space savings during\nupdates\". 
So, while I do think there would be good reasons for typed\ntoasting to be disabled, I don't see a good reason for only specific\nparts of type-specific toasting to be disabled (no reason for 'disable\nthe append optimization for bytea, but not the splice optimization').\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 15 Nov 2023 13:14:44 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Pluggable TOAST" } ]
[ { "msg_contents": "Hi,\n\nI recently mentioned to Heikki that I was seeing latch related wakeups being\nfrequent enough to prevent walwriter from doing a whole lot of work. He asked\nme to write that set of concerns up, which seems quite fair...\n\n\nHere's a profile of walwriter while the following pgbench run was ongoing:\n\nc=1;psql -Xq -c 'drop table if exists testtable_logged; CREATE TABLE testtable_logged(v int not null default 0);' && PGOPTIONS='-c synchronous_commit=off' pgbench -n -c$c -j$c -Mprepared -T150 -f <(echo 'INSERT INTO testtable_logged DEFAULT VALUES;') -P1\n\nLooking at top, walwriter is around 15-20% busy with this\nworkload. Unfortunately a profile quickly shows that little of that work is\nuseful:\n\nperf record --call-graph dwarf -m16M -p $(pgrep -f 'walwriter') sleep 3\n\n- 94.42% 0.00% postgres postgres [.] AuxiliaryProcessMain\n AuxiliaryProcessMain\n - WalWriterMain\n + 78.26% WaitLatch\n + 14.01% XLogBackgroundFlush\n + 0.51% pgstat_report_wal\n 0.29% ResetLatch\n 0.13% pgstat_flush_io\n + 0.02% asm_sysvec_apic_timer_interrupt\n 0.01% HandleWalWriterInterrupts (inlined)\n\n\nConfirmed by the distribution of what syscalls are made:\n\nperf trace -m128M --summary -p $(pgrep -f 'walwriter') sleep 5\n syscall calls errors total min avg max stddev\n (msec) (msec) (msec) (msec) (%)\n --------------- -------- ------ -------- --------- --------- --------- ------\n epoll_wait 216610 0 3744.984 0.000 0.017 0.113 0.03%\n read 216602 0 333.905 0.001 0.002 0.029 0.03%\n fdatasync 27 0 94.703 1.939 3.508 11.279 8.83%\n pwrite64 2998 0 15.646 0.004 0.005 0.027 0.45%\n openat 2 0 0.019 0.006 0.010 0.013 34.84%\n close 2 0 0.004 0.002 0.002 0.003 25.76%\n\nWe're doing far more latch related work than actual work.\n\nThe walwriter many many times wakes up without having to do anything.\n\nAnd if you increase the number of clients to e.g. c=8, it gets worse in some\nways:\n\nperf trace:\n epoll_wait 291512 0 2364.067 0.001 0.008 0.693 0.10%\n read 290938 0 479.837 0.001 0.002 0.020 0.05%\n fdatasync 146 0 410.043 2.508 2.809 7.006 1.90%\n futex 56384 43982 183.896 0.001 0.003 2.791 1.65%\n pwrite64 17058 0 105.625 0.004 0.006 4.015 4.61%\n clock_nanosleep 1 0 1.063 1.063 1.063 1.063 0.00%\n openat 9 0 0.072 0.006 0.008 0.014 14.35%\n close 9 0 0.018 0.002 0.002 0.003 5.55%\n\nNote that we 5x more lock waits (the futex calls) than writes!\n\n\nI think the problem is mainly that XLogSetAsyncXactLSN() wakes up walwriter\nwhenever it is sleeping, regardless of whether the modified asyncXactLSN will\nlead to a write. We even wake up walwriter when we haven't changed\nasyncXactLSN, because our LSN is older than some other backends!\n\nSo often we'll just wake up walwriter, which finds no work, immediately goes\nto sleep, just to be woken again.\n\nBecause of the inherent delay between the checks of XLogCtl->WalWriterSleeping\nand Latch->is_set, we also sometimes end up with multiple processes signalling\nwalwriter, which can be bad, because it increases the likelihood that some of\nthe signals may be received when we are already holding WALWriteLock, delaying\nits release...\n\nBecause of the frequent wakeups, we do something else that's not smart: We\nwrite out 8k blocks individually, many times a second. Often thousands of\n8k pwrites a second.\n\nWe also acquire WALWriteLock and call WaitXLogInsertionsToFinish(), even if\ncould already know we're not going to flush! 
Not cheap, when you do it this\nmany times a second.\n\n\nThere is an absolutely basic optimization, helping a bit at higher client\ncounts: Don't wake if the new asyncXactLSN is <= the old one. But it doesn't\nhelp that much.\n\nI think the most important optimization we need is to have\nXLogSetAsyncXactLSN() only wake up if there is a certain amount of unflushed\nWAL. Unless walwriter is hibernating, walwriter will wake up on its own after\nwal_writer_delay. I think we can just reuse WalWriterFlushAfter for this.\n\nE.g. a condition like\n\t\tif (WriteRqstPtr <= LogwrtResult.Write + WalWriterFlushAfter * XLOG_BLCKSZ)\n\t\t\treturn;\ndrastically cuts down on the amount of wakeups, without - I think - losing\nguarantees around synchronous_commit=off.\n\n1 client:\n\nbefore:\ntps = 42926.288765 (without initial connection time)\n\n syscall calls errors total min avg max stddev\n (msec) (msec) (msec) (msec) (%)\n --------------- -------- ------ -------- --------- --------- --------- ------\n epoll_wait 209077 0 3746.918 0.000 0.018 0.143 0.03%\n read 209073 0 310.532 0.001 0.001 0.021 0.02%\n fdatasync 25 0 82.673 2.623 3.307 3.457 1.13%\n pwrite64 2892 0 14.600 0.004 0.005 0.018 0.43%\n\nafter:\n\ntps = 46244.394058 (without initial connection time)\n\n syscall calls errors total min avg max stddev\n (msec) (msec) (msec) (msec) (%)\n --------------- -------- ------ -------- --------- --------- --------- ------\n epoll_wait 25 0 4732.625 0.000 189.305 200.281 4.17%\n fdatasync 25 0 90.264 2.814 3.611 3.835 1.02%\n pwrite64 48 0 15.825 0.020 0.330 0.707 12.76%\n read 21 0 0.117 0.003 0.006 0.007 3.69%\n\n\n8 clients:\n\ntps = 279316.646315 (without initial connection time)\n\n postgres (2861734), 1215159 events, 100.0%\n\n syscall calls errors total min avg max stddev\n (msec) (msec) (msec) (msec) (%)\n --------------- -------- ------ -------- --------- --------- --------- ------\n epoll_wait 267517 0 2150.206 0.000 0.008 0.973 0.12%\n read 266683 0 512.348 0.001 0.002 0.036 0.08%\n fdatasync 149 0 413.658 2.583 2.776 3.395 0.29%\n futex 56597 49588 183.174 0.001 0.003 1.047 0.69%\n pwrite64 17516 0 126.208 0.004 0.007 2.927 3.93%\n\n\nafter:\n\ntps = 290958.322594 (without initial connection time)\n\n postgres (2861534), 1626 events, 100.0%\n\n syscall calls errors total min avg max stddev\n (msec) (msec) (msec) (msec) (%)\n --------------- -------- ------ -------- --------- --------- --------- ------\n epoll_wait 153 0 4383.285 0.000 28.649 32.699 0.92%\n fdatasync 153 0 464.088 2.452 3.033 19.999 4.88%\n pwrite64 306 0 80.361 0.049 0.263 0.590 4.38%\n read 153 0 0.459 0.002 0.003 0.004 1.37%\n futex 49 46 0.211 0.002 0.004 0.038 17.05%\n\n\nMore throughput for less CPU, seems neat :)\n\n\nI'm not addressing that here, but I think we also have the opposite behaviour\n- we're not waking up walwriter often enough. E.g. if you have lots of bulk\ndataloads, walwriter will just wake up once per wal_writer_delay, leading to\nmost of the work being done by backends. 
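\n\n(To put the condition above in context: it would replace the current\npage-boundary check near the end of XLogSetAsyncXactLSN() - an abbreviated\nsketch of the xlog.c function with the new check, not the actual patch:)\n\nvoid\nXLogSetAsyncXactLSN(XLogRecPtr asyncXactLSN)\n{\n\tXLogRecPtr\tWriteRqstPtr = asyncXactLSN;\n\tbool\t\tsleeping;\n\n\tSpinLockAcquire(&XLogCtl->info_lck);\n\tLogwrtResult = XLogCtl->LogwrtResult;\n\tsleeping = XLogCtl->WalWriterSleeping;\n\tif (XLogCtl->asyncXactLSN < asyncXactLSN)\n\t\tXLogCtl->asyncXactLSN = asyncXactLSN;\n\tSpinLockRelease(&XLogCtl->info_lck);\n\n\t/* new: don't bother walwriter unless enough WAL is unflushed */\n\tif (!sleeping &&\n\t\tWriteRqstPtr <= LogwrtResult.Write + WalWriterFlushAfter * XLOG_BLCKSZ)\n\t\treturn;\n\n\tif (ProcGlobal->walwriterLatch)\n\t\tSetLatch(ProcGlobal->walwriterLatch);\n}\n\n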
We should probably wake walwriter at\nthe end of XLogInsertRecord() if there is sufficient outstanding WAL.\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 24 Oct 2023 16:09:29 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "walwriter interacts quite badly with synchronous_commit=off" }, { "msg_contents": "On 25/10/2023 02:09, Andres Freund wrote:\n> Because of the inherent delay between the checks of XLogCtl->WalWriterSleeping\n> and Latch->is_set, we also sometimes end up with multiple processes signalling\n> walwriter, which can be bad, because it increases the likelihood that some of\n> the signals may be received when we are already holding WALWriteLock, delaying\n> its release...\n\nThat can only happen when walwriter has just come out of \"hibernation\", \nie. when the system has been idle for a while. So probably not a big \ndeal in practice.\n\n> I think the most important optimization we need is to have\n> XLogSetAsyncXactLSN() only wake up if there is a certain amount of unflushed\n> WAL. Unless walwriter is hibernating, walwriter will wake up on its own after\n> wal_writer_delay. I think we can just reuse WalWriterFlushAfter for this.\n> \n> E.g. a condition like\n> \t\tif (WriteRqstPtr <= LogwrtResult.Write + WalWriterFlushAfter * XLOG_BLCKSZ)\n> \t\t\treturn;\n> drastically cuts down on the amount of wakeups, without - I think - losing\n> guarantees around synchronous_commit=off.\n\nIn the patch, you actually did:\n\n> +\t\tif (WriteRqstPtr <= LogwrtResult.Flush + WalWriterFlushAfter * XLOG_BLCKSZ)\n> +\t\t\treturn;\n\nIt means that you never wake up the walwriter to merely *write* the WAL. \nYou only wake it up if it's going to also fsync() it. I think that's \ncorrect and appropriate, but it took me a while to reach that conclusion:\n\nIt might be beneficial to wake up the walwriter just to perform a \nwrite(), to offload that work from the backend. And walwriter will \nactually also perform an fsync() after finishing the current segment, so \nit would make sense to also wake it up when 'asyncXactLSN' crosses a \nsegment boundary. However, if those extra wakeups make sense, they would \nalso make sense when there are no asynchronous commits involved. \nTherefore those extra wakeups should be done elsewhere, perhaps \nsomewhere around AdvanceXLInsertBuffer(). The condition you have in the \npatch is appropriate for XLogSetAsyncXactLSN().\n\nAnother reason to write the WAL aggressively, even if you don't flush \nit, would be to reduce the number of lost transactions on a segfault. \nBut we don't give any guarantees on that, and even with the current \naggressive logic, we only write when a page is full so you're anyway \ngoing to lose the last partial page.\n\nIt also took me a while to convince myself that this calculation matches \nthe calculation that XLogBackgroundFlush() uses to determine whether it \nneeds to flush or not. XLogBackgroundFlush() first divides the request \nand result with XLOG_BLCKSZ and then compares the number of blocks, \nwhereas here you perform the calculation in bytes. I think the result is \nthe same, but to make it more clear, let's do it the same way in both \nplaces.\n\nSee attached. It's the same logic as in your patch, just formulated more \nclearly IMHO.\n\n> Because of the frequent wakeups, we do something else that's not smart: We\n> write out 8k blocks individually, many times a second. 
Often thousands of\n> 8k pwrites a second.\n\nEven with this patch, when I bumped up wal_writer_delay to 2 so that the \nwal writer gets woken up by the async commits rather than the timeout, \nthe write pattern is a bit silly:\n\n$ strace -p 1099926 # walwriter\nstrace: Process 1099926 attached\nepoll_wait(10, [{events=EPOLLIN, data={u32=3704011232, \nu64=94261056289248}}], 1, 1991) = 1\nread(3, \n\"\\27\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0<\\312\\20\\0\\350\\3\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"..., \n1024) = 128\npwrite64(5, \n\"\\24\\321\\5\\0\\1\\0\\0\\0\\0\\300\\0\\373\\5\\0\\0\\0+\\0\\0\\0\\0\\0\\0\\0\\0\\n\\0\\0n\\276\\242\\305\"..., \n1007616, 49152) = 1007616\nfdatasync(5) = 0\npwrite64(5, \"\\24\\321\\5\\0\\1\\0\\0\\0\\0 \n\\20\\373\\5\\0\\0\\0003\\0\\0\\0\\0\\0\\0\\0\\320\\37\\20\\373\\5\\0\\0\\0\"..., 16384, \n1056768) = 16384\nepoll_wait(10, [{events=EPOLLIN, data={u32=3704011232, \nu64=94261056289248}}], 1, 2000) = 1\nread(3, \n\"\\27\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0<\\312\\20\\0\\350\\3\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"..., \n1024) = 128\npwrite64(5, \n\"\\24\\321\\5\\0\\1\\0\\0\\0\\0`\\20\\373\\5\\0\\0\\0+\\0\\0\\0\\0\\0\\0\\0\\0\\n\\0\\0\\5~\\23\\261\"..., \n1040384, 1073152) = 1040384\nfdatasync(5) = 0\npwrite64(5, \"\\24\\321\\4\\0\\1\\0\\0\\0\\0@ \n\\373\\5\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0;\\0\\0\\0\\264'\\246\\3\"..., 16384, 2113536) = 16384\nepoll_wait(10, [{events=EPOLLIN, data={u32=3704011232, \nu64=94261056289248}}], 1, 2000) = 1\nread(3, \n\"\\27\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0<\\312\\20\\0\\350\\3\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"..., \n1024) = 128\npwrite64(5, \"\\24\\321\\5\\0\\1\\0\\0\\0\\0\\200 \\373\\5\\0\\0\\0003\\0\\0\\0\\0\\0\\0\\0\\320\\177 \\373\\5\\0\\0\\0\"..., 1040384, \n2129920) = 1040384\nfdatasync(5) = 0\n\nIn each cycle, the wal writer writes a full 1 MB chunk \n(wal_writer_flush_after = '1MB'), flushes it, and then performs a smaller \nwrite before going to sleep.\n\nThose smaller writes seem a bit silly. But I think it's fine.\n\n> More throughput for less CPU, seems neat :)\n\nIndeed, impressive speedup from such a small patch!\n\n> I'm not addressing that here, but I think we also have the opposite behaviour\n> - we're not waking up walwriter often enough. E.g. if you have lots of bulk\n> dataloads, walwriter will just wake up once per wal_writer_delay, leading to\n> most of the work being done by backends. We should probably wake walwriter at\n> the end of XLogInsertRecord() if there is sufficient outstanding WAL.\n\nRight, that's basically the same issue that I reasoned through above. I \ndid some quick testing with a few different settings of wal_buffers, \nwal_writer_flush_after and wal_writer_delay, to try to see that effect. 
\nBut I was not able to find a case where it makes a difference.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Wed, 25 Oct 2023 12:17:03 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walwriter interacts quite badly with synchronous_commit=off" }, { "msg_contents": "Hi,\n\nOn 2023-10-25 12:17:03 +0300, Heikki Linnakangas wrote:\n> On 25/10/2023 02:09, Andres Freund wrote:\n> > Because of the inherent delay between the checks of XLogCtl->WalWriterSleeping\n> > and Latch->is_set, we also sometimes end up with multiple processes signalling\n> > walwriter, which can be bad, because it increases the likelihood that some of\n> > the signals may be received when we are already holding WALWriteLock, delaying\n> > its release...\n>\n> That can only happen when walwriter has just come out of \"hibernation\", ie.\n> when the system has been idle for a while. So probably not a big deal in\n> practice.\n\nMaybe I am missing something here - why can this only happen when hibernating?\nEven outside of that, two backends can decide that they need to wake up\nwalwriter?\n\nWe could prevent that, by updating state when requesting walwriter to be woken\nup. But with the changes we're discussing below, that should be rare.\n\n\n> > I think the most important optimization we need is to have\n> > XLogSetAsyncXactLSN() only wake up if there is a certain amount of unflushed\n> > WAL. Unless walwriter is hibernating, walwriter will wake up on its own after\n> > wal_writer_delay. I think we can just reuse WalWriterFlushAfter for this.\n> >\n> > E.g. a condition like\n> > \t\tif (WriteRqstPtr <= LogwrtResult.Write + WalWriterFlushAfter * XLOG_BLCKSZ)\n> > \t\t\treturn;\n> > drastically cuts down on the amount of wakeups, without - I think - losing\n> > guarantees around synchronous_commit=off.\n>\n> In the patch, you actually did:\n>\n> > +\t\tif (WriteRqstPtr <= LogwrtResult.Flush + WalWriterFlushAfter * XLOG_BLCKSZ)\n> > +\t\t\treturn;\n>\n> It means that you never wake up the walwriter to merely *write* the WAL. You\n> only wake it up if it's going to also fsync() it. I think that's correct and\n> appropriate, but it took me a while to reach that conclusion:\n\nYea, after writing the email I got worried that just looking at Write would\nperhaps lead to not flushing data soon enough...\n\n\n> It might be beneficial to wake up the walwriter just to perform a write(),\n> to offload that work from the backend. And walwriter will actually also\n> perform an fsync() after finishing the current segment, so it would make\n> sense to also wake it up when 'asyncXactLSN' crosses a segment boundary.\n> However, if those extra wakeups make sense, they would also make sense when\n> there are no asynchronous commits involved. Therefore those extra wakeups\n> should be done elsewhere, perhaps somewhere around AdvanceXLInsertBuffer().\n> The condition you have in the patch is appropriate for\n> XLogSetAsyncXactLSN().\n\nYea. I agree we should wake up walwriter in other situations too...\n\n\n> Another reason to write the WAL aggressively, even if you don't flush it,\n> would be to reduce the number of lost transactions on a segfault. 
But we\n> don't give any guarantees on that, and even with the current aggressive\n> logic, we only write when a page is full so you're anyway going to lose the\n> last partial page.\n\nWal writer does end up writing the trailing partially filled page during the\nnext wal_writer_delay cycle.\n\n\n> It also took me a while to convince myself that this calculation matches the\n> calculation that XLogBackgroundFlush() uses to determine whether it needs to\n> flush or not. XLogBackgroundFlush() first divides the request and result\n> with XLOG_BLCKSZ and then compares the number of blocks, whereas here you\n> perform the calculation in bytes. I think the result is the same, but to\n> make it more clear, let's do it the same way in both places.\n>\n> See attached. It's the same logic as in your patch, just formulatd more\n> clearly IMHO.\n\nYep, makes sense!\n\n\n> > Because of the frequent wakeups, we do something else that's not smart: We\n> > write out 8k blocks individually, many times a second. Often thousands of\n> > 8k pwrites a second.\n>\n> Even with this patch, when I bumped up wal_writer_delay to 2 so that the wal\n> writer gets woken up by the async commits rather than the timeout, the write\n> pattern is a bit silly:\n>\n> $ strace -p 1099926 # walwriter\n> strace: Process 1099926 attached\n> epoll_wait(10, [{events=EPOLLIN, data={u32=3704011232,\n> u64=94261056289248}}], 1, 1991) = 1\n> read(3,\n> \"\\27\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0<\\312\\20\\0\\350\\3\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"...,\n> 1024) = 128\n> pwrite64(5, \"\\24\\321\\5\\0\\1\\0\\0\\0\\0\\300\\0\\373\\5\\0\\0\\0+\\0\\0\\0\\0\\0\\0\\0\\0\\n\\0\\0n\\276\\242\\305\"...,\n> 1007616, 49152) = 1007616\n> fdatasync(5) = 0\n> pwrite64(5, \"\\24\\321\\5\\0\\1\\0\\0\\0\\0\n> \\20\\373\\5\\0\\0\\0003\\0\\0\\0\\0\\0\\0\\0\\320\\37\\20\\373\\5\\0\\0\\0\"..., 16384, 1056768)\n> = 16384\n> epoll_wait(10, [{events=EPOLLIN, data={u32=3704011232,\n> u64=94261056289248}}], 1, 2000) = 1\n> read(3,\n> \"\\27\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0<\\312\\20\\0\\350\\3\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"...,\n> 1024) = 128\n> pwrite64(5,\n> \"\\24\\321\\5\\0\\1\\0\\0\\0\\0`\\20\\373\\5\\0\\0\\0+\\0\\0\\0\\0\\0\\0\\0\\0\\n\\0\\0\\5~\\23\\261\"...,\n> 1040384, 1073152) = 1040384\n> fdatasync(5) = 0\n> pwrite64(5, \"\\24\\321\\4\\0\\1\\0\\0\\0\\0@\n> \\373\\5\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0;\\0\\0\\0\\264'\\246\\3\"..., 16384, 2113536) = 16384\n> epoll_wait(10, [{events=EPOLLIN, data={u32=3704011232,\n> u64=94261056289248}}], 1, 2000) = 1\n> read(3,\n> \"\\27\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0<\\312\\20\\0\\350\\3\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\"...,\n> 1024) = 128\n> pwrite64(5, \"\\24\\321\\5\\0\\1\\0\\0\\0\\0\\200 \\373\\5\\0\\0\\0003\\0\\0\\0\\0\\0\\0\\0\\320\\177\n> \\373\\5\\0\\0\\0\"..., 1040384, 2129920) = 1040384\n> fdatasync(5) = 0\n>\n> In each cycle, the wal writer writes a full 1 MB chunk\n> (wal_writer_flush_after = '1MB'), flushes it, and then perform a smaller\n> write before going to sleep.\n\nI think that's actually somewhat sane - we write out the partial page in the\nsubsequent cycle. That won't happen if the page isn't partially filled or\ndoesn't have an async commit on it.\n\nI think we end up with somewhat bogus write patterns in other cases still, but\nthat's really more an issue in XLogBackgroundFlush() and thus deserves a\nseparate patch/thread.\n\n\n> > I'm not addressing that here, but I think we also have the opposite behaviour\n> > - we're not waking up walwriter often enough. E.g. 
if you have lots of bulk\n> > dataloads, walwriter will just wake up once per wal_writer_delay, leading to\n> > most of the work being done by backends. We should probably wake walwriter at\n> > the end of XLogInsertRecord() if there is sufficient outstanding WAL.\n>\n> Right, that's basically the same issue that I reasoned through above. I did\n> some quick testing with a few different settings of wal_buffers,\n> wal_writer_flush_after and wal_writer_delay, to try to see that effect. But\n> I was not able to find a case where it makes a difference.\n\nI think in the right set of circumstances it can make quite a bit of\ndifference. E.g. I bulk load 3GB of data in a cluster with s_b 1GB. Then I\ncheckpoint and VACUUM FREEZE it. With wal_writer_delay=1ms that's\nconsiderably faster (5.4s) than with wal_writer_delay=2s (8.3s) or even the\ndefault 200ms (7.9s), because a fast walwriter makes it much more likely that\nvacuum won't need to wait for an xlog flush before replacing a buffer in the\nstrategy ring.\n\nI think improving this logic would be quite worthwhile!\n\nAnother benefit of triggering wakeups based on the amount of outstanding\nwrites would be that we could increase wal_writer_delay substantially (with\nperhaps some adjustment for the partial-trailing-page-with-async-commit case),\nreducing power usage. It's imo pretty silly that we have wal writer wake up\nregularly, if it just writes once every few seconds.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 Oct 2023 11:59:41 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: walwriter interacts quite badly with synchronous_commit=off" }, { "msg_contents": "On 25/10/2023 21:59, Andres Freund wrote:\n> On 2023-10-25 12:17:03 +0300, Heikki Linnakangas wrote:\n>> On 25/10/2023 02:09, Andres Freund wrote:\n>>> Because of the inherent delay between the checks of XLogCtl->WalWriterSleeping\n>>> and Latch->is_set, we also sometimes end up with multiple processes signalling\n>>> walwriter, which can be bad, because it increases the likelihood that some of\n>>> the signals may be received when we are already holding WALWriteLock, delaying\n>>> its release...\n>>\n>> That can only happen when walwriter has just come out of \"hibernation\", ie.\n>> when the system has been idle for a while. So probably not a big deal in\n>> practice.\n> \n> Maybe I am missing something here - why can this only happen when hibernating?\n> Even outside of that, two backends can decide that they need to wake up\n> walwriter?\n\nAh sure, multiple backends can decide to wake up walwriter at the same \ntime. I thought you meant that the window for that was somehow wider \nwhen XLogCtl->WalWriterSleeping.\n\n> We could prevent that, by updating state when requesting walwriter to be woken\n> up. But with the changes we're discussing below, that should be rare.\n\nOne small easy thing we could do to reduce the redundant wakeups: only \nwake up walwriter if asyncXactLSN points to a different page than \nprevAsyncXactLSN.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 25 Oct 2023 23:04:28 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walwriter interacts quite badly with synchronous_commit=off" }, { "msg_contents": "On 25/10/2023 21:59, Andres Freund wrote:\n>> See attached. It's the same logic as in your patch, just formulated more\n>> clearly IMHO.\n> Yep, makes sense!\n\nPushed this. 
Thanks for the investigation!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 27 Nov 2023 17:55:34 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: walwriter interacts quite badly with synchronous_commit=off" }, { "msg_contents": "On 2023-11-27 17:55:34 +0200, Heikki Linnakangas wrote:\n> On 25/10/2023 21:59, Andres Freund wrote:\n> > > See attached. It's the same logic as in your patch, just formulated more\n> > > clearly IMHO.\n> > Yep, makes sense!\n> \n> Pushed this. Thanks for the investigation!\n\nThanks!\n\n\n", "msg_date": "Mon, 27 Nov 2023 09:13:59 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: walwriter interacts quite badly with synchronous_commit=off" } ]
[ { "msg_contents": "Hi all,\n\nI don't remember how many times in the last few years when I've had to\nhack the backend to produce a test case that involves a weird race\ncondition across multiple processes running in the backend, to be able\nto prove a point or just test a fix (one recent case: 2b8e5273e949).\nUsually, I come to hardcoding stuff for the following situations:\n- Trigger a PANIC, to force recovery.\n- A FATAL, to take down a session, or just an ERROR.\n- palloc() failure injection.\n- Sleep to slow down a code path.\n- Pause and release with condition variable.\n\nAnd, while that's helpful to prove a point on a thread, nothing comes\nout of it in terms of regression test coverage in the tree because\nthese tests are usually too slow and expensive, as they usually rely\non hardcoded timeouts. So that's pretty much attempting to emulate\nwhat one would do with a debugger in a predictable way, without the\nmanual steps because human hands don't scale well.\n\nThe reason behind that is of course more advanced testing, to be able\nto expand coverage when we have weird and non-deterministic race\nissues to deal with, and the code becoming more complex every year\nmakes that even harder. Fault and failure injection in specific paths\ncomes into mind, additionally, particularly if you manage complex\nprojects based on Postgres.\n\nSo, please find attached a patch set that introduces an in-core\nfacility to be able to set what I'm calling here an \"injection point\",\nthat consists of being able to register in shared memory a callback\nthat can be run within a defined location of the code. It means that\nit is not possible to trigger a callback before shared memory is set,\nbut I've faced far more the case where I wanted to trigger something\nafter shmem is set anyway. Persisting an injection point across\nrestarts is also possible by adding some through an extension's shmem\nhook, as we do for custom LWLocks for example, as long as a library is\nloaded.\n\nThis will remind a bit of what Alexander Korotkov has proposed here:\nhttps://www.postgresql.org/message-id/CAPpHfdtSEOHX8dSk9Qp%2BZ%2B%2Bi4BGQoffKip6JDWngEA%2Bg7Z-XmQ%40mail.gmail.com\nAlso, this is much closee to what Craig Ringer is mentioning here,\nwhere it is named probe points, but I am using a minimal design that\nallows to achieve the same:\nhttps://www.postgresql.org/message-id/CAPpHfdsn-hzneYNbX4qcY5rnwr-BA1ogOCZ4TQCKQAw9qa48kA%40mail.gmail.com\n\nA difference is that I don't really see a point in passing to the\ncallback triggered an area of data coming from the hash table itself,\nas at the end a callback could just refer to an area in shared memory\nor a static set of variables depending on what it wants, with one or\nmore injection points (say a location to set a state, and a second to\ncheck it). 
So, at the end, the problem comes down in my opinion to\ntwo things:\n- Possibility to trigger a condition defined by some custom code, in \nthe backend (core code or even out-of-core).\n- Possibility to define a location in the code where a named point\nwould be checked.\n\n0001 introduces three APIs to create, run, and drop injection points:\n+extern void InjectionPointCreate(const char *name,\n+ InjectionPointCallback callback);\n+extern void InjectionPointRun(const char *name);\n+extern void InjectionPointDrop(const char *name);\n\nThen one just needs to add a macro like that to trigger the callback\nregistered in the code to test:\nINJECTION_POINT_RUN(\"String\");\nSo the footprint in the core tree is not zero, but it is as minimal as\nit can be.\n\nI have added some documentation to explain that, as well. I am not\nwedded to the name proposed in the patch, so if you feel there is\nbetter, feel free to propose ideas.\n\nThis facility is hidden behind a specific configure/Meson switch,\nmaking it a no-op by default:\n--enable-injection-points\n-Dinjection_points={ true | false }\n\n0002 is a test module to test these routines, that I have kept a\nmaximum simple to ease review of the basics proposed here. This could\nbe extended further to propose more default modes with TAP tests on\nits own, as I don't see a real point in having the SQL bits or some\ncommon callbacks (like for the PANIC or the FATAL cases) in core.\n\nThoughts and comments are welcome.\n--\nMichael", "msg_date": "Wed, 25 Oct 2023 13:13:38 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Wed, Oct 25, 2023 at 9:43 AM Michael Paquier <[email protected]> wrote:\n\n> Hi all,\n>\n> I don't remember how many times in the last few years when I've had to\n> hack the backend to produce a test case that involves a weird race\n> condition across multiple processes running in the backend, to be able\n> to prove a point or just test a fix (one recent case: 2b8e5273e949).\n> Usually, I come to hardcoding stuff for the following situations:\n> - Trigger a PANIC, to force recovery.\n> - A FATAL, to take down a session, or just an ERROR.\n> - palloc() failure injection.\n> - Sleep to slow down a code path.\n> - Pause and release with condition variable.\n\n\n+1 for the feature.\n\nTWIMW, here[1] is an interesting talk from pgconf.in 2020 on the similar\ntopic.\n\n1] https://pgconf.in/conferences/pgconfin2020/program/proposals/101\n\nRegards,\nAmul Sul\n", "msg_date": "Wed, 25 Oct 2023 10:06:17 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": false, 
"msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Wed, Oct 25, 2023 at 10:06:17AM +0530, Amul Sul wrote:\n> +1 for the feature.\n> \n> TWIMW, here[1] is an interesting talk from pgconf.in 2020 on the similar\n> topic.\n> \n> 1] https://pgconf.in/conferences/pgconfin2020/program/proposals/101\n\nRight, this uses a shared hash table. There is a patch from 2019 that\nsummarizes this presentation as well:\nhttps://www.postgresql.org/message-id/CANXE4TdxdESX1jKw48xet-5GvBFVSq%3D4cgNeioTQff372KO45A%40mail.gmail.com\n\nA different idea is that this patch could leverage a bgworker instead\nof having a footprint in the postmaster. FWIW, I think that my patch\nis more flexible than the modes added by faultinjector.h (see 0001),\nbecause the actions that can be taken should not be limited by the\ncore code: the point registered could just use what it wants as\ncallback, so an extension could register a custom thing as well.\n--\nMichael", "msg_date": "Wed, 25 Oct 2023 13:57:18 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "Hi,\n\nOn Wed, 25 Oct 2023 at 07:13, Michael Paquier <[email protected]> wrote:\n>\n> Hi all,\n>\n> I don't remember how many times in the last few years when I've had to\n> hack the backend to produce a test case that involves a weird race\n> condition across multiple processes running in the backend, to be able\n> to prove a point or just test a fix (one recent case: 2b8e5273e949).\n> Usually, I come to hardcoding stuff for the following situations:\n> - Trigger a PANIC, to force recovery.\n> - A FATAL, to take down a session, or just an ERROR.\n> - palloc() failure injection.\n> - Sleep to slow down a code path.\n> - Pause and release with condition variable.\n\nI liked the idea; thanks for working on this!\n\nWhat do you think about creating a function for updating the already\ncreated injection point's callback or name (mostly callback)? For now,\nyou need to drop and recreate the injection point to change the\ncallback or the name.\n\nHere is my code correctness review:\n\ndiff --git a/meson_options.txt b/meson_options.txt\n+option('injection_points', type: 'boolean', value: true,\n+ description: 'Enable injection points')\n+\n\nIt is enabled by default while building with meson.\n\n\ndiff --git a/src/backend/utils/misc/injection_point.c\nb/src/backend/utils/misc/injection_point.c\n+ LWLockRelease(InjectionPointLock);\n+\n+ /* If not found, do nothing? */\n+ if (!found)\n+ return;\n\nIt would be good to log a warning message here.\n\n\nI tried to compile that with -Dwerror=true -Dinjection_points=false\nand got some errors (warnings):\n\ninjection_point.c: In function ‘InjectionPointShmemSize’:\ninjection_point.c:59:1: error: control reaches end of non-void\nfunction [-Werror=return-type]\n\ninjection_point.c: At top level:\ninjection_point.c:32:14: error: ‘InjectionPointHashByName’ defined but\nnot used [-Werror=unused-variable]\n\ntest_injection_points.c: In function ‘test_injection_points_run’:\ntest_injection_points.c:69:21: error: unused variable ‘name’\n[-Werror=unused-variable]\n\n\nThe test_injection_points test runs and passes although I set\n-Dinjection_points=false. 
That could be misleading, IMO the test\nshould be skipped if Postgres is not compiled with the injection\npoints.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Mon, 6 Nov 2023 22:28:14 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Mon, Nov 06, 2023 at 10:28:14PM +0300, Nazir Bilal Yavuz wrote:\n> I liked the idea; thanks for working on this!\n\nThanks for the review.\n\n> What do you think about creating a function for updating the already\n> created injection point's callback or name (mostly callback)? For now,\n> you need to drop and recreate the injection point to change the\n> callback or the name.\n\nI am not sure if that's worth the addition. TBH, all the code I've\nseen that would benefit from these APIs just sets up a cluster,\nregisters a few injection points with a module, and then runs a set of\ntests. It can also remove points. So I'm just aiming for the simplest\napproach for the moment.\n \n> Here is my code correctness review:\n> \n> diff --git a/meson_options.txt b/meson_options.txt\n> +option('injection_points', type: 'boolean', value: true,\n> + description: 'Enable injection points')\n> +\n> \n> It is enabled by default while building with meson.\n\nIndeed, fixed.\n\n> diff --git a/src/backend/utils/misc/injection_point.c\n> b/src/backend/utils/misc/injection_point.c\n> + LWLockRelease(InjectionPointLock);\n> +\n> + /* If not found, do nothing? */\n> + if (!found)\n> + return;\n> \n> It would be good to log a warning message here.\n\nI don't think that's a good idea. If a code path defines an\nINJECTION_POINT_RUN() we'd get spurious warnings except if a point is\nalways defined when the build switch is enabled.\n\n> I tried to compile that with -Dwerror=true -Dinjection_points=false\n> and got some errors (warnings):\n\nRight, fixed these three.\n\n> The test_injection_points test runs and passes although I set\n> -Dinjection_points=false. That could be misleading, IMO the test\n> should be skipped if Postgres is not compiled with the injection\n> points.\n\nThe test suite has been using an alternate output, but perhaps you are\nright that this has little value without the switch enabled anyway.\nI've made the processing optional when the option is not used for\nmeson and ./configure (requires a variable in Makefile.global.in in\nthe latter case), removing the alternate output.\n\nPlease find v2.\n--\nMichael", "msg_date": "Tue, 7 Nov 2023 17:01:16 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Tue, Nov 07, 2023 at 05:01:16PM +0900, Michael Paquier wrote:\n> On Mon, Nov 06, 2023 at 10:28:14PM +0300, Nazir Bilal Yavuz wrote:\n>> I liked the idea; thanks for working on this!\n\n+1, this seems very useful.\n\n> +#ifdef USE_INJECTION_POINTS\n> +#define INJECTION_POINT_RUN(name) InjectionPointRun(name)\n> +#else\n> +#define INJECTION_POINT_RUN(name) ((void) name)\n> +#endif\n\nnitpick: Why is the non-injection-point version \"(void) name\"? 
I see\n\"(void) true\" used elsewhere for this purpose.\n\n> + <para>\n> + Here is an example of callback for\n> + <literal>InjectionPointCallback</literal>:\n> +<programlisting>\n> +static void\n> +custom_injection_callback(const char *name)\n> +{\n> + elog(NOTICE, \"%s: executed custom callback\", name);\n> +}\n> +</programlisting>\n\nWhy do we provide the name to the callback functions?\n\nOverall, the design looks pretty good to me. I think it's a good idea to\nkeep it simple to start with. Since this is really only intended for\nspecial tests that run in special builds, it seems like we ought to be able\nto change it easily in the future as needed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 10 Nov 2023 14:44:25 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "Hi,\n\nOn 2023-10-25 13:13:38 +0900, Michael Paquier wrote:\n> So, please find attached a patch set that introduces an in-core\n> facility to be able to set what I'm calling here an \"injection point\",\n> that consists of being able to register in shared memory a callback\n> that can be run within a defined location of the code. It means that\n> it is not possible to trigger a callback before shared memory is set,\n> but I've faced far more the case where I wanted to trigger something\n> after shmem is set anyway. Persisting an injection point across\n> restarts is also possible by adding some through an extension's shmem\n> hook, as we do for custom LWLocks for example, as long as a library is\n> loaded.\n\nI would like to see a few example tests using this facility - without that\nit's a bit hard to judge how the impact on core code would be and how easy\ntests are to write.\n\nIt also seems like there's a few bits and pieces missing to actually be able\nto write interesting tests. It's one thing to be able to inject code, but what\nyou commonly want to do for tests is to actually wait for such a spot in the\ncode to be reached, then perhaps wait inside the \"modified\" code, and do\nsomething else in the test script. But as-is a decent amount of C code would\nneed to be written to write such a test, from what I can tell?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Nov 2023 18:32:27 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Fri, Nov 10, 2023 at 02:44:25PM -0600, Nathan Bossart wrote:\n> On Tue, Nov 07, 2023 at 05:01:16PM +0900, Michael Paquier wrote:\n>> +#ifdef USE_INJECTION_POINTS\n>> +#define INJECTION_POINT_RUN(name) InjectionPointRun(name)\n>> +#else\n>> +#define INJECTION_POINT_RUN(name) ((void) name)\n>> +#endif\n> \n> nitpick: Why is the non-injection-point version \"(void) name\"? 
I see\n> \"(void) true\" used elsewhere for this purpose.\n\nOr (void) 0.\n\n>> + <para>\n>> + Here is an example of callback for\n>> + <literal>InjectionPointCallback</literal>:\n>> +<programlisting>\n>> +static void\n>> +custom_injection_callback(const char *name)\n>> +{\n>> + elog(NOTICE, \"%s: executed custom callback\", name);\n>> +}\n>> +</programlisting>\n> \n> Why do we provide the name to the callback functions?\n\nThis is for the use of the same callback across multiple points, and\ntracking the name of the event happening was making sense to me to\nknow which code path is being taken when a callback is called. One\nthing that I got in mind as well here is to be able to register custom\nwait events based on the name of the callback taken, for example on a \ncondition variable, a latch or a named LWLock.\n\n> Overall, the design looks pretty good to me. I think it's a good idea to\n> keep it simple to start with. Since this is really only intended for\n> special tests that run in special builds, it seems like we ought to be able\n> to change it easily in the future as needed.\n\nYes, my first idea is to keep the initial design minimal and take the\ntemperature. As far as I can see, there seem to not be any strong\nobjection with this basic design, still I agree that I need to show a\nbit more code about its usability. I have some SQL and recovery cases\nwhere this is handy and these have piled over time, including at least\ntwo/three of them with more basic APIs in the test module may make\nsense in the initial batch of what I am proposing here.\n--\nMichael", "msg_date": "Mon, 13 Nov 2023 14:48:13 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Fri, Nov 10, 2023 at 06:32:27PM -0800, Andres Freund wrote:\n> I would like to see a few example tests using this facility - without that\n> it's a bit hard to judge how the impact on core code would be and how easy\n> tests are to write.\n\nSure. I was wondering if people would be interested in that first.\n\n> It also seems like there's a few bits and pieces missing to actually be able\n> to write interesting tests. It's one thing to be able to inject code, but what\n> you commonly want to do for tests is to actually wait for such a spot in the\n> code to be reached, then perhaps wait inside the \"modified\" code, and do\n> something else in the test script. But as-is a decent amount of C code would\n> need to be written to write such a test, from what I can tell?\n\nDepends on what you'd want to achieve. As I mentioned at the top of\nthe thread, error, fatal, panics, hardcoded waits are the most common\ncases I've seen in the last years. Conditional waits are not in the\nmain patch but these are simple to support done (I mean, as in the\n0003 attached with a TAP example).\n\nWhile on it, I have extended the patch in the hash table a library\nname and a function name so as the callback is loaded each time an\ninjection point is run. (Perhaps the list of callbacks already loaded\nin a process should be saved in a session-level static list/array to\navoid loading the same callbacks again, not sure if that's worth doing\nfor a test facility assuming that the number of times a callback is\ncalled in a single session is usually very limited. 
Anyway, that\nwould be simple to add if people prefer this addition.)\n\nAnyway, here is a short list of commits that could have benefited\nfrom this facility. There is much more, but that's a list I\ngrabbed quickly from my notes:\n1) 8a4237908c0f\n2) cb0cca188072\n3) 7863ee4def65 (See https://postgr.es/m/YnT/[email protected]\nwhere an expensive TAP test was included, and I've seen users facing\nthis bug in real life). Revert of the original is clean here as well.\nThe trick is simple: stop a restartpoint during a promotion, and let\nthe restartpoint finish after the promotion.\n4) 409f9ca44713, where injecting an error would stress the consistency\nof the data reset (an error injection is mentioned at\nhttps://postgr.es/m/YWZk6nmAzQZS4B/[email protected]). This reverts\ncleanly even today.\n5) b4721f39505b, quite similar (an error injection is mentioned exactly\nhere: https://postgr.es/m/[email protected]). This\none requires an error when a transaction is started, something that can be\nachieved if the error is triggered conditionally (note that a hard\nfailure would prevent the transaction from beginning with the initial\nsnapshot taken in InitPostgres, but the module could just use a static\nvariable to track that).\n\nAmong these, I have implemented two examples on top of the main patch\nset in 0002 and 0003: 4) as a TAP test with replication commands and\nan error injection, and 3) that relies on a custom wait event and a\ncondition variable to make the test posted on the other thread\ncheaper, with an injection point waiting for a condition variable in\nthe middle of a restartpoint in the checkpointer. I don't mean to\nnecessarily include all that in the upstream tree, these are just here\nfor reference first.\n\n3) is the most interesting in this set, for sure. That was a nasty\nproblem, and some cheap coverage in the core tree could be really good\nfor it, so I'd like to propose it for commit after more polishing. The\ntest for bug 3) that I am referring to originally takes 30~45s to run\nand it was unstable as it could time out. With an injection point it\ntakes 1~2s. (Note that test_injection_points gains wait/wake logic\nto be able to use condition variables to wait on the restartpoint of a\npromoted standby.) Both tests are not ready for prime time yet, but\nthat's enough for a set of examples IMHO to show what can be done.\n\nDoes it answer your questions?\n--\nMichael", "msg_date": "Tue, 14 Nov 2023 20:53:24 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "Hello,\n\nGood stuff here, I also have a bunch of bugfix commits that ended up not\nhaving a test because of the need for a debugger or other interaction,\nso let's move forward.\n\nI think the docs (and the macro/function naming) describe things\nbackwards. In my mind, it is INJECTION_POINT_RUN() that creates the\ninjection point; then InjectionPointCreate() attaches something to it.\nSo I would rename the macro to just INJECTION_POINT() and the function\nto InjectionPointAttach(). 
This way you're saying \"attach function FN\nfrom library L to the injection point P\"; where P is an entity that is\nbeing created by the INJECTION_POINT() call in the code.\n\nYou named the hash table InjectionPointHashByName, which seems weird.\nIs there any *other* way to locate an injection point that is not by\nname?\n\nIn this patch, injection points are instance-wide (because the hash\ntable is in shmem). As soon as you install a callback to one point,\nthat callback will be fired in every session. Maybe for some tests this\nis OK (and in particular your TAP tests have them attached in one\n->safe_psql call and then they hit a completely different session, which\nwouldn't work if the attachments were process-local), but maybe one\nwould want them limited to some specific process. Maybe give an\noptional PID so that if any other process hits that injection point,\nnothing happens?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La rebeldía es la virtud original del hombre\" (Arthur Schopenhauer)\n\n\n", "msg_date": "Tue, 14 Nov 2023 14:11:50 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Tue, Nov 14, 2023 at 02:11:50PM +0100, Alvaro Herrera wrote:\n> Good stuff here, I also have a bunch of bugfix commits that ended up not\n> having a test because of the need for a debugger or other interaction,\n> so let's move forward.\n> \n> I think the docs (and the macro/function naming) describe things\n> backwards. In my mind, it is INJECTION_POINT_RUN() that creates the\n> injection point; then InjectionPointCreate() attaches something to it.\n> So I would rename the macro to just INJECTION_POINT() and the function\n> to InjectionPointAttach(). This way you're saying \"attach function FN\n> from library L to the injection point P\"; where P is an entity that is\n> being created by the INJECTION_POINT() call in the code.\n\nOkay. I am not strongly attached to the terms used by the patch. The\nfirst WIP I wrote used completely different terms.\n\n> You named the hash table InjectionPointHashByName, which seems weird.\n> Is there any *other* way to locate an injection point that is not by\n> name?\n\nI am not sure what you mean here. Names are kind of the most\nportable and simplest thing I could think of. Is there something else\nyou have in mind that would allow a mapping between a code path and\nwhat should be run? Perhaps that's useful in some cases, but you were\nalso thinking about an in-core API where it is possible to retrieve a\nlist of callbacks based on a library name and/or a function name? I\ndidn't see a use for it, but why not.\n\n> In this patch, injection points are instance-wide (because the hash\n> table is in shmem). As soon as you install a callback to one point,\n> that callback will be fired in every session. Maybe for some tests this\n> is OK (and in particular your TAP tests have them attached in one\n> ->safe_psql call and then they hit a completely different session, which\n> wouldn't work if the attachments were process-local), but maybe one\n> would want them limited to some specific process. Maybe give an\n> optional PID so that if any other process hits that injection point,\n> nothing happens?\n\nYes, still not something that's required in the core APIs or an\ninitial batch. 
This is something I've seen used, and a central place\nwhere the callbacks are registered allows that, because the callback is\ntriggered based on a global state like MyProcPid or getpid(), so\nit is possible to pass a condition to a callback when it is created\n(or attached per your wording), with the condition maintained in a\nshmem area that can be part of an extension module that defines the\ncallbacks (in test_injection_points). One trick sometimes is to know\nthe PID beforehand, which may need a second wait point (for example)\nto make a test deterministic so that a test script has the time to get\nthe PID of a running session (bgworkers included) before the process\nhas time to do anything critical for the scenario tested.\n\nAn extra thing is that this design can be extended so that it could be\npossible to pass down to the callback execution a private data\npointer, though that's bound to the code path running the injection\npoint (not in the initial patch). Then it's up to the callback to\ndecide if it needs to do something or not (say, I don't want to run\nthis callback except if I am manipulating page N in an access method,\netc.). The conditional complexity is pushed to the injection\ncallbacks, not the core routines in charge of finding a callback or\nattaching/creating one. I am not sure that it is a good idea to\nenforce a specific conditional logic in the backend core code.\n--\nMichael", "msg_date": "Wed, 15 Nov 2023 07:41:59 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On 2023-Nov-15, Michael Paquier wrote:\n\n> On Tue, Nov 14, 2023 at 02:11:50PM +0100, Alvaro Herrera wrote:\n\n> > You named the hash table InjectionPointHashByName, which seems weird.\n> > Is there any *other* way to locate an injection point that is not by\n> > name?\n> \n> I am not sure what you mean here. Names are kind of the most\n> portable and simplest thing I could think of.\n\nOh, I think you're overthinking what my comment was. I was saying, just\nname it \"InjectionPointsHash\". Since there appears to be no room for\nanother hash table for injection points, then there's no need to specify\nthat this one is the ByName hash. I couldn't think of any other way to\norganize the injection points either.\n\n> > In this patch, injection points are instance-wide (because the hash\n> > table is in shmem).\n> \n> Yes, still not something that's required in the core APIs or an\n> initial batch.\n\nI agree that we can do the easy thing first and build it up later. I\njust hope we don't get too wedded to the current interface, because of\nlack of time in the current release, and end up stuck with it.\n\n> I am not sure that it is a good idea to enforce a specific conditional\n> logic in the backend core code.\n\nAgreed, let's get more experience on what other types of tests people\nwant to build, and how things are going to interact with each other.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No necesitamos banderas\n No reconocemos fronteras\" (Jorge González)\n\n\n", "msg_date": "Wed, 15 Nov 2023 12:21:40 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) 
for more\n advanced tests" }, { "msg_contents": "On Wed, Nov 15, 2023 at 12:21:40PM +0100, Alvaro Herrera wrote:\n> On 2023-Nov-15, Michael Paquier wrote:\n> Oh, I think you're overthinking what my comment was. I was saying, just\n> name it \"InjectionPointsHash\". Since there appears to be no room for\n> another hash table for injection points, then there's no need to specify\n> that this one is the ByName hash. I couldn't think of any other way to\n> organize the injection points either.\n\nAha, OK. No problem, this was itching me as well but I didn't see an\nargument with changing these names, so I've renamed things a bit more.\n\n>> Yes, still not something that's required in the core APIs or an\n>> initial batch.\n> \n> I agree that we can do the easy thing first and build it up later. I\n> just hope we don't get too wedded on the current interface because of\n> lack of time in the current release that we get stuck with it.\n\nOne thing that I assume we will need with more advanced testing is the\npossibility to pass down one (for a structure of data) or more\narguments to the callback a point is attached to. For that, it is\npossible to add more macros, like INJECTION_POINT_1ARG,\nINJECTION_POINT_ARG(), etc. that use some (void *) pointers. It would\nbe possible to add that even in stable branches, as need arises,\nchanging the structure of the shmem hash table if required to control\nthe way a callback is run.\n\nAt the end, I suspect that it is one of these things where we'll need\nto adapt depending on what people want to do with this stuff. FWIW, I\ncan do already quite a bit with this minimalistic design and an\nexternal extension. Custom wait events are also very handy to monitor\na wait.\n\n>> I am not sure that it is a good idea to enforce a specific conditional\n>> logic in the backend core code.\n> \n> Agreed, let's get more experience on what other types of tests people\n> want to build, and how are things going to interact with each other.\n\nI've worked more on polishing the core facility today, with 0001\nintroducing the backend-side facility. One thing that I mentioned\nlacking is a local cache for processes so as they don't load an\nexternal callback more than once if run. So I have added this local\ncache. When a point is scanned but not found, a previous cache entry\nis cleared if any (this leaks but we don't have a way to unload stuff,\nand for testing that's not a big deal). I've renamed the routines to\nuse attach and detach as terms, and adopted the name you've suggested\nfor the macro. The names around the hash table and its entries have\nbeen changed to what you've suggested. You are right, that's more\nintuitive.\n\n0002 is the test module for the basics, split into its own patch, with\na couple of tests for the local cache part. 0003 and 0004 have been\nadjusted with the new SQL functions. At the end, I'd like to propose\n0004 as it's been a PITA for me and I don't want to break this case\nagain. It needs more work and can be cheaper. One more argument in\nfavor of it is the addition of condition variables to wait and wake\npoints (perhaps with even more variables?) in the test module.\n\nIf there is interest for 0003, I'm OK to work more on it as well, but\nthat's less important IMV.\n\nThoughts and comments are welcome, with a v4 series attached.\n--\nMichael", "msg_date": "Thu, 16 Nov 2023 14:54:04 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) 
for more\n advanced tests" }, { "msg_contents": "On Wed, Oct 25, 2023 at 9:43 AM Michael Paquier <[email protected]> wrote:\n>\n> Hi all,\n>\n> I don't remember how many times in the last few years when I've had to\n> hack the backend to produce a test case that involves a weird race\n> condition across multiple processes running in the backend, to be able\n> to prove a point or just test a fix (one recent case: 2b8e5273e949).\n> Usually, I come to hardcoding stuff for the following situations:\n> - Trigger a PANIC, to force recovery.\n> - A FATAL, to take down a session, or just an ERROR.\n> - palloc() failure injection.\n> - Sleep to slow down a code path.\n> - Pause and release with condition variable.\n>\n> And, while that's helpful to prove a point on a thread, nothing comes\n> out of it in terms of regression test coverage in the tree because\n> these tests are usually too slow and expensive, as they usually rely\n> on hardcoded timeouts. So that's pretty much attempting to emulate\n> what one would do with a debugger in a predictable way, without the\n> manual steps because human hands don't scale well.\n>\n> The reason behind that is of course more advanced testing, to be able\n> to expand coverage when we have weird and non-deterministic race\n> issues to deal with, and the code becoming more complex every year\n> makes that even harder. Fault and failure injection in specific paths\n> comes into mind, additionally, particularly if you manage complex\n> projects based on Postgres.\n>\n> So, please find attached a patch set that introduces an in-core\n> facility to be able to set what I'm calling here an \"injection point\",\n> that consists of being able to register in shared memory a callback\n> that can be run within a defined location of the code. It means that\n> it is not possible to trigger a callback before shared memory is set,\n> but I've faced far more the case where I wanted to trigger something\n> after shmem is set anyway. Persisting an injection point across\n> restarts is also possible by adding some through an extension's shmem\n> hook, as we do for custom LWLocks for example, as long as a library is\n> loaded.\n>\n> This will remind a bit of what Alexander Korotkov has proposed here:\n> https://www.postgresql.org/message-id/CAPpHfdtSEOHX8dSk9Qp%2BZ%2B%2Bi4BGQoffKip6JDWngEA%2Bg7Z-XmQ%40mail.gmail.com\n> Also, this is much closee to what Craig Ringer is mentioning here,\n> where it is named probe points, but I am using a minimal design that\n> allows to achieve the same:\n> https://www.postgresql.org/message-id/CAPpHfdsn-hzneYNbX4qcY5rnwr-BA1ogOCZ4TQCKQAw9qa48kA%40mail.gmail.com\n>\n> A difference is that I don't really see a point in passing to the\n> callback triggered an area of data coming from the hash table itself,\n> as at the end a callback could just refer to an area in shared memory\n> or a static set of variables depending on what it wants, with one or\n> more injection points (say a location to set a state, and a second to\n> check it). 
So, at the end, the problem comes down in my opinion to\n> two things:\n> - Possibility to trigger a condition defined by some custom code, in\n> the backend (core code or even out-of-core).\n> - Possibility to define a location in the code where a named point\n> would be checked.\n>\n> 0001 introduces three APIs to create, run, and drop injection points:\n> +extern void InjectionPointCreate(const char *name,\n> + InjectionPointCallback callback);\n> +extern void InjectionPointRun(const char *name);\n> +extern void InjectionPointDrop(const char *name);\n>\n> Then one just needs to add a macro like that to trigger the callback\n> registered in the code to test:\n> INJECTION_POINT_RUN(\"String\");\n> So the footprint in the core tree is not zero, but it is as minimal as\n> it can be.\n>\n> I have added some documentation to explain that, as well. I am not\n> wedded to the name proposed in the patch, so if you feel there is\n> better, feel free to propose ideas.\n\nActually with Attach and Detach terminology, INJECTION_POINT becomes\nthe place where we \"declare\" the injection point. So the documentation\nneeds to first explain INJECTION_POINT and then explain the other\noperations.\n\n>\n> This facility is hidden behind a specific configure/Meson switch,\n> making it a no-op by default:\n> --enable-injection-points\n> -Dinjection_points={ true | false }\n\nThat's useful, but we will also see demand to enable those in\nproduction (under controlled circumstances). But let the functionality\nmature under a separate flag and developer builds before they are used in\nproduction.\n\n>\n> 0002 is a test module to test these routines, that I have kept a\n> maximum simple to ease review of the basics proposed here. This could\n> be extended further to propose more default modes with TAP tests on\n> its own, as I don't see a real point in having the SQL bits or some\n> common callbacks (like for the PANIC or the FATAL cases) in core.\n>\n> Thoughts and comments are welcome.\n\nI think this is super useful functionality. Some comments here.\n\n+/*\n+ * Local cache of injection callbacks already loaded, stored in\n+ * TopMemoryContext.\n+ */\n+typedef struct InjectionPointArrayEntry\n+{\n+ char name[INJ_NAME_MAXLEN];\n+ InjectionPointCallback callback;\n+} InjectionPointArrayEntry;\n+\n+static InjectionPointArrayEntry *InjectionPointCacheArray = NULL;\n\nInitial size of this array is small, but given the size of code in a\ngiven path to be tested, I fear that this may not be sufficient. I\nfeel it's better to use a hash table whose APIs are already available.\n\n\n+ test_injection_points_attach\n+------------------------------\n+\n+(1 row)\n+\n+SELECT test_injection_points_run('TestInjectionBooh'); -- nothing\n\nI find it pretty useless to expose that as a SQL API. Also adding it\nin tests would make it usable as an example. In this patchset\nINJECTION_POINT should be the only way to trigger an injection point.\n\nThat also brings another question. The INJECTION_POINT needs to be added in the\ncore code but can only be used through an extension. I think there should be\nsome rudimentary albeit raw test in core for this. The extension does some good\nthings, like providing built-in actions when the injection is triggered. So, we\nshould keep those too.\n\nI am glad that it covers the frequently needed injections: error, notice and\nwait. If someone adds multiple injection points for wait and all of them are\ntriggered (in different backends), test_injection_points_wake() would\nwake them all. 
When debugging cascaded functionality it's not easy to\nfollow the add, trigger, wake sequence for multiple injections. We should\nassociate a condition variable with the required injection points. Attach\nshould create a condition variable in shared memory for that injection\npoint and detach should remove it. I see this being mentioned in the commit\nmessage, but I think it's something we need in the first version of \"wait\" to\nbe useful.\n\nAt some point we may want to control how many times an injection is\ntriggered by using a count. Often, I have wanted an ERROR to be thrown\nin a code path once or twice and then stop triggering it. For example\nto test WAL sender restart after a certain event. We can't really time\nDetach correctly to avoid multiple restarts. A count is useful in such\na case.\n\n+/*\n+ * Attach a new injection point.\n+ */\n+void\n+InjectionPointAttach(const char *name,\n+ const char *library,\n+ const char *function)\n+{\n+#ifdef USE_INJECTION_POINTS\n\nIn a non-injection-point build this function would be compiled and a call to\nthis function would throw an error. This means that any test we write which\nuses injection points cannot do so optionally. Tests which can be run with and\nwithout injection points built will reduce duplication. We should define this\nfunction as a no-op in non-injection-point builds to facilitate such tests.\n\nThose tests also need to install the extension. That's another pain point.\nSo anyone who wants to run the tests needs to compile the extension too. I\nam wondering whether we should have this functionality in the core\nitself somewhere, which would only be useful when built with injection\npoints.\n\nMany a time it's only a single backend which needs to be subjected to\nan injection. For inducing ERROR and NOTICE, many a time it's also\nthe same backend that is attached to the client session. For WAIT, however we\nneed a way to inject from some other session. We might be able to use\nthe current signalling mechanism for that (wake sends SIGUSR1 with\nreason). Leaving aside WAIT for a moment: when the same backend's\nbehaviour is being controlled, do we really need shared memory, and to\naffect all the running backends? I see some discussion about\nbeing able to trigger only for a given PID, but when that PID is the\ncurrent backend's own, shared memory is not required.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 20 Nov 2023 16:53:45 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Mon, Nov 20, 2023 at 04:53:45PM +0530, Ashutosh Bapat wrote:\n> On Wed, Oct 25, 2023 at 9:43 AM Michael Paquier <[email protected]> wrote:\n>> I have added some documentation to explain that, as well. I am not\n>> wedded to the name proposed in the patch, so if you feel there is\n>> better, feel free to propose ideas.\n> \n> Actually with Attach and Detach terminology, INJECTION_POINT becomes\n> the place where we \"declare\" the injection point. So the documentation\n> needs to first explain INJECTION_POINT and then explain the other\n> operations.\n\nSure.\n\n>> This facility is hidden behind a specific configure/Meson switch,\n>> making it a no-op by default:\n>> --enable-injection-points\n>> -Dinjection_points={ true | false }\n> \n> That's useful, but we will also see demand to enable those in\n> production (under controlled circumstances). 
But let the functionality\n> mature under a separate flag and developer builds before they are used in\n> production.\n\nHmm. Okay. I'd still keep that under a compile switch for now\nanyway to keep a cleaner separation and avoid issues where it would be\npossible to inject code in a production build. Note that I was\nplanning to switch one of my buildfarm animals to use it should this\nstuff make it into the tree. And people would be free to use it if\nthey want. If in production, that would be a risk, IMO.\n\n> +/*\n> + * Local cache of injection callbacks already loaded, stored in\n> + * TopMemoryContext.\n> + */\n> +typedef struct InjectionPointArrayEntry\n> +{\n> + char name[INJ_NAME_MAXLEN];\n> + InjectionPointCallback callback;\n> +} InjectionPointArrayEntry;\n> +\n> +static InjectionPointArrayEntry *InjectionPointCacheArray = NULL;\n> \n> Initial size of this array is small, but given the size of code in a\n> given path to be tested, I fear that this may not be sufficient. I\n> feel it's better to use a hash table whose APIs are already available.\n\nI've never seen in recent years a need for a given test to use more\nthan 4~5 points. But I guess that you've seen more than that wanted\nin a prod environment with a fork of Postgres?\n\nI'm OK to switch that to use a hash table in the initial\nimplementation, even for a low number of entries; with points that are\nnot in hot code paths, that should be OK. At least for my cases\nrelated to testing that's OK. Am I right to assume that you mean a\nHTAB in TopMemoryContext?\n\n> I find it pretty useless to expose that as a SQL API. Also adding it\n> in tests would make it usable as an example. In this patchset\n> INJECTION_POINT should be the only way to trigger an injection point.\n\nThat's useful to test the cache logic while providing simple coverage\nfor the core API, and that's cheap. So I'd rather keep this test\nmodule around with these tests.\n\n> That also brings another question. The INJECTION_POINT needs to be added in the\n> core code but can only be used through an extension. I think there should be\n> some rudimentary albeit raw test in core for this. The extension does some good\n> things, like providing built-in actions when the injection is triggered. So, we\n> should keep those too.\n\nYeah, I'd like to agree with that but everything I've seen in\nrecent years is that every setup ends up having different\nassumptions about what they want to do, so my intention is to trim\ndown the core of the patch to a bare acceptable minimum and then work\non top of that.\n\n> I am glad that it covers the frequently needed injections: error, notice and\n> wait. If someone adds multiple injection points for wait and all of them are\n> triggered (in different backends), test_injection_points_wake() would\n> wake them all. When debugging cascaded functionality it's not easy to\n> follow the add, trigger, wake sequence for multiple injections. We should\n> associate a condition variable with the required injection points. Attach\n> should create a condition variable in shared memory for that injection\n> point and detach should remove it. I see this being mentioned in the commit\n> message, but I think it's something we need in the first version of \"wait\" to\n> be useful.\n\nMore to the point, I actually disagree with that, because it could be\npossible as well that the same condition variable is used across\nmultiple points. 
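To make that more concrete, here is a rough sketch of what I mean, with\none extension-side condition variable shared by any number of wait\npoints (all names invented; the shmem area would be set up by the\nextension's shmem hook, and the actual module code may differ):\n\ntypedef struct TestInjectionState\n{\n    ConditionVariable wait_cv;      /* shared by all the wait points */\n    bool        wake_requested;\n} TestInjectionState;\n\nstatic TestInjectionState *inj_state = NULL;    /* set in shmem hook */\n\nstatic void\ncustom_wait_callback(const char *name)\n{\n    ConditionVariablePrepareToSleep(&inj_state->wait_cv);\n    while (!inj_state->wake_requested)\n        ConditionVariableSleep(&inj_state->wait_cv, PG_WAIT_EXTENSION);\n    ConditionVariableCancelSleep();\n}\n\n/* a SQL-callable wake routine of the module could then just do: */\nstatic void\ncustom_wake_all(void)\n{\n    inj_state->wake_requested = true;\n    ConditionVariableBroadcast(&inj_state->wait_cv);\n}\n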
At the end, IMHO, the central hash table should hold\nzero meta-data associated with an injection point like a counter, an\nelog, a condition variable, a sleep time, etc. or any combination of\nthese, and should just know about how to load a callback with\na library path and a routine name. I understand that this is leaving\nthe responsibility of what a callback should do down to the individual\ndeveloper implementing a callback into their own extension, where they\nshould be free to have conditions of their own.\n\nSomething that I agree would be very useful for the in-core APIs,\ndepending on the cases, is to be able to push some information to the\ncallback at runtime to let a callback decide what to do depending on a\nrunning state, including a condition registered when a point was\nattached. See my argument about an _ARG macro that passes down to the\ncallback a (void *).\n\n> At some point we may want to control how many times an injection is\n> triggered by using a count. Often, I have wanted an ERROR to be thrown\n> in a code path once or twice and then stop triggering it. For example\n> to test WAL sender restart after a certain event. We can't really time\n> Detach correctly to avoid multiple restarts. A count is useful in such\n> a case.\n\nYeah. That's also something that can be achieved outside the shmem\nhash table, so this is intentionally not part of InjectionPointHash.\n\n> +/*\n> + * Attach a new injection point.\n> + */\n> +void\n> +InjectionPointAttach(const char *name,\n> + const char *library,\n> + const char *function)\n> +{\n> +#ifdef USE_INJECTION_POINTS\n> \n> In a non-injection-point build this function would be compiled and a call to\n> this function would throw an error. This means that any test we write which\n> uses injection points cannot do so optionally. Tests which can be run with and\n> without injection points built will reduce duplication. We should define this\n> function as a no-op in non-injection-point builds to facilitate such tests.\n\nThe argument goes both ways, I guess. I'd rather choose a hard\nfailure to know that what I am doing is not silently ignored, which is\nwhy I made this choice in the patch.\n\n> Those tests also need to install the extension. That's another pain point.\n> So anyone who wants to run the tests needs to compile the extension too. I\n> am wondering whether we should have this functionality in the core\n> itself somewhere, which would only be useful when built with injection\n> points.\n\nThat's something that may be discussed on top of the backend APIs,\nthough this comes down to how and what kind of meta-data should be\nassociated with the central shmem hash table. Keeping the shmem hash as\nsmall as possible to keep minimal the traces of this code in core is a\ndesign choice that I'd rather not change.\n\n> Many a time it's only a single backend which needs to be subjected to\n> an injection. For inducing ERROR and NOTICE, many a time it's also\n> the same backend that is attached to the client session.\n\nYep. I've used that across multiple sessions. For the basic\nfacility, I think that's the absolute minimum.\n\n> For WAIT, however we\n> need a way to inject from some other session.\n\nYou can do that already with the patch, no? If you know that a\ndifferent session would cross a given path, you could set a macro in\nit. If you wish for this session to wait before that, it is possible\nto use a second point to make it do so. I've used such techniques as\nwell for more complex reproducible failures than what I've posted in\nthe patch series. 
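As a hedged illustration of that two-point technique (all the names\nhere are invented, including the worker routine), the code path crossed\nby the other session could look like:\n\nvoid\nmodule_do_critical_update(void)\n{\n    /* A test can attach a wait callback here, making the session stop\n     * until a second session has had the time to observe its state. */\n    INJECTION_POINT(\"module-before-critical-update\");\n    perform_critical_update();      /* imaginary routine under test */\n    /* And, say, an ERROR or a no-op callback can be attached here. */\n    INJECTION_POINT(\"module-after-critical-update\");\n}\n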
In the last months, I've built a TAP test relying\non 5 deterministic points, I think. Or perhaps 6. That was a fun\nexercise, for a TAP test coded while self-complaining about the core\nbackend code that does not make this stuff easier.\n\n> We might be able to use\n> the current signalling mechanism for that (wake sends SIGUSR1 with\n> reason). Leaving aside WAIT for a moment: when the same backend's\n> behaviour is being controlled, do we really need shared memory, and to\n> affect all the running backends? I see some discussion about\n> being able to trigger only for a given PID, but when that PID is the\n> current backend's own, shared memory is not required.\n\nI am not convinced that there is any need for signalling in most\ncases, as long as you know beforehand the PID of the session you'd\nlike to stop, because this would still require a second session to\nregister a condition based on the known PID. Another possibility that\nI can think of is to use a custom wait event with a second point to\nset up a different condition. At the end, my point is that it is\npossible to control everything in some extension code that holds the\ncallbacks, with an extra shmem area in the extension that associates\nsome meta-data with a point name, for instance.\n--\nMichael", "msg_date": "Tue, 21 Nov 2023 10:25:48 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Tue, Nov 21, 2023 at 6:56 AM Michael Paquier <[email protected]> wrote:\n>\n> >> This facility is hidden behind a specific configure/Meson switch,\n> >> making it a no-op by default:\n> >> --enable-injection-points\n> >> -Dinjection_points={ true | false }\n> >\n> > That's useful, but we will also see demand to enable those in\n> > production (under controlled circumstances). But let the functionality\n> > mature under a separate flag and developer builds before they are used in\n> > production.\n>\n> Hmm. Okay. I'd still keep that under a compile switch for now\n> anyway to keep a cleaner separation and avoid issues where it would be\n> possible to inject code in a production build. Note that I was\n> planning to switch one of my buildfarm animals to use it should this\n> stuff make it into the tree. And people would be free to use it if\n> they want. If in production, that would be a risk, IMO.\n\nMakes sense. Just to be clear - by \"in production\" I mean user\ninstallations - they may be testing environments or production\nenvironments.\n\n>\n> > +/*\n> > + * Local cache of injection callbacks already loaded, stored in\n> > + * TopMemoryContext.\n> > + */\n> > +typedef struct InjectionPointArrayEntry\n> > +{\n> > + char name[INJ_NAME_MAXLEN];\n> > + InjectionPointCallback callback;\n> > +} InjectionPointArrayEntry;\n> > +\n> > +static InjectionPointArrayEntry *InjectionPointCacheArray = NULL;\n> >\n> > Initial size of this array is small, but given the size of code in a\n> > given path to be tested, I fear that this may not be sufficient. I\n> > feel it's better to use a hash table whose APIs are already available.\n>\n> I've never seen in recent years a need for a given test to use more\n> than 4~5 points. 
But I guess that you've seen more than that wanted\n> in a prod environment with a fork of Postgres?\n\nA given case may not require more than 4-5 points, but users may\ncreate scripts with many frequently required injection points and\ninstall handlers for them.\n\n>\n> I'm OK to switch that to use a hash table in the initial\n> implementation, even for a low number of entries; with points that are\n> not in hot code paths, that should be OK. At least for my cases\n> related to testing that's OK. Am I right to assume that you mean a\n> HTAB in TopMemoryContext?\n\nYes.\n\n>\n> > I am glad that it covers the frequently needed injections: error, notice and\n> > wait. If someone adds multiple injection points for wait and all of them are\n> > triggered (in different backends), test_injection_points_wake() would\n> > wake them all. When debugging cascaded functionality it's not easy to\n> > follow the add, trigger, wake sequence for multiple injections. We should\n> > associate a condition variable with the required injection points. Attach\n> > should create a condition variable in shared memory for that injection\n> > point and detach should remove it. I see this being mentioned in the commit\n> > message, but I think it's something we need in the first version of \"wait\" to\n> > be useful.\n>\n> More to the point, I actually disagree with that, because it could be\n> possible as well that the same condition variable is used across\n> multiple points. At the end, IMHO, the central hash table should hold\n> zero meta-data associated with an injection point like a counter, an\n> elog, a condition variable, a sleep time, etc. or any combination of\n> these, and should just know about how to load a callback with\n> a library path and a routine name. I understand that this is leaving\n> the responsibility of what a callback should do down to the individual\n> developer implementing a callback into their own extension, where they\n> should be free to have conditions of their own.\n>\n> Something that I agree would be very useful for the in-core APIs,\n> depending on the cases, is to be able to push some information to the\n> callback at runtime to let a callback decide what to do depending on a\n> running state, including a condition registered when a point was\n> attached. See my argument about an _ARG macro that passes down to the\n> callback a (void *).\n\nThe injection_run function is called from the place where the\ninjection point is declared but that place does not know what\ninjection function is going to be run. So a user cannot pass\narguments to the injection declaration. What injection to run is decided\nby injection_attach, and thus one can pass arguments to it, but then\ninjection_attach stores the information in shared memory, from\nwhere it's picked up by injection_run. So even though you don't want\nto store the arguments in shared memory, you are creating a design\nwhich takes us in that direction eventually - otherwise users\nwill end up writing many injection functions - one for each possible\ncombination of count, sleep, condition variable, etc. But let's see\nwhether that happens to be the case in practice. We will need to\nevolve this feature based on usage.\n\n>\n> > Those tests also need to install the extension. That's another pain point.\n> > So anyone who wants to run the tests needs to compile the extension too. 
I\n> > am wondering whether we should have this functionality in the core\n> > itself somewhere, which would only be useful when built with injection\n> > points.\n>\n> That's something that may be discussed on top of the backend APIs,\n> though this comes down to how and what kind of meta-data should be\n> associated with the central shmem hash table. Keeping the shmem hash as\n> small as possible to keep minimal the traces of this code in core is a\n> design choice that I'd rather not change.\n\nThe shmem hash size won't depend upon the number of functions we write in\nthe core. Yes, it will add to the core code and may add a maintenance\nburden. So I understand your inclination to keep the core minimal.\n\n>\n> > Many a time it's only a single backend which needs to be subjected to\n> > an injection. For inducing ERROR and NOTICE, many a time it's also\n> > the same backend that is attached to the client session.\n>\n> Yep. I've used that across multiple sessions. For the basic\n> facility, I think that's the absolute minimum.\n>\n> > For WAIT, however we\n> > need a way to inject from some other session.\n>\n> You can do that already with the patch, no? If you know that a\n> different session would cross a given path, you could set a macro in\n> it. If you wish for this session to wait before that, it is possible\n> to use a second point to make it do so. I've used such techniques as\n> well for more complex reproducible failures than what I've posted in\n> the patch series. In the last months, I've built a TAP test relying\n> on 5 deterministic points, I think. Or perhaps 6. That was a fun\n> exercise, for a TAP test coded while self-complaining about the core\n> backend code that does not make this stuff easier.\n>\n> > We might be able to use\n> > the current signalling mechanism for that (wake sends SIGUSR1 with\n> > reason). Leaving aside WAIT for a moment: when the same backend's\n> > behaviour is being controlled, do we really need shared memory, and to\n> > affect all the running backends? I see some discussion about\n> > being able to trigger only for a given PID, but when that PID is the\n> > current backend's own, shared memory is not required.\n>\n> I am not convinced that there is any need for signalling in most\n> cases, as long as you know beforehand the PID of the session you'd\n> like to stop, because this would still require a second session to\n> register a condition based on the known PID. Another possibility that\n> I can think of is to use a custom wait event with a second point to\n> set up a different condition. At the end, my point is that it is\n> possible to control everything in some extension code that holds the\n> callbacks, with an extra shmem area in the extension that associates\n> some meta-data with a point name, for instance.\n\nIf the session which attaches to an injection point is the same as the\nsession where the injection point is triggered (most of the ERROR and\nNOTICE injections will see this pattern), we don't need shared memory.\nThere's a performance penalty to it since injection_run will look up\nshared memory. For WAIT we may or may not need shared memory. But\nlet's see what others think and what usage patterns we see.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 22 Nov 2023 21:23:21 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) 
for more\n advanced tests" }, { "msg_contents": "On Wed, Nov 22, 2023 at 09:23:21PM +0530, Ashutosh Bapat wrote:\n> On Tue, Nov 21, 2023 at 6:56 AM Michael Paquier <[email protected]> wrote:\n>> I've never seen in recent years a need for a given test to use more\n>> than 4~5 points.  But I guess that you've seen more than that wanted\n>> in a prod environment with a fork of Postgres?\n> \n> A given case may not require more than 4-5 points, but users may\n> create scripts with many frequently required injection points and\n> install handlers for them.\n\nSure, if a callback is generic enough it could be shared across\nmultiple points.\n\n> The injection_run function is called from the place where the\n> injection point is declared but that place does not know what\n> injection function is going to be run. So a user cannot pass\n> arguments to the injection declaration.\n\nIt is possible to make that predictable but it means that a callback\nis most likely to be used by one single point.  This makes extensions\nin charge of holding the callbacks more complicated, to the benefit of\nkeeping a minimal footprint in the backend code.\n\n> What injection to run is decided\n> by the injection_attach and thus one can pass arguments to it but then\n> injection_attach stores the information in the shared memory from\n> where it's picked up by injection_run. So even though you don't want\n> to store the arguments in the shared memory, you are creating a design\n> which takes us in that direction eventually - otherwise users\n> will end up writing many injection functions - one for each possible\n> combination of count, sleep, condition variable, etc. But let's see\n> whether that happens to be the case in practice. We will need to\n> evolve this feature based on usage.\n\nA one-one mapping between callback and point is not always necessary.\nIf you wish to use a combination of N points with a sleep callback and\ndifferent sleep times, one can just register a second shmem area in\nthe extension holding the callbacks that links the point names with\nthe sleep time to use.\n\n> shmem hash size won't depend upon the number of functions we write in\n> the core. Yes it will add to the core code and may add maintenance\n> burden. So I understand your inclination to keep the core minimal.\n\nYeah, without a clear benefit, my point is just to throw the\nresponsibility to extension developers for now, which would mean the\naddition of tests that depend on test_injection_points/, or just\ninstall this extension optionally in other code paths that need it.\nMaybe 0004 should be in src/test/recovery/ and do that, actually..\nI'll most likely agree with extending all the backend stuff in a more\nmeaningful way, but I am not sure which method should be enforced.\n\n> If the session which attaches to an injection point is same as the\n> session where the injection point is triggered (most of the ERROR and\n> NOTICE injections will see this pattern), we don't need shared memory.\n> There's a performance penalty to it since injection_run will look up\n> shared memory. For WAIT we may or may not need shared memory. But\n> let's see what other think and what usage patterns we see.\n\nThe first POC of the patch that you can find at the top of this thread\ndid that, actually, but this is too limited.  IMO, linking things to a\ncentral table is just *much* more useful.\n\nI've implemented a v5 that switches the cache to use a second hash\ntable on TopMemoryContext for the cache instead of an array. 
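In rough terms, the local cache part now looks like the following (a\nsimplified sketch of the idea, not the exact patch code):\n\n/* local cache of injection callbacks, created lazily */\nstatic HTAB *InjectionPointCache = NULL;\n\nstatic void\ninjection_point_cache_add(const char *name,\n    InjectionPointCallback callback)\n{\n    InjectionPointCacheEntry *entry;\n\n    if (InjectionPointCache == NULL)\n    {\n        HASHCTL ctl;\n\n        ctl.keysize = sizeof(char[INJ_NAME_MAXLEN]);\n        ctl.entrysize = sizeof(InjectionPointCacheEntry);\n        ctl.hcxt = TopMemoryContext;\n        InjectionPointCache = hash_create(\"InjectionPoint cache\",\n            16 /* arbitrary initial size */, &ctl,\n            HASH_ELEM | HASH_STRINGS | HASH_CONTEXT);\n    }\n\n    /* insert or update the callback for this point name */\n    entry = (InjectionPointCacheEntry *)\n        hash_search(InjectionPointCache, name, HASH_ENTER, NULL);\n    entry->callback = callback;\n}\n\n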
This\nmakes the cache handling slightly cleaner, so your suggestion was\nright.  0003 and 0004 are the same as previously, passing or failing\nunder the same conditions.  I'm wondering if folks have other comments\nabout 0001 and 0002?  It sounds to me like the consensus is that this\nstuff is useful and that there are no strong objections, so feel free\nto comment.\n\nI don't want to propose 0003 in the tree, just an improved version of\n0004 for the test coverage (still need to improve that).\n--\nMichael", "msg_date": "Fri, 24 Nov 2023 10:56:04 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Fri, Nov 24, 2023 at 7:26 AM Michael Paquier <[email protected]> wrote:\n> If you wish to use a combination of N points with a sleep callback and\n> different sleep times, one can just register a second shmem area in\n> the extension holding the callbacks that links the point names with\n> the sleep time to use.\n>\n\nInteresting idea. For that the callback needs to know the injection\npoint name. At least we should pass that to the callback. It's a trivial\nthing to do.\n\n> > shmem hash size won't depend upon the number of functions we write in\n> > the core. Yes it will add to the core code and may add maintenance\n> > burden. So I understand your inclination to keep the core minimal.\n>\n> Yeah, without a clear benefit, my point is just to throw the\n> responsibility to extension developers for now, which would mean the\n> addition of tests that depend on test_injection_points/, or just\n> install this extension optionally in other code paths that need it.\n> Maybe 0004 should be in src/test/recovery/ and do that, actually..\n\nThat might work, but in order to run tests in that directory one has\nto also install the extension. Do we have a precedent for that kind of\ndependency?\n\n>\n> > If the session which attaches to an injection point is same as the\n> > session where the injection point is triggered (most of the ERROR and\n> > NOTICE injections will see this pattern), we don't need shared memory.\n> > There's a performance penalty to it since injection_run will look up\n> > shared memory. For WAIT we may or may not need shared memory. But\n> > let's see what other think and what usage patterns we see.\n>\n> The first POC of the patch that you can find at the top of this thread\n> did that, actually, but this is too limited.  IMO, linking things to a\n> central table is just *much* more useful.\n>\n> I've implemented a v5 that switches the cache to use a second hash\n> table on TopMemoryContext for the cache instead of an array. This\n> makes the cache handling slightly cleaner, so your suggestion was\n> right.\n\nGlad that you liked the outcome.\n\n> 0003 and 0004 are the same as previously, passing or failing\n> under the same conditions. I'm wondering if folks have other comments\n> about 0001 and 0002? It sounds to me like the consensus is that this\n> stuff is useful\n\nI think so.\n\n> and that there are no strong objections, so feel free\n> to comment.\n\nLet's get some more opinions on the design. 
I will review the detailed\ncode then.\n\n>\n> I don't want to propose 0003 in the tree, just an improved version of\n> 0004 for the test coverage (still need to improve that).\n\nAre you working on v6 already?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 24 Nov 2023 16:37:58 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Fri, Nov 24, 2023 at 04:37:58PM +0530, Ashutosh Bapat wrote:\n> Interesting idea. For that the callback needs to know the injection\n> point name. At least we should pass that to the callback. It's a trivial\n> thing to do.\n\nThis is what's done from the beginning, as well as of 0001 in the v5\nseries:\n+INJECTION_POINT(name);\n[...]\n+ injection_callback(name);\n\n> That might work, but in order to run tests in that directory one has\n> to also install the extension. Do we have a precedent for that kind of\n> dependency?\n\nYes, please see EXTRA_INSTALL in some of the Makefiles.  This can\ninstall stuff from paths different than the location where the tests\nare run.\n\n>> and that there are no strong objections, so feel free\n>> to comment.\n> \n> Let's get some more opinions on the design. I will review the detailed\n> code then.\n\nSure.  Thanks.\n\n>> I don't want to propose 0003 in the tree, just an improved version of\n>> 0004 for the test coverage (still need to improve that).\n> \n> Are you working on v6 already?\n\nNo, what would be the point at this stage?  I don't have much more to\nadd to 0001 and 0002 at the moment, which focus on the core of the\nproblem.\n--\nMichael", "msg_date": "Fri, 24 Nov 2023 23:07:23 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Fri, Nov 24, 2023 at 7:37 PM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Nov 24, 2023 at 04:37:58PM +0530, Ashutosh Bapat wrote:\n> > Interesting idea. For that the callback needs to know the injection\n> > point name. At least we should pass that to the callback. It's a trivial\n> > thing to do.\n>\n> This is what's done from the beginning, as well as of 0001 in the v5\n> series:\n> +INJECTION_POINT(name);\n> [...]\n> + injection_callback(name);\n\nIn my first look I missed the actual call to the injection callback in\nInjectionPointRun()\ninjection_callback(name);\n\nSorry for that.\n\nThe way I see it is that an extension using this functionality will\ncreate an auxiliary lookup table keyed by the injection point name to\nobtain the injection point specific arguments (sleep time, count, etc.)\nin the shared memory or local memory. Every time an injection callback\nis called it will consult this lookup table to get the arguments.\nThat looks ok to me. There might be other ways to achieve the same\neffect. We will learn and absorb whatever benefits core and the users.\nI like that.\n\n>\n> > That might work, but in order to run tests in that directory one has\n> > to also install the extension. Do we have a precedent for that kind of\n> > dependency?\n>\n> Yes, please see EXTRA_INSTALL in some of the Makefiles.  This can\n> install stuff from paths different than the location where the tests\n> are run.\n\nWFM then.\n\n>\n> >> and that there are no strong objections, so feel free\n> >> to comment.\n> >\n> > Let's get some more opinions on the design. 
I will review the detailed\n> > code then.\n>\n> Sure. Thanks.\n>\n> >> I don't want to propose 0003 in the tree, just an improved version of\n> >> 0004 for the test coverage (still need to improve that).\n> >\n> > Are you working on v6 already?\n>\n> No, what would be the point at this stage? I don't have much more to\n> add to 0001 and 0002 at the moment, which focus on the core of the\n> problem.\n\nSince you wrote \"(still need to improve ...)\", I thought you were\nworking on v6. No problem. Sorry for the confusion.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 27 Nov 2023 12:14:05 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Mon, Nov 27, 2023 at 12:14:05PM +0530, Ashutosh Bapat wrote:\n> Since you wrote \"(still need to improve ...)\", I thought you were\n> working on v6. No problem. Sorry for the confusion.\n\nI see why my previous message could be confusing.  Sorry about that.\n--\nMichael", "msg_date": "Tue, 28 Nov 2023 07:36:48 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Tue, Nov 28, 2023 at 4:07 AM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Nov 27, 2023 at 12:14:05PM +0530, Ashutosh Bapat wrote:\n> > Since you wrote \"(still need to improve ...)\", I thought you were\n> > working on v6. No problem. Sorry for the confusion.\n>\n> I see why my previous message could be confusing.  Sorry about that.\n\nI haven't specifically done a review or testing of this patch, but I\nhave used this for testing the CLOG group update code with my\nSLRU-specific changes and I found it quite helpful to test some of the\nconcurrent areas where you need to stop processing somewhere in the\nmiddle of the code; testing that area without this kind of\ninjection point framework is really difficult or may not even be\npossible.  We wanted to test the case of clog group update where we\ncan get multiple processes added to a single group and get the xid\nstatus updated by the group leader, you can refer to my test in that\nthread[1] (the last patch test_group_commit.patch is using this\nframework for testing).  Overall I feel this framework is quite useful\nand easy to use as well.\n\n[1] https://www.postgresql.org/message-id/CAFiTN-udSTGG_t5n9Z3eBbb4_%3DzNoKU%2B8FP-S6zpv-r4Gm-Y%2BQ%40mail.gmail.com\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Dec 2023 11:09:45 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Mon, Dec 11, 2023 at 11:09:45AM +0530, Dilip Kumar wrote:\n> I haven't specifically done a review or testing of this patch, but I\n> have used this for testing the CLOG group update code with my\n> SLRU-specific changes and I found it quite helpful to test some of the\n> concurrent areas where you need to stop processing somewhere in the\n> middle of the code; testing that area without this kind of\n> injection point framework is really difficult or may not even be\n> possible. 
We wanted to test the case of clog group update where we\n> can get multiple processes added to a single group and get the xid\n> status updated by the group leader, you can refer to my test in that\n> thread[1] (the last patch test_group_commit.patch is using this\n> framework for testing).\n\nCould you be more specific? test_group_commit.patch includes this\nline but there is nothing specific about this injection point getting\nused in a test or a callback assigned to it:\n./test_group_commit.patch:+\tINJECTION_POINT(\"ClogGroupCommit\");\n\n> Overall I feel this framework is quite useful\n> and easy to use as well.\n\nCool, thanks.\n--\nMichael", "msg_date": "Mon, 11 Dec 2023 18:44:46 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Mon, Dec 11, 2023 at 3:14 PM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Dec 11, 2023 at 11:09:45AM +0530, Dilip Kumar wrote:\n> > I haven't specifically done a review or testing of this patch, but I\n> > have used this for testing the CLOG group update code with my\n> > SLRU-specific changes and I found it quite helpful to test some of the\n> > concurrent areas where you need to stop processing somewhere in the\n> > middle of the code and testing that area without this kind of\n> > injection point framework is really difficult or may not be even\n> > possible. We wanted to test the case of clog group update where we\n> > can get multiple processes added to a single group and get the xid\n> > status updated by the group leader, you can refer to my test in that\n> > thread[1] (the last patch test_group_commit.patch is using this\n> > framework for testing).\n>\n> Could you be more specific? test_group_commit.patch includes this\n> line but there is nothing specific about this injection point getting\n> used in a test or a callback assigned to it:\n> ./test_group_commit.patch:+ INJECTION_POINT(\"ClogGroupCommit\");\n\nOops, I only included the code changes where I am adding injection\npoints and some comments to verify that, but missed the actual test\nfile. Attaching it here.\n\nNote: I think the latest patches are conflicting with the head, can you rebase?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 12 Dec 2023 10:27:09 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Tue, Dec 12, 2023 at 10:27:09AM +0530, Dilip Kumar wrote:\n> Oops, I only included the code changes where I am adding injection\n> points and some comments to verify that, but missed the actual test\n> file. Attaching it here.\n\nI see. Interesting that this requires persistent connections to work.\nThat's something I've found clunky to rely on when the scenarios a\ntest needs to deal with are rather complex. That's an area that could\nbe made easier to use outside of this patch.. Something got proposed\nby Andrew Dunstan to make the libpq routines usable through a perl\nmodule, for example.\n\n> Note: I think the latest patches are conflicting with the head, can you rebase?\n\nIndeed, as per the recent manipulations in ipci.c for the shmem\ninitialization areas. 
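For the archives, the ipci.c side of this is just the usual pair of\ncalls for something living in the main shmem area, roughly as below\n(assuming the usual *ShmemSize()/*ShmemInit() naming convention), so\nthe rebase is mechanical:\n\n/* in the shmem size estimation path of ipci.c */\nsize = add_size(size, InjectionPointShmemSize());\n\n/* in the shmem initialization path of ipci.c */\nInjectionPointShmemInit();\n\n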
Here goes a v6.\n--\nMichael", "msg_date": "Tue, 12 Dec 2023 11:44:57 +0100", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Tue, Dec 12, 2023 at 4:15 PM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Dec 12, 2023 at 10:27:09AM +0530, Dilip Kumar wrote:\n> > Oops, I only included the code changes where I am adding injection\n> > points and some comments to verify that, but missed the actual test\n> > file. Attaching it here.\n>\n> I see.  Interesting that this requires persistent connections to work.\n> That's something I've found clunky to rely on when the scenarios a\n> test needs to deal with are rather complex.  That's an area that could\n> be made easier to use outside of this patch..  Something got proposed\n> by Andrew Dunstan to make the libpq routines usable through a perl\n> module, for example.\n>\n> > Note: I think the latest patches are conflicting with the head, can you rebase?\n>\n> Indeed, as per the recent manipulations in ipci.c for the shmem\n> initialization areas.  Here goes a v6.\n\nSorry for replying late here. Another minor conflict has arisen again.\nIt's minor enough to be ignored for a review.\n\nOn Tue, Nov 21, 2023 at 6:56 AM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Nov 20, 2023 at 04:53:45PM +0530, Ashutosh Bapat wrote:\n> > On Wed, Oct 25, 2023 at 9:43 AM Michael Paquier <[email protected]> wrote:\n> >> I have added some documentation to explain that, as well.  I am not\n> >> wedded to the name proposed in the patch, so if you feel there is\n> >> better, feel free to propose ideas.\n> >\n> > Actually with Attach and Detach terminology, INJECTION_POINT becomes\n> > the place where we \"declare\" the injection point. So the documentation\n> > needs to first explain INJECTION_POINT and then explain the other\n> > operations.\n>\n> Sure.\n\nThis discussion has not been addressed in v6. I think the interface\nneeds to be documented in the order below:\nINJECTION_POINT - this declares an injection point - i.e. a place in\ncode where external code can be injected (and run).\nInjectionPointAttach() - this is used to associate (\"attach\") external\ncode to an injection point.\nInjectionPointDetach() - this is used to disassociate (\"detach\")\nexternal code from an injection point.\n\nSpecifying that InjectionPointAttach() \"defines\" an injection point\ngives the impression that the injection point will be \"somehow\" added\nto the code by calling InjectionPointAttach() which is not true. For\nInjectionPointAttach() to be useful, the first argument to it should\nbe something already \"declared\" in the code using INJECTION_POINT().\nHence INJECTION_POINT needs to be mentioned in the documentation\nfirst, followed by Attach and detach. The documentation needs to be\nrephrased to use terms \"declare\", \"attach\" and \"detach\" instead of\n\"define\", \"run\". The first set is aligned with the functionality\nwhereas the second set is aligned with the implementation.\n\nEven if an INJECTION_POINT is not \"declared\", attach would succeed but\ndoesn't do anything. I think this needs to be made clear in the\ndocumentation. Better if we could somehow make Attach() fail if the\nspecified injection point is not \"declared\" using INJECTION_POINT. Of\ncourse we don't want to bloat the hash table with all \"declared\"\ninjection points even if they aren't being attached to and hence not\nused. 
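To illustrate the ordering, the lifecycle reads naturally as below\n(the point and callback names are made up for the example):\n\n/* 1. declare: a place in core code where external code may run */\nINJECTION_POINT(\"before-checkpoint\");\n\n/* 2. attach: associate external code with the declared point */\nInjectionPointAttach(\"before-checkpoint\", \"my_test_module\",\n    \"my_callback\");\n\n/* 3. detach: disassociate the external code from the point */\nInjectionPointDetach(\"before-checkpoint\");\n\n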
I think, exposing the current injection point strings as\n#defines and encouraging users to use these macros instead of string\nliterals will be a good start.\n\nWith the current implementation it's possible to \"declare\" injection\npoint with same name at multiple places. It's useful but is it\nintended?\n\n/* Field sizes */\n#define INJ_NAME_MAXLEN 64\n#define INJ_LIB_MAXLEN 128\n#define INJ_FUNC_MAXLEN 128\nI think these limits should be either documented or specified in the\nerror messages for users to fix their code in case of\nerrors/unexpected behaviour.\n\nHere are some code level comments on 0001\n\n+typedef struct InjectionPointArrayEntry\n\nThis is not an array entry anymore. I think we should rename\nInjectionPointEntry as SharedInjectionPointEntry and InjectionPointArrayEntry\nas LocalInjectionPointEntry.\n\n+/* utilities to handle the local array cache */\n+static void\n+injection_point_cache_add(const char *name,\n+ InjectionPointCallback callback)\n+{\n... snip ...\n+\n+ entry = (InjectionPointCacheEntry *)\n+ hash_search(InjectionPointCache, name, HASH_ENTER, &found);\n+\n+ if (!found)\n\nThe function is called only when the injection point is not found in the local\ncache. So this condition will always be true. An Assert will help to make it\nclear and also prevent an unintended callback replacement.\n\n+#ifdef USE_INJECTION_POINTS\n+static bool\n+file_exists(const char *name)\n\nThere's similar function in jit.c and dfmgr.c. Can we not reuse that code?\n\n+ /* Save the entry */\n+ memcpy(entry_by_name->name, name, sizeof(entry_by_name->name));\n+ entry_by_name->name[INJ_NAME_MAXLEN - 1] = '\\0';\n+ memcpy(entry_by_name->library, library, sizeof(entry_by_name->library));\n+ entry_by_name->library[INJ_LIB_MAXLEN - 1] = '\\0';\n+ memcpy(entry_by_name->function, function, sizeof(entry_by_name->function));\n+ entry_by_name->function[INJ_FUNC_MAXLEN - 1] = '\\0';\n\nMost of the code is using strncpy instead of memcpy. Why is this code different?\n\n+ injection_callback = injection_point_cache_get(name);\n+ if (injection_callback == NULL)\n+ {\n+ char path[MAXPGPATH];\n+\n+ /* Found, so just run the callback registered */\n\nThe condition indicates that the callback was not found. Comment looks wrong.\n\n+ snprintf(path, MAXPGPATH, \"%s/%s%s\", pkglib_path,\n+ entry_by_name->library, DLSUFFIX);\n+\n+ if (!file_exists(path))\n+ elog(ERROR, \"could not find injection library \\\"%s\\\"\", path);\n+\n+ injection_callback = (InjectionPointCallback)\n+ load_external_function(path, entry_by_name->function, true, NULL);\n+\n+ /* add it to the local cache when found */\n+ injection_point_cache_add(name, injection_callback);\n+ }\n+\n\nConsider case\n\nBackend 2\nInjectionPointAttach(\"xyz\", \"abc\", \"pqr\");\n\nBackend 1\nINJECTION_POINT(\"xyz\");\n\nBackend 2\nInjectionPointDetach(\"xyz\");\nInjectionPointAttach(\"xyz\", \"uvw\", \"lmn\");\n\nBackend 1\nINJECTION_POINT(\"xyz\");\n\nIIUC, the last INJECTION_POINT would run abc.pqr instead of uvw.lmn.\nAm I correct?\n\nTo fix this, we have to either (a) save the qualified name of the function in the local\ncache too, OR (b) resolve the function name every time INJECTION_POINT is invoked\nand is found in the shared hash table. The first option is cheaper, I think.\nBut it will be good if we can invalidate the local entry when the global entry\nchanges. To keep the code simple, we may choose to ignore close race conditions\nwhere INJECTION_POINT is run while InjectionPointAttach or InjectionPointDetach\nis happening. 
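For option (a), the local entry would grow to something like this\n(hypothetical layout):\n\ntypedef struct InjectionPointCacheEntry\n{\n    char name[INJ_NAME_MAXLEN]; /* hash key */\n    char library[INJ_LIB_MAXLEN]; /* saved to detect a re-attach ... */\n    char function[INJ_FUNC_MAXLEN]; /* ... with a different callback */\n    InjectionPointCallback callback;\n} InjectionPointCacheEntry;\n\n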
But this way we don't have to look up shared hash table every\ntime INJECTION_POINT is invoked thus improving performance.\n\nI will look at 0002 next.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 2 Jan 2024 15:36:12 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "I'd like to spend some more time reviewing this one, but here are a couple\nof comments.\n\nOn Tue, Dec 12, 2023 at 11:44:57AM +0100, Michael Paquier wrote:\n> +/*\n> + * Allocate shmem space for dynamic shared hash.\n> + */\n> +void\n> +InjectionPointShmemInit(void)\n> +{\n> +#ifdef USE_INJECTION_POINTS\n> +\tHASHCTL\t\tinfo;\n> +\n> +\t/* key is a NULL-terminated string */\n> +\tinfo.keysize = sizeof(char[INJ_NAME_MAXLEN]);\n> +\tinfo.entrysize = sizeof(InjectionPointEntry);\n> +\tInjectionPointHash = ShmemInitHash(\"InjectionPoint hash\",\n> +\t\t\t\t\t\t\t\t\t INJECTION_POINT_HASH_INIT_SIZE,\n> +\t\t\t\t\t\t\t\t\t INJECTION_POINT_HASH_MAX_SIZE,\n> +\t\t\t\t\t\t\t\t\t &info,\n> +\t\t\t\t\t\t\t\t\t HASH_ELEM | HASH_STRINGS);\n> +#endif\n> +}\n\nShould we specify HASH_FIXED_SIZE, too? This hash table will be in the\nmain shared memory segment and therefore won't be able to expand too far\nbeyond the declared maximum size.\n\n> +\t/*\n> +\t * Check if the callback exists in the local cache, to avoid unnecessary\n> +\t * external loads.\n> +\t */\n> +\tinjection_callback = injection_point_cache_get(name);\n> +\tif (injection_callback == NULL)\n> +\t{\n> +\t\tchar\t\tpath[MAXPGPATH];\n> +\n> +\t\t/* Found, so just run the callback registered */\n> +\t\tsnprintf(path, MAXPGPATH, \"%s/%s%s\", pkglib_path,\n> +\t\t\t\t entry_by_name->library, DLSUFFIX);\n> +\n> +\t\tif (!file_exists(path))\n> +\t\t\telog(ERROR, \"could not find injection library \\\"%s\\\"\", path);\n> +\n> +\t\tinjection_callback = (InjectionPointCallback)\n> +\t\t\tload_external_function(path, entry_by_name->function, true, NULL);\n> +\n> +\t\t/* add it to the local cache when found */\n> +\t\tinjection_point_cache_add(name, injection_callback);\n> +\t}\n\nI'm wondering how important it is to cache the callbacks locally.\nload_external_function() won't reload an already-loaded library, so AFAICT\nthis is ultimately just saving a call to dlsym().\n\n> + <literal>name</literal> is the name of the injection point, that\n> + will execute the <literal>function</literal> loaded from\n> + <literal>library</library>.\n> + Injection points are saved in a hash table in shared memory, and\n> + last until the server is shut down.\n> + </para>\n\nI think </library> is supposed to be </literal> here.\n\n> +++ b/src/test/modules/test_injection_points/t/002_invalid_checkpoint_after_promote.pl\n\n0003 and 0004 add tests to the test_injection_points module. Is the idea\nthat we'd add any tests that required injection points here? I think it'd\nbe better if we could move the tests closer to the logic they're testing,\nbut perhaps that is difficult because you also need to define the callback\nfunctions somewhere. Hm...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 2 Jan 2024 23:14:56 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) 
for more\n advanced tests" }, { "msg_contents": "On Tue, Jan 02, 2024 at 11:14:56PM -0600, Nathan Bossart wrote:\n> Should we specify HASH_FIXED_SIZE, too? This hash table will be in the\n> main shared memory segment and therefore won't be able to expand too far\n> beyond the declared maximum size.\n\nGood point.\n\n> I'm wondering how important it is to cache the callbacks locally.\n> load_external_function() won't reload an already-loaded library, so AFAICT\n> this is ultimately just saving a call to dlsym().\n\nThis keeps a copy to a callback under the same address space, and I\nguess that it would matter if the code where a callback is added gets\nvery hot because this means less function pointers. At the end I\nwould keep the cache as the code to handle it is neither complex nor\nlong, while being isolated in its own paths.\n\n> I think </library> is supposed to be </literal> here.\n\nOkay.\n\n> 0003 and 0004 add tests to the test_injection_points module. Is the idea\n> that we'd add any tests that required injection points here? I think it'd\n> be better if we could move the tests closer to the logic they're testing,\n> but perhaps that is difficult because you also need to define the callback\n> functions somewhere. Hm...\n\nYeah. Agreed that the final result should not have these tests in the\nmodule test_injection_points. What I was thinking here is to move\n002_invalid_checkpoint_after_promote.pl to src/test/recovery/ and pull\nthe module with the callbacks with an EXTRA_INSTALL.\n--\nMichael", "msg_date": "Thu, 4 Jan 2024 08:53:11 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Thu, Jan 4, 2024 at 5:23 AM Michael Paquier <[email protected]> wrote:\n>\n> > 0003 and 0004 add tests to the test_injection_points module. Is the idea\n> > that we'd add any tests that required injection points here? I think it'd\n> > be better if we could move the tests closer to the logic they're testing,\n> > but perhaps that is difficult because you also need to define the callback\n> > functions somewhere. Hm...\n>\n> Yeah. Agreed that the final result should not have these tests in the\n> module test_injection_points. What I was thinking here is to move\n> 002_invalid_checkpoint_after_promote.pl to src/test/recovery/ and pull\n> the module with the callbacks with an EXTRA_INSTALL.\n\n0003 and 0004 are using the extension in this module for some serious\ntesting. The name of the extension test_injection_point indicates that\nit's for testing injection points and not for some serious use of\ninjection callbacks it adds. Changes 0003 and 0004 suggest otherwise.\nI suggest we move test_injection_points from src/test/modules to\ncontrib/ and rename it as \"injection_points\". The test files may still\nbe named as test_injection_point. The TAP tests in 0003 and 0004 once\nmoved to their appropriate places, will load injection_point extension\nand use it. That way predefined injection point callbacks will also be\navailable for others to use.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 4 Jan 2024 18:04:20 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) 
for more\n advanced tests" }, { "msg_contents": "On Tue, Jan 2, 2024 at 3:36 PM Ashutosh Bapat\n<[email protected]> wrote:\n\n>\n> I will look at 0002 next.\n\nOne more comment on 0001\nInjectionPointAttach() doesn't test whether the given function exists\nin the given library. Even if InjectionPointAttach() succeeds,\nINJECTION_POINT might throw error because the function doesn't exist.\nThis can be seen as an unwanted behaviour. I think\nInjectionPointAttach() should test whether the function exists and\npossibly load it as well by adding it to the local cache.\n\n0002 comments\n--- /dev/null\n+++ b/src/test/modules/test_injection_points/expected/test_injection_points.out\n\nWhen built without injection point support, this test fails. We should\nadd an alternate output file for such a build so that the behaviour\nwith and without injection point support is tested. Or set up things\nsuch that the test is not run under make check in that directory. I\nwill prefer the first option.\n\n+\n+SELECT test_injection_points_run('TestInjectionError'); -- error\n+ERROR: error triggered for injection point TestInjectionError\n+-- Re-load and run again.\n\nWhat's getting Re-loaded here? \\c will create a new connection and\nthus a new backend. Maybe the comment should say \"test in a fresh\nbackend\" or something of that sort?\n\n+\n+SELECT test_injection_points_run('TestInjectionError'); -- error\n+ERROR: error triggered for injection point TestInjectionError\n+-- Remove one entry and check the other one.\n\nLooks confusing to me, we are testing the one removed as well. Am I\nmissing something?\n\n+(1 row)\n+\n+-- All entries removed, nothing happens\n\nWe aren't removing all entries TestInjectionLog2 is still there. Am I\nmissing something?\n\n0003 looks mostly OK.\n\n0004 comments\n\n+\n+# after recovery, the server will not start, and log PANIC: could not\nlocate a valid checkpoint record\n\nIIUC the comment describes the behaviour with 7863ee4def65 reverted.\nBut the test after this comment is written for the behaviour with\n7863ee4def65. That's confusing. Is the intent to describe both\nbehaviours in the comment?\n\n+\n+ /* And sleep.. */\n+ ConditionVariablePrepareToSleep(&inj_state->wait_point);\n+ ConditionVariableSleep(&inj_state->wait_point, test_injection_wait_event);\n+ ConditionVariableCancelSleep();\n\nAccording to the prologue of ConditionVariableSleep(), that function\nshould be called in a loop checking for the desired condition. All the\ncallers that I examined follow that pattern. I think we need to follow\nthat pattern here as well.\n\nBelow comment from ConditionVariableTimedSleep() makes me think that\nthe caller of ConditionVariableSleep() can be woken up even if the\ncondition variable was not signaled. That's why the while() loop\naround ConditionVariableSleep().\n\n* If we're still in the wait list, then the latch must have been set\n* by something other than ConditionVariableSignal; though we don't\n* guarantee not to return spuriously, we'll avoid this obvious case.\n*/.\n\nThat's all I have for now.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 4 Jan 2024 18:22:35 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) 
for more\n advanced tests" }, { "msg_contents": "On Thu, Jan 04, 2024 at 08:53:11AM +0900, Michael Paquier wrote:\n> On Tue, Jan 02, 2024 at 11:14:56PM -0600, Nathan Bossart wrote:\n>> I'm wondering how important it is to cache the callbacks locally.\n>> load_external_function() won't reload an already-loaded library, so AFAICT\n>> this is ultimately just saving a call to dlsym().\n> \n> This keeps a copy to a callback under the same address space, and I\n> guess that it would matter if the code where a callback is added gets\n> very hot because this means less function pointers. At the end I\n> would keep the cache as the code to handle it is neither complex nor\n> long, while being isolated in its own paths.\n\nFair enough.\n\n>> 0003 and 0004 add tests to the test_injection_points module. Is the idea\n>> that we'd add any tests that required injection points here? I think it'd\n>> be better if we could move the tests closer to the logic they're testing,\n>> but perhaps that is difficult because you also need to define the callback\n>> functions somewhere. Hm...\n> \n> Yeah. Agreed that the final result should not have these tests in the\n> module test_injection_points. What I was thinking here is to move\n> 002_invalid_checkpoint_after_promote.pl to src/test/recovery/ and pull\n> the module with the callbacks with an EXTRA_INSTALL.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 4 Jan 2024 16:24:23 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Thu, Jan 04, 2024 at 06:04:20PM +0530, Ashutosh Bapat wrote:\n> 0003 and 0004 are using the extension in this module for some serious\n> testing. The name of the extension test_injection_point indicates that\n> it's for testing injection points and not for some serious use of\n> injection callbacks it adds. Changes 0003 and 0004 suggest otherwise.\n\nYeah, I think test_injection_point should be reserved for testing the\ninjection point machinery.\n\n> I suggest we move test_injection_points from src/test/modules to\n> contrib/ and rename it as \"injection_points\". The test files may still\n> be named as test_injection_point. The TAP tests in 0003 and 0004 once\n> moved to their appropriate places, will load injection_point extension\n> and use it. That way predefined injection point callbacks will also be\n> available for others to use.\n\nRather than defining a module somewhere that tests would need to load,\nshould we just put the common callbacks in the core server? Unless there's\na strong reason to define them elsewhere, that could be a nice way to save\na step in the tests.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 4 Jan 2024 16:31:02 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Thu, Jan 04, 2024 at 04:31:02PM -0600, Nathan Bossart wrote:\n> On Thu, Jan 04, 2024 at 06:04:20PM +0530, Ashutosh Bapat wrote:\n>> 0003 and 0004 are using the extension in this module for some serious\n>> testing. The name of the extension test_injection_point indicates that\n>> it's for testing injection points and not for some serious use of\n>> injection callbacks it adds. 
Changes 0003 and 0004 suggest otherwise.\n> \n> Yeah, I think test_injection_point should be reserved for testing the\n> injection point machinery.\n\nSure.  FWIW, it makes sense to me to keep the SQL interface and the\ncallbacks in the module, per the reasons below.\n\n>> I suggest we move test_injection_points from src/test/modules to\n>> contrib/ and rename it as \"injection_points\". The test files may still\n>> be named as test_injection_point. The TAP tests in 0003 and 0004 once\n>> moved to their appropriate places, will load injection_point extension\n>> and use it. That way predefined injection point callbacks will also be\n>> available for others to use.\n> \n> Rather than defining a module somewhere that tests would need to load,\n> should we just put the common callbacks in the core server?  Unless there's\n> a strong reason to define them elsewhere, that could be a nice way to save\n> a step in the tests.\n\nNah, having some pre-existing callbacks existing in the backend is\nagainst the original minimalistic design spirit.  These would also\nrequire an SQL interface, and the interface design also depends on the\nfunctions registering them when pushing down custom conditions.\nPushing that down to extensions to do what they want will lead to less\nnoise, particularly if you consider that we will most likely want to\ntweak the callback interfaces for backpatched bugs.  That's also why I\nthink contrib/ is not a good idea, src/test/modules/ serving the\nactual testing purpose here.\n--\nMichael", "msg_date": "Fri, 5 Jan 2024 08:38:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Fri, Jan 5, 2024 at 5:08 AM Michael Paquier <[email protected]> wrote:\n>\n> >> I suggest we move test_injection_points from src/test/modules to\n> >> contrib/ and rename it as \"injection_points\". The test files may still\n> >> be named as test_injection_point. The TAP tests in 0003 and 0004 once\n> >> moved to their appropriate places, will load injection_point extension\n> >> and use it. That way predefined injection point callbacks will also be\n> >> available for others to use.\n> >\n> > Rather than defining a module somewhere that tests would need to load,\n> > should we just put the common callbacks in the core server?  Unless there's\n> > a strong reason to define them elsewhere, that could be a nice way to save\n> > a step in the tests.\n>\n> Nah, having some pre-existing callbacks existing in the backend is\n> against the original minimalistic design spirit.  These would also\n> require an SQL interface, and the interface design also depends on the\n> functions registering them when pushing down custom conditions.\n> Pushing that down to extensions to do what they want will lead to less\n> noise, particularly if you consider that we will most likely want to\n> tweak the callback interfaces for backpatched bugs.  That's also why I\n> think contrib/ is not a good idea, src/test/modules/ serving the\n> actual testing purpose here.\n\nWell, you have already shown that the SQL interface created for the\ntest module is being used for testing a core feature. The tests for\nthat should stay somewhere near the other tests for those features.\nUsing an extension named \"test_injection_point\" which resides in a\ntest module for testing core features doesn't look great. 
Hence the\nsuggestion to move it to contrib.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 5 Jan 2024 12:41:33 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Fri, Jan 05, 2024 at 12:41:33PM +0530, Ashutosh Bapat wrote:\n> Well, you have already shown that the SQL interface created for the\n> test module is being used for testing a core feature. The tests for\n> that should stay somewhere near the other tests for those features.\n> Using an extension named \"test_injection_point\" which resides in a\n> test module for testing core features doesn't look great. Hence the\n> suggestion to move it to contrib.\n\nI mean why?  We test a bunch of stuff in src/test/modules/, and this\nis not intended to be released to the outside world.\n\nPutting that in contrib/ has a lot of extra cost.  One is\ndocumentation and more complexity regarding versioning when it comes\nto upgrading it to a new version.  I don't think that it is a good\nidea to deal with this extra load of work for something that I'd aim\nto be used for having improved *test* coverage, and the build switch\nshould stay.  Saying that, I'd be OK with renaming the module to\ninjection_points, but I will fight hard about keeping that in\nsrc/test/modules/.  That's less maintenance headache to think about\nwhen having to deal with complex racy bugs.\n--\nMichael", "msg_date": "Fri, 5 Jan 2024 16:18:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Fri, Jan 5, 2024 at 12:49 PM Michael Paquier <[email protected]> wrote:\n>\n> I mean why?  We test a bunch of stuff in src/test/modules/, and this\n> is not intended to be released to the outside world.\n>\n> Putting that in contrib/ has a lot of extra cost.  One is\n> documentation and more complexity regarding versioning when it comes\n> to upgrading it to a new version.  I don't think that it is a good\n> idea to deal with this extra load of work for something that I'd aim\n> to be used for having improved *test* coverage, and the build switch\n> should stay.  Saying that, I'd be OK with renaming the module to\n> injection_points,\n\nOk. Thanks.\n\n> but I will fight hard about keeping that in\n> src/test/modules/.  That's less maintenance headache to think about\n> when having to deal with complex racy bugs.\n\nFor me getting this feature in code in a usable manner is more\nimportant than its place in the code. I have no plans to fight over\nit. :).\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 5 Jan 2024 12:55:22 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Tue, Dec 12, 2023 at 4:15 PM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Dec 12, 2023 at 10:27:09AM +0530, Dilip Kumar wrote:\n> > Oops, I only included the code changes where I am adding injection\n> > points and some comments to verify that, but missed the actual test\n> > file. Attaching it here.\n>\n> I see.  Interesting that this requires persistent connections to work.\n> That's something I've found clunky to rely on when the scenarios a\n> test needs to deal with are rather complex. 
That's an area that could\n> be made easier to use outside of this patch.. Something got proposed\n> by Andrew Dunstan to make the libpq routines usable through a perl\n> module, for example.\n>\n> > Note: I think the latest patches are conflicting with the head, can you rebase?\n>\n> Indeed, as per the recent manipulations in ipci.c for the shmem\n> initialization areas. Here goes a v6.\n\nSome comments in 0001, mostly cosmetics\n\n1.\n+/* utilities to handle the local array cache */\n+static void\n+injection_point_cache_add(const char *name,\n+ InjectionPointCallback callback)\n\nI think the comment for this function should be more specific about\nadding an entry to the local injection_point_cache_add. And add\ncomments for other functions as well e.g. injection_point_cache_get\n\n\n2.\n+typedef struct InjectionPointEntry\n+{\n+ char name[INJ_NAME_MAXLEN]; /* hash key */\n+ char library[INJ_LIB_MAXLEN]; /* library */\n+ char function[INJ_FUNC_MAXLEN]; /* function */\n+} InjectionPointEntry;\n\nSome comments would be good for the structure\n\n3.\n\n+static bool\n+file_exists(const char *name)\n+{\n+ struct stat st;\n+\n+ Assert(name != NULL);\n+ if (stat(name, &st) == 0)\n+ return !S_ISDIR(st.st_mode);\n+ else if (!(errno == ENOENT || errno == ENOTDIR))\n+ ereport(ERROR,\n+ (errcode_for_file_access(),\n+ errmsg(\"could not access file \\\"%s\\\": %m\", name)));\n+ return false;\n+}\n\ndfmgr.c has a similar function so can't we reuse it by making that\nfunction external?\n\n4.\n+ if (found)\n+ {\n+ LWLockRelease(InjectionPointLock);\n+ elog(ERROR, \"injection point \\\"%s\\\" already defined\", name);\n+ }\n+\n...\n+#else\n+ elog(ERROR, \"Injection points are not supported by this build\");\n\nBetter to use similar formatting for error output, Injection vs\ninjection (better not to capitalize the first letter for consistency\npov)\n\n5.\n+ * Check first the shared hash table, and adapt the local cache\n+ * depending on that as it could be possible that an entry to run\n+ * has been removed.\n+ */\n\nWhat if the entry is removed after we have released the\nInjectionPointLock? Or this would not cause any harm?\n\n\n0004:\n\nI think\ntest_injection_points_wake() and test_injection_wait() can be moved as\npart of 0002\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 15:00:25 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Fri, Jan 05, 2024 at 08:38:22AM +0900, Michael Paquier wrote:\n> On Thu, Jan 04, 2024 at 04:31:02PM -0600, Nathan Bossart wrote:\n>> Rather than defining a module somewhere that tests would need to load,\n>> should we just put the common callbacks in the core server? Unless there's\n>> a strong reason to define them elsewhere, that could be a nice way to save\n>> a step in the tests.\n> \n> Nah, having some pre-existing callbacks existing in the backend is\n> against the original minimalistic design spirit. These would also\n> require an SQL interface, and the interface design also depends on the\n> functions registering them when pushing down custom conditions.\n> Pushing that down to extensions to do what they want will lead to less\n> noise, particularly if you consider that we will most likely want to\n> tweak the callback interfaces for backpatched bugs. 
That's also why I\n> think contrib/ is not a good idea, src/test/modules/ serving the\n> actual testing purpose here.\n\nAh, so IIUC we'd have to put some functions in pg_proc.dat even though they\nwould only be used for a handful of tests in special builds. I'd agree\nthat's not desirable.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 10:27:51 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Fri, Jan 05, 2024 at 04:18:49PM +0900, Michael Paquier wrote:\n> Putting that in contrib/ has a lot of extra cost. One is\n> documentation and more complexity regarding versioning when it comes\n> to upgrading it to a new version. I don't think that it is a good\n> idea to deal with this extra load of work for something that I'd aim\n> to be used for having improved *test* coverage, and the build switch\n> should stay.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 10:28:47 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Fri, Jan 05, 2024 at 10:28:47AM -0600, Nathan Bossart wrote:\n> +1\n\nExtra note for this thread: it is possible to add a SQL test case for\nproblems like what's been reported on this thread when facing a\npartial write failure:\nhttps://www.postgresql.org/message-id/[email protected]\n--\nMichael", "msg_date": "Sun, 7 Jan 2024 10:20:08 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Fri, Jan 05, 2024 at 03:00:25PM +0530, Dilip Kumar wrote:\n> Some comments in 0001, mostly cosmetics\n> \n> 1.\n> +/* utilities to handle the local array cache */\n> +static void\n> +injection_point_cache_add(const char *name,\n> + InjectionPointCallback callback)\n> \n> I think the comment for this function should be more specific about\n> adding an entry to the local injection_point_cache_add. And add\n> comments for other functions as well e.g. injection_point_cache_get\n\nAnd it is not an array anymore. Note InjectionPointArrayEntry that\nstill existed.\n\n> 2.\n> +typedef struct InjectionPointEntry\n> +{\n> + char name[INJ_NAME_MAXLEN]; /* hash key */\n> + char library[INJ_LIB_MAXLEN]; /* library */\n> + char function[INJ_FUNC_MAXLEN]; /* function */\n> +} InjectionPointEntry;\n> \n> Some comments would be good for the structure\n\nSure. I've spent more time documenting things in injection_point.c,\naddressing any inconsistencies.\n\n> 3.\n> \n> +static bool\n> +file_exists(const char *name)\n> +{\n> + struct stat st;\n> +\n> + Assert(name != NULL);\n> + if (stat(name, &st) == 0)\n> + return !S_ISDIR(st.st_mode);\n> + else if (!(errno == ENOENT || errno == ENOTDIR))\n> + ereport(ERROR,\n> + (errcode_for_file_access(),\n> + errmsg(\"could not access file \\\"%s\\\": %m\", name)));\n> + return false;\n> +}\n> \n> dfmgr.c has a similar function so can't we reuse it by making that\n> function external?\n\nYes. Note that jit.c has an extra copy of it. 
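So that would make three static copies of the same thing in the tree,\nall of roughly this shape, with only the error handling differing a\nbit:\n\nstatic bool\nfile_exists(const char *name)\n{\n    struct stat st;\n\n    Assert(name != NULL);\n    if (stat(name, &st) == 0)\n        return !S_ISDIR(st.st_mode);\n    else if (!(errno == ENOENT || errno == ENOTDIR))\n        ereport(ERROR,\n                (errcode_for_file_access(),\n                 errmsg(\"could not access file \\\"%s\\\": %m\", name)));\n    return false;\n}\n\n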
I was holding off on the\nrefactoring, but let's bite the bullet and have a single routine.\nI've moved that into a 0001 that builds on top of the rest.\n\n> 4.\n> + if (found)\n> + {\n> + LWLockRelease(InjectionPointLock);\n> + elog(ERROR, \"injection point \\\"%s\\\" already defined\", name);\n> + }\n> +\n> ...\n> +#else\n> + elog(ERROR, \"Injection points are not supported by this build\");\n> \n> Better to use similar formatting for error output, Injection vs\n> injection (better not to capitalize the first letter for consistency\n> pov)\n\nFixed.\n\n> 5.\n> + * Check first the shared hash table, and adapt the local cache\n> + * depending on that as it could be possible that an entry to run\n> + * has been removed.\n> + */\n> \n> What if the entry is removed after we have released the\n> InjectionPointLock? Or this would not cause any harm?\n\nWith an entry found in the shmem table?  I don't really think that we\nneed to care about such cases, TBH, because the injection point would\nhave been found in the table to start with.  This comes down to whether we\nshould try to hold InjectionPointLock while calling the callback, and\nthat may not be a good idea in some cases if you'd expect a high\nconcurrency on the callback running.\n\n> 0004:\n> \n> I think\n> test_injection_points_wake() and test_injection_wait() can be moved as\n> part of 0002\n\nNah.  I intend to keep the introduction of this API where it becomes\nrelevant.  Perhaps this could also use an isolation test?  This could\nalways be polished once we agree on 0001 and 0002.\n\n(I'll post a v6 a bit later, there are more comments posted here and\nthere.)\n--\nMichael", "msg_date": "Tue, 9 Jan 2024 10:09:31 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "(Compiled two separate review emails into a single one)\n\nOn Tue, Jan 02, 2024 at 03:36:12PM +0530, Ashutosh Bapat wrote:\n> This discussion has not been addressed in v6. I think the interface\n> needs to be documented in the order below:\n> INJECTION_POINT - this declares an injection point - i.e. a place in\n> code where external code can be injected (and run).\n> InjectionPointAttach() - this is used to associate (\"attach\") external\n> code to an injection point.\n> InjectionPointDetach() - this is used to disassociate (\"detach\")\n> external code from an injection point.\n>\n> [arguments about doc organization]\n>\n> Even if an INJECTION_POINT is not \"declared\", attach would succeed but\n> doesn't do anything. I think this needs to be made clear in the\n> documentation. Better if we could somehow make Attach() fail if the\n> specified injection point is not \"declared\" using INJECTION_POINT. Of\n> course we don't want to bloat the hash table with all \"declared\"\n> injection points even if they aren't being attached to and hence not\n> used.\n\nOkay, I can see your point.  I have reorganized the docs in the\nfollowing order:\n- INJECTION_POINT\n- Attach\n- Detach\n\n> I think, exposing the current injection point strings as\n> #defines and encouraging users to use these macros instead of string\n> literals will be a good start.\n\nNah, I disagree with this one actually. 
It is easy to grep for the\nmacro INJECTION_POINT to be able to achieve the same research job, and\nthis would make the code more inconsistent when callbacks are run\nwithin extensions which don't care about a #define in a backend\nheader.\n\n> With the current implementation it's possible to \"declare\" injection\n> point with same name at multiple places. It's useful but is it\n> intended?\n\nYes. I would recommend not doing that, but I don't see why there\nwould be a point in restricting that, either.\n\n> /* Field sizes */\n> #define INJ_NAME_MAXLEN 64\n> #define INJ_LIB_MAXLEN 128\n> #define INJ_FUNC_MAXLEN 128\n> I think these limits should be either documented or specified in the\n> error messages for users to fix their code in case of\n> errors/unexpected behaviour.\n\nAdding them to the error messages when attaching is a good idea.\nDone.\n\n> This is not an array entry anymore. I think we should rename\n> InjectionPointEntry as SharedInjectionPointEntry and InjectionPointArrayEntry\n> as LocalInjectionPointEntry.\n\nYep, fixed.\n\n> +/* utilities to handle the local array cache */\n> +static void\n> +injection_point_cache_add(const char *name,\n> + InjectionPointCallback callback)\n> +{\n> ... snip ...\n> +\n> + entry = (InjectionPointCacheEntry *)\n> + hash_search(InjectionPointCache, name, HASH_ENTER, &found);\n> +\n> + if (!found)\n> \n> The function is called only when the injection point is not found in the local\n> cache. So this condition will always be true. An Assert will help to make it\n> clear and also prevent an unintended callback replacement.\n\nRight, as coded that seems pointless to make the found conditional. I\nthink that I coded it this way when doing some earlier work with this\ncode, and finished with a simpler thing.\n\n> +#ifdef USE_INJECTION_POINTS\n> +static bool\n> +file_exists(const char *name)\n> \n> There's similar function in jit.c and dfmgr.c. Can we not reuse that code?\n\nThis has been mentioned in a different comment. Refactored as of\n0001, but there is something here related to EACCES for the JIT path.\nSeems weird to me that we would not fail if the JIT library cannot be\naccessed when stat() fails.\n\n> + /* Save the entry */\n> + memcpy(entry_by_name->name, name, sizeof(entry_by_name->name));\n> + entry_by_name->name[INJ_NAME_MAXLEN - 1] = '\\0';\n> + memcpy(entry_by_name->library, library, sizeof(entry_by_name->library));\n> + entry_by_name->library[INJ_LIB_MAXLEN - 1] = '\\0';\n> + memcpy(entry_by_name->function, function, sizeof(entry_by_name->function));\n> + entry_by_name->function[INJ_FUNC_MAXLEN - 1] = '\\0';\n> \n> Most of the code is using strncpy instead of memcpy. Why is this code different?\n\nstrncpy() is less used in the backend code. It comes to a matter of\ntaste, IMO.\n\n> + injection_callback = injection_point_cache_get(name);\n> + if (injection_callback == NULL)\n> + {\n> + char path[MAXPGPATH];\n> +\n> + /* Found, so just run the callback registered */\n> \n> The condition indicates that the callback was not found. 
Comment looks wrong.\n\nFixed.\n\n> Consider case\n> \n> Backend 2\n> InjectionPointAttach(\"xyz\", \"abc\", \"pqr\");\n> \n> Backend 1\n> INJECTION_POINT(\"xyz\");\n> \n> Backend 2\n> InjectionPointDetach(\"xyz\");\n> InjectionPointAttach(\"xyz\", \"uvw\", \"lmn\");\n> \n> Backend 1\n> INJECTION_POINT(\"xyz\");\n> \n> IIUC, the last INJECTION_POINT would run abc.pqr instead of uvw.lmn.\n> Am I correct?\n\nYeah, that's an intended design choice to keep the code simpler and\nfaster as there is no need to track the library and function names in\nthe local caches or implement something similar to invalidation\nmessages for this facility because it would impact performance anyway\nin the call paths. In short, just don't do that, or use two distinct\npoints.\n\nOn Thu, Jan 04, 2024 at 06:22:35PM +0530, Ashutosh Bapat wrote:\n> One more comment on 0001\n> InjectionPointAttach() doesn't test whether the given function exists\n> in the given library. Even if InjectionPointAttach() succeeds,\n> INJECTION_POINT might throw error because the function doesn't exist.\n> This can be seen as an unwanted behaviour. I think\n> InjectionPointAttach() should test whether the function exists and\n> possibly load it as well by adding it to the local cache.\n\nThis has the disadvantage of filling the local cache but that may not\nbe necessary with an extra load_external_function() in the attach\npath. I agree to make things safer, but I would do that when\nattempting to run the callback instead. Perhaps there's an argument\nfor the case of somebody replacing a library on-the-fly. I don't\nreally buy it, but people like doing fancy things sometimes.\n\n> 0002 comments\n> --- /dev/null\n> +++ b/src/test/modules/test_injection_points/expected/test_injection_points.out\n> \n> When built without injection point support, this test fails. We should\n> add an alternate output file for such a build so that the behaviour\n> with and without injection point support is tested. Or set up things\n> such that the test is not run under make check in that directory. I\n> will prefer the first option.\n\nsrc/test/modules/Makefile has a safeguard for ./configure, and there's\none in test_injection_points/meson.build for Meson. The test is not\nrun when the switches are not used, rather than using an alternate\noutput file. There was a different issue when moving the tests to\nsrc/test/recovery/, though, where we need to make the execution of the\ntests conditional on get_option('injection_points').\n\n> +\n> +SELECT test_injection_points_run('TestInjectionError'); -- error\n> +ERROR: error triggered for injection point TestInjectionError\n> +-- Re-load and run again.\n> \n> What's getting Re-loaded here? \\c will create a new connection and\n> thus a new backend. Maybe the comment should say \"test in a fresh\n> backend\" or something of that sort?\n\nThe local cache is reloaded. Reworded.\n\n> Looks confusing to me, we are testing the one removed as well. Am I\n> missing something?\n> [...]\n> We aren't removing all entries TestInjectionLog2 is still there. Am I\n> missing something?\n\nReworded all that.\n\n> +# after recovery, the server will not start, and log PANIC: could not\n> locate a valid checkpoint record\n> \n> IIUC the comment describes the behaviour with 7863ee4def65 reverted.\n> But the test after this comment is written for the behaviour with\n> 7863ee4def65. That's confusing. 
Is the intent to describe both\n> behaviours in the comment?\n\nThis came from the original test case posted on the thread that\ntreated this bug. There's more that bugs me for this script that I\nwould like to polish. Let's focus on 0001 and 0002 for now..\n\n> According to the prologue of ConditionVariableSleep(), that function\n> should be called in a loop checking for the desired condition. All the\n> callers that I examined follow that pattern. I think we need to follow\n> that pattern here as well.\n> \n> Below comment from ConditionVariableTimedSleep() makes me think that\n> the caller of ConditionVariableSleep() can be woken up even if the\n> condition variable was not signaled. That's why the while() loop\n> around ConditionVariableSleep().\n\nThat's the thing here, we don't have an extra condition to check\nafter. The variable sleep is what triggers the stop. :)\nPerhaps this could be made smarter or with something else, I'm OK to\nrevisit that with the polishing for 0003 I'm planning. We could use a\nseparate shared state, for example, but that does not improve the test\nreadability, either.\n\nAttached is a v7 series. What do you think? 0004 and 0005 for the\nextra tests still need more discussion and much more polishing, IMO.\n--\nMichael", "msg_date": "Tue, 9 Jan 2024 13:39:06 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Tue, Jan 9, 2024 at 10:09 AM Michael Paquier <[email protected]> wrote:\n\n>\n> Okay, I can see your point. I have reorganized the docs in the\n> following order:\n> - INJECTION_POINT\n> - Attach\n> - Detach\n\nThanks. This looks better. Needs further wordsmithy. But that can wait\ntill the code has been reviewed.\n\n>\n> > I think, exposing the current injection point strings as\n> > #defines and encouraging users to use these macros instead of string\n> > literals will be a good start.\n>\n> Nah, I disagree with this one actually. It is easy to grep for the\n> macro INJECTION_POINT to be able to achieve the same research job, and\n> this would make the code more inconsistent when callbacks are run\n> within extensions which don't care about a #define in a backend\n> header.\n\nThe macros should be in extension facing header, ofc. But I take back\nthis suggestion since defining these macros is extra work (every time\na new injection point is declared) and your suggestion to grep\npractically works. We can improve things as needed.\n\n>\n> > With the current implementation it's possible to \"declare\" injection\n> > point with same name at multiple places. It's useful but is it\n> > intended?\n>\n> Yes. I would recommend not doing that, but I don't see why there\n> would be a point in restricting that, either.\n\nSince an unintentional misspelling might trigger an unintended\ninjection point. But we will see how much of that happens in practice.\n\n> > +#ifdef USE_INJECTION_POINTS\n> > +static bool\n> > +file_exists(const char *name)\n> >\n> > There's similar function in jit.c and dfmgr.c. Can we not reuse that code?\n>\n> This has been mentioned in a different comment. Refactored as of\n> 0001, but there is something here related to EACCES for the JIT path.\n> Seems weird to me that we would not fail if the JIT library cannot be\n> accessed when stat() fails.\n\nI agree with this change to jit. 
Without having search permissions on\nevery directory in the path, the function cannot determine if the\nfile exists or not. So throwing an error is better than just returning\nfalse, which would mean that\nthe file does not exist.\n\n>\n> > + /* Save the entry */\n> > + memcpy(entry_by_name->name, name, sizeof(entry_by_name->name));\n> > + entry_by_name->name[INJ_NAME_MAXLEN - 1] = '\\0';\n> > + memcpy(entry_by_name->library, library, sizeof(entry_by_name->library));\n> > + entry_by_name->library[INJ_LIB_MAXLEN - 1] = '\\0';\n> > + memcpy(entry_by_name->function, function, sizeof(entry_by_name->function));\n> > + entry_by_name->function[INJ_FUNC_MAXLEN - 1] = '\\0';\n> >\n> > Most of the code is using strncpy instead of memcpy. Why is this code different?\n>\n> strncpy() is less used in the backend code. It comes to a matter of\n> taste, IMO.\n\nTo me using memcpy implies that the contents of the memory being\ncopied can be non-character. For a buffer containing a character\nstring I would prefer strncpy. But I wouldn't argue further.\n\n>\n> Yeah, that's an intended design choice to keep the code simpler and\n> faster as there is no need to track the library and function names in\n> the local caches or implement something similar to invalidation\n> messages for this facility because it would impact performance anyway\n> in the call paths. In short, just don't do that, or use two distinct\n> points.\n\nIn practice the InjectionPointDetach() and InjectionPointAttach()\ncalls may not be close by and the user may not be able to figure out\nwhy the injection points are behaving weirdly. It may impact\nperformance but unexpected behaviour should be avoided, IMO.\n\nIf nothing else this should be documented.\n\n>\n> On Thu, Jan 04, 2024 at 06:22:35PM +0530, Ashutosh Bapat wrote:\n> > One more comment on 0001\n> > InjectionPointAttach() doesn't test whether the given function exists\n> > in the given library. Even if InjectionPointAttach() succeeds,\n> > INJECTION_POINT might throw error because the function doesn't exist.\n> > This can be seen as an unwanted behaviour. I think\n> > InjectionPointAttach() should test whether the function exists and\n> > possibly load it as well by adding it to the local cache.\n>\n> This has the disadvantage of filling the local cache but that may not\n> be necessary with an extra load_external_function() in the attach\n> path. I agree to make things safer, but I would do that when\n> attempting to run the callback instead. Perhaps there's an argument\n> for the case of somebody replacing a library on-the-fly. I don't\n> really buy it, but people like doing fancy things sometimes.\n\nI am ok with not populating the cache but checking with just\nload_external_function(). This is again another ease-of-use scenario\nwhere a silly mistake by the user is caught earlier, making the user's life\neasier. That at least should be the goal of the first cut.\n\n>\n> > 0002 comments\n> > --- /dev/null\n> > +++ b/src/test/modules/test_injection_points/expected/test_injection_points.out\n> >\n> > When built without injection point support, this test fails. We should\n> > add an alternate output file for such a build so that the behaviour\n> > with and without injection point support is tested. Or set up things\n> > such that the test is not run under make check in that directory. I\n> > will prefer the first option.\n>\n> src/test/modules/Makefile has a safeguard for ./configure, and there's\n> one in test_injection_points/meson.build for Meson. 
The test is not\n> run when the switches are not used, rather than using an alternate\n> output file.\n\nWith v6 I could run the test when built with enable_injection_point\nfalse. I just ran make check in that folder. Didn't test meson build.\n\n> There was a different issue when moving the tests to\n> src/test/recovery/, though, where we need to make the execution of the\n> tests conditional on get_option('injection_points').\n\n>\n> > +\n> > +SELECT test_injection_points_run('TestInjectionError'); -- error\n> > +ERROR: error triggered for injection point TestInjectionError\n> > +-- Re-load and run again.\n> >\n> > What's getting Re-loaded here? \\c will create a new connection and\n> > thus a new backend. Maybe the comment should say \"test in a fresh\n> > backend\" or something of that sort?\n>\n> The local cache is reloaded. Reworded.\n\nWe are starting a new backend, not \"re\"loading a cache in an existing\nbackend per se.\n\n>\n> That's the thing here, we don't have an extra condition to check\n> after. The variable sleep is what triggers the stop. :)\n> Perhaps this could be made smarter or with something else, I'm OK to\n> revisit that with the polishing for 0003 I'm planning. We could use a\n> separate shared state, for example, but that does not improve the test\n> readability, either.\n\nYeah, I think we have to use another shared state. If the waiting\nbackend moves ahead without test_injection_point_wake() being called,\nthat could lead to unexpected and very weird behaviour.\n\nIt looks like ConditionVariable just remembers the processes that need\nto be woken up during broadcast or signal. But by itself it doesn't\nguarantee the desired condition when woken up.\n\n>\n> Attached is a v7 series. What do you think? 0004 and 0005 for the\n> extra tests still need more discussion and much more polishing, IMO.\n\nGenerally I think 0001 and 0002 are in good shape. However, I\nwould like them to be easier to use - like catching simple user\nerrors that can be easily caught. That saves a lot of frustration\nbecause of unexpected behaviour. I will review 0001 and 0002 from v7\nin detail again, but it might take a few days.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 10 Jan 2024 15:21:03 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Wed, Jan 10, 2024 at 03:21:03PM +0530, Ashutosh Bapat wrote:\n> On Tue, Jan 9, 2024 at 10:09 AM Michael Paquier <[email protected]> wrote:\n>>> +#ifdef USE_INJECTION_POINTS\n>>> +static bool\n>>> +file_exists(const char *name)\n>>>\n>>> There's similar function in jit.c and dfmgr.c. Can we not reuse that code?\n>>\n>> This has been mentioned in a different comment. Refactored as of\n>> 0001, but there is something here related to EACCES for the JIT path.\n>> Seems weird to me that we would not fail if the JIT library cannot be\n>> accessed when stat() fails.\n> \n> I agree with this change to jit. Without having search permissions on\n> every directory in the path, the function can not determine if the\n> file exists or not. So throwing an error is better than just returning\n> false which means that\n> the file does not exist.\n\nI was looking at the original set of threads related to JIT, and this\nhas been mentioned nowhere. I think that I'm going to give it a shot\nand see how the buildfarm reacts. 
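\n\n(For reference, the routine at stake has the following shape; the behaviour change for jit.c is that a stat() failure other than ENOENT/ENOTDIR now raises an error instead of being treated as \"file does not exist\":)\n\n    static bool\n    file_exists(const char *name)\n    {\n        struct stat st;\n\n        Assert(name != NULL);\n        if (stat(name, &st) == 0)\n            return !S_ISDIR(st.st_mode);\n        else if (!(errno == ENOENT || errno == ENOTDIR))\n            ereport(ERROR,\n                    (errcode_for_file_access(),\n                     errmsg(\"could not access file \\\"%s\\\": %m\", name)));\n        return false;\n    }\n\n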
If that finishes with red, we could\nalways revert this part of the patch in jit.c and still keep the\nrefactored routine.\n\n>> Yeah, that's an intended design choice to keep the code simpler and\n>> faster as there is no need to track the library and function names in\n>> the local caches or implement something similar to invalidation\n>> messages for this facility because it would impact performance anyway\n>> in the call paths. In short, just don't do that, or use two distinct\n>> points.\n> \n> In practice the InjectionPointDetach() and InjectionPointAttach()\n> calls may not be close by and the user may not be able to figure out\n> why the injection points are behaving weirdly. It may impact\n> performance but unexpected behaviour should be avoided, IMO.\n> \n> If nothing else this should be documented.\n\nIn all the infrastructures I've looked at, folks did not really care\nabout having an invalidation for the callbacks loaded. Still I'm OK\nto add something in the documentation about that, say along the lines\nof an extra sentence like:\n\"The callbacks loaded by a process are cached within each process.\nThere is no invalidation facility for the callbacks attached to\ninjection points, hence updating a callback for an injection point\nrequires a restart of the process to release its cache and the\nprevious callbacks attached to it.\"\n\n> I am ok with not populating the cache but checking with just\n> load_external_function(). This is again another ease-of-use scenario\n> where a silly mistake by the user is caught earlier, making the user's life\n> easier. That at least should be the goal of the first cut.\n\nI don't really aim for complicated here, just useful.\n\n> With v6 I could run the test when built with enable_injection_point\n> false. I just ran make check in that folder. Didn't test meson build.\n\nThe CI has been failing because 041_invalid_checkpoint_after_promote\nwas loading Time::HiRes::nanosleep and Windows does not support it.\n\n> Yeah, I think we have to use another shared state. If the waiting\n> backend moves ahead without test_injection_point_wake() being called,\n> that could lead to unexpected and very weird behaviour.\n> \n> It looks like ConditionVariable just remembers the processes that need\n> to be woken up during broadcast or signal. But by itself it doesn't\n> guarantee the desired condition when woken up.\n\nYeah, I'm not sure yet about how to do that in the most elegant way.\nBut this part could always happen after 0001~0003.\n\n>> Attached is a v7 series. What do you think? 0004 and 0005 for the\n>> extra tests still need more discussion and much more polishing, IMO.\n> \n> Generally I think 0001 and 0002 are in good shape. However, I\n> would like them to be easier to use - like catching simple user\n> errors that can be easily caught. That saves a lot of frustration\n> because of unexpected behaviour. I will review 0001 and 0002 from v7\n> in detail again, but it might take a few days.\n\nThanks again for the reviews. I still intend to focus solely on 0001,\n0002 and 0003 for the current commit fest to have something able to\nenforce error states in backends, at least. There have been quite a\nfew bugs that could have had coverage thanks to that.\n--\nMichael", "msg_date": "Thu, 11 Jan 2024 13:11:44 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) 
for more\n advanced tests" }, { "msg_contents": "On Thu, Jan 11, 2024 at 9:42 AM Michael Paquier <[email protected]> wrote:\n>\n> >> Yeah, that's an intended design choice to keep the code simpler and\n> >> faster as there is no need to track the library and function names in\n> >> the local caches or implement something similar to invalidation\n> >> messages for this facility because it would impact performance anyway\n> >> in the call paths. In short, just don't do that, or use two distinct\n> >> points.\n> >\n> > In practice the InjectionPointDetach() and InjectionPointAttach()\n> > calls may not be close by and the user may not be able to figure out\n> > why the injection points are behaving weirdly. It may impact\n> > performance but unexpected behaviour should be avoided, IMO.\n> >\n> > If nothing else this should be documented.\n>\n> In all the infrastructures I've looked at, folks did not really care\n> about having an invalidation for the callbacks loaded. Still I'm OK\n> to add something in the documentation about that, say along the lines\n> of an extra sentence like:\n> \"The callbacks loaded by a process are cached within each process.\n> There is no invalidation facility for the callbacks attached to\n> injection points, hence updating a callback for an injection point\n> requires a restart of the process to release its cache and the\n> previous callbacks attached to it.\"\n\nIt doesn't behave exactly like that either. If the INJECTION_POINT is\nrun after detach (but before Attach), the local cache will be updated.\nA subsequent attach and INJECTION_POINT call would fetch the new\ncallback.\n\n>\n> > I am ok with not populating the cache but checking with just\n> > load_external_function(). This is again another ease-of-use scenario\n> > where a silly mistake by the user is caught earlier, making the user's life\n> > easier. That at least should be the goal of the first cut.\n>\n> I don't really aim for complicated here, just useful.\n\nIt isn't complicated. Such a simple error check improves the user's\nconfidence in the feature and had better be part of the 1st cut.\n\n>\n> > With v6 I could run the test when built with enable_injection_point\n> > false. I just ran make check in that folder. Didn't test meson build.\n>\n> The CI has been failing because 041_invalid_checkpoint_after_promote\n> was loading Time::HiRes::nanosleep and Windows does not support it.\n\nSome miscommunication here. The SQL test under injection_point module\ncan be run in a build without injection_point and it fails. I think\nit's better to have an alternate output for the same or prohibit the\ntest running itself.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 11 Jan 2024 14:47:27 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Thu, Jan 11, 2024 at 02:47:27PM +0530, Ashutosh Bapat wrote:\n> On Thu, Jan 11, 2024 at 9:42 AM Michael Paquier <[email protected]> wrote:\n>> I don't really aim for complicated here, just useful.\n> \n> It isn't complicated. Such a simple error check improves the user's\n> confidence in the feature and had better be part of the 1st cut.\n\nI'm really not sure about that, because it does not impact the scope\nof the facility even with all the use cases I've seen where injection\npoints could be used. It could always be added later if there's a\nstrong push for it. 
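\n\n(To be concrete, the kind of attach-time check debated here would be a couple of lines on top of the existing load_external_function() machinery from dfmgr.c; a rough sketch only, where \"path\" and \"function\" stand for the library path and callback name available in the attach path:)\n\n    /* illustrative sketch, not part of the patch as posted */\n    if (load_external_function(path, function, false, NULL) == NULL)\n        elog(ERROR, \"could not find function \\\"%s\\\" in library \\\"%s\\\"\",\n             function, path);\n\n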
For testing, I'm biased about attempting to load\ncallbacks in the process attaching them.\n\n>>> With v6 I could run the test when built with enable_injection_point\n>>> false. I just ran make check in that folder. Didn't test meson build.\n>>\n>> The CI has been failing because 041_invalid_checkpoint_after_promote\n>> was loading Time::HiRes::nanosleep and Windows does not support it.\n> \n> Some miscommunication here. The SQL test under injection_point module\n> can be run in a build without injection_point and it fails. I think\n> it's better to have an alternate output for the same or prohibit the\n> test running itself.\n\nThe same problem exists if you try to run the SSL tests in\nsrc/test/ssl/ without support built for them. Protections at the\nupper levels are good enough for the CI and the buildfarm, while\nmaking the overall maintenance cheaper, so I'm happy with just these.\n\nIt also seems like you've missed this message, where this has been\nmentioned (spoiler: first version of the patch used an alternate\noutput):\nhttps://www.postgresql.org/message-id/[email protected] \n--\nMichael", "msg_date": "Fri, 12 Jan 2024 08:35:42 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Fri, Jan 12, 2024 at 5:05 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Jan 11, 2024 at 02:47:27PM +0530, Ashutosh Bapat wrote:\n> > On Thu, Jan 11, 2024 at 9:42 AM Michael Paquier <[email protected]> wrote:\n> >> I don't really aim for complicated here, just useful.\n> >\n> > It isn't complicated. Such a simple error check improves the user's\n> > confidence in the feature and had better be part of the 1st cut.\n>\n> I'm really not sure about that, because it does not impact the scope\n> of the facility even with all the use cases I've seen where injection\n> points could be used. It could always be added later if there's a\n> strong push for it. For testing, I'm biased about attempting to load\n> callbacks in the process attaching them.\n>\n\nI am not able to understand the objection to adding another handful of\nlines of code. The core code is quite minimal and had better be robust.\nWe may seek someone else's opinion to break the tie.\n\n> >>> With v6 I could run the test when built with enable_injection_point\n> >>> false. I just ran make check in that folder. Didn't test meson build.\n> >>\n> >> The CI has been failing because 041_invalid_checkpoint_after_promote\n> >> was loading Time::HiRes::nanosleep and Windows does not support it.\n> >\n> > Some miscommunication here. The SQL test under injection_point module\n> > can be run in a build without injection_point and it fails. I think\n> > it's better to have an alternate output for the same or prohibit the\n> > test running itself.\n>\n> The same problem exists if you try to run the SSL tests in\n> src/test/ssl/ without support built for them. Protections at the\n> upper levels are good enough for the CI and the buildfarm, while\n> making the overall maintenance cheaper, so I'm happy with just these.\n>\n> It also seems like you've missed this message, where this has been\n> mentioned (spoiler: first version of the patch used an alternate\n> output):\n> https://www.postgresql.org/message-id/[email protected]\n\nAh! Sorry for missing that. If there's a precedent, I am ok. 
If the\nconfusion arises we can fix it later.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 12 Jan 2024 09:40:38 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Fri, Jan 12, 2024 at 08:35:42AM +0900, Michael Paquier wrote:\n> It also seems like you've missed this message, where this has been\n> mentioned (spoiler: first version of the patch used an alternate\n> output):\n> https://www.postgresql.org/message-id/[email protected] \n\nThe refactoring of 0001 has now been applied as of e72a37528dda, and\nthe buildfarm looks stable (at least for now).\n\nHere is a rebased patch set of the rest.\n--\nMichael", "msg_date": "Fri, 12 Jan 2024 13:26:50 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "Hi Michael,\n\nThere is some overlap between Dtrace functionality and this\nfunctionality. But I see differences too. E.g. injection points offer\ndeeper integration whereas dtrace provides more information to the\nprobe like callstack and argument values etc. We need to assess\nwhether these functionality can co-exist and whether we need both of\nthem. If the answer to both of these questions is yes, it will be good\nto add documentation explaining the differences and similarities and\nalso some guidance on when to use what.\n\n\nOn Fri, Jan 12, 2024 at 9:56 AM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Jan 12, 2024 at 08:35:42AM +0900, Michael Paquier wrote:\n> > It also seems like you've missed this message, where this has been\n> > mentioned (spoiler: first version of the patch used an alternate\n> > output):\n> > https://www.postgresql.org/message-id/[email protected]\n>\n> The refactoring of 0001 has now been applied as of e72a37528dda, and\n> the buildfarm looks stable (at least for now).\n>\n> Here is a rebased patch set of the rest.\n\n+\n+#ifdef USE_INJECTION_POINTS\n+static bool\n+file_exists(const char *name)\n+{\n+ struct stat st;\n+\n+ Assert(name != NULL);\n+ if (stat(name, &st) == 0)\n+ return !S_ISDIR(st.st_mode);\n+ else if (!(errno == ENOENT || errno == ENOTDIR))\n+ ereport(ERROR,\n+ (errcode_for_file_access(),\n+ errmsg(\"could not access file \\\"%s\\\": %m\", name)));\n+ return false;\n+}\n\nShouldn't this be removed now? The code should use one from fd.c\n\nOther code changes look good. I think the documentation and comments\nneed some changes, especially considering the user's point of view. I have\nattached two patches (0003 and 0004) with those changes to be applied\non top of 0001 and 0002 respectively. Please review them. Might need\nsome wordsmithing and language correction. Attaching the whole patch set\nto keep cibot happy.\n\nThis is a review of 0001 and 0002 only. Once we take care of these\ncomments I think those patches will be ready for commit except one\npoint of contention mentioned in [1]. We haven't heard any third\nopinion yet.\n\n[1] https://www.postgresql.org/message-id/CAExHW5sc_ar7=W9XCcC9TwYxZF71Ghc6poQ_+u4HXTXmNB7KAw@mail.gmail.com\n\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 18 Jan 2024 10:56:09 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) 
for more\n advanced tests" }, { "msg_contents": "On Thu, Jan 18, 2024 at 10:56:09AM +0530, Ashutosh Bapat wrote:\n> There is some overlap between Dtrace functionality and this\n> functionality. But I see differences too. E.g. injection points offer\n> deeper integration whereas dtrace provides more information to the\n> probe like callstack and argument values etc. We need to assess\n> whether these functionality can co-exist and whether we need both of\n> them. If the answer to both of these questions is yes, it will be good\n> to add documentation explaining the differences and similarities and\n> also some guidance on when to use what.\n\nPerhaps, I'm not sure how much we want to do regarding that yet,\ninjection points have no external dependencies and will work across\nall environments as long as dlsym() (or an equivalent) is able to\nwork, while being cheaper because they don't spawn an external process\nto trace the call.\n\n> +\n> +#ifdef USE_INJECTION_POINTS\n> +static bool\n> +file_exists(const char *name)\n> +{\n> + struct stat st;\n> +\n> + Assert(name != NULL);\n> + if (stat(name, &st) == 0)\n> + return !S_ISDIR(st.st_mode);\n> + else if (!(errno == ENOENT || errno == ENOTDIR))\n> + ereport(ERROR,\n> + (errcode_for_file_access(),\n> + errmsg(\"could not access file \\\"%s\\\": %m\", name)));\n> + return false;\n> +}\n> \n> Shouldn't this be removed now? The code should use one from fd.c\n\nYep, removed that.\n\n> Other code changes look good. I think the documentation and comments\n> need some changes esp. considering the users point of view. Have\n> attached two patches (0003, and 0004) with those changes to be applied\n> on top of 0001 and 0002 respectively. Please review them. Might need\n> some wordsmithy and language correction. Attaching the whole patch set\n> to keep cibot happy.\n\nThe CF bot was perhaps happy but your 0004 has forgotten to update the\nexpected output. There were also a few typos, some markups and edits\nrequired for 0002 but as a whole what you have suggested was an\nimprovement. Thanks.\n\n> This is review of 0001 and 0002 only. Once we take care of these\n> comments I think those patches will be ready for commit except one\n> point of contention mentioned in [1]. We haven't heard any third\n> opinion yet.\n\n0001~0004 have been now applied, and I'm marking the CF entry as\ncommitted. I'll create a new thread once I have put more energy into\nthe regression test improvements. Now the fun can really begin. I am\nalso going to switch my buildfarm animals to use the new ./configure\nswitch.\n--\nMichael", "msg_date": "Mon, 22 Jan 2024 13:38:18 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Mon, Jan 22, 2024 at 10:08 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Jan 18, 2024 at 10:56:09AM +0530, Ashutosh Bapat wrote:\n> > There is some overlap between Dtrace functionality and this\n> > functionality. But I see differences too. E.g. injection points offer\n> > deeper integration whereas dtrace provides more information to the\n> > probe like callstack and argument values etc. We need to assess\n> > whether these functionality can co-exist and whether we need both of\n> > them. 
If the answer to both of these questions is yes, it will be good\n> > to add documentation explaining the differences and similarities and\n> > also some guidance on when to use what.\n>\n> Perhaps, I'm not sure how much we want to do regarding that yet,\n> injection points have no external dependencies and will work across\n> all environments as long as dlsym() (or an equivalent) is able to\n> work, while being cheaper because they don't spawn an external process\n> to trace the call.\n\nYes. Both have their advantages and disadvantages. So I believe both\nwill stay but that means the guidance is necessary. We may want to see\nreception and add the guidance later in the release cycle.\n\n>\n> > Other code changes look good. I think the documentation and comments\n> > need some changes esp. considering the users point of view. Have\n> > attached two patches (0003, and 0004) with those changes to be applied\n> > on top of 0001 and 0002 respectively. Please review them. Might need\n> > some wordsmithy and language correction. Attaching the whole patch set\n> > to keep cibot happy.\n>\n> The CF bot was perhaps happy but your 0004 has forgotten to update the\n> expected output. There were also a few typos, some markups and edits\n> required for 0002 but as a whole what you have suggested was an\n> improvement. Thanks.\n\nSorry for that. Glad that you found those suggestions acceptable.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 22 Jan 2024 10:23:07 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On 22/01/2024 06:38, Michael Paquier wrote:\n> 0001~0004 have been now applied, and I'm marking the CF entry as\n> committed.\n\nWoo-hoo!\n\nI wrote the attached patch to enable injection points in the Cirrus CI \nconfig, to run the injection tests I wrote for a GIN bug today [1]. But \nthat led to a crash in the asan-enabled build [2]. I didn't investigate \nit yet.\n\n[1] \nhttps://www.postgresql.org/message-id/d8f0b068-0e6e-4b2c-8932-62507eb7e1c6%40iki.fi\n[2] https://cirrus-ci.com/task/5242888636858368\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Mon, 22 Jan 2024 18:08:10 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On 22/01/2024 18:08, Heikki Linnakangas wrote:\n> I wrote the attached patch to enable injection points in the Cirrus CI\n> config, to run the injection tests I wrote for a GIN bug today [1]. But\n> that led to a crash in the asan-enabled build [2]. I didn't investigate\n> it yet.\n\nPushed a fix for the crash.\n\nWhat do you think of enabling this in the Cirrus CI config?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 22 Jan 2024 21:02:48 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Mon, Jan 22, 2024 at 09:02:48PM +0200, Heikki Linnakangas wrote:\n> On 22/01/2024 18:08, Heikki Linnakangas wrote:\n>> I wrote the attached patch to enable injection points in the Cirrus CI\n>> config, to run the injection tests I wrote for a GIN bug today [1]. But\n>> that led to a crash in the asan-enabled build [2]. 
I didn't investigate\n>> it yet.\n> \n> Pushed a fix for the crash.\n\nThat's embarrassing. Thanks for the quick fix.\n\n> What do you think of enabling this in the Cirrus CI config?\n\nThat was on my TODO list of things to tackle and propose, but perhaps\nthere is no point in waiting any longer, so I've applied your patch.\n--\nMichael", "msg_date": "Tue, 23 Jan 2024 12:08:17 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" }, { "msg_contents": "On Tue, Jan 23, 2024 at 12:08:17PM +0900, Michael Paquier wrote:\n> That was on my TODO list of things to tackle and propose, but perhaps\n> there is no point in waiting any longer, so I've applied your patch.\n\nSlightly off topic, and before I forget about it: please find\nattached a copy of the patch posted around [1] to be able to define\ninjection points with input arguments, so that it is possible to execute\ncallbacks with values coming from the code path where the point is\nattached.\n\nFor example, a backend could use this kind of macro to have a callback\nattached to this point use some runtime value:\nINJECTION_POINT_1ARG(\"InjectionPointBoo\", &some_value);\n\n[1]: https://www.postgresql.org/message-id/Za8TLyD9HIjzFlhJ%40paquier.xyz\n--\nMichael", "msg_date": "Tue, 23 Jan 2024 12:32:42 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding facility for injection points (or probe points?) for more\n advanced tests" } ]
[ { "msg_contents": "Hi,\n\nWhen reading the documentation about operator class, I found\nthe following description:\n\n The pg_am table contains one row for every index access method. \n Support for access to regular tables is built into PostgreSQL, \n but all index access methods are described in pg_am.\n\nIt seems to me that this description says pg_am contains only\nindex access methods but not table methods. I wonder whether it was\nmissed to fix this when tableam was supported and other documentation\nwas changed in b73c3a11963c8bb783993cfffabb09f558f86e37.\n\nAttached is a patch to remove the sentence that starts with\n\"Support for access to regular tables is ....\".\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <[email protected]>", "msg_date": "Wed, 25 Oct 2023 17:25:51 +0900", "msg_from": "Yugo NAGATA <[email protected]>", "msg_from_op": true, "msg_subject": "doc: a small improvement about pg_am description" }, { "msg_contents": "On Wed, 2023-10-25 at 17:25 +0900, Yugo NAGATA wrote:\n> It seems to me that this description says pg_am contains only\n> index access methods but not table methods. I wonder whether it was\n> missed to fix this when tableam was supported and other documentation\n> was changed in b73c3a11963c8bb783993cfffabb09f558f86e37.\n\nThank you for the report.\n\nThat section should not refer to pg_am directly now that there's CREATE\nACCESS METHOD. I committed a fix for that which also fixes the problem\nyou describe.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Wed, 25 Oct 2023 13:45:33 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc: a small improvement about pg_am description" } ]
[ { "msg_contents": "Hello, hackers!\nWe are running PG13.10 and recently we have encountered what appears to be\na bug due to some race condition between ALTER TABLE ... ADD CONSTRAINT and\nsome other catalog-writer, possibly ANALYZE.\nThe problem is that after successfully creating index on relation (which\npreviosly didnt have any indexes), its pg_class.relhasindex remains set to\n\"false\", which is illegal, I think.\nIndex was built using the following statement:\nALTER TABLE \"example\" ADD constraint \"example_pkey\" PRIMARY KEY (id);\n\nPG_CLASS:\n# select ctid,oid,xmin,xmax, * from pg_class where oid = 3566558198;\n-[ RECORD 1\n]-------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nctid | (12,49)\noid | 3566558198\nxmin | 1202298791\nxmax | 0\nrelname | example\nrelnamespace | 16479\nreltype | 3566558200\nreloftype | 0\nrelowner | 16386\nrelam | 2\nrelfilenode | 3566558198\nreltablespace | 0\nrelpages | 152251\nreltuples | 1.1565544e+07\nrelallvisible | 127837\nreltoastrelid | 3566558203\nrelhasindex | f\nrelisshared | f\nrelpersistence | p\nrelkind | r\nrelnatts | 12\nrelchecks | 0\nrelhasrules | f\nrelhastriggers | f\nrelhassubclass | f\nrelrowsecurity | f\nrelforcerowsecurity | f\nrelispopulated | t\nrelreplident | d\nrelispartition | f\nrelrewrite | 0\nrelfrozenxid | 1201647807\nrelminmxid | 1\nrelacl |\nreloptions |\nrelpartbound |\n\nPG_INDEX:\n# select ctid,xmin,xmax,indexrelid::regclass,indrelid::regclass, * from\npg_index where indexrelid = 3569625749;\n-[ RECORD 1 ]--+---------------------------------------------\nctid | (3,30)\nxmin | 1202295045\nxmax | 0\nindexrelid | \"example_pkey\"\nindrelid | \"example\"\nindexrelid | 3569625749\nindrelid | 3566558198\nindnatts | 1\nindnkeyatts | 1\nindisunique | t\nindisprimary | t\nindisexclusion | f\nindimmediate | t\nindisclustered | f\nindisvalid | t\nindcheckxmin | f\nindisready | t\nindislive | t\nindisreplident | f\nindkey | 1\nindcollation | 0\nindclass | 3124\nindoption | 0\nindexprs |\nindpred |\n\nLooking into the WAL via waldump given us the following picture (full\nwaldump output is attached):\n\ntx: 1202295045, lsn: AAB1/D38378D0, prev AAB1/D3837208, desc: FPI , blkref\n#0: rel 1663/16387/3569625749 blk 0 FPW\ntx: 1202298790, lsn: AAB1/D3912EC0, prev AAB1/D3912E80, desc: NEW_CID rel\n1663/16387/1259; tid 6/24; cmin: 0, cmax: 4294967295, combo: 4294967295\ntx: 1202298790, lsn: AAB1/D3927580, prev AAB1/D3926988, desc: COMMIT\n2023-10-04 22:41:23.863979 UTC\ntx: 1202298791, lsn: AAB1/D393C230, prev AAB1/D393C1F0, desc: HOT_UPDATE\noff 24 xmax 1202298791 flags 0x20 ; new off 45 xmax 0, blkref #0: rel\n1663/16387/1259 blk 6\ntx: 1202298791, lsn: AAB1/D394ADA0, prev AAB1/D394AD60, desc: UPDATE off 45\nxmax 1202298791 flags 0x00 ; new off 28 xmax 0, blkref #0: rel\n1663/16387/1259 blk 5, blkref #1: rel 1663/16387/1259 blk 6\ntx: 1202298791, lsn: AAB1/D3961088, prev AAB1/D3961048, desc: NEW_CID rel\n1663/16387/1259; tid 12/49; cmin: 50, cmax: 4294967295, combo: 4294967295\ntx: 1202295045, lsn: AAB1/D3962E78, prev AAB1/D3962E28, desc: INPLACE off\n24, blkref #0: rel 1663/16387/1259 blk 6\ntx: 1202295045, lsn: AAB1/D39632A0, prev AAB1/D3963250, desc: COMMIT\n2023-10-04 22:41:23.878565 UTC\ntx: 1202298791, lsn: AAB1/D3973420, prev AAB1/D39733D0, desc: COMMIT\n2023-10-04 22:41:23.884951 UTC\n\n1202295045 - create index statement\n1202298790 and 1202298791 are some other 
concurrent operations,\nunfortunately I wasn't able to determine what they are\n\nSo it looks like 1202295045 updated tuple (6,24) in pg_class INPLACE; at\nthat point its xmax had already been set by 1202298791 and a new tuple had\nbeen created at (12,49).\nSo after 1202298791 was committed, that inplace update was effectively lost.\nIf we do an inclusive PITR with (recovery_target_xid = 1202295045), we can\nsee the following picture (notice relhasindex and xmax):\n\n# select ctid,oid, xmin,xmax,relhasindex,cmin,cmax from pg_class where oid\n= 3566558198;\n-[ RECORD 1 ]-----------\nctid | (6,24)\noid | 3566558198\nxmin | 1202298790\nxmax | 1202298791\nrelhasindex | t\ncmin | 0\ncmax | 0\n\nI've tried to reproduce this scenario with CREATE INDEX and various\nconcurrent statements, but no luck.\nAttached full waldump output for the relevant WAL segment.", "msg_date": "Wed, 25 Oct 2023 13:39:41 +0300", "msg_from": "Smolkin Grigory <[email protected]>", "msg_from_op": true, "msg_subject": "race condition in pg_class" }, { "msg_contents": "\n\n> On 25 Oct 2023, at 13:39, Smolkin Grigory <[email protected]> wrote:\n> \n> We are running PG13.10 and recently we have encountered what appears to be a bug due to some race condition between ALTER TABLE ... ADD CONSTRAINT and some other catalog-writer, possibly ANALYZ\n\n> I've tried to reproduce this scenario with CREATE INDEX and various concurrent statements, but no luck.\n\nMaybe it would be possible to reproduce by modifying tests for concurrent index creation. For example add “ANALYZE” here [0].\nKeep in mind that for easier reproduction it would make sense to increase transaction count radically.\n\n\nBest regards, Andrey Borodin.\n\n\n[0] https://github.com/postgres/postgres/blob/master/contrib/amcheck/t/002_cic.pl#L34\n\n\n\n", "msg_date": "Wed, 25 Oct 2023 15:57:11 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "Smolkin Grigory <[email protected]> writes:\n> We are running PG13.10 and recently we have encountered what appears to be\n> a bug due to some race condition between ALTER TABLE ... ADD CONSTRAINT and\n> some other catalog-writer, possibly ANALYZE.\n> The problem is that after successfully creating index on relation (which\n> previosly didnt have any indexes), its pg_class.relhasindex remains set to\n> \"false\", which is illegal, I think.\n> Index was built using the following statement:\n> ALTER TABLE \"example\" ADD constraint \"example_pkey\" PRIMARY KEY (id);\n\nALTER TABLE ADD CONSTRAINT would certainly have taken\nAccessExclusiveLock on the \"example\" table, which should be sufficient\nto prevent anything else from touching its pg_class row. The only\nmechanism I can think of that might bypass that is a manual UPDATE on\npg_class, which would just manipulate the row as a row without concern\nfor associated relation-level locks. Any chance that somebody was\ndoing something like that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Oct 2023 14:06:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "> ALTER TABLE ADD CONSTRAINT would certainly have taken\n> AccessExclusiveLock on the \"example\" table, which should be sufficient\n> to prevent anything else from touching its pg_class row. 
The only\n> mechanism I can think of that might bypass that is a manual UPDATE on\n> pg_class, which would just manipulate the row as a row without concern\n> for associated relation-level locks. Any chance that somebody was\n> doing something like that?\n\nNo chance. Our infrastructure doesn't do that, and users simply don't have the\nprivileges to mess with pg_catalog.\n\nср, 25 окт. 2023 г. в 21:06, Tom Lane <[email protected]>:\n\n> Smolkin Grigory <[email protected]> writes:\n> > We are running PG13.10 and recently we have encountered what appears to\n> be\n> > a bug due to some race condition between ALTER TABLE ... ADD CONSTRAINT\n> and\n> > some other catalog-writer, possibly ANALYZE.\n> > The problem is that after successfully creating index on relation (which\n> > previosly didnt have any indexes), its pg_class.relhasindex remains set\n> to\n> > \"false\", which is illegal, I think.\n> > Index was built using the following statement:\n> > ALTER TABLE \"example\" ADD constraint \"example_pkey\" PRIMARY KEY (id);\n>\n> ALTER TABLE ADD CONSTRAINT would certainly have taken\n> AccessExclusiveLock on the \"example\" table, which should be sufficient\n> to prevent anything else from touching its pg_class row. The only\n> mechanism I can think of that might bypass that is a manual UPDATE on\n> pg_class, which would just manipulate the row as a row without concern\n> for associated relation-level locks. Any chance that somebody was\n> doing something like that?\n>\n> regards, tom lane\n>
", "msg_date": "Thu, 26 Oct 2023 12:52:41 +0300", "msg_from": "Smolkin Grigory <[email protected]>", "msg_from_op": true, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Wed, Oct 25, 2023 at 01:39:41PM +0300, Smolkin Grigory wrote:\n> We are running PG13.10 and recently we have encountered what appears to be\n> a bug due to some race condition between ALTER TABLE ... ADD CONSTRAINT and\n> some other catalog-writer, possibly ANALYZE.\n> The problem is that after successfully creating index on relation (which\n> previosly didnt have any indexes), its pg_class.relhasindex remains set to\n> \"false\", which is illegal, I think.\n> Index was built using the following statement:\n> ALTER TABLE \"example\" ADD constraint \"example_pkey\" PRIMARY KEY (id);\n\nThis is going to be a problem with any operation that does a transactional\npg_class update without taking a lock that conflicts with ShareLock. GRANT\ndoesn't lock the table at all, so I can reproduce this in v17 as follows:\n\n== session 1\ncreate table t (c int);\nbegin;\ngrant select on t to public;\n\n== session 2\nalter table t add primary key (c);\n\n== back in session 1\ncommit;\n\n\nWe'll likely need to change how we maintain relhasindex or perhaps take a lock\nin GRANT.\n\n> Looking into the WAL via waldump given us the following picture (full\n> waldump output is attached):\n\n> 1202295045 - create index statement\n> 1202298790 and 1202298791 are some other concurrent operations,\n> unfortunately I wasn't able to determine what they are\n\nCan you explore that as follows?\n\n- PITR to just before the COMMIT record.\n- Save all rows of pg_class.\n- PITR to just after the COMMIT record.\n- Save all rows of pg_class.\n- Diff the two sets of saved rows.\n\nWhich columns changed? The evidence you've shown would be consistent with a\ntransaction doing GRANT or REVOKE on dozens of tables. If the changed column\nis something other than relacl, that would be great to know.\n\nOn the off-chance it's relevant, what extensions do you have (\\dx in psql)?\n\n\n", "msg_date": "Thu, 26 Oct 2023 21:44:04 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "> This is going to be a problem with any operation that does a transactional\n> pg_class update without taking a lock that conflicts with ShareLock.\nGRANT\n> doesn't lock the table at all, so I can reproduce this in v17 as follows:\n>\n> == session 1\n> create table t (c int);\n> begin;\n> grant select on t to public;\n>\n> == session 2\n> alter table t add primary key (c);\n>\n> == back in session 1\n> commit;\n>\n>\n> We'll likely need to change how we maintain relhasindex or perhaps take a\nlock\n> in GRANT.\n\nOh, that explains it. Thank you very much.\n\n> Can you explore that as follows?\n>\n>- PITR to just before the COMMIT record.\n>- Save all rows of pg_class.\n>- PITR to just after the COMMIT record.\n>- Save all rows of pg_class.\n>- Diff the two sets of saved rows.\n\nSure, but it will take some time, it's a large db with lots of WAL segments\nto apply.\n\n> extensions\n\n extname | extversion\n--------------------+------------\n plpgsql | 1.0\n pg_stat_statements | 1.8\n pg_buffercache | 1.3\n pgstattuple | 1.5
", "msg_date": "Fri, 27 Oct 2023 13:03:23 +0300", "msg_from": "Smolkin Grigory <[email protected]>", "msg_from_op": true, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Thu, Oct 26, 2023 at 09:44:04PM -0700, Noah Misch wrote:\n> On Wed, Oct 25, 2023 at 01:39:41PM +0300, Smolkin Grigory wrote:\n> > We are running PG13.10 and recently we have encountered what appears to be\n> > a bug due to some race condition between ALTER TABLE ... ADD CONSTRAINT and\n> > some other catalog-writer, possibly ANALYZE.\n> > The problem is that after successfully creating index on relation (which\n> > previosly didnt have any indexes), its pg_class.relhasindex remains set to\n> > \"false\", which is illegal, I think.\n\nIt's damaging. The table will behave like it has no indexes. If something\nadds an index later, old indexes will reappear, corrupt, having not received\nupdates during the relhasindex=false era. (\"pg_amcheck --heapallindexed\" can\ndetect this.)\n\n> > Index was built using the following statement:\n> > ALTER TABLE \"example\" ADD constraint \"example_pkey\" PRIMARY KEY (id);\n> \n> This is going to be a problem with any operation that does a transactional\n> pg_class update without taking a lock that conflicts with ShareLock. GRANT\n> doesn't lock the table at all, so I can reproduce this in v17 as follows:\n> \n> == session 1\n> create table t (c int);\n> begin;\n> grant select on t to public;\n> \n> == session 2\n> alter table t add primary key (c);\n> \n> == back in session 1\n> commit;\n> \n> \n> We'll likely need to change how we maintain relhasindex or perhaps take a lock\n> in GRANT.\n\nThe main choice is accepting more DDL blocking vs. accepting inefficient\nrelcache builds. Options I'm seeing:\n\n=== \"more DDL blocking\" option family\n\nB1. Take ShareUpdateExclusiveLock in GRANT, REVOKE, and anything that makes\n transactional pg_class updates without holding some stronger lock. New\n asserts could catch future commands failing to do this.\n\nB2. Take some shorter-lived lock around pg_class tuple formation, such that\n GRANT blocks CREATE INDEX, but two CREATE INDEX don't block each other.\n Anything performing a transactional update of a pg_class row would acquire\n the lock in exclusive mode before fetching the old tuple and hold it till\n end of transaction. relhasindex=true in-place updates would acquire it\n the same way, but they would release it after the inplace update. I\n expect a new heavyweight lock type, e.g. LOCKTAG_RELATION_DEFINITION, with\n the same key as LOCKTAG_RELATION. This has less blocking than the\n previous option, but it's more complicated to explain to both users and\n developers.\n\nB3. Have CREATE INDEX do an EvalPlanQual()-style thing to update all successor\n tuple versions. Like the previous option, this would require new locking,\n but the new lock would not need to persist till end of xact. It would be\n even more complicated to explain to users and developers. (If this is\n promising enough to warrant more detail, let me know.)\n\nB4. Use transactional updates to set relhasindex=true. Two CREATE INDEX\n commands on the same table would block each other. If we did it the way\n most DDL does today, they'd get \"tuple concurrently updated\" failures\n after the blocking ends.\n\n=== \"inefficient relcache builds\" option family\n\nR1. Ignore relhasindex; possibly remove it in v17. Relcache builds et\n al. will issue more superfluous queries.\n\nR2. As a weird variant of the previous option, keep relhasindex and make all\n transactional updates of pg_class set relhasindex=true pessimistically.\n (VACUUM will set it back to false.)\n\n=== other\n\nO1. This is another case where the sometimes-discussed \"pg_class_nt\" for\n nontransactional columns would help. I'm ruling that out as too hard to\n back-patch.\n\n\nAre there other options important to consider? I currently like (B1) the\nmost, followed closely by (R1) and (B2). A key unknown is the prevalence of\nindex-free tables. Low prevalence would argue in favor of (R1). In my\nlimited experience, they've been rare. That said, I assume relcache builds\nhappen a lot more than GRANTs, so it's harder to bound the damage from (R1)\ncompared to the damage from (B1). Thoughts on this decision?\n\nThanks,\nnm\n\n\n", "msg_date": "Fri, 27 Oct 2023 11:48:32 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> On Thu, Oct 26, 2023 at 09:44:04PM -0700, Noah Misch wrote:\n>> We'll likely need to change how we maintain relhasindex or perhaps take a lock\n>> in GRANT.\n\n> The main choice is accepting more DDL blocking vs. accepting inefficient\n> relcache builds. Options I'm seeing:\n\nIt looks to me like you're only thinking about relhasindex, but it\nseems to me that any call of heap_inplace_update brings some\nrisk of this kind. Excluding the bootstrap-mode-only usage in\ncreate_toast_table, I see four callers:\n\n* index_update_stats updating a pg_class tuple's\n relhasindex, relpages, reltuples, relallvisible\n\n* vac_update_relstats updating a pg_class tuple's\n relpages, reltuples, relallvisible, relhasindex, relhasrules,\n relhastriggers, relfrozenxid, relminmxid\n\n* vac_update_datfrozenxid updating a pg_database tuple's\n datfrozenxid, datminmxid\n\n* dropdb updating a pg_database tuple's datconnlimit\n\nSo we have just as much of a problem with GRANTs on databases\nas GRANTs on relations. Also, it looks like we can lose\nknowledge of the presence of rules and triggers, which seems\nnearly as bad as forgetting about indexes. 
Have CREATE INDEX do an EvalPlanQual()-style thing to update all successor\n tuple versions. Like the previous option, this would require new locking,\n but the new lock would not need to persist till end of xact. It would be\n even more complicated to explain to users and developers. (If this is\n promising enough to warrant more detail, let me know.)\n\nB4. Use transactional updates to set relhasindex=true. Two CREATE INDEX\n commands on the same table would block each other. If we did it the way\n most DDL does today, they'd get \"tuple concurrently updated\" failures\n after the blocking ends.\n\n=== \"inefficient relcache builds\" option family\n\nR1. Ignore relhasindex; possibly remove it in v17. Relcache builds et\n al. will issue more superfluous queries.\n\nR2. As a weird variant of the previous option, keep relhasindex and make all\n transactional updates of pg_class set relhasindex=true pessimistically.\n (VACUUM will set it back to false.)\n\n=== other\n\nO1. This is another case where the sometimes-discussed \"pg_class_nt\" for\n nontransactional columns would help. I'm ruling that out as too hard to\n back-patch.\n\n\nAre there other options important to consider? I currently like (B1) the\nmost, followed closely by (R1) and (B2). A key unknown is the prevalence of\nindex-free tables. Low prevalence would argue in favor of (R1). In my\nlimited experience, they've been rare. That said, I assume relcache builds\nhappen a lot more than GRANTs, so it's harder to bound the damage from (R1)\ncompared to the damage from (B1). Thoughts on this decision?\n\nThanks,\nnm\n\n\n", "msg_date": "Fri, 27 Oct 2023 11:48:32 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> On Thu, Oct 26, 2023 at 09:44:04PM -0700, Noah Misch wrote:\n>> We'll likely need to change how we maintain relhasindex or perhaps take a lock\n>> in GRANT.\n\n> The main choice is accepting more DDL blocking vs. accepting inefficient\n> relcache builds. Options I'm seeing:\n\nIt looks to me like you're only thinking about relhasindex, but it\nseems to me that any call of heap_inplace_update brings some\nrisk of this kind. Excluding the bootstrap-mode-only usage in\ncreate_toast_table, I see four callers:\n\n* index_update_stats updating a pg_class tuple's\n relhasindex, relpages, reltuples, relallvisible\n\n* vac_update_relstats updating a pg_class tuple's\n relpages, reltuples, relallvisible, relhasindex, relhasrules,\n relhastriggers, relfrozenxid, relminmxid\n\n* vac_update_datfrozenxid updating a pg_database tuple's\n datfrozenxid, datminmxid\n\n* dropdb updating a pg_database tuple's datconnlimit\n\nSo we have just as much of a problem with GRANTs on databases\nas GRANTs on relations. Also, it looks like we can lose\nknowledge of the presence of rules and triggers, which seems\nnearly as bad as forgetting about indexes. 
The rest of these\nupdates might not be correctness-critical, although I wonder\nhow bollixed things could get if we forget an advancement of\nrelfrozenxid or datfrozenxid (especially if the calling\ntransaction goes on to make other changes that assume that\nthe update happened).\n\nBTW, vac_update_datfrozenxid believes (correctly I think) that\nit cannot use the syscache copy of a tuple as the basis for in-place\nupdate, because syscache will have detoasted any toastable fields.\nThese other callers are ignoring that, which seems like it should\nresult in heap_inplace_update failing with \"wrong tuple length\".\nI wonder how come we're not seeing reports of that from the field.\n\nI'm inclined to propose that heap_inplace_update should check to\nmake sure that it's operating on the latest version of the tuple\n(including, I guess, waiting for an uncommitted update?) and throw\nerror if not. I think this is what your B3 option is, but maybe\nI misinterpreted. It might be better to throw error immediately\ninstead of waiting to see if the other updater commits.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Oct 2023 15:32:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Fri, Oct 27, 2023 at 03:32:26PM -0400, Tom Lane wrote:\n> Noah Misch <[email protected]> writes:\n> > On Thu, Oct 26, 2023 at 09:44:04PM -0700, Noah Misch wrote:\n> >> We'll likely need to change how we maintain relhasindex or perhaps take a lock\n> >> in GRANT.\n> \n> > The main choice is accepting more DDL blocking vs. accepting inefficient\n> > relcache builds. Options I'm seeing:\n> \n> It looks to me like you're only thinking about relhasindex, but it\n> seems to me that any call of heap_inplace_update brings some\n> risk of this kind. Excluding the bootstrap-mode-only usage in\n> create_toast_table, I see four callers:\n> \n> * index_update_stats updating a pg_class tuple's\n> relhasindex, relpages, reltuples, relallvisible\n> \n> * vac_update_relstats updating a pg_class tuple's\n> relpages, reltuples, relallvisible, relhasindex, relhasrules,\n> relhastriggers, relfrozenxid, relminmxid\n> \n> * vac_update_datfrozenxid updating a pg_database tuple's\n> datfrozenxid, datminmxid\n> \n> * dropdb updating a pg_database tuple's datconnlimit\n> \n> So we have just as much of a problem with GRANTs on databases\n> as GRANTs on relations. Also, it looks like we can lose\n> knowledge of the presence of rules and triggers, which seems\n> nearly as bad as forgetting about indexes. The rest of these\n> updates might not be correctness-critical, although I wonder\n> how bollixed things could get if we forget an advancement of\n> relfrozenxid or datfrozenxid (especially if the calling\n> transaction goes on to make other changes that assume that\n> the update happened).\n\nThanks for researching that. Let's treat frozenxid stuff as critical; I\nwouldn't want to advance XID limits based on a datfrozenxid that later gets\nrolled back. I agree relhasrules and relhastriggers are also critical. 
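\nTo spell out the lost-update mechanism all of these share, here's the\ninteraction in miniature (the calls are today's; the interleaving and the\nvariable names are illustrative):\n\n  /* GRANT  */ oldtup = SearchSysCacheCopy1(DATABASEOID, ObjectIdGetDatum(dbid));\n  /* VACUUM */ heap_inplace_update(rel, tup);  /* datfrozenxid: X -> Y, in place */\n  /* GRANT  */ newtup = heap_modify_tuple(oldtup, desc, values, nulls, replaces);\n  /* GRANT  */ CatalogTupleUpdate(rel, &oldtup->t_self, newtup);\n  /* newtup came from the pre-VACUUM copy, so the committed row carries\n   * datfrozenxid = X again; the in-place advancement to Y is gone */\n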
The\n\"inefficient relcache builds\" option family can't solve cases like\nrelfrozenxid and datconnlimit, so that leaves us with the \"more DDL blocking\"\noption family.\n\n> BTW, vac_update_datfrozenxid believes (correctly I think) that\n> it cannot use the syscache copy of a tuple as the basis for in-place\n> update, because syscache will have detoasted any toastable fields.\n> These other callers are ignoring that, which seems like it should\n> result in heap_inplace_update failing with \"wrong tuple length\".\n> I wonder how come we're not seeing reports of that from the field.\n\nGood question. Perhaps we'll need some test cases that exercise each inplace\nupdate against a row having a toast pointer. It's too easy to go a long time\nwithout encountering those in the field.\n\n> I'm inclined to propose that heap_inplace_update should check to\n> make sure that it's operating on the latest version of the tuple\n> (including, I guess, waiting for an uncommitted update?) and throw\n> error if not. I think this is what your B3 option is, but maybe\n> I misinterpreted. It might be better to throw error immediately\n> instead of waiting to see if the other updater commits.\n\nThat's perhaps closer to B2. To be pedantic, B3 was about not failing or\nwaiting for GRANT to commit but instead inplace-updating every member of the\nupdate chain. For B2, I was thinking we don't need to error. There are two\nproblematic orders of events. The easy one is heap_inplace_update() mutating\na tuple that already has an xmax. That's the one in the test case upthread,\nand detecting it is trivial. The harder one is heap_inplace_update() mutating\na tuple after GRANT fetches the old tuple, before GRANT enters heap_update().\nI anticipate a new locktag per catalog that can receive inplace updates,\ni.e. LOCKTAG_RELATION_DEFINITION and LOCKTAG_DATABASE_DEFINITION. Here's a\nwalk-through for the pg_database case. GRANT will use the following sequence\nof events:\n\n- acquire LOCKTAG_DATABASE_DEFINITION in exclusive mode\n- fetch latest pg_database tuple\n- heap_update()\n- COMMIT, releasing LOCKTAG_DATABASE_DEFINITION\n\nvac_update_datfrozenxid() sequence of events:\n\n- acquire LOCKTAG_DATABASE_DEFINITION in exclusive mode\n- (now, all GRANTs on the given database have committed or aborted)\n- fetch latest pg_database tuple\n- heap_inplace_update()\n- release LOCKTAG_DATABASE_DEFINITION, even if xact not ending\n- continue with other steps, e.g. vac_truncate_clog()\n\nHow does that compare to what you envisioned? vac_update_datfrozenxid() could\nfurther use xmax as a best-efforts thing to catch conflict with manual UPDATE\nstatements, but it wouldn't solve the case where the UPDATE had fetched the\ntuple but not yet heap_update()'d it.\n\n\n", "msg_date": "Fri, 27 Oct 2023 14:49:46 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> On Fri, Oct 27, 2023 at 03:32:26PM -0400, Tom Lane wrote:\n>> I'm inclined to propose that heap_inplace_update should check to\n>> make sure that it's operating on the latest version of the tuple\n>> (including, I guess, waiting for an uncommitted update?) and throw\n>> error if not. I think this is what your B3 option is, but maybe\n>> I misinterpreted. It might be better to throw error immediately\n>> instead of waiting to see if the other updater commits.\n\n> That's perhaps closer to B2. 
To be pedantic, B3 was about not failing or\n> waiting for GRANT to commit but instead inplace-updating every member of the\n> update chain. For B2, I was thinking we don't need to error. There are two\n> problematic orders of events. The easy one is heap_inplace_update() mutating\n> a tuple that already has an xmax. That's the one in the test case upthread,\n> and detecting it is trivial. The harder one is heap_inplace_update() mutating\n> a tuple after GRANT fetches the old tuple, before GRANT enters heap_update().\n\nUgh ... you're right, what I was imagining would not catch that last case.\n\n> I anticipate a new locktag per catalog that can receive inplace updates,\n> i.e. LOCKTAG_RELATION_DEFINITION and LOCKTAG_DATABASE_DEFINITION.\n\nWe could perhaps make this work by using the existing tuple-lock\ninfrastructure, rather than inventing new locktags (a choice that\nspills to a lot of places including clients that examine pg_locks).\n\nI would prefer though to find a solution that only depends on making\nheap_inplace_update protect itself, without high-level cooperation\nfrom the possibly-interfering updater. This is basically because\nI'm still afraid that we're defining the problem too narrowly.\nFor one thing, I have nearly zero confidence that GRANT et al are\nthe only problematic source of conflicting transactional updates.\nFor another, I'm worried that some extension may be using\nheap_inplace_update against a catalog we're not considering here.\nI'd also like to find a solution that fixes the case of a conflicting\nmanual UPDATE (although certainly that's a stretch goal we may not be\nable to reach).\n\nI wonder if there's a way for heap_inplace_update to mark the tuple\nheader as just-updated in a way that regular heap_update could\nrecognize. (For standard catalog updates, we'd then end up erroring\nin simple_heap_update, which I think is fine.) We can't update xmin,\nbecause the VACUUM callers don't have an XID; but maybe there's some\nother way? I'm speculating about putting a funny value into xmax,\nor something like that, and having heap_update check that what it\nsees in xmax matches what was in the tuple the update started with.\n\nOr we could try to get rid of in-place updates, but that seems like\na mighty big lift. All of the existing callers, except maybe\nthe johnny-come-lately dropdb usage, have solid documented reasons\nto do it that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Oct 2023 18:40:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Fri, Oct 27, 2023 at 06:40:55PM -0400, Tom Lane wrote:\n> Noah Misch <[email protected]> writes:\n> > On Fri, Oct 27, 2023 at 03:32:26PM -0400, Tom Lane wrote:\n> >> I'm inclined to propose that heap_inplace_update should check to\n> >> make sure that it's operating on the latest version of the tuple\n> >> (including, I guess, waiting for an uncommitted update?) and throw\n> >> error if not. I think this is what your B3 option is, but maybe\n> >> I misinterpreted. It might be better to throw error immediately\n> >> instead of waiting to see if the other updater commits.\n> \n> > That's perhaps closer to B2. To be pedantic, B3 was about not failing or\n> > waiting for GRANT to commit but instead inplace-updating every member of the\n> > update chain. For B2, I was thinking we don't need to error. There are two\n> > problematic orders of events. 
The easy one is heap_inplace_update() mutating\n> > a tuple that already has an xmax. That's the one in the test case upthread,\n> > and detecting it is trivial. The harder one is heap_inplace_update() mutating\n> > a tuple after GRANT fetches the old tuple, before GRANT enters heap_update().\n> \n> Ugh ... you're right, what I was imagining would not catch that last case.\n> \n> > I anticipate a new locktag per catalog that can receive inplace updates,\n> > i.e. LOCKTAG_RELATION_DEFINITION and LOCKTAG_DATABASE_DEFINITION.\n> \n> We could perhaps make this work by using the existing tuple-lock\n> infrastructure, rather than inventing new locktags (a choice that\n> spills to a lot of places including clients that examine pg_locks).\n\nThat could be okay. It would be weird to reuse a short-term lock like that\none as something held till end of transaction. But the alternative of new\nlocktags ain't perfect, as you say.\n\n> I would prefer though to find a solution that only depends on making\n> heap_inplace_update protect itself, without high-level cooperation\n> from the possibly-interfering updater. This is basically because\n> I'm still afraid that we're defining the problem too narrowly.\n> For one thing, I have nearly zero confidence that GRANT et al are\n> the only problematic source of conflicting transactional updates.\n\nLikewise here, but I have fair confidence that an assertion would flush out\nthe rest. heap_inplace_update() would assert that the backend holds one of\nthe acceptable locks. It could even be an elog; heap_inplace_update() can\ntolerate that cost.\n\n> For another, I'm worried that some extension may be using\n> heap_inplace_update against a catalog we're not considering here.\n\nA pgxn search finds \"citus\" using heap_inplace_update().\n\n> I'd also like to find a solution that fixes the case of a conflicting\n> manual UPDATE (although certainly that's a stretch goal we may not be\n> able to reach).\n\nIt would be nice.\n\n> I wonder if there's a way for heap_inplace_update to mark the tuple\n> header as just-updated in a way that regular heap_update could\n> recognize. (For standard catalog updates, we'd then end up erroring\n> in simple_heap_update, which I think is fine.) We can't update xmin,\n> because the VACUUM callers don't have an XID; but maybe there's some\n> other way? I'm speculating about putting a funny value into xmax,\n> or something like that, and having heap_update check that what it\n> sees in xmax matches what was in the tuple the update started with.\n\nHmmm. Achieving it without an XID would be the real trick. (With an XID, we\ncould use xl_heap_lock like heap_update() does.) Thinking out loud, what if\nheap_inplace_update() sets HEAP_XMAX_INVALID and xmax =\nTransactionIdAdvance(xmax)? Or change t_ctid in a similar way. Then regular\nheap_update() could complain if the field changed vs. last seen value. This\nfeels like something to regret later in terms of limiting our ability to\nharness those fields for more-valuable ends or compact them away in a future\npage format. I can't pinpoint a specific loss, so the idea might have legs.\nNontransactional data in separate tables or in new metapages smells like the\nright long-term state. A project wanting to reuse the tuple header bits could\nintroduce such storage to unblock its own bit reuse.\n\n> Or we could try to get rid of in-place updates, but that seems like\n> a mighty big lift. 
All of the existing callers, except maybe\n> the johnny-come-lately dropdb usage, have solid documented reasons\n> to do it that way.\n\nYes, removing that smells problematic.\n\n\n", "msg_date": "Fri, 27 Oct 2023 16:26:12 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "I prototyped two ways, one with a special t_ctid and one with LockTuple().\n\nOn Fri, Oct 27, 2023 at 04:26:12PM -0700, Noah Misch wrote:\n> On Fri, Oct 27, 2023 at 06:40:55PM -0400, Tom Lane wrote:\n> > Noah Misch <[email protected]> writes:\n> > > On Fri, Oct 27, 2023 at 03:32:26PM -0400, Tom Lane wrote:\n\n> > > I anticipate a new locktag per catalog that can receive inplace updates,\n> > > i.e. LOCKTAG_RELATION_DEFINITION and LOCKTAG_DATABASE_DEFINITION.\n> > \n> > We could perhaps make this work by using the existing tuple-lock\n> > infrastructure, rather than inventing new locktags (a choice that\n> > spills to a lot of places including clients that examine pg_locks).\n> \n> That could be okay. It would be weird to reuse a short-term lock like that\n> one as something held till end of transaction. But the alternative of new\n> locktags ain't perfect, as you say.\n\nThat worked.\n\n> > I would prefer though to find a solution that only depends on making\n> > heap_inplace_update protect itself, without high-level cooperation\n> > from the possibly-interfering updater. This is basically because\n> > I'm still afraid that we're defining the problem too narrowly.\n> > For one thing, I have nearly zero confidence that GRANT et al are\n> > the only problematic source of conflicting transactional updates.\n> \n> Likewise here, but I have fair confidence that an assertion would flush out\n> the rest. heap_inplace_update() would assert that the backend holds one of\n> the acceptable locks. It could even be an elog; heap_inplace_update() can\n> tolerate that cost.\n\nThat check would fall in both heap_inplace_update() and heap_update(). After\nall, a heap_inplace_update() check won't detect an omission in GRANT.\n\n> > For another, I'm worried that some extension may be using\n> > heap_inplace_update against a catalog we're not considering here.\n> \n> A pgxn search finds \"citus\" using heap_inplace_update().\n> \n> > I'd also like to find a solution that fixes the case of a conflicting\n> > manual UPDATE (although certainly that's a stretch goal we may not be\n> > able to reach).\n> \n> It would be nice.\n\nI expect most approaches could get there by having ExecModifyTable() arrange\nfor the expected locking or other actions. That's analogous to how\nheap_update() takes care of sinval even for a manual UPDATE.\n\n> > I wonder if there's a way for heap_inplace_update to mark the tuple\n> > header as just-updated in a way that regular heap_update could\n> > recognize. (For standard catalog updates, we'd then end up erroring\n> > in simple_heap_update, which I think is fine.) We can't update xmin,\n> > because the VACUUM callers don't have an XID; but maybe there's some\n> > other way? I'm speculating about putting a funny value into xmax,\n> > or something like that, and having heap_update check that what it\n> > sees in xmax matches what was in the tuple the update started with.\n> \n> Hmmm. Achieving it without an XID would be the real trick. (With an XID, we\n> could use xl_heap_lock like heap_update() does.) Thinking out loud, what if\n> heap_inplace_update() sets HEAP_XMAX_INVALID and xmax =\n> TransactionIdAdvance(xmax)? 
Or change t_ctid in a similar way. Then regular\n> heap_update() could complain if the field changed vs. last seen value. This\n> feels like something to regret later in terms of limiting our ability to\n> harness those fields for more-valuable ends or compact them away in a future\n> page format. I can't pinpoint a specific loss, so the idea might have legs.\n> Nontransactional data in separate tables or in new metapages smells like the\n> right long-term state. A project wanting to reuse the tuple header bits could\n> introduce such storage to unblock its own bit reuse.\n\nheap_update() does not have the pre-modification xmax today, so I used t_ctid.\nheap_modify_tuple() preserves t_ctid, so heap_update() already has the\npre-modification t_ctid in key cases. For details of how the prototype uses\nt_ctid, see comment at \"#define InplaceCanaryOffsetNumber\". The prototype\ndoesn't prevent corruption in the following scenario, because the aborted\nALTER TABLE RENAME overwrites the special t_ctid:\n\n == session 1\n drop table t;\n create table t (c int);\n begin;\n -- in gdb, set breakpoint on heap_modify_tuple\n grant select on t to public;\n\n == session 2\n alter table t add primary key (c);\n begin; alter table t rename to t2; rollback;\n\n == back in session 1\n -- release breakpoint\n -- want error (would get it w/o the begin;alter;rollback)\n commit;\n\nI'm missing how to mark the tuple in a fashion accessible to a second\nheap_update() after a rolled-back heap_update(). The mark needs enough bits\n\"N\" so it's implausible for 2^N inplace updates to happen between GRANT\nfetching the old tuple and GRANT completing heap_update(). Looking for bits\nthat could persist across a rolled-back heap_update(), we have 3 in t_ctid, 2\nin t_infomask2, and 0 in xmax. I definitely don't want to paint us into a\ncorner by spending the t_infomask2 bits on this. Even if I did, 2^(3+2)=32\nwouldn't clearly be enough inplace updates.\n\nIs there a way to salvage the goal of fixing the bug without modifying code\nlike ExecGrant_common()? If not, I'm inclined to pick from one of the\nfollowing designs:\n\n- Acquire ShareUpdateExclusiveLock in GRANT ((B1) from previous list). It\n does make GRANT more intrusive; e.g. GRANT will cancel autovacuum. I'm\n leaning toward this one for two reasons. First, it doesn't slow\n heap_update() outside of assert builds. Second, it makes the GRANT\n experience more like the rest our DDL, in that concurrent DDL will make\n GRANT block, not fail.\n\n- GRANT passes to heapam the fixed-size portion of the pre-modification tuple.\n heap_update() compares those bytes to the oldtup in shared buffers to see if\n an inplace update happened. (HEAD could get the bytes from a new\n heap_update() parameter, while back branches would need a different passing\n approach.)\n\n- LockTuple(), as seen in its attached prototype. I like this least at the\n moment, because it changes pg_locks content without having a clear advantage\n over the previous option. Also, the prototype has enough FIXME markers that\n I expect this to get hairy before it's done.\n\nI might change my preference after further prototypes. Does anyone have a\nstrong preference between those? Briefly, I did consider these additional\nalternatives:\n\n- Just accept the yet-rarer chance of corruption from this message's test\n procedure.\n\n- Hold a buffer lock long enough to solve things.\n\n- Remember the tuples where we overwrote a special t_ctid, and reverse the\n overwrite during abort processing. 
But I/O in the abort path sounds bad.\n\nThanks,\nnm", "msg_date": "Wed, 1 Nov 2023 20:09:15 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "I'm attaching patches implementing the LockTuple() design. It turns out we\ndon't just lose inplace updates. We also overwrite unrelated tuples,\nreproduced at inplace.spec. Good starting points are README.tuplock and the\nheap_inplace_update_scan() header comment.\n\nOn Wed, Nov 01, 2023 at 08:09:15PM -0700, Noah Misch wrote:\n> On Fri, Oct 27, 2023 at 04:26:12PM -0700, Noah Misch wrote:\n> > On Fri, Oct 27, 2023 at 06:40:55PM -0400, Tom Lane wrote:\n> > > We could perhaps make this work by using the existing tuple-lock\n> > > infrastructure, rather than inventing new locktags (a choice that\n> > > spills to a lot of places including clients that examine pg_locks).\n\n> > > I'd also like to find a solution that fixes the case of a conflicting\n> > > manual UPDATE (although certainly that's a stretch goal we may not be\n> > > able to reach).\n\nI implemented that; search for ri_needLockTagTuple.\n\n> - GRANT passes to heapam the fixed-size portion of the pre-modification tuple.\n> heap_update() compares those bytes to the oldtup in shared buffers to see if\n> an inplace update happened. (HEAD could get the bytes from a new\n> heap_update() parameter, while back branches would need a different passing\n> approach.)\n\nThis could have been fine, but ...\n\n> - LockTuple(), as seen in its attached prototype. I like this least at the\n> moment, because it changes pg_locks content without having a clear advantage\n> over the previous option.\n\n... I settled on the LockTuple() design for these reasons:\n\n- Solves more conflicts by waiting, instead of by ERROR or by retry loops.\n- Extensions wanting inplace updates don't have a big disadvantage over core\n code inplace updates.\n- One could use this to stop \"tuple concurrently updated\" for pg_class rows,\n by using SearchSysCacheLocked1() for all pg_class DDL and making that\n function wait for any existing xmax like inplace_xmax_lock() does. I don't\n expect to write that, but it's a nice option to have.\n- pg_locks shows the new lock acquisitions.\n\nSeparable, nontrivial things not fixed in the attached patch stack:\n\n- Inplace update uses transactional CacheInvalidateHeapTuple(). ROLLBACK of\n CREATE INDEX wrongly discards the inval, leading to the relhasindex=t loss\n still seen in inplace-inval.spec. CacheInvalidateRelmap() does this right.\n\n- AtEOXact_Inval(true) is outside the RecordTransactionCommit() critical\n section, but it is critical. We must not commit transactional DDL without\n other backends receiving an inval. (When the inplace inval becomes\n nontransactional, it will face the same threat.)\n\n- Trouble is possible, I bet, if the system crashes between the inplace-update\n memcpy() and XLogInsert(). See the new XXX comment below the memcpy().\n Might solve this by inplace update setting DELAY_CHKPT, writing WAL, and\n finally issuing memcpy() into the buffer.\n\n- [consequences limited to transient failure] Since a PROC_IN_VACUUM backend's\n xmin does not stop pruning, an MVCC scan in that backend can find zero\n tuples when one is live. This is like what all backends got in the days of\n SnapshotNow catalog scans. See the pgbench test suite addition. 
(Perhaps\n the fix is to make VACUUM do its MVCC scans outside of PROC_IN_VACUUM,\n setting that flag later and unsetting it earlier.)\n\nIf you find decisions in this thread's patches are tied to any of those such\nthat I should not separate those, let's discuss that. Topics in the patches\nthat I feel are most fruitful for debate:\n\n- This makes inplace update block if the tuple has an updater. It's like one\n GRANT blocking another, except an inplace updater won't get \"ERROR: tuple\n concurrently updated\" like one of the GRANTs would. I had implemented\n versions that avoided this blocking by mutating each tuple in the updated\n tuple chain. That worked, but it had corner cases bad for maintainability,\n listed in the inplace_xmax_lock() header comment. I'd rather accept the\n blocking, so hackers can rule out those corner cases. A long-running GRANT\n already hurts VACUUM progress more just by keeping an XID running.\n\n- Pre-checks could make heap_inplace_update_cancel() calls rarer. Avoiding\n one of those avoids an exclusive buffer lock, and it avoids waiting on\n concurrent heap_update() if any. We'd pre-check the syscache tuple.\n EventTriggerOnLogin() does it that way, because the code was already in that\n form. I expect only vac_update_datfrozenxid() concludes !dirty enough to\n matter. I didn't bother with the optimization, but it would be simple.\n\n- If any citus extension user feels like porting its heap_inplace_update()\n call to this, I'd value hearing about your experience.\n\n- I paid more than my usual attention to test coverage, considering the patch\n stack's intensity compared to most back-patch bug fixes.\n\nI've kept all the above topics brief; feel free to ask for more details.\n\nThanks,\nnm", "msg_date": "Sun, 12 May 2024 16:29:23 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Sun, May 12, 2024 at 04:29:23PM -0700, Noah Misch wrote:\n> I'm attaching patches implementing the LockTuple() design. It turns out we\n> don't just lose inplace updates. We also overwrite unrelated tuples,\n> reproduced at inplace.spec. Good starting points are README.tuplock and the\n> heap_inplace_update_scan() header comment.\n\nAbout inplace050-tests-inj-v1.patch.\n\n+\t/* Check if blocked_pid is in injection_wait(). */\n+\tproc = BackendPidGetProc(blocked_pid);\n+\tif (proc == NULL)\n+\t\tPG_RETURN_BOOL(false);\t/* session gone: definitely unblocked */\n+\twait_event =\n+\t\tpgstat_get_wait_event(UINT32_ACCESS_ONCE(proc->wait_event_info));\n+\tif (wait_event && strncmp(\"INJECTION_POINT(\",\n+\t\t\t\t\t\t\t wait_event,\n+\t\t\t\t\t\t\t strlen(\"INJECTION_POINT(\")) == 0)\n+\t\tPG_RETURN_BOOL(true);\n\nHmm. I am not sure that this is the right interface for the job\nbecause this is not only related to injection points but to the\nmonitoring of one or more wait events when running a permutation\nstep. Perhaps this is something that should be linked to the spec\nfiles with some property area listing the wait events we're expected\nto wait on instead when running a step that we know will wait?\n--\nMichael", "msg_date": "Mon, 13 May 2024 16:59:59 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Mon, May 13, 2024 at 04:59:59PM +0900, Michael Paquier wrote:\n> About inplace050-tests-inj-v1.patch.\n> \n> +\t/* Check if blocked_pid is in injection_wait(). 
*/\n> +\tproc = BackendPidGetProc(blocked_pid);\n> +\tif (proc == NULL)\n> +\t\tPG_RETURN_BOOL(false);\t/* session gone: definitely unblocked */\n> +\twait_event =\n> +\t\tpgstat_get_wait_event(UINT32_ACCESS_ONCE(proc->wait_event_info));\n> +\tif (wait_event && strncmp(\"INJECTION_POINT(\",\n> +\t\t\t\t\t\t\t wait_event,\n> +\t\t\t\t\t\t\t strlen(\"INJECTION_POINT(\")) == 0)\n> +\t\tPG_RETURN_BOOL(true);\n> \n> Hmm. I am not sure that this is the right interface for the job\n> because this is not only related to injection points but to the\n> monitoring of one or more wait events when running a permutation\n> step.\n\nCould you say more about that? Permutation steps don't monitor wait events\ntoday. This patch would be the first instance of that.\n\n> Perhaps this is something that should be linked to the spec\n> files with some property area listing the wait events we're expected\n> to wait on instead when running a step that we know will wait?\n\nThe spec syntax doesn't distinguish contention types at all. The isolation\ntester's needs are limited to distinguishing:\n\n (a) process is waiting on another test session\n (b) process is waiting on automatic background activity (autovacuum, mainly)\n\nAutomatic background activity doesn't make a process enter or leave\ninjection_wait(), so all injection point wait events fall in (a). (The tester\nignores (b), since those clear up without intervention. Failing to ignore\nthem, as the tester did long ago, made output unstable.)", "msg_date": "Mon, 13 May 2024 11:54:03 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Sun, May 12, 2024 at 7:29 PM Noah Misch <[email protected]> wrote:\n> - [consequences limited to transient failure] Since a PROC_IN_VACUUM backend's\n> xmin does not stop pruning, an MVCC scan in that backend can find zero\n> tuples when one is live. This is like what all backends got in the days of\n> SnapshotNow catalog scans. See the pgbench test suite addition. (Perhaps\n> the fix is to make VACUUM do its MVCC scans outside of PROC_IN_VACUUM,\n> setting that flag later and unsetting it earlier.)\n\nAre you saying that this is a problem already, or that the patch\ncauses it to start happening? 
If it's the former, that's horrible.\n\nThe former.\n\n\n", "msg_date": "Mon, 13 May 2024 13:30:38 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Sun, May 12, 2024 at 04:29:23PM -0700, Noah Misch wrote:\n> I'm attaching patches implementing the LockTuple() design.\n\nStarting 2024-06-10, I plan to push the first seven of the ten patches:\n\ninplace005-UNEXPECTEDPASS-tap-meson-v1.patch\ninplace010-tests-v1.patch\ninplace040-waitfuncs-v1.patch\ninplace050-tests-inj-v1.patch\ninplace060-nodeModifyTable-comments-v1.patch\n Those five just deal in tests, test infrastructure, and comments.\ninplace070-rel-locks-missing-v1.patch\n Main risk is new DDL deadlocks.\ninplace080-catcache-detoast-inplace-stale-v1.patch\n If it fails to fix the bug it targets, I expect it's a no-op rather than\n breaking things.\n\nI'll leave the last three of the ten needing review. Those three are beyond\nmy skill to self-certify.\n\n\n", "msg_date": "Wed, 5 Jun 2024 11:17:06 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Wed, Jun 5, 2024 at 2:17 PM Noah Misch <[email protected]> wrote:\n> Starting 2024-06-10, I plan to push the first seven of the ten patches:\n>\n> inplace005-UNEXPECTEDPASS-tap-meson-v1.patch\n> inplace010-tests-v1.patch\n> inplace040-waitfuncs-v1.patch\n> inplace050-tests-inj-v1.patch\n> inplace060-nodeModifyTable-comments-v1.patch\n> Those five just deal in tests, test infrastructure, and comments.\n> inplace070-rel-locks-missing-v1.patch\n> Main risk is new DDL deadlocks.\n> inplace080-catcache-detoast-inplace-stale-v1.patch\n> If it fails to fix the bug it targets, I expect it's a no-op rather than\n> breaking things.\n>\n> I'll leave the last three of the ten needing review. Those three are beyond\n> my skill to self-certify.\n\nIt's not this patch set's fault, but I'm not very pleased to see that\nthe injection point wait events have been shoehorned into the\n\"Extension\" category - which they are not - instead of being a new\nwait_event_type. That would have avoided the ugly wait-event naming\npattern, inconsistent with everything else, introduced by\ninplace050-tests-inj-v1.patch.\n\nI think that the comments and commit messages in this patch set could,\nin some places, use improvement. For instance,\ninplace060-nodeModifyTable-comments-v1.patch reflows a bunch of\ncomments, which makes it hard to see what actually changed, and the\ncommit message doesn't tell you, either. A good bit of it seems to be\nchanging \"a view\" to \"a view INSTEAD OF trigger\" or \"a view having an\nINSTEAD OF trigger,\" but the reasoning behind that change is not\nspelled out anywhere. The reader is left to guess what the other case\nis and why the same principles don't apply to it. I don't doubt that\nthe new comments are more correct than the old ones, but I expect\nfuture patch authors to have difficulty maintaining that state of\naffairs.\n\nSimilarly, inplace070-rel-locks-missing-v1.patch adds no comments.\nIMHO, the commit message also isn't very informative. It disclaims\nknowledge of what bug it's fixing, while at the same time leaving the\nreader to figure out for themselves how the behavior has changed.\nConsequently, I expect writing the release notes for a release\nincluding this patch to be difficult: \"We added some locks that block\n... something ... in some circumstances ... to prevent ... 
something.\"\nIt's not really the job of the release note author to fill in those\nblanks, but rather of the patch author or committer. I don't want to\noverburden the act of fixing bugs, but I just feel like more\nexplanation is needed here. When I see for example that we're adding a\nlock acquisition to the end of heap_create(), I can't help but wonder\nif it's really true that we don't take a lock on a just-created\nrelation today. I'm certainly under the impression that we lock\nnewly-created, uncommitted relations, and a quick test seems to\nconfirm that. I don't quite know whether that happens, but evidently\nthis call is guarding against something more subtle than a categorical\nfailure to lock a relation on creation so I think there should be a\ncomment explaining what that thing is.\n\nIt's also quite surprising that SetRelationHasSubclass() says \"take X\nlock before calling\" and 2 of 4 callers just don't. I guess that's how\nit is. But shouldn't we then have an assertion inside that function to\nguard against future mistakes? If the reason why we failed to add this\ninitially is discernible from the commit messages that introduced the\nbug, it would be nice to mention what it seems to have been; if not,\nit would at least be nice to mention the offending commit(s). I'm also\na bit worried that this is going to cause deadlocks, but I suppose if\nit does, that's still better than the status quo.\n\nIsInplaceUpdateOid's header comment says IsInplaceUpdateRelation\ninstead of IsInplaceUpdateOid.\n\ninplace080-catcache-detoast-inplace-stale-v1.patch seems like another\nplace where spelling out the rationale in more detail would be helpful\nto future readers; for instance, the commit message says that\nPgDatabaseToastTable is the only one affected, but it doesn't say why\nthe others are not, or why this one is. The lengthy comment in\nCatalogCacheCreateEntry is also difficult to correlate with the code\nwhich follows. I can't guess whether the two cases called out in the\ncomment always needed to be handled and were handled save only for\nin-place updates, and thus the comment changes were simply taking the\nopportunity to elaborate on the existing comments; or whether one of\nthose cases is preexisting and the other arises from the desire to\nhandle inplace updates. It could be helpful to mention relevant\nidentifiers from the code in the comment text e.g.\n\"systable_recheck_tuple detects ordinary updates by noting changes to\nthe tuple's visibility information, while the equalTuple() case\ndetects inplace updates.\"\n\nIMHO, this patch set underscores the desirability of removing in-place\nupdate altogether. That sounds difficult and not back-patchable, but I\ncan't classify what this patch set does as anything better than grotty\nhacks to work around serious design deficiencies. That is not a vote\nagainst these patches: I see no better way forward. 
Nonetheless, I\ndislike the lack of better options.\n\nI have done only cursory review of the last two patches and don't feel\nI'm in a place to certify them, at least not now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 6 Jun 2024 09:48:51 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Thu, Jun 06, 2024 at 09:48:51AM -0400, Robert Haas wrote:\n> It's not this patch set's fault, but I'm not very pleased to see that\n> the injection point wait events have been shoehorned into the\n> \"Extension\" category - which they are not - instead of being a new\n> wait_event_type. That would have avoided the ugly wait-event naming\n> pattern, inconsistent with everything else, introduced by\n> inplace050-tests-inj-v1.patch.\n\nI'm not sure I agree with that. The set of core backend APIs supporting\ninjection points has nothing to do with wait events. The library\nattached to one or more injection points *may* decide to use a wait\nevent like what the wait/wakeup calls in modules/injection_points do,\nbut that's entirely optional. These rely on custom wait events,\nplugged into the Extension category as the code being run is itself in an\nextension. 
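\nFor reference, the module-side pattern is small. Here is a hand-reduced\nsketch of what modules/injection_points does (simplified, so names and\ndetails may differ from the tree):\n\n  static uint32 injection_wait_event = 0;\n\n  /* custom wait event, reported under the \"Extension\" type */\n  if (injection_wait_event == 0)\n      injection_wait_event = WaitEventExtensionNew(\"InjectionWait\");\n\n  ConditionVariablePrepareToSleep(&state->wait_point);\n  while (!state->wakeup)\n      ConditionVariableSleep(&state->wait_point, injection_wait_event);\n  ConditionVariableCancelSleep();\n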
I am not arguing against the point that it may be\n> interesting to plug in custom wait event categories, but the current\n> design of wait events makes that much harder than what core is\n> currently able to handle, and I am not sure that this brings much at\n> the end as long as the wait event strings can be customized.\n>\n> I've voiced upthread concerns over the naming enforced by the patch\n> and the way it plugs the namings into the isolation functions, by the\n> way.\n\nI think the core code should provide an \"Injection Point\" wait event\ntype and let extensions add specific wait events there, just like you\ndid for \"Extension\". Then this ugly naming would go away. As I see it,\n\"Extension\" is only supposed to be used as a catch-all when we have no\nother information, but here we do. If we refuse to use the\nwait_event_type field to categorize waits, then people are going to\nhave to find some other way to get that data into the system, as Noah\nhas done.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 7 Jun 2024 09:08:03 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Fri, Jun 07, 2024 at 09:08:03AM -0400, Robert Haas wrote:\n> On Thu, Jun 6, 2024 at 7:20 PM Michael Paquier <[email protected]> wrote:\n> > On Thu, Jun 06, 2024 at 09:48:51AM -0400, Robert Haas wrote:\n> > > It's not this patch set's fault, but I'm not very pleased to see that\n> > > the injection point wait events have been shoehorned into the\n> > > \"Extension\" category - which they are not - instead of being a new\n> > > wait_event_type. That would have avoided the ugly wait-event naming\n> > > pattern, inconsistent with everything else, introduced by\n> > > inplace050-tests-inj-v1.patch.\n> >\n> > Not sure to agree with that. The set of core backend APIs supporting\n> > injection points have nothing to do with wait events. The library\n> > attached to one or more injection points *may* decide to use a wait\n> > event like what the wait/wakeup calls in modules/injection_points do,\n> > but that's entirely optional. These rely on custom wait events,\n> > plugged into the Extension category as the code run is itself in an\n> > extension. I am not arguing against the point that it may be\n> > interesting to plug in custom wait event categories, but the current\n> > design of wait events makes that much harder than what core is\n> > currently able to handle, and I am not sure that this brings much at\n> > the end as long as the wait event strings can be customized.\n> >\n> > I've voiced upthread concerns over the naming enforced by the patch\n> > and the way it plugs the namings into the isolation functions, by the\n> > way.\n> \n> I think the core code should provide an \"Injection Point\" wait event\n> type and let extensions add specific wait events there, just like you\n> did for \"Extension\".\n\nMichael, could you accept the core code offering that, or not? If so, I am\ncontent to implement that. If not, for injection point wait events, I have\njust one priority. The isolation tester already detects lmgr locks without\nthe test writer teaching it about each lock individually. I want it to have\nthat same capability for injection points. Do you think we can find something\neveryone can accept, having that property? 
These wait events show up in tests\nonly, and I'm happy to make the cosmetics be anything compatible with that\ndetection ability.\n\n\n", "msg_date": "Mon, 10 Jun 2024 19:19:27 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Thu, Jun 06, 2024 at 09:48:51AM -0400, Robert Haas wrote:\n> On Wed, Jun 5, 2024 at 2:17 PM Noah Misch <[email protected]> wrote:\n> > Starting 2024-06-10, I plan to push the first seven of the ten patches:\n> >\n> > inplace005-UNEXPECTEDPASS-tap-meson-v1.patch\n> > inplace010-tests-v1.patch\n> > inplace040-waitfuncs-v1.patch\n> > inplace050-tests-inj-v1.patch\n> > inplace060-nodeModifyTable-comments-v1.patch\n> > Those five just deal in tests, test infrastructure, and comments.\n> > inplace070-rel-locks-missing-v1.patch\n> > Main risk is new DDL deadlocks.\n> > inplace080-catcache-detoast-inplace-stale-v1.patch\n> > If it fails to fix the bug it targets, I expect it's a no-op rather than\n> > breaking things.\n> >\n> > I'll leave the last three of the ten needing review. Those three are beyond\n> > my skill to self-certify.\n> \n> It's not this patch set's fault, but I'm not very pleased to see that\n> the injection point wait events have been shoehorned into the\n> \"Extension\" category\n\nI've replied on that branch of the thread.\n\n> I think that the comments and commit messages in this patch set could,\n> in some places, use improvement. For instance,\n> inplace060-nodeModifyTable-comments-v1.patch reflows a bunch of\n> comments, which makes it hard to see what actually changed, and the\n> commit message doesn't tell you, either. A good bit of it seems to be\n> changing \"a view\" to \"a view INSTEAD OF trigger\" or \"a view having an\n> INSTEAD OF trigger,\" but the reasoning behind that change is not\n> spelled out anywhere. The reader is left to guess what the other case\n> is and why the same principles don't apply to it. I don't doubt that\n> the new comments are more correct than the old ones, but I expect\n> future patch authors to have difficulty maintaining that state of\n> affairs.\n\nThe two kinds are trigger-updatable views and auto-updatable views. I've\nadded sentences about that to the nodeModifyTable.c header comment. One could\nargue for dropping the INSTEAD OF comment changes outside of the header.\n\n> Similarly, inplace070-rel-locks-missing-v1.patch adds no comments.\n> IMHO, the commit message also isn't very informative. It disclaims\n> knowledge of what bug it's fixing, while at the same time leaving the\n> reader to figure out for themselves how the behavior has changed.\n> Consequently, I expect writing the release notes for a release\n> including this patch to be difficult: \"We added some locks that block\n> ... something ... in some circumstances ... to prevent ... something.\"\n> It's not really the job of the release note author to fill in those\n> blanks, but rather of the patch author or committer. I don't want to\n\nI had been thinking release notes should just say \"Add missing DDL lock\nacquisitions\". One can cure a breach of our locking standards without proving\nsome specific bad outcome. However, one could counter that commands like\nGRANT follow a different standard, and perhaps SetRelationHasSubclass() should\nuse the GRANT standard. Hence, I researched the bugs this fixes and split\ninplace070-rel-locks-missing into three patches:\n\n1. 
[inplace065-lock-SequenceChangePersistence] Lock in\n SequenceChangePersistence(), where the omission can lose nextval()\n increments of the sequence.\n\n2. [inplace071-lock-SetRelationHasSubclass] Lock in SetRelationHasSubclass().\n This one has only minor benefits; see the new commit message. A fair\n alternative would be tuple-level locking in inplace120-locktag, like that\n patch adds to GRANT. That might avoid some deadlocks. I feel like the\n minor benefits justify the way I chose, but it's a weak preference.\n\n3. [inplace075-lock-heap_create] Add to heap creation:\n\n> overburden the act of fixing bugs, but I just feel like more\n> explanation is needed here. When I see for example that we're adding a\n> lock acquisition to the end of heap_create(), I can't help but wonder\n> if it's really true that we don't take a lock on a just-created\n> relation today. I'm certainly under the impression that we lock\n> newly-created, uncommitted relations, and a quick test seems to\n> confirm that. I don't quite know whether that happens, but evidently\n> this call is guarding against something more subtle than a categorical\n> failure to lock a relation on creation so I think there should be a\n> comment explaining what that thing is.\n\nI've covered that in the new log message. To lock as early as possible, I've\nmoved this up a layer, to just after relid assignment. One could argue this\nchange belongs in inplace120 rather than its own patch, since it's only here\nto eliminate a harmless exception to the rule inplace120 asserts.\n\nI've removed the update_relispartition() that appeared in\ninplace070-rel-locks-missing-v1.patch. Only an older, unpublished draft of\nthe rules (that inplace110-successors adds to README.tuplock) required that\nlock. The lock might be worthwhile for avoiding \"tuple concurrently updated\",\nbut it's out of scope for $SUBJECT.\n\n> It's also quite surprising that SetRelationHasSubclass() says \"take X\n> lock before calling\" and 2 of 4 callers just don't. I guess that's how\n> it is. But shouldn't we then have an assertion inside that function to\n> guard against future mistakes? If the reason why we failed to add this\n\nWorks for me. Done. I've moved the LockHeldByMe() change from\ninplace110-successors to this patch, since the assertion wants it.\n\n> initially is discernible from the commit messages that introduced the\n> bug, it would be nice to mention what it seems to have been; if not,\n> it would at least be nice to mention the offending commit(s). I'm also\n\nDone.\n\n> a bit worried that this is going to cause deadlocks, but I suppose if\n> it does, that's still better than the status quo.\n> \n> IsInplaceUpdateOid's header comment says IsInplaceUpdateRelation\n> instead of IsInplaceUpdateOid.\n\nFixed.\n\n> inplace080-catcache-detoast-inplace-stale-v1.patch seems like another\n> place where spelling out the rationale in more detail would be helpful\n> to future readers; for instance, the commit message says that\n> PgDatabaseToastTable is the only one affected, but it doesn't say why\n> the others are not, or why this one is. The lengthy comment in\n\nI've updated the commit message to answer that.\n\n> CatalogCacheCreateEntry is also difficult to correlate with the code\n> which follows. 
I can't guess whether the two cases called out in the\n> comment always needed to be handled and were handled save only for\n> in-place updates, and thus the comment changes were simply taking the\n> opportunity to elaborate on the existing comments; or whether one of\n> those cases is preexisting and the other arises from the desire to\n> handle inplace updates. It could be helpful to mention relevant\n> identifiers from the code in the comment text e.g.\n> \"systable_recheck_tuple detects ordinary updates by noting changes to\n> the tuple's visibility information, while the equalTuple() case\n> detects inplace updates.\"\n\nThe patch was elaborating on existing comments. Reading the patch again\ntoday, the elaboration no longer feels warranted. Hence, I've rewritten that\ncomment addition. I've included identifiers, and the patch no longer adds\ncomment material orthogonal to inplace updates.\n\nThanks,\nnm", "msg_date": "Mon, 10 Jun 2024 19:45:25 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Mon, Jun 10, 2024 at 07:19:27PM -0700, Noah Misch wrote:\n> On Fri, Jun 07, 2024 at 09:08:03AM -0400, Robert Haas wrote:\n>> I think the core code should provide an \"Injection Point\" wait event\n>> type and let extensions add specific wait events there, just like you\n>> did for \"Extension\".\n> \n> Michael, could you accept the core code offering that, or not? If so, I am\n> content to implement that. If not, for injection point wait events, I have\n> just one priority. The isolation tester already detects lmgr locks without\n> the test writer teaching it about each lock individually. I want it to have\n> that same capability for injection points. Do you think we can find something\n> everyone can accept, having that property? These wait events show up in tests\n> only, and I'm happy to make the cosmetics be anything compatible with that\n> detection ability.\n\nAdding a wait event class for injection point is an interesting\nsuggestion that would simplify the detection in the isolation function\nquite a bit. Are you sure that this is something that would be fit\nfor v17 material? TBH, I am not sure.\n\nAt the end, the test coverage has the highest priority and the bugs\nyou are addressing are complex enough that isolation tests of this\nlevel are a necessity, so I don't object to what\ninplace050-tests-inj-v2.patch introduces with the naming dependency\nfor the time being on HEAD. 
I'll just adapt and live with that\ndepending on what I deal with, while trying to improve HEAD later on.\n\nI'm still wondering if there is something that could be more elegant\nthan a dedicated class for injection points, but I cannot think about\nsomething that would be better for isolation tests on top of my head.\nIf there is something I can think of, I'll just go and implement it :)\n--\nMichael", "msg_date": "Tue, 11 Jun 2024 13:37:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "Hello, everyone.\n\nI am not sure, but I think that issue may be related to the issue described\nin\nhttps://www.postgresql.org/message-id/CANtu0ojXmqjmEzp-%3DaJSxjsdE76iAsRgHBoK0QtYHimb_mEfsg%40mail.gmail.com\n\nIt looks like REINDEX CONCURRENTLY could interfere with ON CONFLICT UPDATE\nin some strange way.\n\nBest regards,\nMikhail.", "msg_date": "Wed, 12 Jun 2024 15:02:43 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Tue, Jun 11, 2024 at 01:37:21PM +0900, Michael Paquier wrote:\n> On Mon, Jun 10, 2024 at 07:19:27PM -0700, Noah Misch wrote:\n> > On Fri, Jun 07, 2024 at 09:08:03AM -0400, Robert Haas wrote:\n> >> I think the core code should provide an \"Injection Point\" wait event\n> >> type and let extensions add specific wait events there, just like you\n> >> did for \"Extension\".\n> > \n> > Michael, could you accept the core code offering that, or not? If so, I am\n> > content to implement that. If not, for injection point wait events, I have\n> > just one priority. The isolation tester already detects lmgr locks without\n> > the test writer teaching it about each lock individually. I want it to have\n> > that same capability for injection points. Do you think we can find something\n> > everyone can accept, having that property? These wait events show up in tests\n> > only, and I'm happy to make the cosmetics be anything compatible with that\n> > detection ability.\n> \n> Adding a wait event class for injection point is an interesting\n> suggestion that would simplify the detection in the isolation function\n> quite a bit. Are you sure that this is something that would be fit\n> for v17 material? TBH, I am not sure.\n\nIf I were making a list of changes always welcome post-beta, it wouldn't\ninclude adding wait event types. But I don't hesitate to add one if it\nunblocks a necessary test for a bug present in all versions.\n\n> At the end, the test coverage has the highest priority and the bugs\n> you are addressing are complex enough that isolation tests of this\n> level are a necessity, so I don't object to what\n> inplace050-tests-inj-v2.patch introduces with the naming dependency\n> for the time being on HEAD. I'll just adapt and live with that\n> depending on what I deal with, while trying to improve HEAD later on.\n\nHere's what I'm reading for each person's willingness to tolerate each option:\n\nSTRATEGY | Paquier | Misch | Haas\n--------------------------------------------------------\nnew \"Injection Point\" wait type | maybe | yes | yes\nINJECTION_POINT(...) 
naming | yes | yes | unknown\nisolation spec says event names | yes | no | unknown\n\nCorrections and additional strategy lines welcome. Robert, how do you judge\nthe lines where I've listed you as \"unknown\"?\n\n> I'm still wondering if there is something that could be more elegant\n> than a dedicated class for injection points, but I cannot think about\n> something that would be better for isolation tests on top of my head.\n> If there is something I can think of, I'll just go and implement it :)\n\nI once considered changing them to use advisory lock waits instead of\nConditionVariableSleep(), but I recall that was worse from the perspective of\ninjection points in critical sections.\n\n\n", "msg_date": "Wed, 12 Jun 2024 10:54:52 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Wed, Jun 12, 2024 at 1:54 PM Noah Misch <[email protected]> wrote:\n> If I were making a list of changes always welcome post-beta, it wouldn't\n> include adding wait event types. But I don't hesitate to add one if it\n> unblocks a necessary test for a bug present in all versions.\n\nHowever, injection points themselves are not present in all versions,\nso even if we invent a new wait-event type, we'll have difficulty\ntesting older versions, unless we're planning to back-patch all of\nthat infrastructure, which I assume we aren't.\n\nPersonally, I think the fact that injection point wait events were put\nunder Extension is a design mistake that should be corrected before 17\nis out of beta.\n\n> Here's what I'm reading for each person's willingness to tolerate each option:\n>\n> STRATEGY | Paquier | Misch | Haas\n> --------------------------------------------------------\n> new \"Injection Point\" wait type | maybe | yes | yes\n> INJECTION_POINT(...) naming | yes | yes | unknown\n> isolation spec says event names | yes | no | unknown\n>\n> Corrections and additional strategy lines welcome. Robert, how do you judge\n> the lines where I've listed you as \"unknown\"?\n\nI'd tolerate INJECTION_POINT() if we had no other option but I think\nit's clearly inferior. Does the last line refer to putting the\nspecific wait event names in the isolation spec file? If so, I'd also\nbe fine with that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 12 Jun 2024 14:08:31 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Wed, Jun 12, 2024 at 02:08:31PM -0400, Robert Haas wrote:\n> On Wed, Jun 12, 2024 at 1:54 PM Noah Misch <[email protected]> wrote:\n> > If I were making a list of changes always welcome post-beta, it wouldn't\n> > include adding wait event types. But I don't hesitate to add one if it\n> > unblocks a necessary test for a bug present in all versions.\n> \n> However, injection points themselves are not present in all versions,\n> so even if we invent a new wait-event type, we'll have difficulty\n> testing older versions, unless we're planning to back-patch all of\n> that infrastructure, which I assume we aren't.\n\nRight. We could put the injection point tests in v18 only instead of v17+v18.\nI feel that would be an overreaction to a dispute about names that show up\nonly in tests. 
Even so, I could accept that.\n\n> Personally, I think the fact that injection point wait events were put\n> under Extension is a design mistake that should be corrected before 17\n> is out of beta.\n\nWorks for me. I don't personally have a problem with the use of Extension,\nsince it is a src/test/modules extension creating them.\n\n> > Here's what I'm reading for each person's willingness to tolerate each option:\n> >\n> > STRATEGY | Paquier | Misch | Haas\n> > --------------------------------------------------------\n> > new \"Injection Point\" wait type | maybe | yes | yes\n> > INJECTION_POINT(...) naming | yes | yes | unknown\n> > isolation spec says event names | yes | no | unknown\n> >\n> > Corrections and additional strategy lines welcome. Robert, how do you judge\n> > the lines where I've listed you as \"unknown\"?\n> \n> I'd tolerate INJECTION_POINT() if we had no other option but I think\n> it's clearly inferior. Does the last line refer to putting the\n> specific wait event names in the isolation spec file? If so, I'd also\n> be fine with that.\n\nYes, the last line does refer to that. Updated table:\n\nSTRATEGY | Paquier | Misch | Haas\n--------------------------------------------------------\nnew \"Injection Point\" wait type | maybe | yes | yes\nINJECTION_POINT(...) naming | yes | yes | no\nisolation spec says event names | yes | no | yes\n\nI find that's adequate support for the first line. If there are no objections\nin the next 24hr, I will implement that.\n\n\n", "msg_date": "Wed, 12 Jun 2024 12:32:23 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Wed, Jun 12, 2024 at 03:02:43PM +0200, Michail Nikolaev wrote:\n> I am not sure, but I think that issue may be related to the issue described\n> in\n> https://www.postgresql.org/message-id/CANtu0ojXmqjmEzp-%3DaJSxjsdE76iAsRgHBoK0QtYHimb_mEfsg%40mail.gmail.com\n> \n> It looks like REINDEX CONCURRENTLY could interfere with ON CONFLICT UPDATE\n> in some strange way.\n\nCan you say more about the connection you see between $SUBJECT and that? That\nlooks like a valid report of an important bug, but I'm not following the\npotential relationship to $SUBJECT.\n\nOn your other thread, it would be useful to see stack traces from the high-CPU\nprocesses once the live lock has ended all query completion.\n\n\n", "msg_date": "Wed, 12 Jun 2024 12:48:57 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "Hello!\n\n> Can you say more about the connection you see between $SUBJECT and that? That\n> looks like a valid report of an important bug, but I'm not following the\n> potential relationship to $SUBJECT.\n\nI was guided by the following logic:\n* A pg_class race condition can cause table indexes to look stale.\n* REINDEX updates indexes\n* errors can be explained by different backends using different arbiter\nindexes\n\n> On your other thread, it would be useful to see stack traces from the high-CPU\n> processes once the live lock has ended all query completion.\nWill do.\n\nBest regards,\nMikhail.
", "msg_date": "Wed, 12 Jun 2024 22:02:00 +0200", "msg_from": "Michail Nikolaev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Wed, Jun 12, 2024 at 12:32:23PM -0700, Noah Misch wrote:\n> On Wed, Jun 12, 2024 at 02:08:31PM -0400, Robert Haas wrote:\n>> Personally, I think the fact that injection point wait events were put\n>> under Extension is a design mistake that should be corrected before 17\n>> is out of beta.\n\nWell, isolation tests and the way to wait for specific points in them\nis something I've thought about when working on the initial injpoint\ninfrastructure, but all my ideas ran into the fact that this is\nnot specific to injection points: I've also wanted to be able to cause\nan isolation test to wait for a specific event (class,name). A hardcoded\nsleep is an example. Even if I discourage anything like that in the\nin-core tests because they're slow on fast machines and can be\nunreliable on slow machines, it is a fact that they are used by\nout-of-core code and that extension developers find them acceptable.\n\n> Works for me. I don't personally have a problem with the use of Extension,\n> since it is a src/test/modules extension creating them.\n\nThat's the original reason why Extension has been used in this case,\nbecause the points are assigned in an extension.\n\n> Yes, the last line does refer to that. Updated table:\n> \n> STRATEGY | Paquier | Misch | Haas\n> --------------------------------------------------------\n> new \"Injection Point\" wait type | maybe | yes | yes\n> INJECTION_POINT(...) naming | yes | yes | no\n> isolation spec says event names | yes | no | yes\n> \n> I find that's adequate support for the first line. If there are no objections\n> in the next 24hr, I will implement that.\n\nOK. That sounds like a consensus to me, useful enough for the cases\nat hand.\n--\nMichael", "msg_date": "Thu, 13 Jun 2024 07:54:07 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Wed, Jun 12, 2024 at 10:02:00PM +0200, Michail Nikolaev wrote:\n> > Can you say more about the connection you see between $SUBJECT and that? That\n> > looks like a valid report of an important bug, but I'm not following the\n> > potential relationship to $SUBJECT.\n> \n> I was guided by the following logic:\n> * A pg_class race condition can cause table indexes to look stale.\n> * REINDEX updates indexes\n> * errors can be explained by different backends using different arbiter\n> indexes\n\nGot it. The race condition of $SUBJECT involves inplace updates, and the\nwrong content becomes permanent. 
Hence, I suspect they're unrelated.\n\n\n", "msg_date": "Thu, 13 Jun 2024 13:22:23 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Mon, Jun 10, 2024 at 07:45:25PM -0700, Noah Misch wrote:\n> On Thu, Jun 06, 2024 at 09:48:51AM -0400, Robert Haas wrote:\n> > It's not this patch set's fault, but I'm not very pleased to see that\n> > the injection point wait events have been shoehorned into the\n> > \"Extension\" category\n> \n> I've replied on that branch of the thread.\n\nI think the attached covers all comments to date. I gave everything v3, but\nmost patches have just a no-conflict rebase vs. v2. The exceptions are\ninplace031-inj-wait-event (implements the holding from that branch of the\nthread) and inplace050-tests-inj (updated to cooperate with inplace031). Much\nof inplace031-inj-wait-event is essentially s/Extension/Custom/ for the\ninfrastructure common to the two custom wait event types.\n\nThanks,\nnm", "msg_date": "Thu, 13 Jun 2024 17:35:49 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Thu, Jun 13, 2024 at 05:35:49PM -0700, Noah Misch wrote:\n> I think the attached covers all comments to date. I gave everything v3, but\n> most patches have just a no-conflict rebase vs. v2. The exceptions are\n> inplace031-inj-wait-event (implements the holding from that branch of the\n> thread) and inplace050-tests-inj (updated to cooperate with inplace031). Much\n> of inplace031-inj-wait-event is essentially s/Extension/Custom/ for the\n> infrastructure common to the two custom wait event types.\n\nLooking at inplace031-inj-wait-event..\n\nThe comment at the top of GetWaitEventCustomNames() requires an\nupdate, still mentioning extensions.\n\nGetWaitEventCustomIdentifier() is incorrect, and should return\n\"InjectionPoint\" in the default case of this class name, no? I would\njust pass the classID to GetWaitEventCustomIdentifier().\n\nIt is suboptimal to have pg_get_wait_events() do two scans of\nWaitEventCustomHashByName. Wouldn't it be better to do a single scan,\nreturning a set of (class_name,event_name) fed to the tuplestore of\nthis SRF?\n\n uint32\n WaitEventExtensionNew(const char *wait_event_name)\n {\n+\treturn WaitEventCustomNew(PG_WAIT_EXTENSION, wait_event_name);\n+}\n+\n+uint32\n+WaitEventInjectionPointNew(const char *wait_event_name)\n+{\n+\treturn WaitEventCustomNew(PG_WAIT_INJECTIONPOINT, wait_event_name);\n+}\n\nHmm. The advantage of two routines is that it is possible to control\nthe class IDs allowed to use the custom wait events. Shouldn't the\nsecond routine be documented in xfunc.sgml?\n\nwait_event_names.txt also needs tweaks, in the shape of a new class\nname for the new class \"InjectionPoint\" so that it can be documented for\nits default case. That's a fallback if an event ID cannot be found,\nwhich should not happen; still, that's more correct than showing\n\"Extension\" for all class IDs covered by custom wait events.\n--\nMichael", "msg_date": "Fri, 14 Jun 2024 09:58:59 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Fri, Jun 14, 2024 at 09:58:59AM +0900, Michael Paquier wrote:\n> Looking at inplace031-inj-wait-event..\n> \n> The comment at the top of GetWaitEventCustomNames() requires an\n> update, still mentioning extensions.\n\nThanks. 
Fixed locally.\n\n> GetWaitEventCustomIdentifier() is incorrect, and should return\n> \"InjectionPoint\" in the default case of this class name, no?\n\nI intentionally didn't provide a default event ID for InjectionPoint.\nPG_WAIT_EXTENSION needs a default case for backward compatibility, if nothing\nelse. For this second custom type, it's needless complexity. The value\n0x0B000000U won't just show up like PG_WAIT_EXTENSION does.\nGetLWLockIdentifier() also has no default case. How do you see it?\n\n> I would\n> just pass the classID to GetWaitEventCustomIdentifier().\n\nAs you say, that would allow eventId==0 to raise \"could not find custom wait\nevent\" for PG_WAIT_INJECTIONPOINT instead of wrongly returning \"Extension\".\nEven if 0x0B000000U somehow does show up, having pg_stat_activity report\n\"Extension\" instead of an error, in a developer test run, feels unimportant to\nme.\n\n> It is suboptimal to have pg_get_wait_events() do two scans of\n> WaitEventCustomHashByName. Wouldn't it be better to do a single scan,\n> returning a set of (class_name,event_name) fed to the tuplestore of\n> this SRF?\n\nMicro-optimization of pg_get_wait_events() doesn't matter. I did consider\nthat or pushing more of the responsibility into wait_events.c, but I\nconsidered it on code repetition grounds, not performance grounds.\n\n> uint32\n> WaitEventExtensionNew(const char *wait_event_name)\n> {\n> +\treturn WaitEventCustomNew(PG_WAIT_EXTENSION, wait_event_name);\n> +}\n> +\n> +uint32\n> +WaitEventInjectionPointNew(const char *wait_event_name)\n> +{\n> +\treturn WaitEventCustomNew(PG_WAIT_INJECTIONPOINT, wait_event_name);\n> +}\n> \n> Hmm. The advantage of two routines is that it is possible to control\n> the class IDs allowed to use the custom wait events. Shouldn't the\n> second routine be documented in xfunc.sgml?\n\nThe patch added to xfunc.sgml an example of using it. I'd be more inclined to\ndelete the WaitEventExtensionNew() docbook documentation than to add its level\nof detail for WaitEventInjectionPointNew(). We don't have that kind of\ndocumentation for most extension-facing C functions.\n\n\n", "msg_date": "Thu, 13 Jun 2024 19:42:25 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Thu, Jun 13, 2024 at 07:42:25PM -0700, Noah Misch wrote:\n> On Fri, Jun 14, 2024 at 09:58:59AM +0900, Michael Paquier wrote:\n>> GetWaitEventCustomIdentifier() is incorrect, and should return\n>> \"InjectionPoint\" in the default case of this class name, no?\n> \n> I intentionally didn't provide a default event ID for InjectionPoint.\n> PG_WAIT_EXTENSION needs a default case for backward compatibility, if nothing\n> else. For this second custom type, it's needless complexity. The value\n> 0x0B000000U won't just show up like PG_WAIT_EXTENSION does.\n> GetLWLockIdentifier() also has no default case. How do you see it?\n\nI would add a default for consistency as this is just a few extra\nlines, but if you feel strongly about that, I'm OK as well. It makes\nit a bit easier to detect wait event numbers set incorrectly in\nextensions, depending on the class wanted.\n\n> The patch added to xfunc.sgml an example of using it. I'd be more inclined to\n> delete the WaitEventExtensionNew() docbook documentation than to add its level\n> of detail for WaitEventInjectionPointNew(). 
We don't have that kind of\n> documentation for most extension-facing C functions.\n\nIt's one of the areas where I think that we should have more\ndocumentation, not less of it, so I'd rather keep it; maintaining\nit is not really a pain (?). The backend gets complicated enough\nthese days that limiting what developers have to guess on their own is\na better long-term approach because the Postgres out-of-core ecosystem\nis expanding a lot (e.g. also having in-core documentation for hooks,\neven if there's historically been a lot of reluctance about having\nthem).\n--\nMichael", "msg_date": "Sun, 16 Jun 2024 09:28:05 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Sun, Jun 16, 2024 at 09:28:05AM +0900, Michael Paquier wrote:\n> On Thu, Jun 13, 2024 at 07:42:25PM -0700, Noah Misch wrote:\n> > On Fri, Jun 14, 2024 at 09:58:59AM +0900, Michael Paquier wrote:\n> >> GetWaitEventCustomIdentifier() is incorrect, and should return\n> >> \"InjectionPoint\" in the default case of this class name, no?\n> > \n> > I intentionally didn't provide a default event ID for InjectionPoint.\n> > PG_WAIT_EXTENSION needs a default case for backward compatibility, if nothing\n> > else. For this second custom type, it's needless complexity. The value\n> > 0x0B000000U won't just show up like PG_WAIT_EXTENSION does.\n> > GetLWLockIdentifier() also has no default case. How do you see it?\n> \n> I would add a default for consistency as this is just a few extra\n> lines, but if you feel strongly about that, I'm OK as well. It makes\n> it a bit easier to detect wait event numbers set incorrectly in\n> extensions, depending on the class wanted.\n\nIt would be odd to detect exactly 0x0B000000U and not other invalid inputs,\nlike 0x0A000001U where only 0x0B000001U is valid. I'm attaching roughly what\nit would take. Shall I squash this into inplace031?\n\nThe thing I feel strongly about here is keeping focus on fixing $SUBJECT bugs\nthat are actually corrupting data out there. I think we should all limit our\ninterest in the verbiage of strings that appear only when running developer\ntests, especially when $SUBJECT is a bug fix. When the string appears only\nafter C code passes invalid input to other C code, it matters even less.\n\n> > The patch added to xfunc.sgml an example of using it. I'd be more inclined to\n> > delete the WaitEventExtensionNew() docbook documentation than to add its level\n> > of detail for WaitEventInjectionPointNew(). We don't have that kind of\n> > documentation for most extension-facing C functions.\n> \n> It's one of the areas where I think that we should have more\n> documentation, not less of it, so I'd rather keep it; maintaining\n> it is not really a pain (?). The backend gets complicated enough\n> these days that limiting what developers have to guess on their own is\n> a better long-term approach because the Postgres out-of-core ecosystem\n> is expanding a lot (e.g. also having in-core documentation for hooks,\n> even if there's historically been a lot of reluctance about having\n> them).\n\n[getting deeply off topic -- let's move this to another thread if it needs to\nexpand] I like reducing the need to guess. So far in this inplace update\nproject (this thread plus postgr.es/m/[email protected]),\nthree patches just fix comments. Even comments carry quite a price, but I\nvalue them. 
When we hand-maintain documentation of a C function in both its\nheader comment and another place, I get skeptical about whether hackers\n(including myself) will actually keep them in sync and skeptical of the\nincremental value of maintaining the second version.", "msg_date": "Sun, 16 Jun 2024 07:07:08 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Sun, Jun 16, 2024 at 07:07:08AM -0700, Noah Misch wrote:\n> It would be odd to detect exactly 0x0B000000U and not other invalid inputs,\n> like 0x0A000001U where only 0x0B000001U is valid. I'm attaching roughly what\n> it would take. Shall I squash this into inplace031?\n\nAgreed that merging both together is cleaner. Moving the event class\ninto the key of WaitEventCustomEntryByInfo leads to a more consistent\nfinal result.\n\n> The thing I feel strongly about here is keeping focus on fixing $SUBJECT bugs\n> that are actually corrupting data out there.\n\nAgreed to focus on that first.\n--\nMichael", "msg_date": "Mon, 17 Jun 2024 08:01:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Thu, Jun 13, 2024 at 05:35:49PM -0700, Noah Misch wrote:\n> On Mon, Jun 10, 2024 at 07:45:25PM -0700, Noah Misch wrote:\n> > On Thu, Jun 06, 2024 at 09:48:51AM -0400, Robert Haas wrote:\n> > > It's not this patch set's fault, but I'm not very pleased to see that\n> > > the injection point wait events have been shoehorned into the\n> > > \"Extension\" category\n> > \n> > I've replied on that branch of the thread.\n> \n> I think the attached covers all comments to date. I gave everything v3, but\n> most patches have just a no-conflict rebase vs. v2. The exceptions are\n> inplace031-inj-wait-event (implements the holding from that branch of the\n> thread) and inplace050-tests-inj (updated to cooperate with inplace031). Much\n> of inplace031-inj-wait-event is essentially s/Extension/Custom/ for the\n> infrastructure common to the two custom wait event types.\n\nStarting 2024-06-27, I'd like to push\ninplace080-catcache-detoast-inplace-stale and earlier patches, self-certifying\nthem if needed. Then I'll submit the last three to the commitfest. Does\nanyone want me to delay that step?\n\nTwo more test-related changes compared to v3:\n\n- In inplace010-tests, add to 027_stream_regress.pl a test that catalog\n contents match between primary and standby. If one of these patches broke\n replay of inplace updates, this would help catch it.\n\n- In inplace031-inj-wait-event, make sysviews.sql indifferent to whether\n InjectionPoint wait events exist. installcheck needs this if other activity\n created such an event since the last postmaster restart.", "msg_date": "Fri, 21 Jun 2024 14:28:42 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Fri, Jun 21, 2024 at 02:28:42PM -0700, Noah Misch wrote:\n> On Thu, Jun 13, 2024 at 05:35:49PM -0700, Noah Misch wrote:\n> > I think the attached covers all comments to date. I gave everything v3, but\n> > most patches have just a no-conflict rebase vs. v2. The exceptions are\n> > inplace031-inj-wait-event (implements the holding from that branch of the\n> > thread) and inplace050-tests-inj (updated to cooperate with inplace031). 
Much\n> > of inplace031-inj-wait-event is essentially s/Extension/Custom/ for the\n> > infrastructure common to the two custom wait event types.\n> \n> Starting 2024-06-27, I'd like to push\n> inplace080-catcache-detoast-inplace-stale and earlier patches, self-certifying\n> them if needed. Then I'll submit the last three to the commitfest. Does\n> anyone want me to delay that step?\n\nPushed. Buildfarm member prion is failing the new inplace-inval.spec, almost\nsurely because prion uses -DCATCACHE_FORCE_RELEASE and inplace-inval.spec is\ntesting an extant failure to inval a cache entry. Naturally, inexorable inval\nmasks the extant bug. Ideally, I'd just skip the test under any kind of cache\nclobber option. I don't know a pleasant way to do that, so these are\nknown-feasible things I'm considering:\n\n1. Neutralize the test in all branches, probably by having it just not report\n the final answer. Undo in the later fix patch.\n\n2. v14+ has pg_backend_memory_contexts. In the test, run some plpgsql that\n uses heuristics on that to deduce whether caches are getting released.\n Have a separate expected output for the cache-release scenario. Perhaps\n also have the test treat installcheck like cache-release, since\n installcheck could experience sinval reset with similar consequences.\n Neutralize the test in v12 & v13.\n\n3. Add a test module with a C function that reports whether any kind of cache\n clobber is active. Call it in this test. Have a separate expected output\n for the cache-release scenario.\n\nPreferences or other ideas? I'm waffling between (1) and (2). I'll give it\nmore thought over the next day.\n\nThanks,\nnm\n\n\n", "msg_date": "Thu, 27 Jun 2024 22:13:53 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> Pushed. Buildfarm member prion is failing the new inplace-inval.spec, almost\n> surely because prion uses -DCATCACHE_FORCE_RELEASE and inplace-inval.spec is\n> testing an extant failure to inval a cache entry. Naturally, inexorable inval\n> masks the extant bug. Ideally, I'd just skip the test under any kind of cache\n> clobber option. I don't know a pleasant way to do that, so these are\n> known-feasible things I'm considering:\n\n> 1. Neutralize the test in all branches, probably by having it just not report\n> the final answer. Undo in the later fix patch.\n\n> 2. v14+ has pg_backend_memory_contexts. In the test, run some plpgsql that\n> uses heuristics on that to deduce whether caches are getting released.\n> Have a separate expected output for the cache-release scenario. Perhaps\n> also have the test treat installcheck like cache-release, since\n> installcheck could experience sinval reset with similar consequences.\n> Neutralize the test in v12 & v13.\n\n> 3. Add a test module with a C function that reports whether any kind of cache\n> clobber is active. Call it in this test. Have a separate expected output\n> for the cache-release scenario.\n\n> Preferences or other ideas? I'm waffling between (1) and (2). I'll give it\n> more thought over the next day.\n\nI'd just go for (1). 
We were doing fine without this test case.\nI can't see expending effort towards hiding its result rather\nthan actually fixing anything.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 28 Jun 2024 01:17:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Fri, Jun 28, 2024 at 01:17:22AM -0400, Tom Lane wrote:\n> Noah Misch <[email protected]> writes:\n> > Pushed. Buildfarm member prion is failing the new inplace-inval.spec, almost\n> > surely because prion uses -DCATCACHE_FORCE_RELEASE and inplace-inval.spec is\n> > testing an extant failure to inval a cache entry. Naturally, inexorable inval\n> > masks the extant bug. Ideally, I'd just skip the test under any kind of cache\n> > clobber option. I don't know a pleasant way to do that, so these are\n> > known-feasible things I'm considering:\n> \n> > 1. Neutralize the test in all branches, probably by having it just not report\n> > the final answer. Undo in the later fix patch.\n> \n> > 2. v14+ has pg_backend_memory_contexts. In the test, run some plpgsql that\n> > uses heuristics on that to deduce whether caches are getting released.\n> > Have a separate expected output for the cache-release scenario. Perhaps\n> > also have the test treat installcheck like cache-release, since\n> > installcheck could experience sinval reset with similar consequences.\n> > Neutralize the test in v12 & v13.\n> \n> > 3. Add a test module with a C function that reports whether any kind of cache\n> > clobber is active. Call it in this test. Have a separate expected output\n> > for the cache-release scenario.\n> \n> > Preferences or other ideas? I'm waffling between (1) and (2). I'll give it\n> > more thought over the next day.\n> \n> I'd just go for (1). We were doing fine without this test case.\n> I can't see expending effort towards hiding its result rather\n> than actually fixing anything.\n\nGood point, any effort on (2) would be wasted once the fixes get certified. I\npushed (1). I'm attaching the rebased fix patches.", "msg_date": "Fri, 28 Jun 2024 19:42:51 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "Hello Noah,\n\n29.06.2024 05:42, Noah Misch wrote:\n> Good point, any effort on (2) would be wasted once the fixes get certified. I\n> pushed (1). I'm attaching the rebased fix patches.\n\nPlease look at a new anomaly, introduced by inplace110-successors-v5.patch:\nCREATE TABLE t (i int) PARTITION BY LIST(i);\nCREATE TABLE p1 (i int);\nALTER TABLE t ATTACH PARTITION p1 FOR VALUES IN (1);\nALTER TABLE t DETACH PARTITION p1;\nANALYZE t;\n\ntriggers unexpected\nERROR:  tuple to be updated was already modified by an operation triggered by the current command\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 3 Jul 2024 06:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Wed, Jul 03, 2024 at 06:00:00AM +0300, Alexander Lakhin wrote:\n> 29.06.2024 05:42, Noah Misch wrote:\n> > Good point, any effort on (2) would be wasted once the fixes get certified. I\n> > pushed (1). 
I'm attaching the rebased fix patches.\n> \n> Please look at a new anomaly, introduced by inplace110-successors-v5.patch:\n> CREATE TABLE t (i int) PARTITION BY LIST(i);\n> CREATE TABLE p1 (i int);\n> ALTER TABLE t ATTACH PARTITION p1 FOR VALUES IN (1);\n> ALTER TABLE t DETACH PARTITION p1;\n> ANALYZE t;\n> \n> triggers unexpected\n> ERROR:  tuple to be updated was already modified by an operation triggered by the current command\n\nThanks. Today, it's okay to issue heap_inplace_update() after heap_update()\nwithout an intervening CommandCounterIncrement(). The patch makes the CCI\nrequired. The ANALYZE in your example reaches this with a heap_update to set\nrelhassubclass=f. I've fixed this by just adding a CCI (and adding to the\ntests in vacuum.sql).\n\nThe alternative would be to allow inplace updates on TM_SelfModified tuples.\nI can't think of a specific problem with allowing that, but I feel that would\nmake system state interactions harder to reason about. It might be optimal to\nallow that in back branches only, to reduce the chance of releasing a bug like\nthe one you found.", "msg_date": "Wed, 3 Jul 2024 16:09:54 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Wed, Jul 03, 2024 at 04:09:54PM -0700, Noah Misch wrote:\n> On Wed, Jul 03, 2024 at 06:00:00AM +0300, Alexander Lakhin wrote:\n> > 29.06.2024 05:42, Noah Misch wrote:\n> > > Good point, any effort on (2) would be wasted once the fixes get certified. I\n> > > pushed (1). I'm attaching the rebased fix patches.\n> > \n> > Please look at a new anomaly, introduced by inplace110-successors-v5.patch:\n> > CREATE TABLE t (i int) PARTITION BY LIST(i);\n> > CREATE TABLE p1 (i int);\n> > ALTER TABLE t ATTACH PARTITION p1 FOR VALUES IN (1);\n> > ALTER TABLE t DETACH PARTITION p1;\n> > ANALYZE t;\n> > \n> > triggers unexpected\n> > ERROR:  tuple to be updated was already modified by an operation triggered by the current command\n> \n> Thanks. Today, it's okay to issue heap_inplace_update() after heap_update()\n> without an intervening CommandCounterIncrement().\n\nCorrection: it's not okay today. If code does that, heap_inplace_update()\nmutates a tuple that is going to become invisible at CCI. The lack of CCI\nyields a minor live bug in v14+. Its consequences seem to be limited to\nfailing to update reltuples for a partitioned table having zero partitions.\n\n> The patch makes the CCI\n> required. The ANALYZE in your example reaches this with a heap_update to set\n> relhassubclass=f. I've fixed this by just adding a CCI (and adding to the\n> tests in vacuum.sql).\n\nThat's still the right fix, but I've separated it into its own patch and\nexpanded the test. All the non-comment changes between v5 and v6 are now part\nof the separate patch.\n\n> The alternative would be to allow inplace updates on TM_SelfModified tuples.\n> I can't think of a specific problem with allowing that, but I feel that would\n> make system state interactions harder to reason about. It might be optimal to\n> allow that in back branches only, to reduce the chance of releasing a bug like\n> the one you found.\n\nAllowing a mutation of a TM_SelfModified tuple is bad, since that tuple is\ngoing to become dead soon. Mutating its successor could be okay. Since we'd\nexpect such code to be unreachable, I'm not keen to carry such code. For that\nscenario, I'd rather keep the error you encountered. 
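\n\nFor reference, the ordering the fix requires looks roughly like this (a\nsketch only, not the actual patch hunk; \"rel\", \"oldtup\" and \"classtup\" are\nplaceholder names):\n\n\t/* e.g., ANALYZE clearing relhassubclass via heap_update() */\n\tclasstup = heap_copytuple(oldtup);\n\t((Form_pg_class) GETSTRUCT(classtup))->relhassubclass = false;\n\tCatalogTupleUpdate(rel, &classtup->t_self, classtup);\n\n\t/*\n\t * Make our own heap_update() visible before any later inplace update\n\t * scans the catalog; otherwise it finds the now-dead tuple version.\n\t */\n\tCommandCounterIncrement();\n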
Other opinions?", "msg_date": "Wed, 3 Jul 2024 19:57:07 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "Hello Noah,\n\n28.06.2024 08:13, Noah Misch wrote:\n> Pushed. ...\n\nPlease look also at another anomaly I've discovered.\n\nAn Assert added with d5f788b41 may be falsified with:\nCREATE TABLE t(a int PRIMARY KEY);\nINSERT INTO t VALUES (1);\nCREATE VIEW v AS SELECT * FROM t;\n\nMERGE INTO v USING (VALUES (1)) AS va(a) ON v.a = va.a\n   WHEN MATCHED THEN DO NOTHING\n   WHEN NOT MATCHED THEN DO NOTHING;\n\nTRAP: failed Assert(\"resultRelInfo->ri_TrigDesc\"), File: \"nodeModifyTable.c\", Line: 2891, PID: 1590670\n\nBest regards,\nAlexander", "msg_date": "Thu, 4 Jul 2024 08:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Thu, Jul 04, 2024 at 08:00:00AM +0300, Alexander Lakhin wrote:\n> 28.06.2024 08:13, Noah Misch wrote:\n> > Pushed. ...\n> \n> Please look also at another anomaly I've discovered.\n> \n> An Assert added with d5f788b41 may be falsified with:\n> CREATE TABLE t(a int PRIMARY KEY);\n> INSERT INTO t VALUES (1);\n> CREATE VIEW v AS SELECT * FROM t;\n> \n> MERGE INTO v USING (VALUES (1)) AS va(a) ON v.a = va.a\n>   WHEN MATCHED THEN DO NOTHING\n>   WHEN NOT MATCHED THEN DO NOTHING;\n> \n> TRAP: failed Assert(\"resultRelInfo->ri_TrigDesc\"), File: \"nodeModifyTable.c\", Line: 2891, PID: 1590670\n\nThanks. When all the MERGE actions are DO NOTHING, view_has_instead_trigger()\nreturns true, so we use the wholerow code regardless of the view's triggers or\nauto update capability. The behavior is fine, so I'm fixing the new assertion\nand comments with the new patch inplace087-merge-DO-NOTHING-v8.patch. The closest\nrelevant tests processed zero rows, so they narrowly avoided witnessing this\nassertion.", "msg_date": "Thu, 4 Jul 2024 15:08:16 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Thu, Jul 04, 2024 at 03:08:16PM -0700, Noah Misch wrote:\n> On Thu, Jul 04, 2024 at 08:00:00AM +0300, Alexander Lakhin wrote:\n> > 28.06.2024 08:13, Noah Misch wrote:\n> > > Pushed. ...\n> > \n> > Please look also at another anomaly I've discovered.\n> > \n> > An Assert added with d5f788b41 may be falsified with:\n> > CREATE TABLE t(a int PRIMARY KEY);\n> > INSERT INTO t VALUES (1);\n> > CREATE VIEW v AS SELECT * FROM t;\n> > \n> > MERGE INTO v USING (VALUES (1)) AS va(a) ON v.a = va.a\n> >   WHEN MATCHED THEN DO NOTHING\n> >   WHEN NOT MATCHED THEN DO NOTHING;\n> > \n> > TRAP: failed Assert(\"resultRelInfo->ri_TrigDesc\"), File: \"nodeModifyTable.c\", Line: 2891, PID: 1590670\n> \n> Thanks. When all the MERGE actions are DO NOTHING, view_has_instead_trigger()\n> returns true\n\nI've pushed the two patches for your reports. To placate cfbot, I'm attaching\nthe remaining patches.", "msg_date": "Sun, 14 Jul 2024 10:48:00 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "Hello Noah,\n\n28.06.2024 08:13, Noah Misch wrote:\n> Pushed.\n\nA recent buildfarm test failure [1] showed that the\nintra-grant-inplace-db.spec test added with 0844b3968 may fail\non a slow machine (per my understanding):\n\ntest intra-grant-inplace-db       ... 
FAILED     4302 ms\n\n@@ -21,8 +21,7 @@\n      WHERE datname = current_catalog\n          AND age(datfrozenxid) > (SELECT min(age(x)) FROM frozen_witness);\n\n-?column?\n-----------------------\n-datfrozenxid retreated\n-(1 row)\n+?column?\n+--------\n+(0 rows)\n\nwhilst the previous (successful) run shows much shorter duration:\ntest intra-grant-inplace-db       ... ok          540 ms\n\nI reproduced this failure on a VM slowed down so that the test duration\nreached 4+ seconds, with 100 copies of \"test: intra-grant-inplace-db\" in\nisolation_schedule:\ntest intra-grant-inplace-db       ... ok         4324 ms\ntest intra-grant-inplace-db       ... FAILED     4633 ms\ntest intra-grant-inplace-db       ... ok         4649 ms\n\nBut as the test is going to be modified by the inplace110-successors-v8.patch\nand the modified test (with all three latest patches applied) passes\nreliably in the same conditions, maybe this failure doesn't deserve a\ndeeper exploration.\n\nWhat do you think?\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=habu&dt=2024-07-18%2003%3A08%3A08\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 20 Jul 2024 11:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Sat, Jul 20, 2024 at 11:00:00AM +0300, Alexander Lakhin wrote:\n> 28.06.2024 08:13, Noah Misch wrote:\n> > Pushed.\n> \n> A recent buildfarm test failure [1] showed that the\n> intra-grant-inplace-db.spec test added with 0844b3968 may fail\n> on a slow machine\n\n> But as the test is going to be modified by the inplace110-successors-v8.patch\n> and the modified test (with all three latest patches applied) passes\n> reliably in the same conditions, maybe this failure doesn't deserve a\n> deeper exploration.\n\nAgreed. Let's just wait for code review of the actual bug fix, not develop a\nseparate change to stabilize the test. One flake in three weeks is low enough\nto make that okay.\n\n> [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=habu&dt=2024-07-18%2003%3A08%3A08\n\n\n", "msg_date": "Sat, 20 Jul 2024 04:28:35 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> On Sat, Jul 20, 2024 at 11:00:00AM +0300, Alexander Lakhin wrote:\n>> A recent buildfarm test failure [1] showed that the\n>> intra-grant-inplace-db.spec test added with 0844b3968 may fail\n>> on a slow machine\n\n>> But as the test is going to be modified by the inplace110-successors-v8.patch\n>> and the modified test (with all three latest patches applied) passes\n>> reliably in the same conditions, maybe this failure doesn't deserve a\n>> deeper exploration.\n\n> Agreed. Let's just wait for code review of the actual bug fix, not develop a\n> separate change to stabilize the test. One flake in three weeks is low enough\n> to make that okay.\n\nIt's now up to three similar failures in the past ten days: in\naddition to\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=habu&dt=2024-07-18%2003%3A08%3A08\n\nI see\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=urutu&dt=2024-07-22%2018%3A00%3A46\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=taipan&dt=2024-07-28%2012%3A20%3A37\n\nIs it time to worry yet? 
If this were HEAD only, I'd not be too\nconcerned; but two of these three are on allegedly-stable branches.\nAnd we have releases coming up fast.\n\n(BTW, I don't think taipan qualifies as a slow machine.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 28 Jul 2024 11:50:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Sun, Jul 28, 2024 at 11:50:33AM -0400, Tom Lane wrote:\n> Noah Misch <[email protected]> writes:\n> > On Sat, Jul 20, 2024 at 11:00:00AM +0300, Alexander Lakhin wrote:\n> >> A recent buildfarm test failure [1] showed that the\n> >> intra-grant-inplace-db.spec test added with 0844b3968 may fail\n> \n> >> But as the test is going to be modified by the inplace110-successors-v8.patch\n> >> and the modified test (with all three latest patches applied) passes\n> >> reliably in the same conditions, maybe this failure doesn't deserve a\n> >> deeper exploration.\n> \n> > Agreed. Let's just wait for code review of the actual bug fix, not develop a\n> > separate change to stabilize the test. One flake in three weeks is low enough\n> > to make that okay.\n> \n> It's now up to three similar failures in the past ten days\n\n> Is it time to worry yet? If this were HEAD only, I'd not be too\n> concerned; but two of these three are on allegedly-stable branches.\n> And we have releases coming up fast.\n\nI don't know; neither decision feels terrible to me. A bug fix that would\naddress both the data corruption causes and those buildfarm failures has been\nawaiting review on this thread for 77 days. The data corruption causes are\nmore problematic than 0.03% of buildfarm runs getting noise failures. Two\nwrongs don't make a right, but a commit masking that level of buildfarm noise\nalso feels like sending the wrong message.\n\n\n", "msg_date": "Sun, 28 Jul 2024 09:51:32 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> On Sun, Jul 28, 2024 at 11:50:33AM -0400, Tom Lane wrote:\n>> Is it time to worry yet? If this were HEAD only, I'd not be too\n>> concerned; but two of these three are on allegedly-stable branches.\n>> And we have releases coming up fast.\n\n> I don't know; neither decision feels terrible to me.\n\nYeah, same here. Obviously, it'd be better to spend effort on getting\nthe bug fix committed than to spend effort on some cosmetic\nworkaround.\n\nThe fact that the failure is in the isolation tests not the core\nregression tests reduces my level of concern somewhat about shipping\nit this way. I think that packagers typically run the core tests\nnot check-world during package verification, so they won't hit this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 28 Jul 2024 12:59:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On 14/07/2024 20:48, Noah Misch wrote:\n> I've pushed the two patches for your reports. To placate cfbot, I'm attaching\n> the remaining patches.\n\ninplace090-LOCKTAG_TUPLE-eoxact-v8.patch: Makes sense. A comment would \nbe in order; it looks pretty random as it is. Something like:\n\n/*\n * Tuple locks are currently only held for short durations within a\n * transaction. 
Check that we didn't forget to release one.\n */\n\ninplace110-successors-v8.patch: Makes sense.\n\nThe README changes would be better as part of the third patch, as this \npatch doesn't actually do any of the new locking described in the \nREADME, and it fixes the \"inplace update updates wrong tuple\" bug even \nwithout those tuple locks.\n\n> + * ... [any slow preparation not requiring oldtup] ...\n> + * heap_inplace_update_scan([...], &tup, &inplace_state);\n> + * if (!HeapTupleIsValid(tup))\n> + *\telog(ERROR, [...]);\n> + * ... [buffer is exclusive-locked; mutate \"tup\"] ...\n> + * if (dirty)\n> + *\theap_inplace_update_finish(inplace_state, tup);\n> + * else\n> + *\theap_inplace_update_cancel(inplace_state);\n\nI wonder if the functions should be called \"systable_*\" and placed in \ngenam.c rather than in heapam.c. The interface looks more like the \nexisting systable functions. It feels like a modularity violation for a \nfunction in heapam.c to take an argument like \"indexId\", and call back \ninto systable_* functions.\n\n> \t/*----------\n> \t * XXX A crash here can allow datfrozenxid() to get ahead of relfrozenxid:\n> \t *\n> \t * [\"D\" is a VACUUM (ONLY_DATABASE_STATS)]\n> \t * [\"R\" is a VACUUM tbl]\n> \t * D: vac_update_datfrozenxid() -> systable_beginscan(pg_class)\n> \t * D: systable_getnext() returns pg_class tuple of tbl\n> \t * R: memcpy() into pg_class tuple of tbl\n> \t * D: raise pg_database.datfrozenxid, XLogInsert(), finish\n> \t * [crash]\n> \t * [recovery restores datfrozenxid w/o relfrozenxid]\n> \t */\n\nHmm, that's a tight race, but feels bad to leave it unfixed. One \napproach would be to modify the tuple on the buffer only after \nWAL-logging it. That way, D cannot read the updated value before it has \nbeen WAL logged. Just need to make sure that the change still gets \nincluded in the WAL record. Maybe something like:\n\nif (RelationNeedsWAL(relation))\n{\n /*\n * Make a temporary copy of the page that includes the change, in\n * case a full-page image is logged\n */\n PGAlignedBlock tmppage;\n\n memcpy(tmppage.data, page, BLCKSZ);\n\n /* copy the tuple to the temporary copy */\n memcpy(...);\n\n XLogRegisterBlock(0, ..., tmppage, REGBUF_STANDARD);\n XLogInsert();\n}\n\n/* copy the tuple to the buffer */\nmemcpy(...);\n\n\n> pg_class heap_inplace_update_scan() callers: before the call, acquire\n> LOCKTAG_RELATION in mode ShareLock (CREATE INDEX), ShareUpdateExclusiveLock\n> (VACUUM), or a mode with strictly more conflicts. If the update targets a\n> row of RELKIND_INDEX (but not RELKIND_PARTITIONED_INDEX), that lock must be\n> on the table. Locking the index rel is optional. (This allows VACUUM to\n> overwrite per-index pg_class while holding a lock on the table alone.) We\n> could allow weaker locks, in which case the next paragraph would simply call\n> for stronger locks for its class of commands. heap_inplace_update_scan()\n> acquires and releases LOCKTAG_TUPLE in InplaceUpdateTupleLock, an alias for\n> ExclusiveLock, on each tuple it overwrites.\n> \n> pg_class heap_update() callers: before copying the tuple to modify, take a\n> lock that conflicts with at least one of those from the preceding paragraph.\n> SearchSysCacheLocked1() is one convenient way to acquire LOCKTAG_TUPLE.\n> After heap_update(), release any LOCKTAG_TUPLE. Most of these callers opt\n> to acquire just the LOCKTAG_RELATION.\n\nThese rules seem complicated. 
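\n\nTo spell the second rule out in code, I think a caller would look roughly \nlike this (a sketch only; pg_class_rel, otid and oldtup are placeholder \nnames, and InplaceUpdateTupleLock is the ExclusiveLock alias the patch \nintroduces):\n\nLockTuple(pg_class_rel, &otid, InplaceUpdateTupleLock);\nnewtup = heap_copytuple(oldtup);\t/* copy only after the tuple lock is held */\n... mutate \"newtup\" ...\nCatalogTupleUpdate(pg_class_rel, &otid, newtup);\n/* release immediately after the update */\nUnlockTuple(pg_class_rel, &otid, InplaceUpdateTupleLock);\n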
Phrasing this slightly differently, if I \nunderstand correctly: for a heap_update() caller, it's always sufficient \nto hold LOCKTAG_TUPLE, but if you happen to hold some other lock on the \nrelation that conflicts with those mentioned in the first paragraph, \nthen you can skip the LOCKTAG_TUPLE lock.\n\nCould we just stipulate that you must always hold LOCKTAG_TUPLE when you \ncall heap_update() on pg_class or pg_database? That'd make the rule simple.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 16 Aug 2024 12:26:28 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "Thanks for reviewing.\n\nOn Fri, Aug 16, 2024 at 12:26:28PM +0300, Heikki Linnakangas wrote:\n> On 14/07/2024 20:48, Noah Misch wrote:\n> > I've pushed the two patches for your reports. To placate cfbot, I'm attaching\n> > the remaining patches.\n> \n> inplace090-LOCKTAG_TUPLE-eoxact-v8.patch: Makes sense. A comment would be in\n> order, it looks pretty random as it is. Something like:\n> \n> /*\n> * Tuple locks are currently only held for short durations within a\n> * transaction. Check that we didn't forget to release one.\n> */\n\nWill add.\n\n> inplace110-successors-v8.patch: Makes sense.\n> \n> The README changes would be better as part of the third patch, as this patch\n> doesn't actually do any of the new locking described in the README, and it\n> fixes the \"inplace update updates wrong tuple\" bug even without those tuple\n> locks.\n\nThat should work. Will confirm.\n\n> > + * ... [any slow preparation not requiring oldtup] ...\n> > + * heap_inplace_update_scan([...], &tup, &inplace_state);\n> > + * if (!HeapTupleIsValid(tup))\n> > + *\telog(ERROR, [...]);\n> > + * ... [buffer is exclusive-locked; mutate \"tup\"] ...\n> > + * if (dirty)\n> > + *\theap_inplace_update_finish(inplace_state, tup);\n> > + * else\n> > + *\theap_inplace_update_cancel(inplace_state);\n> \n> I wonder if the functions should be called \"systable_*\" and placed in\n> genam.c rather than in heapam.c. The interface looks more like the existing\n> systable functions. It feels like a modularity violation for a function in\n> heapam.c to take an argument like \"indexId\", and call back into systable_*\n> functions.\n\nYes, _scan() and _cancel() especially are wrappers around systable. Some API\noptions follow. Any preference or other ideas?\n\n==== direct s/heap_/systable_/ rename\n\n systable_inplace_update_scan([...], &tup, &inplace_state);\n if (!HeapTupleIsValid(tup))\n\telog(ERROR, [...]);\n ... [buffer is exclusive-locked; mutate \"tup\"] ...\n if (dirty)\n\tsystable_inplace_update_finish(inplace_state, tup);\n else\n\tsystable_inplace_update_cancel(inplace_state);\n\n==== make the first and last steps more systable-like\n\n systable_inplace_update_begin([...], &tup, &inplace_state);\n if (!HeapTupleIsValid(tup))\n\telog(ERROR, [...]);\n ... [buffer is exclusive-locked; mutate \"tup\"] ...\n if (dirty)\n\tsystable_inplace_update(inplace_state, tup);\n systable_inplace_update_end(inplace_state);\n\n==== no systable_ wrapper for middle step, more like CatalogTupleUpdate\n\n systable_inplace_update_begin([...], &tup, &inplace_state);\n if (!HeapTupleIsValid(tup))\n\telog(ERROR, [...]);\n ... 
[buffer is exclusive-locked; mutate \"tup\"] ...\n if (dirty)\n\theap_inplace_update(relation,\n\t\t\t\t\t\tsystable_inplace_old_tuple(inplace_state),\n\t\t\t\t\t\ttup,\n\t\t\t\t\t\tsystable_inplace_buffer(inplace_state));\n systable_inplace_update_end(inplace_state);\n\n> > \t/*----------\n> > \t * XXX A crash here can allow datfrozenxid() to get ahead of relfrozenxid:\n> > \t *\n> > \t * [\"D\" is a VACUUM (ONLY_DATABASE_STATS)]\n> > \t * [\"R\" is a VACUUM tbl]\n> > \t * D: vac_update_datfrozenid() -> systable_beginscan(pg_class)\n> > \t* c * D: systable_getnext() returns pg_class tuple of tbl\n> > \t * R: memcpy() into pg_class tuple of tbl\n> > \t * D: raise pg_database.datfrozenxid, XLogInsert(), finish\n> > \t * [crash]\n> > \t * [recovery restores datfrozenxid w/o relfrozenxid]\n> > \t */\n> \n> Hmm, that's a tight race, but feels bad to leave it unfixed. One approach\n> would be to modify the tuple on the buffer only after WAL-logging it. That\n> way, D cannot read the updated value before it has been WAL logged. Just\n> need to make sure that the change still gets included in the WAL record.\n> Maybe something like:\n> \n> if (RelationNeedsWAL(relation))\n> {\n> /*\n> * Make a temporary copy of the page that includes the change, in\n> * case the a full-page image is logged\n> */\n> PGAlignedBlock tmppage;\n> \n> memcpy(tmppage.data, page, BLCKSZ);\n> \n> /* copy the tuple to the temporary copy */\n> memcpy(...);\n> \n> XLogRegisterBlock(0, ..., tmppage, REGBUF_STANDARD);\n> XLogInsert();\n> }\n> \n> /* copy the tuple to the buffer */\n> memcpy(...);\n\nYes, that's the essence of\ninplace180-datfrozenxid-overtakes-relfrozenxid-v1.patch from\nhttps://postgr.es/m/flat/[email protected].\n\n> > pg_class heap_inplace_update_scan() callers: before the call, acquire\n> > LOCKTAG_RELATION in mode ShareLock (CREATE INDEX), ShareUpdateExclusiveLock\n> > (VACUUM), or a mode with strictly more conflicts. If the update targets a\n> > row of RELKIND_INDEX (but not RELKIND_PARTITIONED_INDEX), that lock must be\n> > on the table. Locking the index rel is optional. (This allows VACUUM to\n> > overwrite per-index pg_class while holding a lock on the table alone.) We\n> > could allow weaker locks, in which case the next paragraph would simply call\n> > for stronger locks for its class of commands. heap_inplace_update_scan()\n> > acquires and releases LOCKTAG_TUPLE in InplaceUpdateTupleLock, an alias for\n> > ExclusiveLock, on each tuple it overwrites.\n> > \n> > pg_class heap_update() callers: before copying the tuple to modify, take a\n> > lock that conflicts with at least one of those from the preceding paragraph.\n> > SearchSysCacheLocked1() is one convenient way to acquire LOCKTAG_TUPLE.\n> > After heap_update(), release any LOCKTAG_TUPLE. Most of these callers opt\n> > to acquire just the LOCKTAG_RELATION.\n> \n> These rules seem complicated. Phrasing this slightly differently, if I\n> understand correctly: for a heap_update() caller, it's always sufficient to\n> hold LOCKTAG_TUPLE, but if you happen to hold some other lock on the\n> relation that conflicts with those mentioned in the first paragraph, then\n> you can skip the LOCKTAG_TUPLE lock.\n\nYes.\n\n> Could we just stipulate that you must always hold LOCKTAG_TUPLE when you\n> call heap_update() on pg_class or pg_database? That'd make the rule simple.\n\nWe could. That would change more code sites. 
Rough estimate:\n\n$ git grep -E CatalogTupleUpd'.*(class|relrelation|relationRelation)' | wc -l\n23\n\nIf the count were 2, I'd say let's simplify the rule like you're exploring.\n(I originally had a complicated rule for pg_database, but I abandoned that\nwhen it helped few code sites.) If it were 100, I'd say the complicated rule\nis worth it. A count of 23 makes both choices fair.\n\nLong-term, I hope relfrozenxid gets reimplemented with storage outside\npg_class, removing the need for inplace updates. So the additional 23 code\nsites might change back at a future date. That shouldn't be a big\nconsideration, though.\n\nAnother option here would be to preface that README section with a simplified\nview, something like, \"If a warning brought you here, take a tuple lock. The\nrest of this section is just for people needing to understand the conditions\nfor --enable-casserts emitting that warning.\" How about that instead of\nsimplifying the rules?\n\n\n", "msg_date": "Fri, 16 Aug 2024 21:07:48 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On 17/08/2024 07:07, Noah Misch wrote:\n> On Fri, Aug 16, 2024 at 12:26:28PM +0300, Heikki Linnakangas wrote:\n>> On 14/07/2024 20:48, Noah Misch wrote:\n>>> + * ... [any slow preparation not requiring oldtup] ...\n>>> + * heap_inplace_update_scan([...], &tup, &inplace_state);\n>>> + * if (!HeapTupleIsValid(tup))\n>>> + *\telog(ERROR, [...]);\n>>> + * ... [buffer is exclusive-locked; mutate \"tup\"] ...\n>>> + * if (dirty)\n>>> + *\theap_inplace_update_finish(inplace_state, tup);\n>>> + * else\n>>> + *\theap_inplace_update_cancel(inplace_state);\n>>\n>> I wonder if the functions should be called \"systable_*\" and placed in\n>> genam.c rather than in heapam.c. The interface looks more like the existing\n>> systable functions. It feels like a modularity violation for a function in\n>> heapam.c to take an argument like \"indexId\", and call back into systable_*\n>> functions.\n> \n> Yes, _scan() and _cancel() especially are wrappers around systable. Some API\n> options follow. Any preference or other ideas?\n> \n> ==== direct s/heap_/systable_/ rename [option 1]\n> \n> systable_inplace_update_scan([...], &tup, &inplace_state);\n> if (!HeapTupleIsValid(tup))\n> \telog(ERROR, [...]);\n> ... [buffer is exclusive-locked; mutate \"tup\"] ...\n> if (dirty)\n> \tsystable_inplace_update_finish(inplace_state, tup);\n> else\n> \tsystable_inplace_update_cancel(inplace_state);\n> \n> ==== make the first and last steps more systable-like [option 2]\n> \n> systable_inplace_update_begin([...], &tup, &inplace_state);\n> if (!HeapTupleIsValid(tup))\n> \telog(ERROR, [...]);\n> ... [buffer is exclusive-locked; mutate \"tup\"] ...\n> if (dirty)\n> \tsystable_inplace_update(inplace_state, tup);\n> systable_inplace_update_end(inplace_state);\n> \n> ==== no systable_ wrapper for middle step, more like CatalogTupleUpdate [option 3]\n> \n> systable_inplace_update_begin([...], &tup, &inplace_state);\n> if (!HeapTupleIsValid(tup))\n> \telog(ERROR, [...]);\n> ... 
[buffer is exclusive-locked; mutate \"tup\"] ...\n> if (dirty)\n> \theap_inplace_update(relation,\n> \t\t\t\t\t\tsystable_inplace_old_tuple(inplace_state),\n> \t\t\t\t\t\ttup,\n> \t\t\t\t\t\tsystable_inplace_buffer(inplace_state));\n> systable_inplace_update_end(inplace_state);\n\nMy order of preference is: 2, 1, 3.\n\n>> Could we just stipulate that you must always hold LOCKTAG_TUPLE when you\n>> call heap_update() on pg_class or pg_database? That'd make the rule simple.\n> \n> We could. That would change more code sites. Rough estimate:\n> \n> $ git grep -E CatalogTupleUpd'.*(class|relrelation|relationRelation)' | wc -l\n> 23\n> \n> If the count were 2, I'd say let's simplify the rule like you're exploring.\n> (I originally had a complicated rule for pg_database, but I abandoned that\n> when it helped few code sites.) If it were 100, I'd say the complicated rule\n> is worth it. A count of 23 makes both choices fair.\n\nOk.\n\nHow many of those for RELKIND_INDEX vs tables? I'm thinking if we should \nalways require a tuple lock on indexes, if that would make a difference.\n\n> Long-term, I hope relfrozenxid gets reimplemented with storage outside\n> pg_class, removing the need for inplace updates. So the additional 23 code\n> sites might change back at a future date. That shouldn't be a big\n> consideration, though.\n> \n> Another option here would be to preface that README section with a simplified\n> view, something like, \"If a warning brought you here, take a tuple lock. The\n> rest of this section is just for people needing to understand the conditions\n> for --enable-casserts emitting that warning.\" How about that instead of\n> simplifying the rules?\n\nWorks for me. Or perhaps the rules could just be explained more \nsuccinctly. Something like:\n\n-----\npg_class heap_inplace_update_scan() callers: before the call, acquire a \nlock on the relation in mode ShareUpdateExclusiveLock or stricter. If \nthe update targets a row of RELKIND_INDEX (but not \nRELKIND_PARTITIONED_INDEX), that lock must be on the table, locking the \nindex rel is not necessary. (This allows VACUUM to overwrite per-index \npg_class while holding a lock on the table alone.) \nheap_inplace_update_scan() acquires and releases LOCKTAG_TUPLE in \nInplaceUpdateTupleLock, an alias for ExclusiveLock, on each tuple it \noverwrites.\n\npg_class heap_update() callers: before copying the tuple to modify, take \na lock on the tuple, or a ShareUpdateExclusiveLock or stricter on the \nrelation.\n\nSearchSysCacheLocked1() is one convenient way to acquire the tuple lock. \nMost heap_update() callers already hold a suitable lock on the relation \nfor other reasons, and can skip the tuple lock. If you do acquire the \ntuple lock, release it immediately after the update.\n\n\npg_database: before copying the tuple to modify, all updaters of \npg_database rows acquire LOCKTAG_TUPLE. 
(Few updaters acquire \nLOCKTAG_OBJECT on the database OID, so it wasn't worth extending that as \na second option.)\n-----\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 20 Aug 2024 11:59:45 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Tue, Aug 20, 2024 at 11:59:45AM +0300, Heikki Linnakangas wrote:\n> On 17/08/2024 07:07, Noah Misch wrote:\n> > On Fri, Aug 16, 2024 at 12:26:28PM +0300, Heikki Linnakangas wrote:\n> > > I wonder if the functions should be called \"systable_*\" and placed in\n> > > genam.c rather than in heapam.c. The interface looks more like the existing\n> > > systable functions. It feels like a modularity violation for a function in\n> > > heapam.c to take an argument like \"indexId\", and call back into systable_*\n> > > functions.\n> > \n> > Yes, _scan() and _cancel() especially are wrappers around systable. Some API\n> > options follow. Any preference or other ideas?\n> > \n> > ==== direct s/heap_/systable_/ rename [option 1]\n> > \n> > systable_inplace_update_scan([...], &tup, &inplace_state);\n> > if (!HeapTupleIsValid(tup))\n> > \telog(ERROR, [...]);\n> > ... [buffer is exclusive-locked; mutate \"tup\"] ...\n> > if (dirty)\n> > \tsystable_inplace_update_finish(inplace_state, tup);\n> > else\n> > \tsystable_inplace_update_cancel(inplace_state);\n> > \n> > ==== make the first and last steps more systable-like [option 2]\n> > \n> > systable_inplace_update_begin([...], &tup, &inplace_state);\n> > if (!HeapTupleIsValid(tup))\n> > \telog(ERROR, [...]);\n> > ... [buffer is exclusive-locked; mutate \"tup\"] ...\n> > if (dirty)\n> > \tsystable_inplace_update(inplace_state, tup);\n> > systable_inplace_update_end(inplace_state);\n\n> My order of preference is: 2, 1, 3.\n\nI kept tuple locking responsibility in heapam.c. That's simpler and better\nfor modularity, but it does mean we release+acquire after any xmax wait.\nBefore, we avoided that if the next genam.c scan found the same TID. (If the\nnext scan finds the same TID, the xmax probably aborted.) I think DDL aborts\nare rare enough to justify simplifying as this version does. I don't expect\nanyone to notice the starvation outside of tests built to show it. (With\nprevious versions, one can show it with a purpose-built test that commits\ninstead of aborting, like the \"001_pgbench_grant@9\" test.)\n\nThis move also loses the optimization of unpinning before XactLockTableWait().\nheap_update() doesn't optimize that way, so that's fine.\n\nThe move ended up more like (1), though I did do\ns/systable_inplace_update_scan/systable_inplace_update_begin/ like in (2). I\nfelt that worked better than (2) to achieve lock release before\nCacheInvalidateHeapTuple(). Alternatives that could be fine:\n\n- In the cancel case, call both systable_inplace_update_cancel and\n systable_inplace_update_end. _finish or _cancel would own unlock, while\n _end would own systable_endscan().\n\n- Hoist the CacheInvalidateHeapTuple() up to the genam.c layer. While\n tolerable now, this gets less attractive after the inplace160 patch from\n https://postgr.es/m/flat/[email protected]\n\nI made the other changes we discussed, also.\n\n> > > Could we just stipulate that you must always hold LOCKTAG_TUPLE when you\n> > > call heap_update() on pg_class or pg_database? That'd make the rule simple.\n> > \n> > We could. That would change more code sites. 
Rough estimate:\n> > \n> > $ git grep -E CatalogTupleUpd'.*(class|relrelation|relationRelation)' | wc -l\n> > 23\n\n> How many of those for RELKIND_INDEX vs tables? I'm thinking if we should\n> always require a tuple lock on indexes, if that would make a difference.\n\nThree sites. See attached inplace125 patch. Is it a net improvement? If so,\nI'll squash it into inplace120.\n\n> > Another option here would be to preface that README section with a simplified\n> > view, something like, \"If a warning brought you here, take a tuple lock. The\n> > rest of this section is just for people needing to understand the conditions\n> > for --enable-casserts emitting that warning.\" How about that instead of\n> > simplifying the rules?\n> \n> Works for me. Or perhaps the rules could just be explained more succinctly.\n> Something like:\n> \n> -----\n\nI largely used your text instead.\n\n\nWhile doing these updates, I found an intra-grant-inplace.spec permutation\nbeing flaky on inplace110 but stable on inplace120. That turned out not to be\nv9-specific. As of patch v1, I now see it was already flaky (~5% failure\nhere). I've now added to inplace110 a minimal tweak to stabilize that spec,\nwhich inplace120 removes.\n\nThanks,\nnm", "msg_date": "Thu, 22 Aug 2024 00:32:00 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Thu, Aug 29, 2024 at 8:11 PM Noah Misch <[email protected]> wrote:\n>\n> On Tue, Aug 20, 2024 at 11:59:45AM +0300, Heikki Linnakangas wrote:\n> > My order of preference is: 2, 1, 3.\n>\n> I kept tuple locking responsibility in heapam.c. That's simpler and better\n> for modularity, but it does mean we release+acquire after any xmax wait.\n> Before, we avoided that if the next genam.c scan found the same TID. (If the\n> next scan finds the same TID, the xmax probably aborted.) I think DDL aborts\n> are rare enough to justify simplifying as this version does. I don't expect\n> anyone to notice the starvation outside of tests built to show it. (With\n> previous versions, one can show it with a purpose-built test that commits\n> instead of aborting, like the \"001_pgbench_grant@9\" test.)\n>\n> This move also loses the optimization of unpinning before XactLockTableWait().\n> heap_update() doesn't optimize that way, so that's fine.\n>\n> The move ended up more like (1), though I did do\n> s/systable_inplace_update_scan/systable_inplace_update_begin/ like in (2). I\n> felt that worked better than (2) to achieve lock release before\n> CacheInvalidateHeapTuple(). Alternatives that could be fine:\n>\n From a consistency point of view, I find it cleaner if we can have all\nthe heap_inplace_lock and heap_inplace_unlock in the same set of\nfunctions. So here those would be the systable_inplace_* functions.\n\n> - In the cancel case, call both systable_inplace_update_cancel and\n> systable_inplace_update_end. _finish or _cancel would own unlock, while\n> _end would own systable_endscan().\n>\nWhat happens to CacheInvalidateHeapTuple() in this approach? I think\nit will still need to be brought to the genam.c layer if we are\nreleasing the lock in systable_inplace_update_finish.\n\n> - Hoist the CacheInvalidateHeapTuple() up to the genam.c layer. While\n> tolerable now, this gets less attractive after the inplace160 patch from\n> https://postgr.es/m/flat/[email protected]\n>\nI skimmed through the inplace160 patch. It wasn't clear to me why this\nbecomes less attractive with the patch. 
I see there is a new\nCacheInvalidateHeapTupleInPlace but that looks like it would be called\nwhile holding the lock. And then there is an\nAcceptInvalidationMessages which can perhaps be moved to the genam.c\nlayer too. Is the concern that one invalidation call will be in the\nheapam layer and the other will be in the genam layer?\n\nAlso I have a small question from inplace120.\n\nI see that all the places we check resultRelInfo->ri_needLockTagTuple,\nwe can just call\nIsInplaceUpdateRelation(resultRelInfo->ri_RelationDesc). Is there a\nbig advantage of storing a separate bool field? Also there is another\nwrite to ri_RelationDesc in CatalogOpenIndexes in\nsrc/backend/catalog/indexing.c. I think ri_needLockTagTuple needs to\nbe set there also to keep it consistent with ri_RelationDesc. Please\nlet me know if I am missing something about the usage of the new\nfield.\n\nThanks & Regards,\nNitin Motiani\nGoogle\n\n\n", "msg_date": "Thu, 29 Aug 2024 21:08:43 +0530", "msg_from": "Nitin Motiani <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Thu, Aug 29, 2024 at 09:08:43PM +0530, Nitin Motiani wrote:\n> On Thu, Aug 29, 2024 at 8:11 PM Noah Misch <[email protected]> wrote:\n> > On Tue, Aug 20, 2024 at 11:59:45AM +0300, Heikki Linnakangas wrote:\n> > > My order of preference is: 2, 1, 3.\n> >\n> > I kept tuple locking responsibility in heapam.c. That's simpler and better\n> > for modularity, but it does mean we release+acquire after any xmax wait.\n> > Before, we avoided that if the next genam.c scan found the same TID. (If the\n> > next scan finds the same TID, the xmax probably aborted.) I think DDL aborts\n> > are rare enough to justify simplifying as this version does. I don't expect\n> > anyone to notice the starvation outside of tests built to show it. (With\n> > previous versions, one can show it with a purpose-built test that commits\n> > instead of aborting, like the \"001_pgbench_grant@9\" test.)\n> >\n> > This move also loses the optimization of unpinning before XactLockTableWait().\n> > heap_update() doesn't optimize that way, so that's fine.\n> >\n> > The move ended up more like (1), though I did do\n> > s/systable_inplace_update_scan/systable_inplace_update_begin/ like in (2). I\n> > felt that worked better than (2) to achieve lock release before\n> > CacheInvalidateHeapTuple(). Alternatives that could be fine:\n> >\n> From a consistency point of view, I find it cleaner if we can have all\n> the heap_inplace_lock and heap_inplace_unlock in the same set of\n> functions. So here those would be the systable_inplace_* functions.\n\nThat will technically be the case after inplace160, and I could make it so\nhere by inlining heap_inplace_unlock() into its heapam.c caller. Would that\nbe cleaner or less clean?\n\n> > - In the cancel case, call both systable_inplace_update_cancel and\n> > systable_inplace_update_end. _finish or _cancel would own unlock, while\n> > _end would own systable_endscan().\n> >\n> What happens to CacheInvalidateHeapTuple() in this approach? I think\n> it will still need to be brought to the genam.c layer if we are\n> releasing the lock in systable_inplace_update_finish.\n\nCancel scenarios don't do invalidation. (Same under other alternatives.)\n\n> > - Hoist the CacheInvalidateHeapTuple() up to the genam.c layer. While\n> > tolerable now, this gets less attractive after the inplace160 patch from\n> > https://postgr.es/m/flat/[email protected]\n> >\n> I skimmed through the inplace160 patch. 
It wasn't clear to me why this\n> becomes less attractive with the patch. I see there is a new\n> CacheInvalidateHeapTupleInPlace but that looks like it would be called\n> while holding the lock. And then there is an\n> AcceptInvalidationMessages which can perhaps be moved to the genam.c\n> layer too. Is the concern that one invalidation call will be in the\n> heapam layer and the other will be in the genam layer?\n\nThat, or a critical section would start in heapam.c, then end in genam.c.\nCurrent call tree at inplace160 v4:\n\ngenam.c:systable_inplace_update_finish\n  heapam.c:heap_inplace_update\n    PreInplace_Inval\n    START_CRIT_SECTION\n    BUFFER_LOCK_UNLOCK\n    AtInplace_Inval\n    END_CRIT_SECTION\n    UnlockTuple\n    AcceptInvalidationMessages\n\nIf we hoisted all of invalidation up to the genam.c layer, a critical section\nthat starts in heapam.c would end in genam.c:\n\ngenam.c:systable_inplace_update_finish\n  PreInplace_Inval\n  heapam.c:heap_inplace_update\n    START_CRIT_SECTION\n    BUFFER_LOCK_UNLOCK\n  AtInplace_Inval\n  END_CRIT_SECTION\n  UnlockTuple\n  AcceptInvalidationMessages\n\nIf we didn't accept splitting the critical section but did accept splitting\ninvalidation responsibilities, one gets perhaps:\n\ngenam.c:systable_inplace_update_finish\n  PreInplace_Inval\n  heapam.c:heap_inplace_update\n    START_CRIT_SECTION\n    BUFFER_LOCK_UNLOCK\n    AtInplace_Inval\n    END_CRIT_SECTION\n    UnlockTuple\n  AcceptInvalidationMessages\n\nThat's how I ended up at inplace120 v9's design.\n\n> Also I have a small question from inplace120.\n> \n> I see that all the places we check resultRelInfo->ri_needLockTagTuple,\n> we can just call\n> IsInplaceUpdateRelation(resultRelInfo->ri_RelationDesc). Is there a\n> big advantage of storing a separate bool field? Also there is another\n\nNo, not a big advantage. I felt it was more in line with the typical style of\nsrc/backend/executor.\n\n> write to ri_RelationDesc in CatalogOpenIndexes in\n> src/backend/catalog/indexing.c. I think ri_needLockTagTuple needs to\n> be set there also to keep it consistent with ri_RelationDesc. Please\n> let me know if I am missing something about the usage of the new\n> field.\n\nCan you say more about consequences you found?\n\nOnly the full executor reads the field, doing so when it fetches the most\nrecent version of a row. CatalogOpenIndexes() callers lack the full\nexecutor's practice of fetching the most recent version of a row, so they\ncouldn't benefit from reading the field.\n\nI don't think any CatalogOpenIndexes() caller passes its ResultRelInfo to the\nfull executor, and \"typedef struct ResultRelInfo *CatalogIndexState\" exists in\npart to keep it that way. Since CatalogOpenIndexes() skips ri_TrigDesc and\nother fields, I would expect other malfunctions if some caller tried.\n\nThanks,\nnm\n\n\n", "msg_date": "Fri, 30 Aug 2024 18:10:43 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Sat, Aug 31, 2024 at 6:40 AM Noah Misch <[email protected]> wrote:\n>\n> On Thu, Aug 29, 2024 at 09:08:43PM +0530, Nitin Motiani wrote:\n> > On Thu, Aug 29, 2024 at 8:11 PM Noah Misch <[email protected]> wrote:\n> > > On Tue, Aug 20, 2024 at 11:59:45AM +0300, Heikki Linnakangas wrote:\n> > > > My order of preference is: 2, 1, 3.\n> > >\n> > > I kept tuple locking responsibility in heapam.c. 
That's simpler and better\n> > > for modularity, but it does mean we release+acquire after any xmax wait.\n> > > Before, we avoided that if the next genam.c scan found the same TID. (If the\n> > > next scan finds the same TID, the xmax probably aborted.) I think DDL aborts\n> > > are rare enough to justify simplifying as this version does. I don't expect\n> > > anyone to notice the starvation outside of tests built to show it. (With\n> > > previous versions, one can show it with a purpose-built test that commits\n> > > instead of aborting, like the \"001_pgbench_grant@9\" test.)\n> > >\n> > > This move also loses the optimization of unpinning before XactLockTableWait().\n> > > heap_update() doesn't optimize that way, so that's fine.\n> > >\n> > > The move ended up more like (1), though I did do\n> > > s/systable_inplace_update_scan/systable_inplace_update_begin/ like in (2). I\n> > > felt that worked better than (2) to achieve lock release before\n> > > CacheInvalidateHeapTuple(). Alternatives that could be fine:\n> > >\n> > From a consistency point of view, I find it cleaner if we can have all\n> > the heap_inplace_lock and heap_inplace_unlock in the same set of\n> > functions. So here those would be the systable_inplace_* functions.\n>\n> That will technically be the case after inplace160, and I could make it so\n> here by inlining heap_inplace_unlock() into its heapam.c caller. Would that\n> be cleaner or less clean?\n>\n\nI am not sure. It seems more inconsistent to take the lock using\nheap_inplace_lock but then just unlock by calling LockBuffer. On the\nother hand, it doesn't seem that different from the way\nSearchSysCacheLocked1 and UnlockTuple are used in inplace120. If we\nare doing it this way, perhaps it would be good to rename\nheap_inplace_update to heap_inplace_update_and_unlock.\n\n> > > - In the cancel case, call both systable_inplace_update_cancel and\n> > > systable_inplace_update_end. _finish or _cancel would own unlock, while\n> > > _end would own systable_endscan().\n> > >\n> > What happens to CacheInvalidateHeapTuple() in this approach? I think\n> > it will still need to be brought to the genam.c layer if we are\n> > releasing the lock in systable_inplace_update_finish.\n>\n> Cancel scenarios don't do invalidation. (Same under other alternatives.)\n>\n\nSorry, I wasn't clear about this one. Let me rephrase. My\nunderstanding is that the code in this approach would look like below\n:\nif (dirty)\n systable_inplace_update_finish(inplace_state, tup);\nelse\n systable_inplace_update_cancel(inplace_state);\nsystable_inplace_update_end(inplace_state);\n\nAnd that in this structure, both _finish and _cancel will call\nheap_inplace_unlock and then _end will call systable_endscan. So even\nwith this structure, the invalidation has to happen inside _finish\nafter the unlock. So this also pulls the invalidation to the genam.c\nlayer. Am I understanding this correctly?\n\n> > > - Hoist the CacheInvalidateHeapTuple() up to the genam.c layer. While\n> > > tolerable now, this gets less attractive after the inplace160 patch from\n> > > https://postgr.es/m/flat/[email protected]\n> > >\n> > I skimmed through the inplace160 patch. It wasn't clear to me why this\n> > becomes less attractive with the patch. I see there is a new\n> > CacheInvalidateHeapTupleInPlace but that looks like it would be called\n> > while holding the lock. And then there is an\n> > AcceptInvalidationMessages which can perhaps be moved to the genam.c\n> > layer too. 
Is the concern that one invalidation call will be in the\n> > > heapam layer and the other will be in the genam layer?\n> >\n> > That, or a critical section would start in heapam.c, then end in genam.c.\n> > Current call tree at inplace160 v4:\n>\n> genam.c:systable_inplace_update_finish\n>   heapam.c:heap_inplace_update\n>     PreInplace_Inval\n>     START_CRIT_SECTION\n>     BUFFER_LOCK_UNLOCK\n>     AtInplace_Inval\n>     END_CRIT_SECTION\n>     UnlockTuple\n>     AcceptInvalidationMessages\n>\n> If we hoisted all of invalidation up to the genam.c layer, a critical section\n> that starts in heapam.c would end in genam.c:\n>\n> genam.c:systable_inplace_update_finish\n>   PreInplace_Inval\n>   heapam.c:heap_inplace_update\n>     START_CRIT_SECTION\n>     BUFFER_LOCK_UNLOCK\n>   AtInplace_Inval\n>   END_CRIT_SECTION\n>   UnlockTuple\n>   AcceptInvalidationMessages\n>\n> If we didn't accept splitting the critical section but did accept splitting\n> invalidation responsibilities, one gets perhaps:\n>\n> genam.c:systable_inplace_update_finish\n>   PreInplace_Inval\n>   heapam.c:heap_inplace_update\n>     START_CRIT_SECTION\n>     BUFFER_LOCK_UNLOCK\n>     AtInplace_Inval\n>     END_CRIT_SECTION\n>     UnlockTuple\n>   AcceptInvalidationMessages\n>\n\nHow about this alternative?\n\n genam.c:systable_inplace_update_finish\n   PreInplace_Inval\n   START_CRIT_SECTION\n   heapam.c:heap_inplace_update\n     BUFFER_LOCK_UNLOCK\n   AtInplace_Inval\n   END_CRIT_SECTION\n   UnlockTuple\n   AcceptInvalidationMessages\n\nLooking at inplace160, it seems that the start of the critical section\nis right after PreInplace_Inval. So why not pull START_CRIT_SECTION\nand END_CRIT_SECTION out to the genam.c layer? Alternatively since\nheap_inplace_update is commented as a subroutine of\nsystable_inplace_update_finish, should everything just be moved to the\ngenam.c layer? Although it looks like you already considered and\nrejected this approach. So just pulling out the critical section's\nstart and end is fine. Am I missing something here?\n\nIf the above alternatives are not possible, it's probably fine to go\nahead with the current patch with the function renamed to\nheap_inplace_update_and_unlock (or something similar) as mentioned\nearlier?\n\n> That's how I ended up at inplace120 v9's design.\n>\n> > Also I have a small question from inplace120.\n> >\n> > I see that all the places we check resultRelInfo->ri_needLockTagTuple,\n> > we can just call\n> > IsInplaceUpdateRelation(resultRelInfo->ri_RelationDesc). Is there a\n> > big advantage of storing a separate bool field? Also there is another\n> >\n> > No, not a big advantage. I felt it was more in line with the typical style of\n> > src/backend/executor.\n> >\n\nThanks for the clarification. For ri_TrigDesc, I see the following\ncomment in execMain.c :\n\n/* make a copy so as not to depend on relcache info not changing... */\nresultRelInfo->ri_TrigDesc = CopyTriggerDesc(resultRelationDesc->trigdesc);\n\nSo in this case I see more value in having a separate field compared\nto the bool field for ri_needLockTagTuple.\n\n> > write to ri_RelationDesc in CatalogOpenIndexes in\n> > src/backend/catalog/indexing.c. I think ri_needLockTagTuple needs to\n> > be set there also to keep it consistent with ri_RelationDesc. Please\n> > let me know if I am missing something about the usage of the new\n> > field.\n>\n> Can you say more about consequences you found?\n>\n\nMy apologies that I wasn't clear. I haven't found any consequences. I\njust find it a smell that there are two fields which are not\nindependent and they go out of sync. 
And that's why my preference is\nto not have a dependent field unless there is a specific advantage.\n\n> Only the full executor reads the field, doing so when it fetches the most\n> recent version of a row. CatalogOpenIndexes() callers lack the full\n> executor's practice of fetching the most recent version of a row, so they\n> couldn't benefit reading the field.\n>\n> I don't think any CatalogOpenIndexes() caller passes its ResultRelInfo to the\n> full executor, and \"typedef struct ResultRelInfo *CatalogIndexState\" exists in\n> part to keep it that way. Since CatalogOpenIndexes() skips ri_TrigDesc and\n> other fields, I would expect other malfunctions if some caller tried.\n>\n\nSorry, I missed the typedef. Thanks for pointing that out. I agree\nthat the likelihood of any malfunction is low. But even for the\nri_TrigDesc, CatalogOpenIndexes() still sets it to NULL. So shouldn't\nri_needLockTagTuple also be set to a default value of false? My\npreference would be not to have a separate bool field to avoid\nthinking about these scenarios.\n\nThanks & Regards\nNitin Motiani\nGoogle\n\n\n", "msg_date": "Tue, 3 Sep 2024 21:24:52 +0530", "msg_from": "Nitin Motiani <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Tue, Sep 03, 2024 at 09:24:52PM +0530, Nitin Motiani wrote:\n> On Sat, Aug 31, 2024 at 6:40 AM Noah Misch <[email protected]> wrote:\n> > On Thu, Aug 29, 2024 at 09:08:43PM +0530, Nitin Motiani wrote:\n> > > On Thu, Aug 29, 2024 at 8:11 PM Noah Misch <[email protected]> wrote:\n> > > > - In the cancel case, call both systable_inplace_update_cancel and\n> > > > systable_inplace_update_end. _finish or _cancel would own unlock, while\n> > > > _end would own systable_endscan().\n> > > >\n> > > What happens to CacheInvalidateHeapTuple() in this approach? I think\n> > > it will still need to be brought to the genam.c layer if we are\n> > > releasing the lock in systable_inplace_update_finish.\n\n> understanding is that the code in this approach would look like below\n> :\n> if (dirty)\n> systable_inplace_update_finish(inplace_state, tup);\n> else\n> systable_inplace_update_cancel(inplace_state);\n> systable_inplace_update_end(inplace_state);\n> \n> And that in this structure, both _finish and _cancel will call\n> heap_inplace_unlock and then _end will call systable_endscan. So even\n> with this structure, the invalidation has to happen inside _finish\n> after the unlock.\n\nRight.\n\n> So this also pulls the invalidation to the genam.c\n> layer. Am I understanding this correctly?\n\nCompared to the v9 patch, the \"call both\" alternative would just move the\nsystable_endscan() call to a new systable_inplace_update_end(). It wouldn't\nmove anything across the genam:heapam boundary.\nsystable_inplace_update_finish() would remain a thin wrapper around a heapam\nfunction.\n\n> > > > - Hoist the CacheInvalidateHeapTuple() up to the genam.c layer. While\n> > > > tolerable now, this gets less attractive after the inplace160 patch from\n> > > > https://postgr.es/m/flat/[email protected]\n> > > >\n> > > I skimmed through the inplace160 patch. It wasn't clear to me why this\n> > > becomes less attractive with the patch. I see there is a new\n> > > CacheInvalidateHeapTupleInPlace but that looks like it would be called\n> > > while holding the lock. And then there is an\n> > > AcceptInvalidationMessages which can perhaps be moved to the genam.c\n> > > layer too. 
Is the concern that one invalidation call will be in the\n> > > heapam layer and the other will be in the genam layer?\n> >\n> > That, or a critical section would start in heapam.c, then end in genam.c.\n> > Current call tree at inplace160 v4:\n\n> How about this alternative?\n> \n> genam.c:systable_inplace_update_finish\n>   PreInplace_Inval\n>   START_CRIT_SECTION\n>   heapam.c:heap_inplace_update\n>     BUFFER_LOCK_UNLOCK\n>   AtInplace_Inval\n>   END_CRIT_SECTION\n>   UnlockTuple\n>   AcceptInvalidationMessages\n\n> Looking at inplace160, it seems that the start of the critical section\n> is right after PreInplace_Inval. So why not pull START_CRIT_SECTION\n> and END_CRIT_SECTION out to the genam.c layer?\n\nheap_inplace_update() has an elog(ERROR) that needs to happen outside any\ncritical section. Since the condition for that elog deals with tuple header\ninternals, it belongs at the heapam layer more than the systable layer.\n\n> Alternatively since\n> heap_inplace_update is commented as a subroutine of\n> systable_inplace_update_finish, should everything just be moved to the\n> genam.c layer? Although it looks like you already considered and\n> rejected this approach.\n\nCalling XLogInsert(RM_HEAP_ID) in genam.c would be a worse modularity\nviolation than the one that led to the changes between v8 and v9. I think\neven calling CacheInvalidateHeapTuple() in genam.c would be a worse modularity\nviolation than the one attributed to v8. Modularity would have the\nheap_inplace function resemble heap_update() handling of invals.\n\n> If the above alternatives are not possible, it's probably fine to go\n> ahead with the current patch with the function renamed to\n> heap_inplace_update_and_unlock (or something similar) as mentioned\n> earlier?\n\nI like that name. The next version will use it.\n\n> > > I see that all the places we check resultRelInfo->ri_needLockTagTuple,\n> > > we can just call\n> > > IsInplaceUpdateRelation(resultRelInfo->ri_RelationDesc). Is there a\n> > > big advantage of storing a separate bool field? Also there is another\n> >\n> > No, not a big advantage. I felt it was more in line with the typical style of\n> > src/backend/executor.\n\n> just find it a smell that there are two fields which are not\n> independent and they go out of sync. And that's why my preference is\n> to not have a dependent field unless there is a specific advantage.\n\nGot it. This check happens for every tuple of every UPDATE, so performance\nmay be a factor. Some designs and their merits:\n\n==== a. ri_needLockTagTuple\nPerformance: best: check one value for nonzero\nDrawback: one more value lifecycle to understand\nDrawback: users of ResultRelInfo w/o InitResultRelInfo() could miss this\n\n==== b. call IsInplaceUpdateRelation\nPerformance: worst: two extern function calls, then compare against two values\n\n==== c. make IsInplaceUpdateRelation() and IsInplaceUpdateOid() inline, and call\nPerformance: high: compare against two values\nDrawback: unlike catalog.c peers\nDrawback: extensions that call these must recompile if these change\n\n==== d. add IsInplaceUpdateRelationInline() and IsInplaceUpdateOidInline(), and call\nPerformance: high: compare against two values\nDrawback: more symbols to understand\nDrawback: extensions might call these, reaching the drawback of (c)\n\nI think my preference order is (a), (c), (d), (b). How do you see it?\n\n> But even for the\n> ri_TrigDesc, CatalogOpenIndexes() still sets it to NULL. 
So shouldn't\n> ri_needLockTagTuple also be set to a default value of false?\n\nCatalogOpenIndexes() explicitly zero-initializes two fields and relies on\nmakeNode() zeroing for dozens of others. Hence, omitting the initialization\nfits the function's local convention better than including it. (PostgreSQL\nhas no policy or dominant practice about redundant zero-initialization.)\n\n\n", "msg_date": "Tue, 3 Sep 2024 14:23:29 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Wed, Sep 4, 2024 at 2:53 AM Noah Misch <[email protected]> wrote:\n>\n>\n> > So this also pulls the invalidation to the genam.c\n> > layer. Am I understanding this correctly?\n>\n> Compared to the v9 patch, the \"call both\" alternative would just move the\n> systable_endscan() call to a new systable_inplace_update_end(). It wouldn't\n> move anything across the genam:heapam boundary.\n> systable_inplace_update_finish() would remain a thin wrapper around a heapam\n> function.\n>\n\nThanks for the clarification.\n\n> > > > > - Hoist the CacheInvalidateHeapTuple() up to the genam.c layer. While\n> > > > > tolerable now, this gets less attractive after the inplace160 patch from\n> > > > > https://postgr.es/m/flat/[email protected]\n> > > > >\n> > > > I skimmed through the inplace160 patch. It wasn't clear to me why this\n> > > > becomes less attractive with the patch. I see there is a new\n> > > > CacheInvalidateHeapTupleInPlace but that looks like it would be called\n> > > > while holding the lock. And then there is an\n> > > > AcceptInvalidationMessages which can perhaps be moved to the genam.c\n> > > > layer too. Is the concern that one invalidation call will be in the\n> > > > heapam layer and the other will be in the genam layer?\n> > >\n> > > That, or a critical section would start in heapam.c, then end in genam.c.\n> > > Current call tree at inplace160 v4:\n>\n> > How about this alternative?\n> >\n> > genam.c:systable_inplace_update_finish\n> >   PreInplace_Inval\n> >   START_CRIT_SECTION\n> >   heapam.c:heap_inplace_update\n> >     BUFFER_LOCK_UNLOCK\n> >   AtInplace_Inval\n> >   END_CRIT_SECTION\n> >   UnlockTuple\n> >   AcceptInvalidationMessages\n\n> > Looking at inplace160, it seems that the start of the critical section\n> > is right after PreInplace_Inval. So why not pull START_CRIT_SECTION\n> > and END_CRIT_SECTION out to the genam.c layer?\n>\n> heap_inplace_update() has an elog(ERROR) that needs to happen outside any\n> critical section. Since the condition for that elog deals with tuple header\n> internals, it belongs at the heapam layer more than the systable layer.\n>\n\nUnderstood. How about this alternative then? The tuple length check\nand the elog(ERROR) get their own function. Something like\nheap_inplace_update_validate or\nheap_inplace_update_validate_tuple_length. So in that case, it would\nlook like this :\n\n genam.c:systable_inplace_update_finish\n   heapam.c:heap_inplace_update_validate/heap_inplace_update_precheck\n   PreInplace_Inval\n   START_CRIT_SECTION\n   heapam.c:heap_inplace_update\n     BUFFER_LOCK_UNLOCK\n   AtInplace_Inval\n   END_CRIT_SECTION\n   UnlockTuple\n   AcceptInvalidationMessages\n\nThis is starting to get complicated, though, so I don't have any issues\nwith just renaming the heap_inplace_update to\nheap_inplace_update_and_unlock.\n\n> > Alternatively since\n> > heap_inplace_update is commented as a subroutine of\n> > systable_inplace_update_finish, should everything just be moved to the\n> > genam.c layer? 
Although it looks like you already considered and\n> rejected this approach.\n>\n> Calling XLogInsert(RM_HEAP_ID) in genam.c would be a worse modularity\n> violation than the one that led to the changes between v8 and v9. I think\n> even calling CacheInvalidateHeapTuple() in genam.c would be a worse modularity\n> violation than the one attributed to v8. Modularity would have the\n> heap_inplace function resemble heap_update() handling of invals.\n>\n\nUnderstood. Thanks.\n\n> > If the above alternatives are not possible, it's probably fine to go\n> > ahead with the current patch with the function renamed to\n> > heap_inplace_update_and_unlock (or something similar) as mentioned\n> > earlier?\n>\n> I like that name. The next version will use it.\n>\n\nSo either we go with this or try the above approach of having a\nseparate function _validate/_precheck/_validate_tuple_length. I don't\nhave a strong opinion on either of these approaches.\n\n> > > > I see that all the places we check resultRelInfo->ri_needLockTagTuple,\n> > > > we can just call\n> > > > IsInplaceUpdateRelation(resultRelInfo->ri_RelationDesc). Is there a\n> > > > big advantage of storing a separate bool field? Also there is another\n> > >\n> > > No, not a big advantage. I felt it was more in line with the typical style of\n> > > src/backend/executor.\n>\n> > just find it a smell that there are two fields which are not\n> > independent and they go out of sync. And that's why my preference is\n> > to not have a dependent field unless there is a specific advantage.\n>\n> Got it. This check happens for every tuple of every UPDATE, so performance\n> may be a factor. Some designs and their merits:\n>\n\nThanks. If performance is a factor, it makes sense to keep it.\n\n> ==== a. ri_needLockTagTuple\n> Performance: best: check one value for nonzero\n> Drawback: one more value lifecycle to understand\n> Drawback: users of ResultRelInfo w/o InitResultRelInfo() could miss this\n>\n> ==== b. call IsInplaceUpdateRelation\n> Performance: worst: two extern function calls, then compare against two values\n>\n> ==== c. make IsInplaceUpdateRelation() and IsInplaceUpdateOid() inline, and call\n> Performance: high: compare against two values\n> Drawback: unlike catalog.c peers\n> Drawback: extensions that call these must recompile if these change\n>\n> ==== d. add IsInplaceUpdateRelationInline() and IsInplaceUpdateOidInline(), and call\n> Performance: high: compare against two values\n> Drawback: more symbols to understand\n> Drawback: extensions might call these, reaching the drawback of (c)\n>\n> I think my preference order is (a), (c), (d), (b). How do you see it?\n>\n\nMy preference order would be the same. In general I like (c) more than\n(a) but recompiling extensions sounds like a major drawback so here\nthe preference is (a).\n\nCan we do (a) along with some extra checks? To elaborate, execMain.c\nhas a function called CheckValidResultRel which is called by\nExecInitModifyTable in nodeModifyTable.c. Can we add the following\nassert (or just a debug assert) in this function?\n\n Assert(rel->ri_needLockTagTuple == IsInplaceUpdateRelation(rel->ri_RelationDesc))\n\nThis can safeguard against users of ResultRelInfo missing this field.\nAn alternative might be to only do debug assertions in the functions\nwhich use the field. But it seems simpler to just do it once in the\nExecInitModifyTable.\n\n> > But even for the\n> > ri_TrigDesc, CatalogOpenIndexes() still sets it to NULL. 
So shouldn't\n> > ri_needLockTagTuple also be set to a default value of false?\n>\n> CatalogOpenIndexes() explicitly zero-initializes two fields and relies on\n> makeNode() zeroing for dozens of others. Hence, omitting the initialization\n> fits the function's local convention better than including it. (PostgreSQL\n> has no policy or dominant practice about redundant zero-initialization.)\n\nThanks. Makes sense.\n\nThanks & Regards\nNitin Motiani\nGoogle\n\n\n", "msg_date": "Wed, 4 Sep 2024 21:00:32 +0530", "msg_from": "Nitin Motiani <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Wed, Sep 04, 2024 at 09:00:32PM +0530, Nitin Motiani wrote:\n> How about this alternative then? The tuple length check\n> and the elog(ERROR) gets its own function. Something like\n> heap_inplace_update_validate or\n> heap_inplace_update_validate_tuple_length. So in that case, it would\n> look like this :\n> \n> genam.c:systable_inplace_update_finish\n> heapam.c:heap_inplace_update_validate/heap_inplace_update_precheck\n> PreInplace_Inval\n> START_CRIT_SECTION\n> heapam.c:heap_inplace_update\n> BUFFER_LOCK_UNLOCK\n> AtInplace_Inval\n> END_CRIT_SECTION\n> UnlockTuple\n> AcceptInvalidationMessages\n> \n> This is starting to get complicated though so I don't have any issues\n> with just renaming the heap_inplace_update to\n> heap_inplace_update_and_unlock.\n\nComplexity aside, I don't see the _precheck design qualifying as a modularity\nimprovement.\n\n> Assert(rel->ri_needsLockTagTuple == IsInplaceUpdateRelation(rel->relationDesc)\n> \n> This can safeguard against users of ResultRelInfo missing this field.\n\nv10 does the rename and adds that assertion. This question remains open:\n\nOn Thu, Aug 22, 2024 at 12:32:00AM -0700, Noah Misch wrote:\n> On Tue, Aug 20, 2024 at 11:59:45AM +0300, Heikki Linnakangas wrote:\n> > How many of those for RELKIND_INDEX vs tables? I'm thinking if we should\n> > always require a tuple lock on indexes, if that would make a difference.\n> \n> Three sites. See attached inplace125 patch. Is it a net improvement? If so,\n> I'll squash it into inplace120.\n\nIf nobody has an opinion, I'll discard inplace125. I feel it's not a net\nimprovement, but either way is fine with me.", "msg_date": "Wed, 4 Sep 2024 12:57:20 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Thu, Sep 5, 2024 at 1:27 AM Noah Misch <[email protected]> wrote:\n>\n> On Wed, Sep 04, 2024 at 09:00:32PM +0530, Nitin Motiani wrote:\n> > How about this alternative then? The tuple length check\n> > and the elog(ERROR) gets its own function. Something like\n> > heap_inplace_update_validate or\n> > heap_inplace_update_validate_tuple_length. 
So in that case, it would\n> > look like this :\n> >\n> > genam.c:systable_inplace_update_finish\n> >   heapam.c:heap_inplace_update_validate/heap_inplace_update_precheck\n> >   PreInplace_Inval\n> >   START_CRIT_SECTION\n> >   heapam.c:heap_inplace_update\n> >     BUFFER_LOCK_UNLOCK\n> >   AtInplace_Inval\n> >   END_CRIT_SECTION\n> >   UnlockTuple\n> >   AcceptInvalidationMessages\n> >\n> > This is starting to get complicated, though, so I don't have any issues\n> > with just renaming the heap_inplace_update to\n> > heap_inplace_update_and_unlock.\n>\n> Complexity aside, I don't see the _precheck design qualifying as a modularity\n> improvement.\n>\n> > Assert(rel->ri_needLockTagTuple == IsInplaceUpdateRelation(rel->ri_RelationDesc))\n> >\n> > This can safeguard against users of ResultRelInfo missing this field.\n>\n> v10 does the rename and adds that assertion. This question remains open:\n>\n\nLooks good. A couple of minor comments :\n1. In the inplace110 commit message, there are still references to\nheap_inplace_update. Should it be clarified that the function has been\nrenamed?\n2. Should there be a comment above the ri_needLockTagTuple definition in\nexecNodes.h that we are caching this value to avoid function calls to\nIsInplaceUpdateRelation for every tuple? Similar to how the comment\nabove ri_TrigFunctions mentions that it is cached lookup info.\n\n> On Thu, Aug 22, 2024 at 12:32:00AM -0700, Noah Misch wrote:\n> > On Tue, Aug 20, 2024 at 11:59:45AM +0300, Heikki Linnakangas wrote:\n> > > How many of those for RELKIND_INDEX vs tables? I'm thinking if we should\n> > > always require a tuple lock on indexes, if that would make a difference.\n> >\n> > Three sites. See attached inplace125 patch. Is it a net improvement? If so,\n> > I'll squash it into inplace120.\n>\n> If nobody has an opinion, I'll discard inplace125. I feel it's not a net\n> improvement, but either way is fine with me.\n\nSeems moderately simpler to me. But there is still special handling\nfor the RELKIND_INDEX. Just that instead of doing it in\nsystable_inplace_update_begin, we have a special case in heap_update.\nSo overall it's only a small improvement and I'm fine either way.\n\nThanks & Regards\nNitin Motiani\nGoogle\n\n\n", "msg_date": "Thu, 5 Sep 2024 19:10:04 +0530", "msg_from": "Nitin Motiani <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Thu, Sep 05, 2024 at 07:10:04PM +0530, Nitin Motiani wrote:\n> On Thu, Sep 5, 2024 at 1:27 AM Noah Misch <[email protected]> wrote:\n> > On Wed, Sep 04, 2024 at 09:00:32PM +0530, Nitin Motiani wrote:\n> > > Assert(rel->ri_needLockTagTuple == IsInplaceUpdateRelation(rel->ri_RelationDesc))\n> > >\n> > > This can safeguard against users of ResultRelInfo missing this field.\n> >\n> > v10 does the rename and adds that assertion. This question remains open:\n> \n> Looks good. A couple of minor comments :\n> 1. In the inplace110 commit message, there are still references to\n> heap_inplace_update. Should it be clarified that the function has been\n> renamed?\n\nPGXN has only one caller of this function, so I think that wouldn't help\nreaders enough. If someone gets a compiler error about the old name, they'll\nfigure it out without commit log guidance. If a person doesn't get a compiler\nerror, they didn't need to read about the fact of the rename.\n\n> 2. Should there be a comment above the ri_needLockTagTuple definition in\n> execNodes.h that we are caching this value to avoid function calls to\n> IsInplaceUpdateRelation for every tuple? 
Similar to how the comment\n> above ri_TrigFunctions mentions that it is cached lookup info.\n\nCurrent comment:\n\n\t/* updates do LockTuple() before oldtup read; see README.tuplock */\n\tbool\t\tri_needLockTagTuple;\n\nOnce the comment doesn't fit in one line, pgindent rules make it take a\nminimum of four lines. I don't think words about avoiding function calls\nwould add enough value to justify the vertical space, because a person\nstarting to remove it would see where it's called. That's not to say the\naddition would be negligent. If someone else were writing the patch and had\nincluded that, I wouldn't be deleting the material.\n\n\n", "msg_date": "Thu, 5 Sep 2024 15:04:34 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Fri, Sep 6, 2024 at 3:34 AM Noah Misch <[email protected]> wrote:\n>\n> On Thu, Sep 05, 2024 at 07:10:04PM +0530, Nitin Motiani wrote:\n> > On Thu, Sep 5, 2024 at 1:27 AM Noah Misch <[email protected]> wrote:\n> > > On Wed, Sep 04, 2024 at 09:00:32PM +0530, Nitin Motiani wrote:\n> > > > Assert(rel->ri_needsLockTagTuple == IsInplaceUpdateRelation(rel->relationDesc)\n> > > >\n> > > > This can safeguard against users of ResultRelInfo missing this field.\n> > >\n> > > v10 does the rename and adds that assertion. This question remains open:\n> >\n> > Looks good. A couple of minor comments :\n> > 1. In the inplace110 commit message, there are still references to\n> > heap_inplace_update. Should it be clarified that the function has been\n> > renamed?\n>\n> PGXN has only one caller of this function, so I think that wouldn't help\n> readers enough. If someone gets a compiler error about the old name, they'll\n> figure it out without commit log guidance. If a person doesn't get a compiler\n> error, they didn't need to read about the fact of the rename.\n>\n> > 2. Should there be a comment above the ri_needLockTag definition in\n> > execNodes.h that we are caching this value to avoid function calls to\n> > IsInPlaceUpdateRelation for every tuple? Similar to how the comment\n> > above ri_TrigFunctions mentions that it is cached lookup info.\n>\n> Current comment:\n>\n> /* updates do LockTuple() before oldtup read; see README.tuplock */\n> bool ri_needLockTagTuple;\n>\n> Once the comment doesn't fit in one line, pgindent rules make it take a\n> minimum of four lines. I don't think words about avoiding function calls\n> would add enough value to justify the vertical space, because a person\n> starting to remove it would see where it's called. That's not to say the\n> addition would be negligent. If someone else were writing the patch and had\n> included that, I wouldn't be deleting the material.\n\nThanks. I have no other comments.\n\n\n", "msg_date": "Fri, 6 Sep 2024 15:22:48 +0530", "msg_from": "Nitin Motiani <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Fri, Sep 06, 2024 at 03:22:48PM +0530, Nitin Motiani wrote:\n> Thanks. I have no other comments.\n\nhttps://commitfest.postgresql.org/49/5090/ remains in status=\"Needs review\".\nWhen someone moves it to status=\"Ready for Committer\", I will commit\ninplace090, inplace110, and inplace120 patches. 
If one of you is comfortable\nwith that, please modify the status.\n\n\n", "msg_date": "Fri, 6 Sep 2024 11:55:31 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Sat, Sep 7, 2024 at 12:25 AM Noah Misch <[email protected]> wrote:\n>\n> On Fri, Sep 06, 2024 at 03:22:48PM +0530, Nitin Motiani wrote:\n> > Thanks. I have no other comments.\n>\n> https://commitfest.postgresql.org/49/5090/ remains in status=\"Needs review\".\n> When someone moves it to status=\"Ready for Committer\", I will commit\n> inplace090, inplace110, and inplace120 patches. If one of you is comfortable\n> with that, please modify the status.\n\nDone.\n\n\n", "msg_date": "Mon, 9 Sep 2024 10:55:32 +0530", "msg_from": "Nitin Motiani <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Mon, Sep 09, 2024 at 10:55:32AM +0530, Nitin Motiani wrote:\n> On Sat, Sep 7, 2024 at 12:25 AM Noah Misch <[email protected]> wrote:\n> > https://commitfest.postgresql.org/49/5090/ remains in status=\"Needs review\".\n> > When someone moves it to status=\"Ready for Committer\", I will commit\n> > inplace090, inplace110, and inplace120 patches. If one of you is comfortable\n> > with that, please modify the status.\n> \n> Done.\n\nFYI, here are the branch-specific patches. I plan to push these after the v17\nrelease freeze lifts next week. Notes from the back-patch:\n\n1. In v13 and v12, \"UPDATE pg_class\" or \"UPDATE pg_database\" can still lose a\nconcurrent inplace update. The v14+ fix relied on commit 86dc900 \"Rework\nplanning and execution of UPDATE and DELETE\", which moved the last fetch of\nthe pre-update tuple into nodeModifyTable.c. Fixing that was always optional.\nI prefer leaving it unfixed in those two branches, as opposed to writing a fix\nspecific to those branches. Here's what I put in v13 and v12:\n\n \t\t/*\n+\t\t * We lack the infrastructure to follow rules in README.tuplock\n+\t\t * section \"Locking to write inplace-updated tables\". Specifically,\n+\t\t * we lack infrastructure to lock tupleid before this file's\n+\t\t * ExecProcNode() call fetches the tuple's old columns. Just take a\n+\t\t * lock that silences check_lock_if_inplace_updateable_rel(). This\n+\t\t * doesn't actually protect inplace updates like those rules intend,\n+\t\t * so we may lose an inplace update that overlaps a superuser running\n+\t\t * \"UPDATE pg_class\" or \"UPDATE pg_database\".\n+\t\t */\n+#ifdef USE_ASSERT_CHECKING\n+\t\tif (IsInplaceUpdateRelation(resultRelationDesc))\n+\t\t{\n+\t\t\tlockedtid = *tupleid;\n+\t\t\tLockTuple(resultRelationDesc, &lockedtid, InplaceUpdateTupleLock);\n+\t\t}\n+\t\telse\n+\t\t\tItemPointerSetInvalid(&lockedtid);\n+#endif\n\n\n2. The other area of tricky conflicts was the back-patch in\nExecMergeMatched(), from v17 to v16.\n\n3. I've added inplace088-SetRelationTableSpace, a back-patch of refactoring\ncommits 4c9c359 and 2484329 to v13 and v12. Before those commits, we held the\nmodifiable copy of the relation's pg_class row throughout a\ntable_relation_copy_data(). 
Back-patching it avoids a needless long-duration\nLOCKTAG_TUPLE, and it's better than implementing a novel way to avoid that.", "msg_date": "Thu, 19 Sep 2024 14:33:46 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" }, { "msg_contents": "On Thu, Sep 19, 2024 at 02:33:46PM -0700, Noah Misch wrote:\n> On Mon, Sep 09, 2024 at 10:55:32AM +0530, Nitin Motiani wrote:\n> > On Sat, Sep 7, 2024 at 12:25 AM Noah Misch <[email protected]> wrote:\n> > > https://commitfest.postgresql.org/49/5090/ remains in status=\"Needs review\".\n> > > When someone moves it to status=\"Ready for Committer\", I will commit\n> > > inplace090, inplace110, and inplace120 patches. If one of you is comfortable\n> > > with that, please modify the status.\n> > \n> > Done.\n> \n> FYI, here are the branch-specific patches. I plan to push these after the v17\n> release freeze lifts next week.\n\nPushed, but the pushes contained at least one defect:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=akepa&dt=2024-09-24%2022%3A29%3A02\n\nI will act on that and other buildfarm failures that show up.\n\n\n", "msg_date": "Tue, 24 Sep 2024 15:43:52 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: race condition in pg_class" } ]
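To ground the interface this thread converged on, here is a minimal sketch of a caller driving systable_inplace_update_begin(), systable_inplace_update_finish(), and systable_inplace_update_cancel() to flip pg_class.relhasindex. The function names and the begin/finish/cancel flow come from the messages above; the exact parameter list of systable_inplace_update_begin() is an assumption modeled on systable_beginscan(), and the relhasindex change is an illustration, not an excerpt from the committed patches.

/*
 * Hedged sketch, not committed code: set pg_class.relhasindex in place.
 * The systable_inplace_update_begin() signature is assumed to follow the
 * systable_beginscan() convention; verify against genam.h before use.
 */
#include "postgres.h"
#include "access/genam.h"
#include "access/htup_details.h"
#include "access/stratnum.h"
#include "catalog/pg_class.h"
#include "utils/fmgroids.h"
#include "utils/rel.h"

static void
set_relhasindex_inplace(Relation pg_class_rel, Oid reloid, bool newvalue)
{
	ScanKeyData key[1];
	void	   *inplace_state;
	HeapTuple	tup;
	Form_pg_class classform;

	ScanKeyInit(&key[0], Anum_pg_class_oid,
				BTEqualStrategyNumber, F_OIDEQ,
				ObjectIdGetDatum(reloid));

	/*
	 * Takes LOCKTAG_TUPLE in InplaceUpdateTupleLock and returns a
	 * modifiable copy of the row, with the buffer exclusive-locked.
	 */
	systable_inplace_update_begin(pg_class_rel, ClassOidIndexId, true,
								  NULL, 1, key, &tup, &inplace_state);
	if (!HeapTupleIsValid(tup))
		elog(ERROR, "cache lookup failed for relation %u", reloid);

	classform = (Form_pg_class) GETSTRUCT(tup);
	if (classform->relhasindex != newvalue)
	{
		classform->relhasindex = newvalue;
		/* Writes the change, does the inplace invals, and unlocks. */
		systable_inplace_update_finish(inplace_state, tup);
	}
	else
		systable_inplace_update_cancel(inplace_state);
}

Per the discussion above, _finish bottoms out in the renamed heap_inplace_update_and_unlock(), so WAL logging, the invalidation calls, and the LOCKTAG_TUPLE release all happen below this caller rather than in it.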
[ { "msg_contents": "Hello,\n\nWe are aware that, using async connection functions (`PQconnectStart`,\n`PQconnectPoll`), the `connect_timeout` parameter is not supported;\nthis is documented at\nhttps://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PQCONNECTSTARTPARAMS\n\n\"\"\"\nThe connect_timeout connection parameter is ignored when using\nPQconnectPoll; it is the application's responsibility to decide\nwhether an excessive amount of time has elapsed. Otherwise,\nPQconnectStart followed by a PQconnectPoll loop is equivalent to\nPQconnectdb.\n\"\"\"\n\nHowever, ISTM that connecting to multiple hosts is not supported\neither. I have a couple of issues I am looking into in psycopg 3:\n\n- https://github.com/psycopg/psycopg/issues/602\n- https://github.com/psycopg/psycopg/issues/674\n\nDo we have to reimplement the connection attempts loop too?\n\nAre there other policies that we would need to reimplement? Is\n`target_session_attrs` taken care of by PQconnectPoll?\n\nOn my box (testing with psql and libpq itself),\nPQconnectdb(\"host=8.8.8.8\") fails after 2m10s. Is this the result of\nsome unspecified socket connection timeout on my Ubuntu machine?\n\nIf we need to reimplement async connection to \"host=X,Y\", we will need\nto use a timeout even if the user didn't specify one, otherwise we\nwill never stop the connection attempt to X and move to Y. What\ntimeout can we specify that will not upset anyone?\n\nThank you very much\n\n-- Daniele\n\n\n", "msg_date": "Wed, 25 Oct 2023 17:03:18 +0200", "msg_from": "Daniele Varrazzo <[email protected]>", "msg_from_op": true, "msg_subject": "libpq async connection and multiple hosts" }, { "msg_contents": "On Wed, 25 Oct 2023 at 17:03, Daniele Varrazzo\n<[email protected]> wrote:\n> However, ISTM that connecting to multiple hosts is not supported\n> either. I have a couple of issues I am looking into in psycopg 3:\n>\n> - https://github.com/psycopg/psycopg/issues/602\n> - https://github.com/psycopg/psycopg/issues/674\n\nAnother approach is to use tcp_user_timeout instead of connect_timeout\nto skip non-responsive hosts. It's not completely equivalent to\nconnect_timeout though, since it also applies when the connection\nis actually being used. Also it only works on Linux afaik. It could be\nnice to add support for BSD's TCP_CONNECTIONTIMEOUT socket option.\n\n> Do we have to reimplement the connection attempts loop too?\n\nIf you want to support connect_timeout, it seems yes.\n\n> Are there other policies that we would need to reimplement? Is\n> `target_session_attrs` taken care of by PQconnectPoll?\n\nAfaict from the code target_session_attrs are handled inside\nPQconnectPoll, so you would not have to re-implement that.\nPQconnectPoll would simply fail if target_session_attrs don't match\nfor the server. You should implement load_balance_hosts=random though\nby randomizing your hosts list.\n\n\n", "msg_date": "Wed, 25 Oct 2023 17:34:54 +0200", "msg_from": "Jelte Fennema <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq async connection and multiple hosts" }, { "msg_contents": "On Wed, 25 Oct 2023 at 17:35, Jelte Fennema <[email protected]> wrote:\n\n> You should implement load_balance_hosts=random though\n> by randomizing your hosts list.\n\nGood catch. 
So it seems that, if someone wants to build an equivalent\nasync version of PQconnectdb, they need to handle on their own:\n\n- connect_timeout\n- multiple host, hostaddr, port\n- load_balance_hosts=random\n\nDoes this list sound complete?\n\n-- Daniele\n\n\n", "msg_date": "Wed, 25 Oct 2023 18:54:15 +0200", "msg_from": "Daniele Varrazzo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq async connection and multiple hosts" }, { "msg_contents": "On Wed, 25 Oct 2023 at 17:35, Jelte Fennema <[email protected]> wrote:\n\n> Another approach is to use tcp_user_timeout instead of connect_timeout\n> to skip non-responsive hosts. It's not completely equivalent to\n> connect_timeout though, since it also applies when the connection\n> is actually being used. Also it only works on Linux afaik. It could be\n> nice to add support for BSD's TCP_CONNECTIONTIMEOUT socket option.\n\nThis seems brittle and platform-dependent enough that we would surely\nreceive less grief by hardcoding a default two minutes timeout.\n\n-- Daniele\n\n\n", "msg_date": "Wed, 25 Oct 2023 18:58:10 +0200", "msg_from": "Daniele Varrazzo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq async connection and multiple hosts" }, { "msg_contents": "On Wed, 25 Oct 2023 at 18:54, Daniele Varrazzo\n<[email protected]> wrote:\n> - connect_timeout\n> - multiple host, hostaddr, port\n> - load_balance_hosts=random\n>\n> Does this list sound complete?\n\nI think you'd also want to resolve the hostnames to IPs yourself and\niterate over those one-by-one. Otherwise if the first IP returned for\nthe hostname times out, you will never connect to the others.\n\n\n", "msg_date": "Thu, 26 Oct 2023 00:10:39 +0200", "msg_from": "Jelte Fennema <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq async connection and multiple hosts" }, { "msg_contents": "On Thu, 26 Oct 2023, 00:10 Jelte Fennema, <[email protected]> wrote:\n\n> On Wed, 25 Oct 2023 at 18:54, Daniele Varrazzo\n> <[email protected]> wrote:\n> > - connect_timeout\n> > - multiple host, hostaddr, port\n> > - load_balance_hosts=random\n> >\n> > Does this list sound complete?\n>\n> I think you'd also want to resolve the hostnames to IPs yourself and\n> iterate over those one-by-one. Otherwise if the first IP returned for\n> the hostname times out, you will never connect to the others.\n>\n\nFor async connections we were already unpacking and processing the hosts\nlist, in order to perform non-blocking resolution and populate the\nhostaddr. This already accounted for the possibility of one host resolving\nto more than one address. But then we would have packed everything back\ninto a single conninfo and made a single connection attempt.\n\nhttps://github.com/psycopg/psycopg/blob/14740add6bb1aebf593a65245df21699daabfad5/psycopg/psycopg/conninfo.py#L278\n\nThe goal here was only non-blocking name resolution. What I now understand we\nshould do is to split on the hosts for sync connections too, shuffle if\nrequested, and make separate connection attempts.\n\n-- Daniele\n\n\n", "msg_date": "Thu, 26 Oct 2023 03:31:33 +0200", "msg_from": "Daniele Varrazzo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq async connection and multiple hosts" }, { "msg_contents": "On Thu, 26 Oct 2023 at 03:31, Daniele Varrazzo\n<[email protected]> wrote:\n> The goal here was only non-blocking name resolution. What I now understand we should do is to split on the hosts for sync connections too, shuffle if requested, and make separate connection attempts.\n\nIf you pack the resolved addresses in the same connection string then it\nshould be fine. The different hostaddrs will be shuffled by libpq.\n\n\n", "msg_date": "Thu, 26 Oct 2023 10:00:53 +0200", "msg_from": "Jelte Fennema <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq async connection and multiple hosts" } ]
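Pulling the libpq thread together, here is a hedged sketch of the loop an application must supply itself when it uses PQconnectStart()/PQconnectPoll(): iterate over candidate conninfo strings (already split one per host, with names resolved into hostaddr and shuffled by the caller if load_balance_hosts=random behavior is wanted), and abandon a candidate once a self-imposed deadline passes. The libpq calls are the documented ones; the surrounding structure and the helper name connect_any() are illustrative.

/*
 * Hedged sketch: async equivalent of PQconnectdb() over several candidate
 * conninfos, each given at most timeout_s seconds, as discussed above.
 */
#include <time.h>
#include <sys/select.h>
#include <libpq-fe.h>

static PGconn *
connect_any(const char *const *conninfos, int n, int timeout_s)
{
	for (int i = 0; i < n; i++)
	{
		PGconn	   *conn = PQconnectStart(conninfos[i]);
		time_t		deadline = time(NULL) + timeout_s;
		PostgresPollingStatusType status = PGRES_POLLING_WRITING;

		if (conn == NULL || PQstatus(conn) == CONNECTION_BAD)
			goto next;

		while (status != PGRES_POLLING_OK && status != PGRES_POLLING_FAILED)
		{
			int			sock = PQsocket(conn);
			fd_set		fds;
			struct timeval tv = {1, 0};	/* re-check deadline every second */

			if (sock < 0 || time(NULL) >= deadline)
				goto next;		/* our substitute for connect_timeout */
			FD_ZERO(&fds);
			FD_SET(sock, &fds);
			if (select(sock + 1,
					   status == PGRES_POLLING_READING ? &fds : NULL,
					   status == PGRES_POLLING_WRITING ? &fds : NULL,
					   NULL, &tv) > 0)
				status = PQconnectPoll(conn);	/* poll only when ready */
		}
		if (status == PGRES_POLLING_OK)
			return conn;		/* connected */
next:
		if (conn)
			PQfinish(conn);		/* failed or timed out: try next host */
	}
	return NULL;
}

The select()-then-poll shape follows the documented PQconnectPoll contract (wait for socket readiness before polling again). The per-candidate deadline is the piece libpq does not supply, which is why some hardcoded default, such as the two minutes suggested above, becomes necessary when the user gives no connect_timeout.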
[ { "msg_contents": "Hi All,\n\nAt present, we represent temp files as a signed long int number. And\ndepending on the system architecture (32 bit or 64 bit), the range of\nsigned long int varies, for example on a 32-bit system it will range\nfrom -2,147,483,648 to 2,147,483,647. AFAIU, this will not allow a\nsession to create more than 2 billion temporary files and that is not\na small number at all, but still what if we make it an unsigned long\nint which will allow a session to create 4 billion temporary files if\nneeded. I might be sounding a little stupid here because 2 billion\ntemporary files is like 2000 peta bytes (2 billion * 1GB), considering\neach temp file is 1GB in size which is not a small data size at all,\nit is a huge amount of data storage. However, since the variable we\nuse to name temporary files is a static long int (static long\ntempFileCounter = 0;), there is a possibility that this number will\nget exhausted soon if the same session is trying to create too many\ntemp files via multiple queries.\n\nJust adding few lines of code related to this from postmaster.c:\n\n/*\n * Number of temporary files opened during the current session;\n * this is used in generation of tempfile names.\n */\nstatic long tempFileCounter = 0;\n\n /*\n * Generate a tempfile name that should be unique within the current\n * database instance.\n */\n snprintf(tempfilepath, sizeof(tempfilepath), \"%s/%s%d.%ld\",\n tempdirpath, PG_TEMP_FILE_PREFIX, MyProcPid, tempFileCounter++);\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Wed, 25 Oct 2023 22:57:52 +0530", "msg_from": "Ashutosh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Should we represent temp files as unsigned long int instead of signed\n long int type?" }, { "msg_contents": "Ashutosh Sharma <[email protected]> writes:\n> At present, we represent temp files as a signed long int number. And\n> depending on the system architecture (32 bit or 64 bit), the range of\n> signed long int varies, for example on a 32-bit system it will range\n> from -2,147,483,648 to 2,147,483,647. AFAIU, this will not allow a\n> session to create more than 2 billion temporary files and that is not\n> a small number at all, but still what if we make it an unsigned long\n> int which will allow a session to create 4 billion temporary files if\n> needed.\n\nAFAIK, nothing particularly awful will happen if that counter wraps\naround. Perhaps if you gamed the system really hard, you could cause\na collision with a still-extant temp file from the previous cycle,\nbut I seriously doubt that could happen by accident. So I don't\nthink there's anything to worry about here. Maybe we could make\nthat filename pattern %lu not %ld, but minus sign is a perfectly\nacceptable filename character, so such a change would be cosmetic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Oct 2023 15:07:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we represent temp files as unsigned long int instead of\n signed long int type?" }, { "msg_contents": "On Wed, Oct 25, 2023 at 1:28 PM Ashutosh Sharma <[email protected]> wrote:\n> At present, we represent temp files as a signed long int number. And\n> depending on the system architecture (32 bit or 64 bit), the range of\n> signed long int varies, for example on a 32-bit system it will range\n> from -2,147,483,648 to 2,147,483,647. 
AFAIU, this will not allow a\n> session to create more than 2 billion temporary files and that is not\n> a small number at all, but still what if we make it an unsigned long\n> int which will allow a session to create 4 billion temporary files if\n> needed. I might be sounding a little stupid here because 2 billion\n> temporary files is like 2000 peta bytes (2 billion * 1GB), considering\n> each temp file is 1GB in size which is not a small data size at all,\n> it is a huge amount of data storage. However, since the variable we\n> use to name temporary files is a static long int (static long\n> tempFileCounter = 0;), there is a possibility that this number will\n> get exhausted soon if the same session is trying to create too many\n> temp files via multiple queries.\n\nI think we use signed integer types in a bunch of places where an\nunsigned integer type would be straight-up better, and this is one of\nthem.\n\nI don't know whether it really matters, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 25 Oct 2023 15:10:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we represent temp files as unsigned long int instead of\n signed long int type?" }, { "msg_contents": "On Wed, Oct 25, 2023 at 03:07:39PM -0400, Tom Lane wrote:\n> AFAIK, nothing particularly awful will happen if that counter wraps\n> around. Perhaps if you gamed the system really hard, you could cause\n> a collision with a still-extant temp file from the previous cycle,\n> but I seriously doubt that could happen by accident. So I don't\n> think there's anything to worry about here. Maybe we could make\n> that filename pattern %lu not %ld, but minus sign is a perfectly\n> acceptable filename character, so such a change would be cosmetic.\n\nIn the mood of removing long because it may be 4 bytes or 8 bytes\ndepending on the environment, I'd suggest to change it to either int64\nor uint64. Not that it matters much for this specific case, but that\nmakes the code more portable.\n--\nMichael", "msg_date": "Thu, 26 Oct 2023 09:40:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we represent temp files as unsigned long int instead of\n signed long int type?" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> In the mood of removing long because it may be 4 bytes or 8 bytes\n> depending on the environment, I'd suggest to change it to either int64\n> or uint64. Not that it matters much for this specific case, but that\n> makes the code more portable.\n\nThen you're going to need a not-so-portable conversion spec in the\nsnprintf call. Not sure it's any improvement.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Oct 2023 20:49:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we represent temp files as unsigned long int instead of\n signed long int type?" } ]
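Pulling the suggestions in this thread together, a sketch of what the uint64 version might look like; UINT64_FORMAT (from c.h) expands to the correct printf specifier for the platform, which addresses the concern about a not-so-portable conversion spec. This is illustrative, not a proposed patch:

/*
 * Illustrative sketch only: tempFileCounter as uint64, so its width no
 * longer depends on whether "long" is 32 or 64 bits on the platform,
 * and wraparound is no longer a practical possibility.
 */
static uint64 tempFileCounter = 0;

/* UINT64_FORMAT avoids hard-coding %lu vs %llu in the tempfile name. */
snprintf(tempfilepath, sizeof(tempfilepath), "%s/%s%d." UINT64_FORMAT,
         tempdirpath, PG_TEMP_FILE_PREFIX, MyProcPid, tempFileCounter++);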
[ { "msg_contents": "UCS_BASIC is defined in the standard as a collation based on comparing\nthe code point values, and in UTF8 that is satisfied with memcmp(), so\nthe collation locale for UCS_BASIC in Postgres is simply \"C\".\n\nBut what should the result of UPPER('á' COLLATE UCS_BASIC) be? In\nPostgres, the answer is 'á', but intuitively, one could reasonably\nexpect the answer to be 'Á'.\n\nReading the standard, it seems that LOWER()/UPPER() are defined in\nterms of the Unicode General Category (Section 4.2, \"<fold> is a pair\nof functions...\"). It is somewhat ambiguous about the case mappings,\nbut I could guess that it means the Default Case Algorithm[1].\n\nThat seems to suggest the standard answer should be 'Á' regardless of\nany COLLATE clause (though I could be misreading). I'm a bit confused\nby that... what's the standard-compatible way to specify the locale for\nUPPER()/LOWER()? If there is none, then it makes sense that Postgres\noverloads the COLLATE clause for that purpose so that users can use a\ndifferent locale if they want.\n\nBut given that UCS_BASIC is a collation specified in the standard,\nshouldn't it have ctype behavior that's as close to the standard as\npossible?\n\nRegards,\n\tJeff Davis\n\n[1] https://www.unicode.org/versions/Unicode15.1.0/ch03.pdf#G33992\n\n\n", "msg_date": "Wed, 25 Oct 2023 11:32:02 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Does UCS_BASIC have the right CTYPE?" }, { "msg_contents": "On 25.10.23 20:32, Jeff Davis wrote:\n> But what should the result of UPPER('á' COLLATE UCS_BASIC) be? In\n> Postgres, the answer is 'á', but intuitively, one could reasonably\n> expect the answer to be 'Á'.\n\nI think that's right. But what would you put into ctype to make that \nhappen?\n\n> That seems to suggest the standard answer should be 'Á' regardless of\n> any COLLATE clause (though I could be misreading). I'm a bit confused\n> by that... what's the standard-compatible way to specify the locale for\n> UPPER()/LOWER()? If there is none, then it makes sense that Postgres\n> overloads the COLLATE clause for that purpose so that users can use a\n> different locale if they want.\n\nThe standard doesn't have the notion of locale-dependent case conversion.\n\n\n\n", "msg_date": "Thu, 26 Oct 2023 16:49:55 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does UCS_BASIC have the right CTYPE?" }, { "msg_contents": "On Thu, 2023-10-26 at 16:49 +0200, Peter Eisentraut wrote:\n> On 25.10.23 20:32, Jeff Davis wrote:\n> > But what should the result of UPPER('á' COLLATE UCS_BASIC) be? In\n> > Postgres, the answer is 'á', but intuitively, one could reasonably\n> > expect the answer to be 'Á'.\n> \n> I think that's right.  But what would you put into ctype to make that\n> happen?\n\nIt looks like using Unicode files to implement\nupper()/lower()/initcap() behavior would not be very hard. The Default\nCase Algorithm only needs a simple mapping for toUppercase() and\ntoLowercase().\n\nOur initcap() is not defined in the standard, and we document that it\nonly differentiates between alphanumeric and non-alphanumeric\ncharacters, so we could get that behavior pretty easily as well. 
If we\nwanted to do it the Unicode way instead, we can follow the\ntoTitlecase() part of the Default Case Algorithm, which is based on\nword breaks and would require another lookup table for that.\n\nI've already posted a patch that includes a lookup table for the\nGeneral Category, so that could be used for the rest of the ctype\nbehavior.\n\nDoing ctype based on built-in Unicode tables would be a good use case\nfor the \"builtin\" provider that I had previously proposed.\n\n> The standard doesn't have the notion of locale-dependent case\n> conversion.\n\nThat explains it. Interesting.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 26 Oct 2023 09:21:40 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Does UCS_BASIC have the right CTYPE?" }, { "msg_contents": "On Thu, 2023-10-26 at 09:21 -0700, Jeff Davis wrote:\n> Our initcap() is not defined in the standard, and we document that it\n> only differentiates between alphanumeric and non-alphanumeric\n> characters, so we could get that behavior pretty easily as well. If\n> we\n> wanted to do it the Unicode way instead, we can follow the\n> toTitlecase() part of the Default Case Algorithm, which is based on\n> word breaks and would require another lookup table for that.\n\nCorrection: the rules for word breaks are fairly complex, so it would\nnot be worth it to try to replicate that just to support initcap(). We\ncould just use the simple, existing, and documented rules for initcap()\nwhich only differentiate between alphanumeric and not. Anyone who wants\nthe more sophisticated rules can just use an ICU collation with\ninitcap().\n\nThe point stands that it would be pretty simple to have a collation\nthat handles upper() and lower() in a standards-compliant way without\nrelying on libc or ICU. Unfortunately it's too late to call that\ncollation UCS_BASIC, but it would still be useful.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 26 Oct 2023 11:42:27 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Does UCS_BASIC have the right CTYPE?" }, { "msg_contents": "\tPeter Eisentraut wrote:\n\n> > That seems to suggest the standard answer should be 'Á' regardless of\n> > any COLLATE clause (though I could be misreading). I'm a bit confused\n> > by that... what's the standard-compatible way to specify the locale for\n> > UPPER()/LOWER()? If there is none, then it makes sense that Postgres\n> > overloads the COLLATE clause for that purpose so that users can use a\n> > different locale if they want.\n> \n> The standard doesn't have the notion of locale-dependent case conversion.\n\nNeither does Unicode, which is why the ICU functions like u_isupper()\nor u_toupper() don't take a locale argument.\n\nWith libc, isupper_l() and the other ctype functions need a locale\nargument, but given a locale's value of\n\"language[_territory][.codeset]\", in theory only the codeset part is\nactually useful.\n\nTo me the question of what we should put in pg_collation.collctype\nfor the \"ucs_basic\" collation leads to another question which is:\nwhy do we even consider collctype in the first place?\n\nWithin a database, there's only one \"codeset\", which corresponds\nto pg_database.encoding, and there's a value in pg_database.lc_ctype\nthat is normally compatible with that encoding.\nISTM that UPPER(string COLLATE \"whatever\") should always give\nthe same result as UPPER(string COLLATE pg_catalog.default). 
And\nlikewise all functions that depend on character categories could\nbasically ignore the COLLATE specification, given that our\ndatabase-wide properties are sufficient to characterize the strings\nwithin.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Thu, 26 Oct 2023 23:22:24 +0200", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does UCS_BASIC have the right CTYPE?" }, { "msg_contents": "\"Daniel Verite\" <[email protected]> writes:\n> To me the question of what we should put in pg_collation.collctype\n> for the \"ucs_basic\" collation leads to another question which is:\n> why do we even consider collctype in the first place?\n\nFor starters, C locale should certainly act different from others.\n\nI'm not sold that arguing from Unicode's behavior to other encodings\nmakes sense, either. Unicode can get away with defining that there's\nonly one case-folding rule because they have the luxury of inventing\nnew code points when the \"same\" glyph should act differently according\nto different languages' rules. Encodings with a small number of code\npoints don't have that luxury. In particular see the mess around dotted\nand dotless I/J in Turkish vs. everywhere else.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Oct 2023 17:32:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does UCS_BASIC have the right CTYPE?" }, { "msg_contents": "On Thu, 2023-10-26 at 23:22 +0200, Daniel Verite wrote:\n> Neither does Unicode, which is why the ICU functions like u_isupper()\n> or u_toupper() don't take a locale argument.\n\nu_strToUpper() accepts a locale argument:\nhttps://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/ustring_8h.html#aa64fbd4ad23af84d01c931d7cfa25f89\n\nSee also the part about tailorings here:\nhttps://www.unicode.org/versions/Unicode15.1.0/ch03.pdf#G33992\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 26 Oct 2023 15:48:26 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Does UCS_BASIC have the right CTYPE?" }, { "msg_contents": "On Thu, 2023-10-26 at 17:32 -0400, Tom Lane wrote:\n> For starters, C locale should certainly act different from others.\n\nAgreed. ctype of \"C\" is 100% stable (as implemented in Postgres with\nspecial ASCII-only semantics) and simple.\n\nI'm looking for a way to offer a new middle ground between plain \"C\"\nand buying into all of the problems with collation providers and\nlocalization. We don't need to remove functionality to do so.\n\nProviding Unicode ctype behavior doesn't look very hard. Collations\ncould select it either with a special name or by using the \"builtin\"\nprovider I proposed earlier. If the behavior does change with a new\nUnicode version it would be easier to see and less likely to affect on-\ndisk structures than a collation change.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 26 Oct 2023 16:27:10 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Does UCS_BASIC have the right CTYPE?" } ]
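To make the built-in-tables idea concrete: the simple (unconditional, locale-independent) case mapping that the Default Case Algorithm needs for upper() and lower() is just a code-point-to-code-point lookup. A hypothetical C sketch follows; the table would be generated from UnicodeData.txt and all names here are invented:

typedef struct
{
	uint32		codepoint;
	uint32		upper;
} CaseMapEntry;

/* hypothetical table, sorted by codepoint; generated from UnicodeData.txt */
static const CaseMapEntry simple_upper_map[] = {
	{0x00E1, 0x00C1},			/* á -> Á */
	{0x00E9, 0x00C9},			/* é -> É */
	/* ... thousands more generated entries ... */
};

/* Binary search; code points with no mapping uppercase to themselves. */
static uint32
unicode_simple_toupper(uint32 cp)
{
	int			lo = 0;
	int			hi = lengthof(simple_upper_map) - 1;

	while (lo <= hi)
	{
		int			mid = (lo + hi) / 2;

		if (simple_upper_map[mid].codepoint == cp)
			return simple_upper_map[mid].upper;
		if (simple_upper_map[mid].codepoint < cp)
			lo = mid + 1;
		else
			hi = mid - 1;
	}
	return cp;
}

A matching table for toLowercase(), plus the General Category table already posted, would cover the ctype behavior without consulting libc or ICU.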
[ { "msg_contents": "Hackers,\n\nIt looks like this code was missed in 39969e2a when exclusive backup was \nremoved.\n\nRegards,\n-David", "msg_date": "Wed, 25 Oct 2023 14:53:31 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Remove dead code in pg_ctl.c" }, { "msg_contents": "On Wed, Oct 25, 2023 at 02:53:31PM -0400, David Steele wrote:\n> It looks like this code was missed in 39969e2a when exclusive backup was\n> removed.\n\nIndeed. I'll plan on committing this shortly.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 25 Oct 2023 15:02:01 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dead code in pg_ctl.c" }, { "msg_contents": "On Wed, Oct 25, 2023 at 03:02:01PM -0500, Nathan Bossart wrote:\n> On Wed, Oct 25, 2023 at 02:53:31PM -0400, David Steele wrote:\n>> It looks like this code was missed in 39969e2a when exclusive backup was\n>> removed.\n> \n> Indeed. I'll plan on committing this shortly.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 25 Oct 2023 16:30:33 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Remove dead code in pg_ctl.c" }, { "msg_contents": "On 10/25/23 17:30, Nathan Bossart wrote:\n> On Wed, Oct 25, 2023 at 03:02:01PM -0500, Nathan Bossart wrote:\n>> On Wed, Oct 25, 2023 at 02:53:31PM -0400, David Steele wrote:\n>>> It looks like this code was missed in 39969e2a when exclusive backup was\n>>> removed.\n>>\n>> Indeed. I'll plan on committing this shortly.\n> \n> Committed.\n\nThank you, Nathan!\n\n-David\n\n\n", "msg_date": "Wed, 25 Oct 2023 23:38:33 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Remove dead code in pg_ctl.c" } ]
[ { "msg_contents": "Hi All,\n\nWe observed the behavioral difference when query(with custom GUC) using\nthe PARALLEL plan vs the Non-PARALLEL plan.\n\nConsider the below test:\n\nI understand the given testcase doesn't make much sense, but this is the\nsimplest\nversion of the test - to demonstrate the problem.\n\ncreate table ptest2(id bigint, tenant_id bigint);\ninsert into ptest2 select g, mod(g,10) from generate_series(1, 1000000) g;\nanalyze ptest2;\n\n-- Run the query by forcing the parallel plan.\npostgres=> set max_parallel_workers_per_gather to 2;\nSET\n-- Error expected as custom GUC not set yet.\npostgres=> select count(*) from ptest2 where current_setting('myapp.blah')\nis null;\nERROR: unrecognized configuration parameter \"myapp.blah\"\n\n-- Set the customer GUC and execute the query.\npostgres=> set myapp.blah to 999;\nSET\npostgres=> select count(*) from ptest2 where current_setting('myapp.blah')\nis null;\ncount\n-------\n0\n(1 row)\n\n\n*-- RESET the custom GUC and rerun the query.*postgres=> reset myapp.blah;\nRESET\n\n\n*-- Query should still run, but with forcing parallel plan, throwing an\nerror.*postgres=> select count(*) from ptest2 where\ncurrent_setting('myapp.blah') is null;\nERROR: unrecognized configuration parameter \"myapp.blah\"\nCONTEXT: parallel worker\n\n-- Disable the parallel plan and query just runs fine.\npostgres=#set max_parallel_workers_per_gather to 0;\nSET\npostgres=#select count(*) from ptest2 where current_setting('myapp.blah')\nis null;\n count\n-------\n 0\n(1 row)\n\n\nLooking at the code, while serializing GUC settings function\nSerializeGUCState()\ncomments says that \"We need only consider GUCs with source not\nPGC_S_DEFAULT\".\nBecause of this when custom GUC is SET, it's an entry there in the\n\"guc_nondef_list\",\nbut when it's RESET, that is not more into \"guc_nondef_list\" and worker\nis unable to access the custom GUC and ends up with the unrecognized\nparameter.\n\nWe might need another placeholder for the custom GUCs. 
Currently, we are\nmaintaining 3 linked lists in guc.c - guc_nondef_list, guc_stack_list,\nguc_report_list and to fix the above issue either we need a 4th list or do\nchanges in the existing list.\n\nThought/Comments?\n\nRegards,\nRushabh Lathia\nwww.EnterpriseDB.com", "msg_date": "Thu, 26 Oct 2023 12:40:21 +0530", "msg_from": "Rushabh Lathia <[email protected]>", "msg_from_op": true, "msg_subject": "Parallel query behaving different with custom GUCs" }, { "msg_contents": "On Thu, Oct 26, 2023 at 3:10 AM Rushabh Lathia <[email protected]> wrote:\n> -- RESET the custom GUC and rerun the query.\n> postgres=> reset myapp.blah;\n> RESET\n>\n> -- Query should still run, but with forcing parallel plan, throwing an\nerror.\n> postgres=> select count(*) from ptest2 where current_setting('myapp.blah') is null;\n> ERROR: unrecognized configuration parameter \"myapp.blah\"\n> CONTEXT: parallel worker\n>\n> -- Disable the parallel plan and query just runs fine.\n> postgres=#set max_parallel_workers_per_gather to 0;\n> SET\n> postgres=#select count(*) from ptest2 where current_setting('myapp.blah') is null;\n> count\n> -------\n> 0\n> (1 row)\n>\n> We might need another placeholder for the custom GUCs. 
Currently, we are\n> maintaining 3 linked lists in guc.c - guc_nondef_list, guc_stack_list,\n> guc_report_list and to fix the above issue either we need a 4th list or do\n> changes in the existing list.\n\nI discussed this a bit with Rushabh off-list before he posted, and was\nhoping someone else would reply, but since no one has:\n\nFormally, I think this is a bug. However, the practical impact of it\nis fairly low, because you have to be using custom GUCs in your query\nand you have to RESET them instead of using SET to put them back to\nthe default value, which I'm guessing is something that not a lot of\npeople do. I'm a bit concerned that adding the necessary tracking\ncould be expensive, and I'm not sure we want to slow down things in\nnormal cases to cater to this somewhat strange case. On the other\nhand, maybe we can fix it without too much cost, in which case that\nwould be good to do.\n\nI'm also alert to my own possible bias. Perhaps since I designed this\nmechanism, I'm more prone to viewing its deficiencies as minor than a\nneutral observer would be. So if anyone is sitting there reading this\nand thinking \"wow, I can't believe Robert doesn't think it's important\nto fix this,\" feel free to write back and say so.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 30 Oct 2023 09:21:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel query behaving different with custom GUCs" }, { "msg_contents": "On Mon, Oct 30, 2023 at 09:21:56AM -0400, Robert Haas wrote:\n> I'm also alert to my own possible bias. Perhaps since I designed this\n> mechanism, I'm more prone to viewing its deficiencies as minor than a\n> neutral observer would be. So if anyone is sitting there reading this\n> and thinking \"wow, I can't believe Robert doesn't think it's important\n> to fix this,\" feel free to write back and say so.\n\nFun. Agreed that this is a bug, and that the consequences are of\nnull for most users. And it took 7 years to find that.\n\nIf I may ask, is there an impact with functions that include SET\nclauses with custom parameters that are parallel safe? Saying that,\nif the fix is simple, I see no reason not to do something about it,\neven if that's HEAD-only.\n--\nMichael", "msg_date": "Thu, 9 Nov 2023 13:01:09 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel query behaving different with custom GUCs" } ]
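For readers following along, a heavily simplified sketch of the serialization step under discussion; the loop shape and helper names are recalled from guc.c and may not match the source exactly:

/*
 * Simplified sketch (not the actual code): SerializeGUCState() only
 * walks variables whose source is not PGC_S_DEFAULT, via the
 * not-at-default list.
 */
dlist_foreach(iter, &guc_nondef_list)
{
	struct config_generic *gconf = dlist_container(struct config_generic,
												   nondef_link, iter.cur);

	if (can_skip_gucvar(gconf))
		continue;
	serialize_variable(&ptr, &remaining, gconf);
}

/*
 * After "RESET myapp.blah" the placeholder's source is PGC_S_DEFAULT
 * again, so it is unlinked from guc_nondef_list and never serialized.
 * The worker therefore never learns that the placeholder exists and
 * raises "unrecognized configuration parameter", while the leader,
 * which still has the placeholder in its own hash table, answers
 * current_setting('myapp.blah') normally.
 */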
[ { "msg_contents": "Keeping things up to date. Here is a rebased patch with no changes from previous one.\n\n\n * John Morris", "msg_date": "Thu, 26 Oct 2023 15:00:58 +0000", "msg_from": "John Morris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "On Thu, Oct 26, 2023 at 03:00:58PM +0000, John Morris wrote:\n> Keeping things up to date. Here is a rebased patch with no changes from previous one.\n\nThis patch looks a little different than the last version I see posted [0].\nThat last version of the patch (which appears to be just about committable)\nstill applies for me, too.\n\n[0] https://postgr.es/m/BYAPR13MB2677ED1797C81779D17B414CA03EA%40BYAPR13MB2677.namprd13.prod.outlook.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 26 Oct 2023 15:34:33 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "* This patch looks a little different than the last version I see posted [0].\nThat last version of the patch (which appears to be just about committable)\n\nMy oops – I was looking at the wrong commit. The newer patch has already been committed, so pretend that last message didn’t happen. Thanks,\n John\n\n\n\n\n\n\n\n\n\n\nThis patch looks a little different than the last version I see posted [0].\nThat last version of the patch (which appears to be just about committable)\n\n\nMy oops – I was looking at the wrong commit. The newer patch has already been committed,  so pretend that last message didn’t happen. Thanks,\n   John", "msg_date": "Tue, 31 Oct 2023 18:13:24 +0000", "msg_from": "John Morris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "Here is what I meant to do earlier. As it turns out, this patch has not been merged yet.\n\nThis is a rebased version . Even though I labelled it “v3”, there should be no changes from “v2”.", "msg_date": "Wed, 1 Nov 2023 21:15:20 +0000", "msg_from": "John Morris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "On Wed, Nov 01, 2023 at 09:15:20PM +0000, John Morris wrote:\n> This is a rebased version . Even though I labelled it “v3”, there should be no changes from “v2”.\n\nThanks. I think this is almost ready, but I have to harp on the\npg_atomic_read_u64() business once more. The relevant comment in atomics.h\nhas this note:\n\n * The read is guaranteed to return a value as it has been written by this or\n * another process at some point in the past. There's however no cache\n * coherency interaction guaranteeing the value hasn't since been written to\n * again.\n\nHowever unlikely, this seems to suggest that CreateCheckPoint() could see\nan old value with your patch. Out of an abundance of caution, I'd\nrecommend changing this to pg_atomic_compare_exchange_u64() like\npg_atomic_read_u64_impl() does in generic.h.\n\n@@ -4635,7 +4629,6 @@ XLOGShmemInit(void)\n \n \tSpinLockInit(&XLogCtl->Insert.insertpos_lck);\n \tSpinLockInit(&XLogCtl->info_lck);\n-\tSpinLockInit(&XLogCtl->ulsn_lck);\n }\n\nShouldn't we do the pg_atomic_init_u64() here? 
We can still set the\ninitial value in StartupXLOG(), but it might be safer to initialize the\nvariable where we are initializing the other shared memory stuff.\n\nSince this isn't a tremendously performance-sensitive area, IMHO we should\ncode defensively to eliminate any doubts about correctness and to make it\neasier to reason about.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 1 Nov 2023 22:40:06 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "On Thu, Nov 2, 2023 at 9:10 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Wed, Nov 01, 2023 at 09:15:20PM +0000, John Morris wrote:\n> > This is a rebased version . Even though I labelled it “v3”, there should be no changes from “v2”.\n>\n> Thanks. I think this is almost ready, but I have to harp on the\n> pg_atomic_read_u64() business once more. The relevant comment in atomics.h\n> has this note:\n>\n> * The read is guaranteed to return a value as it has been written by this or\n> * another process at some point in the past. There's however no cache\n> * coherency interaction guaranteeing the value hasn't since been written to\n> * again.\n>\n> However unlikely, this seems to suggest that CreateCheckPoint() could see\n> an old value with your patch. Out of an abundance of caution, I'd\n> recommend changing this to pg_atomic_compare_exchange_u64() like\n> pg_atomic_read_u64_impl() does in generic.h.\n\n+1. pg_atomic_read_u64 provides no barrier semantics whereas\npg_atomic_compare_exchange_u64 does. Without the barrier, it might\nhappen that the value is read while the other backend is changing it.\nI think something like below providing full barrier semantics looks\nfine to me:\n\nXLogRecPtr ulsn;\n\npg_atomic_compare_exchange_u64(&XLogCtl->unloggedLSN, &ulsn, 0);\nControlFile->unloggedLSN = ulsn;\n\n> @@ -4635,7 +4629,6 @@ XLOGShmemInit(void)\n>\n> SpinLockInit(&XLogCtl->Insert.insertpos_lck);\n> SpinLockInit(&XLogCtl->info_lck);\n> - SpinLockInit(&XLogCtl->ulsn_lck);\n> }\n>\n> Shouldn't we do the pg_atomic_init_u64() here? 
We can still set the\n> initial value in StartupXLOG(), but it might be safer to initialize the\n> variable where we are initializing the other shared memory stuff.\n\nI think no one accesses the unloggedLSN in between\nCreateSharedMemoryAndSemaphores -> XLOGShmemInit and StartupXLOG.\nHowever, I see nothing wrong in doing\npg_atomic_init_u64(&XLogCtl->unloggedLSN, InvalidXLogRecPtr); in\nXLOGShmemInit.\n\n> Since this isn't a tremendously performance-sensitive area, IMHO we should\n> code defensively to eliminate any doubts about correctness and to make it\n> easier to reason about.\n\nRight.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 2 Nov 2023 23:49:38 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "On Wed, Nov 01, 2023 at 10:40:06PM -0500, Nathan Bossart wrote:\n> Since this isn't a tremendously performance-sensitive area, IMHO we should\n> code defensively to eliminate any doubts about correctness and to make it\n> easier to reason about.\n\nConcretely, like this.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 6 Nov 2023 14:33:50 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "I incorporated your suggestions and added a few more. The changes are mainly related to catching potential errors if some basic assumptions aren’t met.\n\nThere are basically 3 assumptions. Stating them as conditions we want to avoid.\n\n * We should not get an unlogged LSN before reading the control file.\n * We should not get an unlogged LSN when shutting down.\n * The unlogged LSN written out during a checkpoint shouldn’t be used.\n\nYour suggestion addressed the first problem, and it took only minor changes to address the other two.\n\nThe essential idea is, we set a value of zero in each of the 3 situations. Then we throw an Assert() if somebody tries to allocate an unlogged LSN with the value zero.\n\nI found the comment about cache coherency a bit confusing. We are dealing with a single address, so there should be no memory ordering or coherency issues. (Did I misunderstand?) I see it more as a race condition. 
Rather\n> than merely explaining why it shouldn’t happen, the new version verifies\n> the assumptions and throws an Assert() if something goes wrong.\n\nI was thinking of the comment for pg_atomic_read_u32() that I cited earlier\n[0]. This comment also notes that pg_atomic_read_u32/64() has no barrier\nsemantics. My interpretation of that comment is that these functions\nprovide no guarantee that the value returned is the most up-to-date value.\nBut my interpretation could be wrong, and maybe this is meant to highlight\nthat the value might change before we can use the return value in a\ncompare/exchange or something.\n\nI spent a little time earlier today reviewing the various underlying\nimplementations, but apparently I need to spend some more time looking at\nthose...\n\n[0] https://postgr.es/m/20231102034006.GA85609%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 6 Nov 2023 21:35:58 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "Greetings,\n\n* Nathan Bossart ([email protected]) wrote:\n> On Tue, Nov 07, 2023 at 12:57:32AM +0000, John Morris wrote:\n> > I incorporated your suggestions and added a few more. The changes are\n> > mainly related to catching potential errors if some basic assumptions\n> > aren’t met.\n> \n> Hm. Could we move that to a separate patch? We've lived without these\n> extra checks for a very long time, and I'm not aware of any related issues,\n> so I'm not sure it's worth the added complexity. And IMO it'd be better to\n> keep it separate from the initial atomics conversion, anyway.\n\nI do see the value in adding in an Assert though I don't want to throw\naway the info about what the recent unlogged LSN was when we crash. As\nthat basically boils down to a one-line addition, I don't think it\nreally needs to be in a separate patch.\n\n> > I found the comment about cache coherency a bit confusing. We are dealing\n> > with a single address, so there should be no memory ordering or coherency\n> > issues. (Did I misunderstand?) I see it more as a race condition. Rather\n> > than merely explaining why it shouldn’t happen, the new version verifies\n> > the assumptions and throws an Assert() if something goes wrong.\n> \n> I was thinking of the comment for pg_atomic_read_u32() that I cited earlier\n> [0]. This comment also notes that pg_atomic_read_u32/64() has no barrier\n> semantics. My interpretation of that comment is that these functions\n> provide no guarantee that the value returned is the most up-to-date value.\n\nThere seems to be some serious misunderstanding about what is happening\nhere. The value written into the control file for unlogged LSN during\nnormal operation does *not* need to be the most up-to-date value and\ntalking about it as if it needs to be the absolutely most up-to-date and\ncorrect value is, if anything, adding to the confusion, not reducing\nconfusion. 
The reason to write in anything other than a zero during\nthese routine checkpoints for unlogged LSN is entirely for forensics\npurposes, not because we'll ever actually use the value- during crash\nrecovery and backup/restore, we're going to reset the unlogged LSN\ncounter anyway and we're going to throw away all of unlogged table\ncontents across the entire system.\n\nWe only care about the value of the unlogged LSN being correct during\nnormal shutdown when we're writing out the shutdown checkpoint, but by\nthat time everything else has been shut down and the value absolutely\nshould not be changing.\n\nThanks,\n\nStephen", "msg_date": "Tue, 7 Nov 2023 11:47:46 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "On Tue, Nov 07, 2023 at 11:47:46AM -0500, Stephen Frost wrote:\n> We only care about the value of the unlogged LSN being correct during\n> normal shutdown when we're writing out the shutdown checkpoint, but by\n> that time everything else has been shut down and the value absolutely\n> should not be changing.\n\nI agree that's all true. I'm trying to connect how this scenario ensures\nwe see the most up-to-date value in light of this comment above\npg_atomic_read_u32():\n\n * The read is guaranteed to return a value as it has been written by this or\n * another process at some point in the past. There's however no cache\n * coherency interaction guaranteeing the value hasn't since been written to\n * again.\n\nIs there something special about all other backends being shut down that\nensures this returns the most up-to-date value and not something from \"some\npoint in the past\" as the stated contract for this function seems to\nsuggest?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 7 Nov 2023 11:02:49 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "Hi,\n\nOn 2023-11-07 11:02:49 -0600, Nathan Bossart wrote:\n> On Tue, Nov 07, 2023 at 11:47:46AM -0500, Stephen Frost wrote:\n> > We only care about the value of the unlogged LSN being correct during\n> > normal shutdown when we're writing out the shutdown checkpoint, but by\n> > that time everything else has been shut down and the value absolutely\n> > should not be changing.\n> \n> I agree that's all true. I'm trying to connect how this scenario ensures\n> we see the most up-to-date value in light of this comment above\n> pg_atomic_read_u32():\n> \n> * The read is guaranteed to return a value as it has been written by this or\n> * another process at some point in the past. There's however no cache\n> * coherency interaction guaranteeing the value hasn't since been written to\n> * again.\n> \n> Is there something special about all other backends being shut down that\n> ensures this returns the most up-to-date value and not something from \"some\n> point in the past\" as the stated contract for this function seems to\n> suggest?\n\nPractically yes - getting to the point of writing the shutdown checkpoint\nimplies having gone through a bunch of code that implies memory barriers\n(spinlocks, lwlocks).\n\nHowever, even if there's likely some other implied memory barrier that we\ncould piggyback on, the patch much simpler to understand if it doesn't change\ncoherency rules. 
There's no way the overhead could matter.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Nov 2023 16:58:16 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "Hi,\n\nOn 2023-11-07 00:57:32 +0000, John Morris wrote:\n> I found the comment about cache coherency a bit confusing. We are dealing\n> with a single address, so there should be no memory ordering or coherency\n> issues. (Did I misunderstand?) I see it more as a race condition.\n\nIMO cache coherency covers the value a single variable has in different\nthreads / processes.\n\nIn fact, the only reason there effectively is a guarantee that you're not\nseeing an outdated unloggedLSN value during shutdown checkpoints, even without\nthe spinlock or full barrier atomic op, is that the LWLockAcquire(), a few\nlines above this, would prevent both the compiler and CPU from moving the read\nof unloggedLSN to much earlier. Obviously that lwlock has a different\naddress...\n\n\nIf the patch just had done the minimal conversion, it'd already have been\ncommitted... Even if there'd be a performance reason to get rid of the memory\nbarrier around reading unloggedLSN in CreateCheckPoint(), I'd do the\nconversion in a separate commit.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Nov 2023 17:18:11 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "On Tue, Nov 07, 2023 at 04:58:16PM -0800, Andres Freund wrote:\n> On 2023-11-07 11:02:49 -0600, Nathan Bossart wrote:\n>> Is there something special about all other backends being shut down that\n>> ensures this returns the most up-to-date value and not something from \"some\n>> point in the past\" as the stated contract for this function seems to\n>> suggest?\n> \n> Practically yes - getting to the point of writing the shutdown checkpoint\n> implies having gone through a bunch of code that implies memory barriers\n> (spinlocks, lwlocks).\n\nSure.\n\n> However, even if there's likely some other implied memory barrier that we\n> could piggyback on, the patch much simpler to understand if it doesn't change\n> coherency rules. There's no way the overhead could matter.\n\nI wonder if it's worth providing a set of \"locked read\" functions. Those\ncould just do a compare/exchange with 0 in the generic implementation. For\npatches like this one where the overhead really shouldn't matter, I'd\nencourage folks to use those to make it easy to reason about correctness.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 9 Nov 2023 15:27:33 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "On Thu, Nov 09, 2023 at 03:27:33PM -0600, Nathan Bossart wrote:\n> I wonder if it's worth providing a set of \"locked read\" functions. Those\n> could just do a compare/exchange with 0 in the generic implementation. 
For\n> patches like this one where the overhead really shouldn't matter, I'd\n> encourage folks to use those to make it easy to reason about correctness.\n\nI moved this proposal to a new thread [0].\n\n[0] https://postgr.es/m/20231110205128.GB1315705%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 10 Nov 2023 14:54:14 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "Here is a new version of the patch that uses the new atomic read/write\nfunctions with full barriers that were added in commit bd5132d. Thoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 29 Feb 2024 10:34:12 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "Greetings,\n\n* Nathan Bossart ([email protected]) wrote:\n> Here is a new version of the patch that uses the new atomic read/write\n> functions with full barriers that were added in commit bd5132d. Thoughts?\n\nSaw that commit go in- glad to see it. Thanks for updating this patch\ntoo. The changes look good to me.\n\nThanks again,\n\nStephen", "msg_date": "Thu, 29 Feb 2024 11:45:07 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "Looks good to me.\n\n * John\n\nFrom: Nathan Bossart <[email protected]>\nDate: Thursday, February 29, 2024 at 8:34 AM\nTo: Andres Freund <[email protected]>\nCc: Stephen Frost <[email protected]>, John Morris <[email protected]>, Bharath Rupireddy <[email protected]>, Michael Paquier <[email protected]>, Robert Haas <[email protected]>, [email protected] <[email protected]>\nSubject: Re: Atomic ops for unlogged LSN\nHere is a new version of the patch that uses the new atomic read/write\nfunctions with full barriers that were added in commit bd5132d. Thoughts?\n\n--\nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 29 Feb 2024 16:52:30 +0000", "msg_from": "John Morris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "On Thu, Feb 29, 2024 at 10:04 PM Nathan Bossart\n<[email protected]> wrote:\n>\n> Here is a new version of the patch that uses the new atomic read/write\n> functions with full barriers that were added in commit bd5132d. Thoughts?\n\nThanks for getting the other patch in. 
The attached v6 patch LGTM.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 29 Feb 2024 23:41:45 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic ops for unlogged LSN" }, { "msg_contents": "Committed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 29 Feb 2024 14:37:52 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic ops for unlogged LSN" } ]
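For reference, a condensed sketch of the shape the conversion ended up taking, assuming the membarrier read/write helpers added by commit bd5132d; this is a paraphrase, not the verbatim commit:

/* XLogCtl: the spinlock-protected field becomes an atomic (sketch) */
pg_atomic_uint64 unloggedLSN;

/* XLOGShmemInit(): initialize where the rest of shared memory is set up */
pg_atomic_init_u64(&XLogCtl->unloggedLSN, InvalidXLogRecPtr);

/* StartupXLOG(): seed from pg_control (or reset after crash recovery) */
pg_atomic_write_membarrier_u64(&XLogCtl->unloggedLSN,
							   ControlFile->unloggedLSN);

/* GetFakeLSNForUnloggedRel(): lock-free increment replaces the spinlock */
return pg_atomic_fetch_add_u64(&XLogCtl->unloggedLSN, 1);

/* CreateCheckPoint(): full-barrier read removes any doubt about seeing
 * a stale value at the shutdown checkpoint */
checkPoint.unloggedLSN = pg_atomic_read_membarrier_u64(&XLogCtl->unloggedLSN);

The full-barrier variants keep both sides easy to reason about, per the "code defensively" principle discussed above, while the hot path is a single fetch-add.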
[ { "msg_contents": "Hi,\n\nI've seen situations a few times now where somebody has sessions that\nare \"idle in transaction\" for a long time but they feel like it should\nbe harmless because the transaction has no XID. However, the fact that\nthe transaction is idle doesn't mean it isn't running a query, because\nthere could be a cursor from which some but not all results were\nfetched. That query is suspended, but still holds a snapshot and thus\nstill holds back xmin. You can see this from pg_stat_activity because\nbackend_xmin will be set, but I've found that this is easily missed\nand sometimes confusing even when noticed. People don't necessarily\nunderstand how it's possible to have a snapshot if the session is\nidle. And even if somebody has great understanding of system\ninternals, pg_stat_activity doesn't distinguish between a session that\nholds a snapshot because (a) the transaction was started with\nrepeatable read or serializable and it has already executed at least\none command that acquired a snapshot or alternatively (b) the\ntransaction has opened some cursors which it has not closed. (Is there\na (c)? As far as I know, it has to be one of those two things.)\n\nSo I think it would be useful to improve the pg_stat_activity output\nin some way. For instance, the output could say \"idle in transaction\n(with open cursors)\" or something like that. Or we could add a whole\nnew column that specifically gives a count of how many cursors the\nsession has open, or how many active cursors, or something like that.\nI'm not exactly clear on the terminology here. It seems like the thing\nwe internally called a portal is basically a cursor, except there's\nalso an unnamed portal that gets used when you run a query without\nusing a cursor. And I think the cursors that could potentially hold\nsnapshots are the ones that are labelled PORTAL_READY. I think we\ncan't have a PORTAL_ACTIVE portal if we're idle, and that\nPORTAL_{NEW,DEFINED,DONE,FAILED} portals are not capable of holding\nany resources and thus not relevant. But I'm not 100% positive on\nthat, and I'm not exactly sure what terminology the user facing\nreporting should use.\n\nBut I think it would be nice to do something, because the current\nsituation seems like it's more confusing than it needs to be.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 26 Oct 2023 11:47:32 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "visibility of open cursors in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 2023-10-26 11:47:32 -0400, Robert Haas wrote:\n> I've seen situations a few times now where somebody has sessions that\n> are \"idle in transaction\" for a long time but they feel like it should\n> be harmless because the transaction has no XID. However, the fact that\n> the transaction is idle doesn't mean it isn't running a query, because\n> there could be a cursor from which some but not all results were\n> fetched. That query is suspended, but still holds a snapshot and thus\n> still holds back xmin. You can see this from pg_stat_activity because\n> backend_xmin will be set, but I've found that this is easily missed\n> and sometimes confusing even when noticed. People don't necessarily\n> understand how it's possible to have a snapshot if the session is\n> idle. 
And even if somebody has great understanding of system\n> internals, pg_stat_activity doesn't distinguish between a session that\n> holds a snapshot because (a) the transaction was started with\n> repeatable read or serializable and it has already executed at least\n> one command that acquired a snapshot or alternatively (b) the\n> transaction has opened some cursors which it has not closed. (Is there\n> a (c)? As far as I know, it has to be one of those two things.)\n\nDoes it really matter on that level for the user whether a snapshot exists\nbecause of repeatable read or because of a cursor? If users don't understand\nbackend_xmin - likely largely true - then the consequences of holding a\nsnapshot open because of repeatable read (or even just catalog snapshots!) is\nas severe as an open cursor.\n\n\n> So I think it would be useful to improve the pg_stat_activity output\n> in some way. For instance, the output could say \"idle in transaction\n> (with open cursors)\" or something like that.\n\nGiven snapshots held for other reasons, I think we should expose them\nsimilarly, if we do something for cursors. Otherwise people might start to\nworry only about idle-txn-with-cursors and not the equally harmful\nidle-txn-with-snapshot.\n\nMaybe something roughly like\n idle in transaction [with {snapshot|cursor|locks}]\n?\n\n\n> Or we could add a whole new column that specifically gives a count of how\n> many cursors the session has open, or how many active cursors, or something\n> like that. I'm not exactly clear on the terminology here.\n\nPortals are very weirdly underdocumented and surprisingly complicated :/\n\n\n> It seems like the thing we internally called a portal is basically a cursor,\n> except there's also an unnamed portal that gets used when you run a query\n> without using a cursor.\n\nI think you can also basically use an unnamed portal as a cursor with the\nextended protocol. The only thing is that there can only be one of them.\n\nThe interesting distinction likely is whether we have cursors that are not\nactive.\n\n\n> But I think it would be nice to do something, because the current\n> situation seems like it's more confusing than it needs to be.\n\nI think it'd be nice to make idle-in-txn a bit more informative. Not sure\nthough how much that helps most users, it's still quite granular information.\n\nI still would like a view that shows what's holding back the horizon on a\nsystem wide basis. Something like a view with the following columns and one\nrow for each database\n\n datname\n horizon\n horizon_cause = {xid, snapshot, prepared_xact, replication_connection, ...}\n xid_horizon\n xid_horizon_pid\n snapshot_horizon\n snapshot_horizon_pid\n prepared_xact_horizon\n prepared_xact_horizon_id\n replication_connection_horizon\n replication_connection_horizon_pid\n physical_slot_horizon\n physical_slot_horizon_pid\n physical_slot_horizon_name\n logical_slot_horizon\n logical_slot_horizon_pid\n logical_slot_horizon_name\n\nPerhaps with one additional row with a NULL datname showing the system wide\nhorizons (one database could have the oldest xid_horizon and another the\noldest logical_slot_horizon, so it's not a simple order by).\n\n\nI recently mused in some other thread that I really would like to have an\napproximate xid->timestamp mapping, so that we could assign an age to these in\na unit that makes sense to humans. 
Particularly snapshots / xmin can be very\nconfusing in that regard because a relatively recent transaction can hold back\nthe overall horizon further than the time the transaction started, if some old\ntransaction was still running at the time.\n\n\nPerhaps we could add at least timestamps to these in some other\nway. E.g. recording a timestamp whenever a transaction is prepared, a slot is\nreleased... Likely recording one whenever a snapshot is acquired would be too\nexpensive tho - but we could use state_change as an approximation?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 26 Oct 2023 10:41:36 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: visibility of open cursors in pg_stat_activity" }, { "msg_contents": "On Thu, Oct 26, 2023 at 1:41 PM Andres Freund <[email protected]> wrote:\n> Does it really matter on that level for the user whether a snapshot exists\n> because of repeatable read or because of a cursor? If users don't understand\n> backend_xmin - likely largely true - then the consequences of holding a\n> snapshot open because of repeatable read (or even just catalog snapshots!) is\n> as severe as an open cursor.\n\nSure it matters. How is the user supposed to know what they need to go\nfix? If there's application code that says BEGIN TRANSACTION\nSERIALIZABLE, that's a different thing to look for than if there's\napplication code that fails to close a cursor somewhere.\n\n> Given snapshots held for other reasons, I think we should expose them\n> similarly, if we do something for cursors. Otherwise people might start to\n> worry only about idle-txn-with-cursors and not the equally harmful\n> idle-txn-with-snapshot.\n>\n> Maybe something roughly like\n> idle in transaction [with {snapshot|cursor|locks}]\n> ?\n\nWell every transaction is going to have a lock on its own VXID, if\nnothing else. And in almost all interesting cases, more than that.\n\nThe point for me is that if you're using cursors, \"idle in\ntransaction\" is misleading in a way that it isn't if you have a\nsnapshot due to serializability or something. Consider two users. Each\nbegins a transaction, then each runs a query that returns a large\nnumber of rows, considerably in excess of what will fit in the network\nbuffer. Each user then reads half of the rows and then goes into the\ntank to process the data they have received thus far. User A does this\nby sending the query using the simple query protocol and reading the\nresults one at a time using single-row mode. User B does this by\nsending the query using the extended query protocol and fetches the\nrows in batches by sending successive Execute messages each with a\nnon-zero row count. When user A goes into the tank, their session is\nshown as active. When user B goes into the tank, their session is\nshown as idle-in-transaction. But these situations are actually very\nsimilar to each other. In both cases, execution is suspended because\nthe client is thinking.\n\nThe case of holding a snapshot because of repeatable read or\nserializable isolation is, in my view, different. In that case, while\nit's true that the session is holding onto resources that might cause\nsome problems for other things happening on the system, saying that\nthe session is idle in transaction is still accurate. The problems are\ncaused by transaction-lifespan resources. But in the case where there\nare active cursors, the backend is actually in the middle of executing\na query, or maybe many of them, but at least one. 
Sure, at the exact\nmoment that we see the status as \"idle in transaction\", it isn't\nactively trying to run any of them, but that feels like a pedantic\nargument. If you put a pot of water on the stove to boil and wait for\nit to heat up, are you actively cooking or are you idle? As here, I\nthink the answer is \"something in between.\"\n\n> I still would like a view that shows what's holding back the horizon on a\n> system wide basis. Something like a view with the following columns and one\n> row for each database\n\nSeems like it's just the same information we already have in\npg_stat_activity, pg_prepared_xacts, and pg_replslots. Maybe\nreformatting is useful but it doesn't seem groundbreaking. It would be\ngroundbreaking if we could surface information that's not visible now,\nlike the names and associated queries of cursors in sessions not our\nown. But that would be much more expensive to expose.\n\n> I recently mused in some other thread that I really would like to have an\n> approximate xid->timestamp mapping, so that we could assign an age to these in\n> a unit that makes sense to humans. Particularly snapshots / xmin can be very\n> confusing in that regard because a relatively recent transaction can hold back\n> the overall horizon further than the time the transaction started, if some old\n> transaction was still running at the time.\n>\n> Perhaps we could add at least timestamps to these in some other\n> way. E.g. recording a timestamp whenever a transaction is prepared, a slot is\n> released... Likely recording one whenever a snapshot is acquired would be too\n> expensive tho - but we could use state_change as an approximation?\n\nI'm not saying this is a terrible idea or anything, but in my\nexperience the problem isn't usually that people don't understand that\nold XIDs are old -- it's that they don't know where to find the XIDs\nin the first place.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 26 Oct 2023 15:51:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: visibility of open cursors in pg_stat_activity" } ]
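As a rough illustration of how a per-session "open cursors" count could be derived, here is a hypothetical helper modeled on pg_cursor() in portalmem.c, which already iterates the backend-local portal hash table; the function name is invented:

/*
 * Hypothetical sketch: count suspended cursors, i.e. portals that have
 * been executed at least partially (PORTAL_READY) and may therefore be
 * holding a snapshot.  Active and not-yet-run portals are excluded.
 */
static int
CountReadyPortals(void)
{
	HASH_SEQ_STATUS status;
	PortalHashEnt *hentry;
	int			n = 0;

	hash_seq_init(&status, PortalHashTable);
	while ((hentry = (PortalHashEnt *) hash_seq_search(&status)) != NULL)
	{
		Portal		portal = hentry->portal;

		if (portal->status == PORTAL_READY)
			n++;
	}
	return n;
}

Note this only sees the current backend's own portals; surfacing the count in pg_stat_activity for other sessions would require each backend to publish it in shared memory, which is part of why the cheaper status-string change is attractive.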
[ { "msg_contents": "In bug #18163 [1], Alexander proved the misgivings I had in [2]\nabout catcache detoasting being possibly unsafe:\n\n>> BTW, while nosing around I found what seems like a very nasty related\n>> bug. Suppose that a catalog tuple being loaded into syscache contains\n>> some toasted fields. CatalogCacheCreateEntry will flatten the tuple,\n>> involving fetches from toast tables that will certainly cause\n>> AcceptInvalidationMessages calls. What if one of those should have\n>> invalidated this tuple? We will not notice, because it's not in\n>> the hashtable yet. When we do add it, we will mark it not-dead,\n>> meaning that the stale entry looks fine and could persist for a long\n>> while.\n\nAttached is a POC patch for fixing this. The idea is that if we get\nan invalidation while trying to detoast a catalog tuple, we should\ngo around and re-read the tuple a second time to get an up-to-date\nversion (or, possibly, discover that it no longer exists). In the\ncase of SearchCatCacheList, we have to drop and reload the entire\ncatcache list, but fortunately not a lot of new code is needed.\n\nThe detection of \"get an invalidation\" could be refined: what I did\nhere is to check for any advance of SharedInvalidMessageCounter,\nwhich clearly will have a significant number of false positives.\nHowever, the only way I see to make that a lot better is to\ntemporarily create a placeholder catcache entry (probably a negative\none) with the same keys, and then see if it got marked dead.\nThis seems a little expensive, plus I'm afraid that it'd be actively\nwrong in the recursive-lookup cases that the existing comment in\nSearchCatCacheMiss is talking about (that is, the catcache entry\nmight mislead any recursive lookup that happens).\n\nMoreover, if we did do something like that then the new code\npaths would be essentially untested. As the patch stands, the\nreload path seems to get taken 10 to 20 times during a\n\"make installcheck-parallel\" run of the core regression tests\n(out of about 150 times that catcache detoasting is required).\nProbably all of those are false-positive cases, but at least\nthey're exercising the logic.\n\nSo I'm inclined to leave it like this, but perhaps somebody\nelse will have a different opinion.\n\n(BTW, there's a fair amount of existing catcache.c code that\nwill need to be indented another tab stop, but in the interests\nof keeping the patch legible I didn't do that yet.)\n\nComments?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/18163-859bad19a43edcf6%40postgresql.org\n[2] https://www.postgresql.org/message-id/1389919.1697144487%40sss.pgh.pa.us", "msg_date": "Thu, 26 Oct 2023 16:43:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Recovering from detoast-related catcache invalidations" }, { "msg_contents": "I wrote:\n> In bug #18163 [1], Alexander proved the misgivings I had in [2]\n> about catcache detoasting being possibly unsafe:\n> ...\n> Attached is a POC patch for fixing this.\n\nThe cfbot pointed out that this needed a rebase. No substantive\nchanges.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 17 Nov 2023 15:35:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recovering from detoast-related catcache invalidations" } ]
[ { "msg_contents": "Hackers,\n\nThis was originally proposed in [1] but that thread went through a \nnumber of different proposals so it seems better to start anew.\n\nThe basic idea here is to simplify and harden recovery by getting rid of \nbackup_label and storing recovery information directly in pg_control. \nInstead of backup software copying pg_control from PGDATA, it stores an \nupdated version that is returned from pg_backup_stop(). I believe this \nis better for the following reasons:\n\n* The user can no longer remove backup_label and get what looks like a \nsuccessful restore (while almost certainly causing corruption). If \npg_control is removed the cluster will not start. The user may try \npg_resetwal, but I think that tool makes it pretty clear that corruption \nwill result from its use. We could also modify pg_resetwal to complain \nif recovery info is present in pg_control.\n\n* We don't need to worry about backup software seeing a torn copy of \npg_control, since Postgres can safely read it out of memory and provide \na valid copy via pg_backup_stop(). This solves [2] without needing to \nwrite pg_control via a temp file, which may affect performance on a \nstandby. Unfortunately, this solution cannot be back patched.\n\n* For backup from standby, we no longer need to instruct the backup \nsoftware to copy pg_control last. In fact the backup software should not \ncopy pg_control from PGDATA at all.\n\nSince backup_label is now gone, the fields that used to be in \nbackup_label are now provided as columns returned from pg_backup_start() \nand pg_backup_stop() and the backup history file is still written to the \narchive. For pg_basebackup we would have the option of writing the \nfields into the JSON manifest, storing them to a file (e.g. \nbackup.info), or just ignoring them. None of the fields are required for \nrecovery but backup software may be very interested in them.\n\nI updated pg_rewind but I'm not very confident in the tests. When I \nremoved backup_label processing, but before I updated pg_rewind to write \nrecovery info into pg_control, all the rewind tests passed.\n\nThis patch highlights the fact that we still have no tests for the \nlow-level backup method. I modified pgBackRest to work with this patch \nand the entire test suite ran without any issues, but in-core tests \nwould be good to have. I'm planning to work on those myself as a \nseparate patch.\n\nThis patch would also make the proposal in [3] obsolete since there is \nno need to rename backup_label if it is gone.\n\nI know that outputting pg_control as bytea is going to be a bit \ncontroversial. Software that is using psql get run pg_backup_stop() \ncould use encode() to get pg_control as text and then decode it later. \nAlternately, we could update ReadControlFile() to recognize a \nbase64-encoded pg_control file. I'm not sure dealing with binary data is \nthat much of a problem, though, and if the backup software gets it wrong \nthen recovery with fail on an invalid pg_control file.\n\nLastly, I think there are improvements to be made in recovery that go \nbeyond this patch. I originally set out to load the recovery info into \n*just* the existing fields in pg_control but it required so many changes \nto recovery that I decided it was too dangerous to do all in one patch. \nThis patch very much takes the \"backup_label in pg_control\" approach, \nthough I reused fields where possible. The added fields, e.g. 
\nbackupRecoveryRequested, also allow us to keep the user experience \npretty much the same in terms of messages and errors.\n\nThoughts?\n\nRegards,\n-David\n\n[1] \nhttps://postgresql.org/message-id/1330cb48-4e47-03ca-f2fb-b144b49514d8%40pgmasters.net\n[2] \nhttps://postgresql.org/message-id/20221123014224.xisi44byq3cf5psi%40awork3.anarazel.de\n[3] \nhttps://postgresql.org/message-id/eb3d1aae-1a75-bcd3-692a-38729423168f%40pgmasters.net", "msg_date": "Thu, 26 Oct 2023 17:02:20 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Thu, Oct 26, 2023 at 2:02 PM David Steele <[email protected]> wrote:\n\n> Hackers,\n>\n> This was originally proposed in [1] but that thread went through a\n> number of different proposals so it seems better to start anew.\n>\n> The basic idea here is to simplify and harden recovery by getting rid of\n> backup_label and storing recovery information directly in pg_control.\n> Instead of backup software copying pg_control from PGDATA, it stores an\n> updated version that is returned from pg_backup_stop(). I believe this\n> is better for the following reasons:\n>\n> * The user can no longer remove backup_label and get what looks like a\n> successful restore (while almost certainly causing corruption). If\n> pg_control is removed the cluster will not start. The user may try\n> pg_resetwal, but I think that tool makes it pretty clear that corruption\n> will result from its use. We could also modify pg_resetwal to complain\n> if recovery info is present in pg_control.\n>\n> * We don't need to worry about backup software seeing a torn copy of\n> pg_control, since Postgres can safely read it out of memory and provide\n> a valid copy via pg_backup_stop(). This solves [2] without needing to\n> write pg_control via a temp file, which may affect performance on a\n> standby. Unfortunately, this solution cannot be back patched.\n>\n\nAre we planning on dealing with torn writes in the back branches in some\nway or are we just throwing in the towel and saying the old method is too\nerror-prone to exist/retain and therefore the goal of the v17 changes is to\nnot only provide a better way but also to ensure the old way no longer\nworks? It seems sufficient to change the output signature of\npg_backup_stop to accomplish that goal though I am pondering whether an\nexplicit check and error for seeing the backup_label file would be\nwarranted.\n\nIf we are going to solve the torn writes problem completely then while I\nagree the new way is superior, implementing it doesn't have to mean\nexisting tools built to produce backup_label and rely upon the pg_control\nin the data directory need to be forcibly broken.\n\n\n> I know that outputting pg_control as bytea is going to be a bit\n> controversial. Software that is using psql get run pg_backup_stop()\n> could use encode() to get pg_control as text and then decode it later.\n> Alternately, we could update ReadControlFile() to recognize a\n> base64-encoded pg_control file. I'm not sure dealing with binary data is\n> that much of a problem, though, and if the backup software gets it wrong\n> then recovery with fail on an invalid pg_control file.\n>\n\nCan we not figure out some way to place the relevant files onto the server\nsomewhere so that a simple \"cp\" command would work? 
Have pg_backup_stop\nreturn paths instead of contents, those paths being \"$TEMP_DIR\"/<random\nunique new directory>/pg_control.conf (and tablespace_map)\n\nDavid J.\n\n", "msg_date": "Thu, 26 Oct 2023 14:27:52 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 10/26/23 17:27, David G. 
Johnston wrote:\n> On Thu, Oct 26, 2023 at 2:02 PM David Steele <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> Are we planning on dealing with torn writes in the back branches in some \n> way or are we just throwing in the towel and saying the old method is \n> too error-prone to exist/retain \n\nWe are still planning to address this issue in the back branches.\n\n> and therefore the goal of the v17 \n> changes is to not only provide a better way but also to ensure the old \n> way no longer works?  It seems sufficient to change the output signature \n> of pg_backup_stop to accomplish that goal though I am pondering whether \n> an explicit check and error for seeing the backup_label file would be \n> warranted.\n\nWell, if the backup tool is just copying the second column of output to \nthe backup_label, then it won't break. Of course in that case, restores \nwon't work correctly but you would not get an error. Testing would show \nthat it is not working properly and backup tools should certainly be tested.\n\nEven so, I'm OK with an explicit check for backup_label. Let's see what \nothers think.\n\n> If we are going to solve the torn writes problem completely then while I \n> agree the new way is superior, implementing it doesn't have to mean \n> existing tools built to produce backup_label and rely upon the \n> pg_control in the data directory need to be forcibly broken.\n\nIt is a pretty easy update to any backup software that supports \nnon-exclusive backup. I was able to make the changes to pgBackRest in \nless than an hour. We've made major changes to backup and restore in \nalmost every major version of PostgreSQL for a while: non-exclusive \nbackup in 9.6, dir renames in 10, variable WAL size in 11, new recovery \nlocation in 12, hard recovery target errors in 13, and changes to \nnon-exclusive backup and removal of exclusive backup in 15. In 17 we are \nalready looking at new page and segment sizes.\n\n> I know that outputting pg_control as bytea is going to be a bit\n> controversial. Software that is using psql get run pg_backup_stop()\n> could use encode() to get pg_control as text and then decode it later.\n> Alternately, we could update ReadControlFile() to recognize a\n> base64-encoded pg_control file. I'm not sure dealing with binary\n> data is\n> that much of a problem, though, and if the backup software gets it\n> wrong\n> then recovery with fail on an invalid pg_control file.\n> \n> Can we not figure out some way to place the relevant files onto the \n> server somewhere so that a simple \"cp\" command would work?  Have \n> pg_backup_stop return paths instead of contents, those paths being \n> \"$TEMP_DIR\"/<random unique new directory>/pg_control.conf (and \n> tablespace_map)\n\nNobody has been able to figure this out, and some of us have been \nthinking about it for years. It just doesn't seem possible to reliably \ntell the difference between a cluster that was copied and one that \nsimply crashed.\n\nIf cp is really the backup tool being employed, I would recommend using \npg_basebackup. cp has flaws that could lead to corruption, and of course \ndoes not at all take into account the archive required to make a backup \nconsistent, directories to be excluded, the order of copying pg_control \non backup from standby, etc., etc.\n\nBackup/restore is not a simple endeavor and we don't do anyone favors \npretending that it is.\n\nRegards,\n-David\n\n\n", "msg_date": "Fri, 27 Oct 2023 10:10:42 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Fri, Oct 27, 2023 at 7:10 AM David Steele <[email protected]> wrote:\n\n> On 10/26/23 17:27, David G. Johnston wrote:\n>\n> > Can we not figure out some way to place the relevant files onto the\n> > server somewhere so that a simple \"cp\" command would work?  Have\n> > pg_backup_stop return paths instead of contents, those paths being\n> > \"$TEMP_DIR\"/<random unique new directory>/pg_control.conf (and\n> > tablespace_map)\n>\n> Nobody has been able to figure this out, and some of us have been\n> thinking about it for years. It just doesn't seem possible to reliably\n> tell the difference between a cluster that was copied and one that\n> simply crashed.\n>\n> If cp is really the backup tool being employed, I would recommend using\n> pg_basebackup. 
cp has flaws that could lead to corruption, and of course \n> does not at all take into account the archive required to make a backup \n> consistent, directories to be excluded, the order of copying pg_control \n> on backup from standby, etc., etc.\n\nLet me modify that to make it a bit more clear, I actually wouldn't care if\npg_backup_end outputs an entire binary pg_control file as part of the SQL\nresultset.\n\nMy proposal would be to, in addition, place in the temporary directory on\nthe server, Postgres-written versions of pg_control and tablespace_map as\npart of the pg_backup_end processing. The client software would then have\na choice. Write the contents of the SQL resultset to newly created binary\nmode files in the destination, or, copy the server-written files from the\ntemporary directory to the destination.\n\nThat said, I'm starting to dislike that idea myself. It only really makes\nsense if the files could be placed in the data directory but that isn't\ndoable given concurrent backups and not wanting to place the source server\ninto an inconsistent state.\n\nDavid J.\n\n
", "msg_date": "Fri, 27 Oct 2023 10:45:28 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 10/27/23 13:45, David G. Johnston wrote:\n> \n> Let me modify that to make it a bit more clear, I actually wouldn't care \n> if pg_backup_end outputs an entire binary pg_control file as part of the \n> SQL resultset.\n> \n> My proposal would be to, in addition, place in the temporary directory \n> on the server, Postgres-written versions of pg_control and \n> tablespace_map as part of the pg_backup_end processing.  The client \n> software would then have a choice.  Write the contents of the SQL \n> resultset to newly created binary \n> mode files in the destination, or, \n> copy the server-written files from the \n> temporary directory to the destination.\n> \n> That said, I'm starting to dislike that idea myself.  It only really \n> makes sense if the files could be placed in the data directory but that \n> isn't doable given concurrent backups and not wanting to place the \n> source server into an inconsistent state.\n\nPretty much the conclusion I have come to myself over the years.\n\nRegards,\n-David\n\n\n\n", "msg_date": "Sun, 5 Nov 2023 13:30:07 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "Rebased on 151ffcf6.", "msg_date": "Sun, 5 Nov 2023 13:45:39 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Fri, Oct 27, 2023 at 10:10:42AM -0400, David Steele wrote:\n> We are still planning to address this issue in the back branches.\n\nFWIW, redesigning the backend code in charge of doing base backups in\nthe back branches is out of scope. Based on a read of the proposed\npatch, it includes catalog changes which would require a catversion\nbump, so that's not going to work anyway.\n--\nMichael", "msg_date": "Mon, 6 Nov 2023 14:05:59 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Sun, Nov 05, 2023 at 01:45:39PM -0400, David Steele wrote:\n> Rebased on 151ffcf6.\n\nI like this patch a lot. 
Even if the backup_label file is removed, we\nstill have all the debug information from the backup history file,\nthanks to its LABEL, BACKUP METHOD and BACKUP FROM, so no information\nis lost. It does a 1:1 replacement of the contents parsed from the\nbackup_label needed by recovery by fetching them from the control\nfile. Sounds like a straight-forward change to me.\n\nThe patch is failing the recovery test 039_end_of_wal.pl. Could you\nlook at the failure?\n\n /* Build and save the contents of the backup history file */\n- history_file = build_backup_content(state, true);\n+ history_file = build_backup_content(state);\n\nbuild_backup_content() sounds like an incorrect name if it is a\nroutine onlyused to build the contents of backup history files. \n\nWhy is there nothing updated in src/bin/pg_controldata/?\n\n+ /* Clear fields used to initialize recovery */\n+ ControlFile->backupCheckPoint = InvalidXLogRecPtr;\n+ ControlFile->backupStartPointTLI = 0;\n+ ControlFile->backupRecoveryRequired = false;\n+ ControlFile->backupFromStandby = false;\n\nThese variables in the control file are cleaned up when the\nbackup_label file was read previously, but backup_label is renamed to\nbackup_label.old a bit later than that. Your logic looks correct seen\nfrom here, but shouldn't these variables be set much later, aka just\n*after* UpdateControlFile(). This gap between the initialization of\nthe control file and the in-memory reset makes the code quite brittle,\nIMO.\n\n-\t\tbasebackup_progress_wait_wal_archive(&state);\n-\t\tdo_pg_backup_stop(backup_state, !opt->nowait);\n\nWhy is that moved?\n\n- The backup label\n- file includes the label string you gave to <function>pg_backup_start</function>,\n- as well as the time at which <function>pg_backup_start</function> was run, and\n- the name of the starting WAL file. In case of confusion it is therefore\n- possible to look inside a backup file and determine exactly which\n- backup session the dump file came from. The tablespace map file includes\n+ The tablespace map file includes\n\nIt may be worth mentioning that the backup history file holds this\ninformation on the primary's pg_wal, as well.\n\nThe changes in sendFileWithContent() may be worth a patch of its own.\n\n--- a/src/include/catalog/pg_control.h\n+++ b/src/include/catalog/pg_control.h\n@@ -146,6 +146,9 @@ typedef struct ControlFileData\n@@ -160,14 +163,25 @@ typedef struct ControlFileData\n XLogRecPtr minRecoveryPoint;\n TimeLineID minRecoveryPointTLI;\n+ XLogRecPtr backupCheckPoint;\n XLogRecPtr backupStartPoint;\n+ TimeLineID backupStartPointTLI;\n XLogRecPtr backupEndPoint;\n+ bool backupRecoveryRequired;\n+ bool backupFromStandby;\n\nThis increases the size of the control file from 296B to 312B with an\n8-byte alignment, as far as I can see. The size of the control file\nhas been always a sensitive subject especially with the hard limit of\nPG_CONTROL_MAX_SAFE_SIZE. Well, the point of this patch is that this\nis the price to pay to prevent users from doing something stupid with\na removal of a backup_label when they should not. 
Do others have an\nopinion about this increase in size?\n\nActually, grouping backupStartPointTLI and minRecoveryPointTLI should\nreduce more the size with some alignment magic, no?\n\n-\t/*\n-\t * BACKUP METHOD lets us know if this was a typical backup (\"streamed\",\n-\t * which could mean either pg_basebackup or the pg_backup_start/stop\n-\t * method was used) or if this label came from somewhere else (the only\n-\t * other option today being from pg_rewind). If this was a streamed\n-\t * backup then we know that we need to play through until we get to the\n-\t * end of the WAL which was generated during the backup (at which point we\n-\t * will have reached consistency and backupEndRequired will be reset to be\n-\t * false).\n-\t */\n-\tif (fscanf(lfp, \"BACKUP METHOD: %19s\\n\", backuptype) == 1)\n-\t{\n-\t\tif (strcmp(backuptype, \"streamed\") == 0)\n-\t\t\t*backupEndRequired = true;\n-\t}\n\nbackupRecoveryRequired in the control file is switched to false for\npg_rewind and true for streamed backups. My gut feeling is telling me\nthat this should be OK, as out-of-core tools would need an upgrade if\nthey relied on the backup_label file anyway. I can see that this\nchange makes us lose some documentation, unfortunately. Shouldn't\nthese removed lines be moved to pg_control.h instead for the\ndescription of backupEndRequired?\n\ndoc/src/sgml/ref/pg_rewind.sgml and\nsrc/backend/access/transam/xlogrecovery.c still include references to\nthe backup_label file.\n--\nMichael", "msg_date": "Mon, 6 Nov 2023 15:35:28 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 11/6/23 01:05, Michael Paquier wrote:\n> On Fri, Oct 27, 2023 at 10:10:42AM -0400, David Steele wrote:\n>> We are still planning to address this issue in the back branches.\n> \n> FWIW, redesigning the backend code in charge of doing base backups in\n> the back branches is out of scope. Based on a read of the proposed\n> patch, it includes catalog changes which would require a catversion\n> bump, so that's not going to work anyway.\n\nI did not mean this patch -- rather some variation of what Thomas has \nbeen working on, more than likely.\n\nRegards,\n-David\n\n\n", "msg_date": "Mon, 6 Nov 2023 10:48:56 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 11/6/23 02:35, Michael Paquier wrote:\n> On Sun, Nov 05, 2023 at 01:45:39PM -0400, David Steele wrote:\n>> Rebased on 151ffcf6.\n> \n> I like this patch a lot. Even if the backup_label file is removed, we\n> still have all the debug information from the backup history file,\n> thanks to its LABEL, BACKUP METHOD and BACKUP FROM, so no information\n> is lost. It does a 1:1 replacement of the contents parsed from the\n> backup_label needed by recovery by fetching them from the control\n> file. Sounds like a straight-forward change to me.\n\nThat's the plan, at least!\n\n> The patch is failing the recovery test 039_end_of_wal.pl. Could you\n> look at the failure?\n\nI'm not seeing this failure, and CI seems happy [1]. 
Can you give \ndetails of the error message?\n\n> /* Build and save the contents of the backup history file */\n> - history_file = build_backup_content(state, true);\n> + history_file = build_backup_content(state);\n> \n> build_backup_content() sounds like an incorrect name if it is a\n> routine onlyused to build the contents of backup history files.\n\nGood point, I have renamed this to build_backup_history_content().\n\n> Why is there nothing updated in src/bin/pg_controldata/?\n\nOops, added.\n\n> + /* Clear fields used to initialize recovery */\n> + ControlFile->backupCheckPoint = InvalidXLogRecPtr;\n> + ControlFile->backupStartPointTLI = 0;\n> + ControlFile->backupRecoveryRequired = false;\n> + ControlFile->backupFromStandby = false;\n> \n> These variables in the control file are cleaned up when the\n> backup_label file was read previously, but backup_label is renamed to\n> backup_label.old a bit later than that. Your logic looks correct seen\n> from here, but shouldn't these variables be set much later, aka just\n> *after* UpdateControlFile(). This gap between the initialization of\n> the control file and the in-memory reset makes the code quite brittle,\n> IMO.\n\nIf we set these fields where backup_label was renamed, the logic would \nnot be exactly the same since pg_control won't be updated until the next \ntime through the loop. Since the fields should be updated before \nUpdateControlFile() I thought it made sense to keep all the updates \ntogether.\n\nOverall I think it is simpler, and we don't need to acquire a lock on \nControlFile.\n\n> -\t\tbasebackup_progress_wait_wal_archive(&state);\n> -\t\tdo_pg_backup_stop(backup_state, !opt->nowait);\n> \n> Why is that moved?\n\ndo_pg_backup_stop() generates the updated pg_control so it needs to run \nbefore we transmit pg_control.\n\n> - The backup label\n> - file includes the label string you gave to <function>pg_backup_start</function>,\n> - as well as the time at which <function>pg_backup_start</function> was run, and\n> - the name of the starting WAL file. In case of confusion it is therefore\n> - possible to look inside a backup file and determine exactly which\n> - backup session the dump file came from. The tablespace map file includes\n> + The tablespace map file includes\n> \n> It may be worth mentioning that the backup history file holds this\n> information on the primary's pg_wal, as well.\n\nOK, reworded.\n\n> The changes in sendFileWithContent() may be worth a patch of its own.\n\nThomas included this change in his pg_basebackup changes so I did the \nsame. Maybe wait a bit before we split this out? Seems like a pretty \nsmall change...\n\n> --- a/src/include/catalog/pg_control.h\n> +++ b/src/include/catalog/pg_control.h\n> @@ -146,6 +146,9 @@ typedef struct ControlFileData\n> @@ -160,14 +163,25 @@ typedef struct ControlFileData\n> XLogRecPtr minRecoveryPoint;\n> TimeLineID minRecoveryPointTLI;\n> + XLogRecPtr backupCheckPoint;\n> XLogRecPtr backupStartPoint;\n> + TimeLineID backupStartPointTLI;\n> XLogRecPtr backupEndPoint;\n> + bool backupRecoveryRequired;\n> + bool backupFromStandby;\n> \n> This increases the size of the control file from 296B to 312B with an\n> 8-byte alignment, as far as I can see. The size of the control file\n> has been always a sensitive subject especially with the hard limit of\n> PG_CONTROL_MAX_SAFE_SIZE. Well, the point of this patch is that this\n> is the price to pay to prevent users from doing something stupid with\n> a removal of a backup_label when they should not. 
Do others have an\n> opinion about this increase in size?\n> \n> Actually, grouping backupStartPointTLI and minRecoveryPointTLI should\n> reduce more the size with some alignment magic, no?\n\nI thought about this, but it seemed to me that existing fields had been \npositioned to make the grouping logical rather than to optimize \nalignment, e.g. minRecoveryPointTLI. Ideally that would have been placed \nnear backupEndRequired (or vice versa). But if the general opinion is to \nrearrange for alignment, I'm OK with that.\n\n> backupRecoveryRequired in the control file is switched to false for\n> pg_rewind and true for streamed backups. My gut feeling is telling me\n> that this should be OK, as out-of-core tools would need an upgrade if\n> they relied on the backup_label file anyway. I can see that this\n> change makes us lose some documentation, unfortunately. Shouldn't\n> these removed lines be moved to pg_control.h instead for the\n> description of backupEndRequired?\n\nUpdated description in pg_control.h -- it's a bit vague but not sure it \nis a good idea to get into the inner workings of pg_rewind here?\n\n> doc/src/sgml/ref/pg_rewind.sgml and\n> src/backend/access/transam/xlogrecovery.c still include references to\n> the backup_label file.\n\nFixed.\n\nAttached is a new patch based on 18b585155.\n\nRegards,\n-David\n\n[1] https://cirrus-ci.com/build/4939808120766464", "msg_date": "Mon, 6 Nov 2023 17:39:02 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Mon, Nov 06, 2023 at 05:39:02PM -0400, David Steele wrote:\n> On 11/6/23 02:35, Michael Paquier wrote:\n>> The patch is failing the recovery test 039_end_of_wal.pl. Could you\n>> look at the failure?\n> \n> I'm not seeing this failure, and CI seems happy [1]. 
Since the fields should be updated before\n> UpdateControlFile() I thought it made sense to keep all the updates\n> together.\n> \n> Overall I think it is simpler, and we don't need to acquire a lock on\n> ControlFile.\n\nWhat you are proposing is the same as what we already do for\nbackupEndRequired or backupStartPoint in the control file when\ninitializing recovery, so objection withdrawn.\n\n> Thomas included this change in his pg_basebackup changes so I did the same.\n> Maybe wait a bit before we split this out? Seems like a pretty small\n> change...\n\nSeems like a pretty good argument for refactoring that now, and let\nany other patches rely on it. Would you like to send a separate\npatch?\n\n>> Actually, grouping backupStartPointTLI and minRecoveryPointTLI should\n>> reduce more the size with some alignment magic, no?\n> \n> I thought about this, but it seemed to me that existing fields had been\n> positioned to make the grouping logical rather than to optimize alignment,\n> e.g. minRecoveryPointTLI. Ideally that would have been placed near\n> backupEndRequired (or vice versa). But if the general opinion is to\n> rearrange for alignment, I'm OK with that.\n\nI've not tested, but it looks like moving backupStartPointTLI after\nbackupEndPoint should shave 8 bytes, if you want to maintain a more\ncoherent group for the LSNs.\n--\nMichael", "msg_date": "Tue, 7 Nov 2023 17:20:27 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Tue, Nov 07, 2023 at 05:20:27PM +0900, Michael Paquier wrote:\n> On Mon, Nov 06, 2023 at 05:39:02PM -0400, David Steele wrote:\n> I've retested today, and miss the failure. I'll let you know if I see\n> this again.\n\nI've done a few more dozen runs, and still nothing. I am wondering\nwhat this disturbance was.\n\n>> If we set these fields where backup_label was renamed, the logic would not\n>> be exactly the same since pg_control won't be updated until the next time\n>> through the loop. Since the fields should be updated before\n>> UpdateControlFile() I thought it made sense to keep all the updates\n>> together.\n>> \n>> Overall I think it is simpler, and we don't need to acquire a lock on\n>> ControlFile.\n> \n> What you are proposing is the same as what we already do for\n> backupEndRequired or backupStartPoint in the control file when\n> initializing recovery, so objection withdrawn.\n> \n>> Thomas included this change in his pg_basebackup changes so I did the same.\n>> Maybe wait a bit before we split this out? Seems like a pretty small\n>> change...\n> \n> Seems like a pretty good argument for refactoring that now, and let\n> any other patches rely on it. Would you like to send a separate\n> patch?\n\nThe split still looks worth doing seen from here, so I am switching\nthe patch as WoA for now.\n\n>>> Actually, grouping backupStartPointTLI and minRecoveryPointTLI should\n>>> reduce more the size with some alignment magic, no?\n>> \n>> I thought about this, but it seemed to me that existing fields had been\n>> positioned to make the grouping logical rather than to optimize alignment,\n>> e.g. minRecoveryPointTLI. Ideally that would have been placed near\n>> backupEndRequired (or vice versa). 
But if the general opinion is to\n>> rearrange for alignment, I'm OK with that.\n> \n> I've not tested, but it looks like moving backupStartPointTLI after\n> backupEndPoint should shave 8 bytes, if you want to maintain a more\n> coherent group for the LSNs.\n\n+ * backupFromStandby indicates that the backup was taken on a standby. It is\n+ * require to initialize recovery and set to false afterwards.\ns/require/required/.\n\nThe term \"backup recovery\", that we've never used in the tree until\nnow as far as I know. Perhaps this recovery method should just be\nreferred as \"recovery from backup\"?\n\nBy the way, there is another thing that this patch has forgotten: the\nSQL functions that display data from the control file. Shouldn't\npg_control_recovery() be extended with the new fields? These fields\nmay be less critical than the other ones related to recovery, but I\nsuspect that showing them can become handy at least for debugging and\nmonitoring purposes.\n\nSomething in this area is that backupRecoveryRequired is the switch\ncontrolling if the fields set by the recovery initialization. Could\nit be actually useful to leave the other fields as they are and only\nreset backupRecoveryRequired before the first control file update?\nThis would leave a trace of the backup history directly in the control\nfile.\n\nWhat about pg_resetwal and RewriteControlFile()? Shouldn't these\nrecovery fields be reset as well?\n\ngit diff --check is complaining a bit.\n--\nMichael", "msg_date": "Fri, 10 Nov 2023 13:37:00 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 11/10/23 00:37, Michael Paquier wrote:\n> On Tue, Nov 07, 2023 at 05:20:27PM +0900, Michael Paquier wrote:\n>> On Mon, Nov 06, 2023 at 05:39:02PM -0400, David Steele wrote:\n>> I've retested today, and miss the failure. I'll let you know if I see\n>> this again.\n> \n> I've done a few more dozen runs, and still nothing. I am wondering\n> what this disturbance was.\n\nOK, hopefully it was just a blip.\n\n>>> If we set these fields where backup_label was renamed, the logic would not\n>>> be exactly the same since pg_control won't be updated until the next time\n>>> through the loop. Since the fields should be updated before\n>>> UpdateControlFile() I thought it made sense to keep all the updates\n>>> together.\n>>>\n>>> Overall I think it is simpler, and we don't need to acquire a lock on\n>>> ControlFile.\n>>\n>> What you are proposing is the same as what we already do for\n>> backupEndRequired or backupStartPoint in the control file when\n>> initializing recovery, so objection withdrawn.\n>>\n>>> Thomas included this change in his pg_basebackup changes so I did the same.\n>>> Maybe wait a bit before we split this out? Seems like a pretty small\n>>> change...\n>>\n>> Seems like a pretty good argument for refactoring that now, and let\n>> any other patches rely on it. Would you like to send a separate\n>> patch?\n> \n> The split still looks worth doing seen from here, so I am switching\n> the patch as WoA for now.\n\nThis has been split out.\n\n>>>> Actually, grouping backupStartPointTLI and minRecoveryPointTLI should\n>>>> reduce more the size with some alignment magic, no?\n>>>\n>>> I thought about this, but it seemed to me that existing fields had been\n>>> positioned to make the grouping logical rather than to optimize alignment,\n>>> e.g. minRecoveryPointTLI. 
Ideally that would have been placed near\n>>> backupEndRequired (or vice versa). But if the general opinion is to\n>>> rearrange for alignment, I'm OK with that.\n>>\n>> I've not tested, but it looks like moving backupStartPointTLI after\n>> backupEndPoint should shave 8 bytes, if you want to maintain a more\n>> coherent group for the LSNs.\n\nOK, I have moved backupStartPointTLI.\n\n> + * backupFromStandby indicates that the backup was taken on a standby. It is\n> + * require to initialize recovery and set to false afterwards.\n> s/require/required/.\n\nFixed.\n\n> The term \"backup recovery\", that we've never used in the tree until\n> now as far as I know. Perhaps this recovery method should just be\n> referred as \"recovery from backup\"?\n\nWell, \"backup recovery\" is less awkward, I think. For instance \"backup \nrecovery field\" vs \"recovery from backup field\".\n\n> By the way, there is another thing that this patch has forgotten: the\n> SQL functions that display data from the control file. Shouldn't\n> pg_control_recovery() be extended with the new fields? These fields\n> may be less critical than the other ones related to recovery, but I\n> suspect that showing them can become handy at least for debugging and\n> monitoring purposes.\n\nI guess that depends on whether we reset them or not (discussion below). \nRight now they would not be visible since by the time the user could log \non they would be reset.\n\n> Something in this area is that backupRecoveryRequired is the switch\n> controlling if the fields set by the recovery initialization. Could\n> it be actually useful to leave the other fields as they are and only\n> reset backupRecoveryRequired before the first control file update?\n> This would leave a trace of the backup history directly in the control\n> file.\n\nSince the other recovery fields are cleared in ReachedEndOfBackup() this \nwould be a change from what we do now.\n\nNone of these fields are ever visible (with the exception of \nminRecoveryPoint/TLI) since they are reset when the database becomes \nconsistent and before logons are allowed. Viewing them with \npg_controldata makes sense, but I'm not sure pg_control_recovery() does.\n\nIn fact, are backup_start_lsn, backup_end_lsn, and \nend_of_backup_record_required ever non-zero when logged onto Postgres? \nMaybe I'm missing something?\n\n> What about pg_resetwal and RewriteControlFile()? Shouldn't these\n> recovery fields be reset as well?\n\nDone.\n\n> git diff --check is complaining a bit.\n\nFixed.\n\nNew patches attached based on eb81e8e790.\n\nRegards,\n-David", "msg_date": "Fri, 10 Nov 2023 14:55:19 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Fri, Nov 10, 2023 at 02:55:19PM -0400, David Steele wrote:\n> On 11/10/23 00:37, Michael Paquier wrote:\n>> I've done a few more dozen runs, and still nothing. I am wondering\n>> what this disturbance was.\n> \n> OK, hopefully it was just a blip.\n\nStill nothing on this side. So that seems really like a random blip\nin the matrix.\n\n> This has been split out.\n\nThanks, applied 0001.\n\n>> The term \"backup recovery\", that we've never used in the tree until\n>> now as far as I know. Perhaps this recovery method should just be\n>> referred as \"recovery from backup\"?\n> \n> Well, \"backup recovery\" is less awkward, I think. For instance \"backup\n> recovery field\" vs \"recovery from backup field\".\n\nNot sure. 
I've never used this term when referring to recovery from a\nbackup. Perhaps I'm just not used to it, still that sounds a bit\nconfusing here.\n\n>> Something in this area is that backupRecoveryRequired is the switch\n>> controlling if the fields set by the recovery initialization. Could\n>> it be actually useful to leave the other fields as they are and only\n>> reset backupRecoveryRequired before the first control file update?\n>> This would leave a trace of the backup history directly in the control\n>> file.\n> \n> Since the other recovery fields are cleared in ReachedEndOfBackup() this\n> would be a change from what we do now.\n> \n> None of these fields are ever visible (with the exception of\n> minRecoveryPoint/TLI) since they are reset when the database becomes\n> consistent and before logons are allowed. Viewing them with pg_controldata\n> makes sense, but I'm not sure pg_control_recovery() does.\n> \n> In fact, are backup_start_lsn, backup_end_lsn, and\n> end_of_backup_record_required ever non-zero when logged onto Postgres? Maybe\n> I'm missing something?\n\nYeah, but custom backup/restore tools may want manipulate the contents\nof the control file for their own work, so at least for the sake of\nvisibility it sounds important to me to show all the information at\nhand, and that there is no need to.\n\n- The backup label\n- file includes the label string you gave to <function>pg_backup_start</function>,\n+ The backup history file (which is archived like WAL) includes the label\n+ string you gave to <function>pg_backup_start</function>,\n as well as the time at which <function>pg_backup_start</function> was run, and\n the name of the starting WAL file. In case of confusion it is therefore\n- possible to look inside a backup file and determine exactly which\n+ possible to look inside a backup history file and determine exactly which\n\nAs a side note, it is a bit disappointing that we lose the backup\nlabel from the backup itself, even if the patch changes correctly the\ndocumentation to reflect the new behavior. It is in the backup\nhistory file on the node from where the base backup has been taken or\nin the archives, hopefully. However there is nothing that remains in\nthe base backup itself, and backups can be self-contained (easy with\npg_basebackup --wal-method=stream). I think that we should retain a\nminimum amount of information as a replacement for the backup_label,\nat least. With this state, the current patch slightly reduces the\ndebuggability of deployments. That could be annoying for some users.\n\n> New patches attached based on eb81e8e790.\n\nDiving into the code for references about the backup label file, I\nhave spotted this log in pg_rewind that is now incorrect:\n if (showprogress)\n pg_log_info(\"creating backup label and updating control file\");\n\n+ printf(_(\"Backup start location's timeline: %u\\n\"),\n+ ControlFile->backupStartPointTLI);\n printf(_(\"Backup end location: %X/%X\\n\"),\n LSN_FORMAT_ARGS(ControlFile->backupEndPoint));\nPerhaps these two should be reversed to match with the header file.\n\n\n+ /*\n+ * After pg_backup_stop() returns this field will contain a copy of\n+ * pg_control that should be stored with the backup. Fields have been\n+ * updated for recovery and the CRC has been recalculated. The buffer\n+ * is padded to PG_CONTROL_MAX_SAFE_SIZE so that pg_control is always\n+ * a consistent size but smaller (and hopefully easier to handle) than\n+ * PG_CONTROL_FILE_SIZE. 
Bytes after sizeof(ControlFileData) are zeroed.\n+ */\n+ uint8_t controlFile[PG_CONTROL_MAX_SAFE_SIZE];\n\nI don't mind the addition of a control file with the max safe size,\nbecause it will never be higher than that. However:\n\n+ /* End the backup before sending pg_control */\n+ basebackup_progress_wait_wal_archive(&state);\n+ do_pg_backup_stop(backup_state, !opt->nowait);\n+\n+ /* Send copy of pg_control containing recovery info */\n+ sendFileWithContent(sink, XLOG_CONTROL_FILE,\n+ (char *)backup_state->controlFile,\n+ PG_CONTROL_MAX_SAFE_SIZE, &manifest);\n\nIt seems to me that the base backup protocol should always send an 8k\nfile for the control file so as we maintain consistency with the\non-disk format. Currently, a base backup taken with this patch\nresults in a control file of size 512B.\n\n+\t/* Build the contents of pg_control */\n+\tpg_control_bytea = (bytea *) palloc(PG_CONTROL_MAX_SAFE_SIZE + VARHDRSZ);\n+\tSET_VARSIZE(pg_control_bytea, PG_CONTROL_MAX_SAFE_SIZE + VARHDRSZ);\n+\tmemcpy(VARDATA(pg_control_bytea), backup_state->controlFile, PG_CONTROL_MAX_SAFE_SIZE);\n\nSimilar comment for the control file returned by pg_backup_stop(),\nwhich could just be made a 8k field?\n\n+ <function>pg_backup_stop</function> returns the\n+ <filename>pg_control</filename> file, which must be stored in the\n+ <filename>global</filename> directory of the backup. It also returns the\n\nAnd perhaps emphasize that this file should be an 8kB file in the\nparagraph mentioning the data returned by pg_backup_stop()?\n\n- Create a <filename>backup_label</filename> file to begin WAL replay at\n+ Update <filename>pg_control</filename> file to begin WAL replay at\n the checkpoint created at failover and configure the\n <filename>pg_control</filename> file with a minimum consistency LSN\n\npg_control is mentioned twice, so perhaps this could be worded better?\n\nPG_CONTROL_VERSION is important to not forget about.. Perhaps this\nshould be noted somewhere, or just changed in the patch itself.\nContrary to catalog changes, we do few of these in the control file so\nthere is close to zero risk of conflicts with other patches in the CF\napp.\n--\nMichael", "msg_date": "Mon, 13 Nov 2023 13:36:56 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "(I am not exactly sure how, but we've lost pgsql-hackers on the way\nwhen you sent v5. Now added back in CC with the two latest patches\nyou've proposed attached.)\n\nHere is a short summary of what has been missed by the lists:\n- I've commented that the patch should not create, not show up in\nfields returned the SQL functions or stream control files with a size\nof 512B, just stick to 8kB. If this is worth changing this should be\napplied consistently across the board including initdb, discussed on\nits own thread.\n- The backup-related fields in the control file are reset at the end\nof recovery. I've suggested to not do that to keep a trace of what\nwas happening during recovery. The latest version of the patch resets\nthe fields.\n- With the backup_label file gone, we lose some information in the\nbackups themselves, which is not good. Instead, you have suggested an\napproach where this data is added to the backup manifest, meaning that\nno information would be lost, particularly useful for self-contained\nbackups. 
The fields planned to be added to the backup manifest are:\n-- The start and end time of the backup, the end timestamp being\nuseful to know when stop time can be used for PITR.\n-- The backup label.\nI've agreed that it may be the best thing to do at this end to not\nlose any data related to the removal of the backup_label file.\n\nOn Sun, Nov 19, 2023 at 02:14:32PM -0400, David Steele wrote:\n> On 11/15/23 20:03, Michael Paquier wrote:\n>> As the label is only an informational field, the parsing added to\n>> pg_verifybackup is not really needed because it is used nowhere in the\n>> validation process, so keeping the logic simpler would be the way to\n>> go IMO. This is contrary to the WAL range for example, where start\n>> and end LSNs are used for validation with a pg_waldump command.\n>> Robert, any comments about the addition of the label in the manifest?\n>\n> I'm sure Robert will comment on this when he gets the time, but for now I\n> have backed off on passing the new info to pg_verifybackup and added\n> start/stop time.\n\nFWIW, I'm OK with the bits for the backup manifest as presented. So\nif there are no remarks and/or no objections, I'd like to apply it but\nlet give some room to others to comment on that as there's been a gap\nin the emails exchanged on pgsql-hackers. I hope that the summary\nI've posted above covers everything. So let's see about doing\nsomething around the middle of next week. With Thanksgiving in the\nUS, a lot of folks will not have the time to monitor what's happening\non this thread.\n\n+ The end time for the backup. This is when the backup was stopped in\n+ <productname>PostgreSQL</productname> and represents the earliest time\n+ that can be used for time-based Point-In-Time Recovery.\n\nThis one is actually a very good point. We'd lost this capacity with\nthe backup_label file gone without the end timestamps in the control\nfile.\n\n> New patches attached based on b218fbb7.\n\nI've noticed on the other thread the remark about being less\naggressive with the fields related to recovery in the control file, so\nI assume that this patch should leave the fields be after the end of\nrecovery from the start and only rely on backupRecoveryRequired to\ndecide if the recovery should use the fields or not:\nhttps://www.postgresql.org/message-id/241ccde1-1928-4ba2-a0bb-5350f7b191a8@=pgmasters.net\n\n+\tControlFile->backupCheckPoint = InvalidXLogRecPtr;\n \tControlFile->backupStartPoint = InvalidXLogRecPtr;\n+\tControlFile->backupStartPointTLI = 0;\n \tControlFile->backupEndPoint = InvalidXLogRecPtr;\n+\tControlFile->backupFromStandby = false;\n \tControlFile->backupEndRequired = false;\n\nStill, I get the temptation of being consistent with the current style\non HEAD to reset everything, as well.. \n--\nMichael", "msg_date": "Mon, 20 Nov 2023 10:15:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 11/19/23 21:15, Michael Paquier wrote:\n> (I am not exactly sure how, but we've lost pgsql-hackers on the way\n> when you sent v5. Now added back in CC with the two latest patches\n> you've proposed attached.)\n\nUgh, I must have hit reply instead of reply all. It's a rookie error and \nyou hate to see it.\n\n> Here is a short summary of what has been missed by the lists:\n> - I've commented that the patch should not create, not show up in\n> fields returned the SQL functions or stream control files with a size\n> of 512B, just stick to 8kB. 
\n\n> New patches attached based on b218fbb7.\n\nI've noticed on the other thread the remark about being less\naggressive with the fields related to recovery in the control file, so\nI assume that this patch should leave the fields in place after the end of\nrecovery from the start and only rely on backupRecoveryRequired to\ndecide if the recovery should use the fields or not:\nhttps://www.postgresql.org/message-id/[email protected]\n\n+\tControlFile->backupCheckPoint = InvalidXLogRecPtr;\n \tControlFile->backupStartPoint = InvalidXLogRecPtr;\n+\tControlFile->backupStartPointTLI = 0;\n \tControlFile->backupEndPoint = InvalidXLogRecPtr;\n+\tControlFile->backupFromStandby = false;\n \tControlFile->backupEndRequired = false;\n\nStill, I get the temptation of being consistent with the current style\non HEAD to reset everything, as well.\n--\nMichael", "msg_date": "Mon, 20 Nov 2023 10:15:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 11/19/23 21:15, Michael Paquier wrote:\n> (I am not exactly sure how, but we've lost pgsql-hackers on the way\n> when you sent v5. Now added back in CC with the two latest patches\n> you've proposed attached.)\n\nUgh, I must have hit reply instead of reply all. It's a rookie error and \nyou hate to see it.\n\n> Here is a short summary of what has been missed by the lists:\n> - I've commented that the patch should not create, not show up in\n> fields returned the SQL functions or stream control files with a size\n> of 512B, just stick to 8kB. If this is worth changing this should be\n> applied consistently across the board including initdb, discussed on\n> its own thread.\n> - The backup-related fields in the control file are reset at the end\n> of recovery. I've suggested to not do that to keep a trace of what\n> was happening during recovery. The latest version of the patch resets\n> the fields.\n> - With the backup_label file gone, we lose some information in the\n> backups themselves, which is not good. Instead, you have suggested an\n> approach where this data is added to the backup manifest, meaning that\n> no information would be lost, particularly useful for self-contained\n> backups. The fields planned to be added to the backup manifest are:\n> -- The start and end time of the backup, the end timestamp being\n> useful to know when stop time can be used for PITR.\n> -- The backup label.\n> I've agreed that it may be the best thing to do at this end to not\n> lose any data related to the removal of the backup_label file.\n\nThis looks right to me.\n\n> On Sun, Nov 19, 2023 at 02:14:32PM -0400, David Steele wrote:\n>> On 11/15/23 20:03, Michael Paquier wrote:\n>>> As the label is only an informational field, the parsing added to\n>>> pg_verifybackup is not really needed because it is used nowhere in the\n>>> validation process, so keeping the logic simpler would be the way to\n>>> go IMO. This is contrary to the WAL range for example, where start\n>>> and end LSNs are used for validation with a pg_waldump command.\n>>> Robert, any comments about the addition of the label in the manifest?\n>>\n>> I'm sure Robert will comment on this when he gets the time, but for now I\n>> have backed off on passing the new info to pg_verifybackup and added\n>> start/stop time.\n> \n> FWIW, I'm OK with the bits for the backup manifest as presented. So\n> if there are no remarks and/or no objections, I'd like to apply it but\n> let's give some room to others to comment on that as there's been a gap\n> in the emails exchanged on pgsql-hackers. I hope that the summary\n> I've posted above covers everything. So let's see about doing\n> something around the middle of next week. With Thanksgiving in the\n> US, a lot of folks will not have the time to monitor what's happening\n> on this thread.\n\nTiming sounds good to me.\n\n> \n> + The end time for the backup. This is when the backup was stopped in\n> + <productname>PostgreSQL</productname> and represents the earliest time\n> + that can be used for time-based Point-In-Time Recovery.\n> \n> This one is actually a very good point. We'd lose this capability with\n> the backup_label file gone, without the end timestamp in the control\n> file.\n\nYeah, the end time is very important for recovery scenarios. 
We \ndefinitely need that recorded somewhere.\n\n> I've noticed on the other thread the remark about being less\n> aggressive with the fields related to recovery in the control file, so\n> I assume that this patch should leave the fields in place after the end of\n> recovery from the start and only rely on backupRecoveryRequired to\n> decide if the recovery should use the fields or not:\n> https://www.postgresql.org/message-id/[email protected]\n> \n> +\tControlFile->backupCheckPoint = InvalidXLogRecPtr;\n> \tControlFile->backupStartPoint = InvalidXLogRecPtr;\n> +\tControlFile->backupStartPointTLI = 0;\n> \tControlFile->backupEndPoint = InvalidXLogRecPtr;\n> +\tControlFile->backupFromStandby = false;\n> \tControlFile->backupEndRequired = false;\n> \n> Still, I get the temptation of being consistent with the current style\n> on HEAD to reset everything, as well.\n\nI'd rather reset everything for now (as we do now) and think about \nkeeping these values as a separate patch. It may be that we don't want \nto keep all of them, or we need a separate flag to say recovery was \ncompleted. We are accumulating a lot of booleans here; maybe we need a \nstate var (recoveryRequired, recoveryInProgress, recoveryComplete) and \nthen define which other vars are valid in each state.\n\nRegards,\n-David", "msg_date": "Mon, 20 Nov 2023 09:50:56 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Sun, Nov 19, 2023 at 8:16 PM Michael Paquier <[email protected]> wrote:\n> (I am not exactly sure how, but we've lost pgsql-hackers on the way\n> when you sent v5. Now added back in CC with the two latest patches\n> you've proposed attached.)\n>\n> Here is a short summary of what has been missed by the lists:\n> - I've commented that the patch should not create, not show up in\n> fields returned the SQL functions or stream control files with a size\n> of 512B, just stick to 8kB. If this is worth changing this should be\n> applied consistently across the board including initdb, discussed on\n> its own thread.\n> - The backup-related fields in the control file are reset at the end\n> of recovery. I've suggested to not do that to keep a trace of what\n> was happening during recovery. The latest version of the patch resets\n> the fields.\n> - With the backup_label file gone, we lose some information in the\n> backups themselves, which is not good. Instead, you have suggested an\n> approach where this data is added to the backup manifest, meaning that\n> no information would be lost, particularly useful for self-contained\n> backups. The fields planned to be added to the backup manifest are:\n> -- The start and end time of the backup, the end timestamp being\n> useful to know when stop time can be used for PITR.\n> -- The backup label.\n> I've agreed that it may be the best thing to do at this end to not\n> lose any data related to the removal of the backup_label file.\n\nI think we need more votes to make a change this big. I have a\nconcern, which I think I've expressed before, that we keep whacking\naround the backup APIs, and that has a cost which is potentially\nlarger than the benefits. The last time we changed the API, we changed\npg_stop_backup to pg_backup_stop, but this doesn't do that, and I\nwonder if that's OK. 
Even if it is, do we really want to change this\nAPI around again after such a short time?\n\nThat said, I don't have an intrinsic problem with moving this\ninformation from the backup_label to the backup_manifest file since it\nis purely informational. I do think there should perhaps be some\nadditions to the test cases, though.\n\nI am concerned about the interaction of this proposal with incremental\nbackup. When you take an incremental backup, you get something that\nlooks a lot like a usable data directory but isn't. To prevent that\nfrom causing avoidable disasters, the present version of the patch\nadds INCREMENTAL FROM LSN and INCREMENTAL FROM TLI fields to the\nbackup_label. pg_combinebackup knows to look for those fields, and the\nserver knows that if they are present it should refuse to start. With\nthis change, though, I suppose those fields would end up in\npg_control. But that does not feel entirely great, because we have a\ngoal of keeping the amount of real data in pg_control below 512 bytes,\nthe traditional sector size, and this adds another 12 bytes of stuff\nto that file that currently doesn't need to be there. I feel like\nthat's kind of a problem.\n\nBut my main point here is ... if we have a few more senior hackers\nweigh in and vote in favor of this change, well then that's one thing.\nBut IMHO a discussion that's mostly between 2 people is not nearly a\nstrong enough consensus to justify this amount of disruption.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Nov 2023 11:11:13 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "\n\nOn 11/20/23 12:11, Robert Haas wrote:\n> On Sun, Nov 19, 2023 at 8:16 PM Michael Paquier <[email protected]> wrote:\n>> (I am not exactly sure how, but we've lost pgsql-hackers on the way\n>> when you sent v5. Now added back in CC with the two latest patches\n>> you've proposed attached.)\n>>\n>> Here is a short summary of what has been missed by the lists:\n>> - I've commented that the patch should not create, not show up in\n>> fields returned the SQL functions or stream control files with a size\n>> of 512B, just stick to 8kB. If this is worth changing this should be\n>> applied consistently across the board including initdb, discussed on\n>> its own thread.\n>> - The backup-related fields in the control file are reset at the end\n>> of recovery. I've suggested to not do that to keep a trace of what\n>> was happening during recovery. The latest version of the patch resets\n>> the fields.\n>> - With the backup_label file gone, we lose some information in the\n>> backups themselves, which is not good. Instead, you have suggested an\n>> approach where this data is added to the backup manifest, meaning that\n>> no information would be lost, particularly useful for self-contained\n>> backups. The fields planned to be added to the backup manifest are:\n>> -- The start and end time of the backup, the end timestamp being\n>> useful to know when stop time can be used for PITR.\n>> -- The backup label.\n>> I've agreed that it may be the best thing to do at this end to not\n>> lose any data related to the removal of the backup_label file.\n> \n> I think we need more votes to make a change this big. I have a\n> concern, which I think I've expressed before, that we keep whacking\n> around the backup APIs, and that has a cost which is potentially\n> larger than the benefits. 
\n\nFrom my perspective it's not that big a change for backup software but \nit does bring a lot of benefits, including fixing an outstanding bug in \nPostgres, i.e. reading pg_control without getting a torn copy.\n\n> The last time we changed the API, we changed\n> pg_stop_backup to pg_backup_stop, but this doesn't do that, and I\n> wonder if that's OK. Even if it is, do we really want to change this\n> API around again after such a short time?\n\nThis is a good point. We could just rename again, but not sure what \nnames to go for this time. OTOH if the backup software is selecting \nfields then they will get an error because the names have changed. If \nthe software is grabbing fields by position then they'll get a \nvalid-looking result (even if querying by position is a terrible idea).\n\nAnother thing we could do is explicitly error if we see backup_label in \nPGDATA during recovery. That's just a few lines of code so would not be \na big deal to maintain. This error would only be visible on restore, so \nit presumes that backup software is being tested.
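\n\nThe check could be as simple as something like this (a sketch only; the \nerror wording is invented here, not taken from any patch):\n\n\tstruct stat st;\n\n\t/* hypothetically, early in recovery, once pg_control has been read */\n\tif (stat(BACKUP_LABEL_FILE, &st) == 0)\n\t\tereport(FATAL,\n\t\t\t\t(errmsg(\"backup_label is no longer used by recovery\"),\n\t\t\t\t errhint(\"Backup recovery information is now read from pg_control.\")));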
\n\nMaybe just a rename to something like pg_backup_begin/end would be the \nway to go.\n\n> That said, I don't have an intrinsic problem with moving this\n> information from the backup_label to the backup_manifest file since it\n> is purely informational. I do think there should perhaps be some\n> additions to the test cases, though.\n\nA little hard to add to the tests, I think, since they are purely \ninformational, i.e. not pushed up by the parser. Maybe we could just \ngrep for the fields?\n\n> I am concerned about the interaction of this proposal with incremental\n> backup. When you take an incremental backup, you get something that\n> looks a lot like a usable data directory but isn't. To prevent that\n> from causing avoidable disasters, the present version of the patch\n> adds INCREMENTAL FROM LSN and INCREMENTAL FROM TLI fields to the\n> backup_label. pg_combinebackup knows to look for those fields, and the\n> server knows that if they are present it should refuse to start. With\n> this change, though, I suppose those fields would end up in\n> pg_control. But that does not feel entirely great, because we have a\n> goal of keeping the amount of real data in pg_control below 512 bytes,\n> the traditional sector size, and this adds another 12 bytes of stuff\n> to that file that currently doesn't need to be there. I feel like\n> that's kind of a problem.\n\nI think these fields would be handled the same as the rest of the fields \nin backup_label: returned from pg_backup_stop() and also stored in \nbackup_manifest. Third-party software can do as they like with them and \npg_combinebackup can just read from backup_manifest.\n\nAs for the pg_control file -- it might be best to give it a different \nname for backups that are not essentially copies of PGDATA. On the other \nhand, pgBackRest has included pg_control in incremental backups since \nday one and we've never had a user mistakenly do a manual restore of one \nand cause a problem (though manual restores are not the norm). Still, \nprobably can't hurt to be a bit careful.\n\n> But my main point here is ... if we have a few more senior hackers\n> weigh in and vote in favor of this change, well then that's one thing.\n> But IMHO a discussion that's mostly between 2 people is not nearly a\n> strong enough consensus to justify this amount of disruption.\n\nWe absolutely need more people to look at this and sign off. I'm glad \nthey have not so far because it has allowed time to whack the patch \naround and get it into better shape.\n\nRegards,\n-David", "msg_date": "Mon, 20 Nov 2023 13:53:55 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Mon, Nov 20, 2023 at 12:54 PM David Steele <[email protected]> wrote:\n> Another thing we could do is explicitly error if we see backup_label in\n> PGDATA during recovery. That's just a few lines of code so would not be\n> a big deal to maintain. This error would only be visible on restore, so\n> it presumes that backup software is being tested.\n\nI think that if we do decide to adopt this proposal, that would be a\nsmart precaution.\n\n> A little hard to add to the tests, I think, since they are purely\n> informational, i.e. not pushed up by the parser. Maybe we could just\n> grep for the fields?\n\nHmm. Or should they be pushed up by the parser?\n\n> I think these fields would be handled the same as the rest of the fields\n> in backup_label: returned from pg_backup_stop() and also stored in\n> backup_manifest. Third-party software can do as they like with them and\n> pg_combinebackup can just read from backup_manifest.\n\nI think that would be a bad plan, because this is critical\ninformation, and a backup manifest is not a thing that you're required\nto have. It's not a natural fit at all. We don't want to create a\nsituation where if you nuke the backup_manifest then the server\nforgets that what it has is an incremental backup rather than a usable\ndata directory.\n\n> We absolutely need more people to look at this and sign off. I'm glad\n> they have not so far because it has allowed time to whack the patch\n> around and get it into better shape.\n\nCool.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 20 Nov 2023 13:44:26 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 11/20/23 14:44, Robert Haas wrote:\n> On Mon, Nov 20, 2023 at 12:54 PM David Steele <[email protected]> wrote:\n>> Another thing we could do is explicitly error if we see backup_label in\n>> PGDATA during recovery...\n> \n> I think that if we do decide to adopt this proposal, that would be a\n> smart precaution.\n\nI'd be OK with it -- what do you think, Michael? Would this be enough \nthat we would not need to rename the functions, or should we just go \nwith the rename?\n\n>> A little hard to add to the tests, I think, since they are purely\n>> informational, i.e. not pushed up by the parser. Maybe we could just\n>> grep for the fields?\n> \n> Hmm. Or should they be pushed up by the parser?\n\nWe could do that. I started on that road, but it's a lot of code for \nfields that aren't used. I think it would be better if the parser also \nloaded a data structure that represented the manifest. Seems to me \nthere's a lot of duplicated code between pg_verifybackup and \npg_combinebackup the way it is now.\n\n>> I think these fields would be handled the same as the rest of the fields\n>> in backup_label: returned from pg_backup_stop() and also stored in\n>> backup_manifest. 
Third-party software can do as they like with them and\n>> pg_combinebackup can just read from backup_manifest.\n> \n> I think that would be a bad plan, because this is critical\n> information, and a backup manifest is not a thing that you're required\n> to have. It's not a natural fit at all. We don't want to create a\n> situation where if you nuke the backup_manifest then the server\n> forgets that what it has is an incremental backup rather than a usable\n> data directory.\n\nI can't see why a backup would continue to be valid without a manifest \n-- that's not very standard for backup software. If you have the \ncritical info in backup_label, you can't afford to lose that, so why \nshould backup_manifest be any different?\n\nRegards,\n-David\n\n\n", "msg_date": "Mon, 20 Nov 2023 15:41:38 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Mon, Nov 20, 2023 at 2:41 PM David Steele <[email protected]> wrote:\n> I can't see why a backup would continue to be valid without a manifest\n> -- that's not very standard for backup software. If you have the\n> critical info in backup_label, you can't afford to lose that, so why\n> should backup_manifest be any different?\n\nI mean, you can run pg_basebackup --no-manifest.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 Nov 2023 14:47:52 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 11/20/23 15:47, Robert Haas wrote:\n> On Mon, Nov 20, 2023 at 2:41 PM David Steele <[email protected]> wrote:\n>> I can't see why a backup would continue to be valid without a manifest\n>> -- that's not very standard for backup software. If you have the\n>> critical info in backup_label, you can't afford to lose that, so why\n>> should backup_manifest be any different?\n> \n> I mean, you can run pg_basebackup --no-manifest.\n\nMaybe this would be a good thing to disable for page incremental. With \nall the work being done by pg_combinebackup, it seems like it would be a \ngood idea to be able to verify the final result?\n\nI understand this is an option -- but does it need to be? What is the \nbenefit of excluding the manifest?\n\nRegards,\n-David\n\n\n", "msg_date": "Mon, 20 Nov 2023 15:56:19 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "Hi,\n\nOn 2023-11-20 11:11:13 -0500, Robert Haas wrote:\n> I think we need more votes to make a change this big. I have a\n> concern, which I think I've expressed before, that we keep whacking\n> around the backup APIs, and that has a cost which is potentially\n> larger than the benefits.\n\n+1. The amount of whacking around in this area has been substantial, and it's\nhard for operators to keep up. And realistically, with data sizes today, the\npressure to do basebackups with disk snapshots etc is not going to shrink.\n\n\nLeaving that concern aside, I am still on the fence about this proposal. I\nthink it does decrease the chance of getting things wrong in the\nstreaming-basebackup case. 
But for external backups, it seems almost\nuniversally worse (with the exception of the torn pg_control issue, that we\nalso can address otherwise):\n\nIt doesn't reduce the risk of getting things wrong, you can still omit placing\na file into the data directory and get silent corruption as a consequence. In\naddition, it's harder to see when looking at a base backup whether the process\nwas right or not, because now the good and bad state look the same if you just\nlook on the filesystem level!\n\nThen there's the issue of making ad-hoc debugging harder by not having a\nhuman readable file with information anymore, including when looking at the\nhistory, via backup_label.old.\n\n\nGiven that, I wonder if what we should do is to just add a new field to\npg_control that says \"error out if backup_label does not exist\", that we set\nwhen creating a streaming base backup.
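\n\nIn rough terms (the field name here is invented, purely to sketch the\nshape of the idea):\n\n\t/* hypothetical addition to ControlFileData */\n\tbool\trequireBackupLabel;\t/* set only in the streamed copy of pg_control */\n\n\t/* and at the start of recovery, roughly: */\n\tif (ControlFile->requireBackupLabel && stat(BACKUP_LABEL_FILE, &st) != 0)\n\t\tereport(FATAL,\n\t\t\t\t(errmsg(\"backup_label is required to recover from this base backup\")));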
\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 20 Nov 2023 12:37:46 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "Hi,\n\nOn 2023-11-20 15:56:19 -0400, David Steele wrote:\n> I understand this is an option -- but does it need to be? What is the\n> benefit of excluding the manifest?\n\nIt's not free to create the manifest, particularly if checksums are enabled.\n\nAlso, for external backups, there's no manifest...\n\n- Andres", "msg_date": "Mon, 20 Nov 2023 12:41:10 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Mon, Nov 20, 2023 at 1:37 PM Andres Freund <[email protected]> wrote:\n\n>\n> Given that, I wonder if what we should do is to just add a new field to\n> pg_control that says \"error out if backup_label does not exist\", that we\n> set\n> when creating a streaming base backup\n>\n>\nI thought this was DOA since we don't want to ever leave the cluster in a\nstate where a crash requires intervention to restart. But I agree that it\nis not possible to fool-proof against a naive backup that copies over the\npg_control file as-is if breaking the crashed cluster option is not in play.\n\nI agree that this works if the pg_control generated by stop backup produces\nthe line and we retain the label file as a separate and now mandatory\ncomponent to using the backup.\n\nOr is the idea to make v17 error if it sees a backup label unless\npg_control has the feature flag field? Which doesn't exist normally, does\nin the basebackup version, and is removed once the backup is restored?\n\nDavid J.", "msg_date": "Mon, 20 Nov 2023 14:18:15 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Mon, Nov 20, 2023 at 11:11:13AM -0500, Robert Haas wrote:\n> I think we need more votes to make a change this big. I have a\n> concern, which I think I've expressed before, that we keep whacking\n> around the backup APIs, and that has a cost which is potentially\n> larger than the benefits. The last time we changed the API, we changed\n> pg_stop_backup to pg_backup_stop, but this doesn't do that, and I\n> wonder if that's OK. Even if it is, do we really want to change this\n> API around again after such a short time?\n\nAgreed.\n\n> That said, I don't have an intrinsic problem with moving this\n> information from the backup_label to the backup_manifest file since it\n> is purely informational. I do think there should perhaps be some\n> additions to the test cases, though.\n\nYep, cool. Even if we decide to not go with what's discussed in this\npatch, I think that's useful for some users at the end to get more\nredundancy, as well. And that's in a format easier to parse.\n\n> I am concerned about the interaction of this proposal with incremental\n> backup. When you take an incremental backup, you get something that\n> looks a lot like a usable data directory but isn't. To prevent that\n> from causing avoidable disasters, the present version of the patch\n> adds INCREMENTAL FROM LSN and INCREMENTAL FROM TLI fields to the\n> backup_label. pg_combinebackup knows to look for those fields, and the\n> server knows that if they are present it should refuse to start. With\n> this change, though, I suppose those fields would end up in\n> pg_control. But that does not feel entirely great, because we have a\n> goal of keeping the amount of real data in pg_control below 512 bytes,\n> the traditional sector size, and this adds another 12 bytes of stuff\n> to that file that currently doesn't need to be there. I feel like\n> that's kind of a problem.\n\nI don't recall one time where the addition of new fields to the\ncontrol file was easy to discuss because of its 512B hard limit.\nAnyway, putting the addition aside for a second, and I've not looked\nat the incremental backup patch, does the removal of the backup_label\nmake the combine logic more complicated, or is that just moving a chunk\nof code to do a control file lookup instead of backup_label parsing?\nMaking the information less readable is definitely an issue for me. A\ndifferent alternative that I've mentioned upthread is to keep an\nequivalent of the backup_label and rename it to something like\nbackup.debug or similar, with a name good enough to tell people that\nwe don't care about it being removed.\n--\nMichael", "msg_date": "Tue, 21 Nov 2023 08:37:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "Hi,\n\nOn 2023-11-20 14:18:15 -0700, David G. 
Johnston wrote:\n> On Mon, Nov 20, 2023 at 1:37 PM Andres Freund <[email protected]> wrote:\n> \n> >\n> > Given that, I wonder if what we should do is to just add a new field to\n> > pg_control that says \"error out if backup_label does not exist\", that we\n> > set\n> > when creating a streaming base backup\n> >\n> >\n> I thought this was DOA since we don't want to ever leave the cluster in a\n> state where a crash requires intervention to restart.\n\nI was trying to suggest that we'd set the field in-memory, when streaming out\na pg_basebackup style backup (by just replacing pg_control with an otherwise\nidentical file that has the flag set). So it'd not have any effect on the\nprimary.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 20 Nov 2023 15:41:40 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Mon, Nov 20, 2023 at 12:37:46PM -0800, Andres Freund wrote:\n> Given that, I wonder if what we should do is to just add a new field to\n> pg_control that says \"error out if backup_label does not exist\", that we set\n> when creating a streaming base backup\n\nThat would mean that one still needs to take an extra step to update a\ncontrol file with this byte set, which is something you had a concern\nwith in terms of compatibility when it comes to external backup\nsolutions because more steps are necessary to take a backup, no? I\ndon't quite see why it is different than what's proposed on this\nthread, except that you don't need to write one file to the data\nfolder to store the backup label fields, but two, meaning that there's\na risk for more mistakes because a clean backup process would require\nmore steps. \n\nWith the current position of the fields in ControlFileData, there are\nthree free bytes after backupEndRequired, so it is possible to add\nthat for free. Now, would you actually need an extra field knowing\nthat backupStartPoint is around?\n--\nMichael", "msg_date": "Tue, 21 Nov 2023 08:52:08 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "Hi,\n\nOn 2023-11-21 08:52:08 +0900, Michael Paquier wrote:\n> On Mon, Nov 20, 2023 at 12:37:46PM -0800, Andres Freund wrote:\n> > Given that, I wonder if what we should do is to just add a new field to\n> > pg_control that says \"error out if backup_label does not exist\", that we set\n> > when creating a streaming base backup\n>\n> That would mean that one still needs to take an extra step to update a\n> control file with this byte set, which is something you had a concern\n> with in terms of compatibility when it comes to external backup\n> solutions because more steps are necessary to take a backup, no?\n\nI was thinking we'd just set it in the pg_basebackup style path, and we'd\nerror out if it's set and backup_label is present. But we'd still use\nbackup_label without the pg_control flag set.\n\nSo it'd just provide a cross-check that backup_label was not removed for\npg_basebackup style backup, but wouldn't do anything for external backups. 
But\nimo the proposal to just use pg_control doesn't actually do anything for\nexternal backups either - which is why I think my proposal would achieve as\nmuch, for a much lower price.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 20 Nov 2023 15:58:55 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Mon, Nov 20, 2023 at 03:58:55PM -0800, Andres Freund wrote:\n> I was thinking we'd just set it in the pg_basebackup style path, and we'd\n> error out if it's set and backup_label is present. But we'd still use\n> backup_label without the pg_control flag set.\n>\n> So it'd just provide a cross-check that backup_label was not removed for\n> pg_basebackup style backup, but wouldn't do anything for external backups. But\n> imo the proposal to just use pg_control doesn't actually do anything for\n> external backups either - which is why I think my proposal would achieve as\n> much, for a much lower price.\n\nI don't see why not. It does not increase the number of steps when\ndoing a backup, and backupStartPoint alone would not be able to offer\nthis much protection.\n--\nMichael", "msg_date": "Tue, 21 Nov 2023 12:45:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 11/20/23 19:58, Andres Freund wrote:\n> \n> On 2023-11-21 08:52:08 +0900, Michael Paquier wrote:\n>> On Mon, Nov 20, 2023 at 12:37:46PM -0800, Andres Freund wrote:\n>>> Given that, I wonder if what we should do is to just add a new field to\n>>> pg_control that says \"error out if backup_label does not exist\", that we set\n>>> when creating a streaming base backup\n>>\n>> That would mean that one still needs to take an extra step to update a\n>> control file with this byte set, which is something you had a concern\n>> with in terms of compatibility when it comes to external backup\n>> solutions because more steps are necessary to take a backup, no?\n> \n> I was thinking we'd just set it in the pg_basebackup style path, and we'd\n> error out if it's set and backup_label is present. But we'd still use\n> backup_label without the pg_control flag set.\n> \n> So it'd just provide a cross-check that backup_label was not removed for\n> pg_basebackup style backup, but wouldn't do anything for external backups. But\n> imo the proposal to just use pg_control doesn't actually do anything for\n> external backups either - which is why I think my proposal would achieve as\n> much, for a much lower price.\n\nI'm not sure why you think the patch under discussion doesn't do \nanything for external backups. It provides the same benefits to both \npg_basebackup and external backups, i.e. they both receive the updated \nversion of pg_control.\n\nI really dislike the idea of pg_basebackup having a special mechanism \nfor making recovery safer that is not generally available to external \nbackup software. It might be easy enough for some (e.g. 
pgBackRest) to \nmanipulate pg_control but would be out of reach for most.\n\nRegards,\n-David", "msg_date": "Tue, 21 Nov 2023 07:42:42 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "Hi,\n\nOn 2023-11-21 07:42:42 -0400, David Steele wrote:\n> On 11/20/23 19:58, Andres Freund wrote:\n> > On 2023-11-21 08:52:08 +0900, Michael Paquier wrote:\n> > > On Mon, Nov 20, 2023 at 12:37:46PM -0800, Andres Freund wrote:\n> > > > Given that, I wonder if what we should do is to just add a new field to\n> > > > pg_control that says \"error out if backup_label does not exist\", that we set\n> > > > when creating a streaming base backup\n> > > \n> > > That would mean that one still needs to take an extra step to update a\n> > > control file with this byte set, which is something you had a concern\n> > > with in terms of compatibility when it comes to external backup\n> > > solutions because more steps are necessary to take a backup, no?\n> > \n> > I was thinking we'd just set it in the pg_basebackup style path, and we'd\n> > error out if it's set and backup_label is present. But we'd still use\n> > backup_label without the pg_control flag set.\n> > \n> > So it'd just provide a cross-check that backup_label was not removed for\n> > pg_basebackup style backup, but wouldn't do anything for external backups. But\n> > imo the proposal to just use pg_control doesn't actually do anything for\n> > external backups either - which is why I think my proposal would achieve as\n> > much, for a much lower price.\n> \n> I'm not sure why you think the patch under discussion doesn't do anything\n> for external backups. It provides the same benefits to both pg_basebackup\n> and external backups, i.e. they both receive the updated version of\n> pg_control.\n\nSure. They also receive a backup_label today. If an external solution forgets\nto replace pg_control copied as part of the filesystem copy, they won't get an\nerror after the removal of backup_label, just like they don't get one today if\nthey don't put backup_label in the data directory. Given that users don't do\nthe right thing with backup_label today, why can we rely on them doing the\nright thing with pg_control?\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 21 Nov 2023 08:41:09 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 11/21/23 12:41, Andres Freund wrote:\n> \n> On 2023-11-21 07:42:42 -0400, David Steele wrote:\n>> On 11/20/23 19:58, Andres Freund wrote:\n>>> On 2023-11-21 08:52:08 +0900, Michael Paquier wrote:\n>>>> On Mon, Nov 20, 2023 at 12:37:46PM -0800, Andres Freund wrote:\n>>>>> Given that, I wonder if what we should do is to just add a new field to\n>>>>> pg_control that says \"error out if backup_label does not exist\", that we set\n>>>>> when creating a streaming base backup\n>>>>\n>>>> That would mean that one still needs to take an extra step to update a\n>>>> control file with this byte set, which is something you had a concern\n>>>> with in terms of compatibility when it comes to external backup\n>>>> solutions because more steps are necessary to take a backup, no?\n>>>\n>>> I was thinking we'd just set it in the pg_basebackup style path, and we'd\n>>> error out if it's set and backup_label is present. 
But we'd still use\n>>> backup_label without the pg_control flag set.\n>>>\n>>> So it'd just provide a cross-check that backup_label was not removed for\n>>> pg_basebackup style backup, but wouldn't do anything for external backups. But\n>>> imo the proposal to just use pg_control doesn't actually do anything for\n>>> external backups either - which is why I think my proposal would achieve as\n>>> much, for a much lower price.\n>>\n>> I'm not sure why you think the patch under discussion doesn't do anything\n>> for external backups. It provides the same benefits to both pg_basebackup\n>> and external backups, i.e. they both receive the updated version of\n>> pg_control.\n> \n> Sure. They also receive a backup_label today. If an external solution forgets\n> to replace pg_control copied as part of the filesystem copy, they won't get an\n> error after the removal of backup_label, just like they don't get one today if\n> they don't put backup_label in the data directory. Given that users don't do\n> the right thing with backup_label today, why can we rely on them doing the\n> right thing with pg_control?\n\nI think reliable backup software does the right thing with backup_label, \nbut if the user starts getting errors on recovery they then decide to \nremove backup_label. I know we can't do much about bad backup software, \nbut we can at least make this a bit more resistant to user error after \nthe fact.\n\nIt doesn't help that one of our hints suggests removing backup_label. In \nhighly automated systems, the user might not even know they just \nrestored from a backup. They are only in the loop because the restore \nfailed and they are trying to figure out what is going wrong. When they \nremove backup_label the cluster comes up just fine. Victory!\n\nThis is the scenario I've seen most often -- not the backup/restore \nprocess getting it wrong but the user removing backup_label on their own \ninitiative. And because it yields such a positive result, at least \ninitially, they remember in the future that the thing to do is to remove \nbackup_label whenever they see the error.\n\nIf they only have pg_control, then their only choice is to get it right \nor run pg_resetwal. Most users have no knowledge of pg_resetwal so it \nwill take them longer to get there. Also, I think that tool makes it \npretty clear that corruption will result and the only thing to do is a \nlogical dump and restore after using it.\n\nThere are plenty of ways a user can mess things up. What I'd like to \nprevent is the appearance of everything being OK when in fact they have \ncorrupted their cluster. That's the situation we have now with \nbackup_label. Is this new solution perfect? No, but I do think it checks \nseveral boxes, and is a worthwhile improvement.\n\nRegards,\n-David", "msg_date": "Tue, 21 Nov 2023 13:17:48 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 11/20/23 16:41, Andres Freund wrote:\n> \n> On 2023-11-20 15:56:19 -0400, David Steele wrote:\n>> I understand this is an option -- but does it need to be? What is the\n>> benefit of excluding the manifest?\n> \n> It's not free to create the manifest, particularly if checksums are enabled.\n\nIt's virtually free, even with the basic CRCs. Anyway, would you really \nwant a backup without a manifest? How would you know something is \nmissing? In particular, for page incremental how do you know something \nis new (but not WAL logged) if there is no manifest? Is the plan to just \nrecopy anything not WAL logged with each incremental?
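\n\nFor reference, a manifest gives you a per-file entry along these lines \n(values invented here, but the key names follow the current manifest \nformat), which is the only record a restore can be checked against:\n\n  { \"Path\": \"base/1/1259\", \"Size\": 49152,\n    \"Last-Modified\": \"2023-11-20 00:27:58 GMT\",\n    \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"5a8bd871\" }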
\n\n> Also, for external backups, there's no manifest...\n\nThere certainly is a manifest for many external backup solutions. Not \nhaving a manifest is just running with scissors, backup-wise.\n\nRegards,\n-David", "msg_date": "Tue, 21 Nov 2023 13:41:15 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "Hi,\n\nOn 2023-11-21 13:41:15 -0400, David Steele wrote:\n> On 11/20/23 16:41, Andres Freund wrote:\n> >\n> > On 2023-11-20 15:56:19 -0400, David Steele wrote:\n> > > I understand this is an option -- but does it need to be? What is the\n> > > benefit of excluding the manifest?\n> >\n> > It's not free to create the manifest, particularly if checksums are enabled.\n>\n> It's virtually free, even with the basic CRCs.\n\nHuh?\n\nperf stat src/bin/pg_basebackup/pg_basebackup -h /tmp/ -p 5440 -D - -cfast -Xnone --format=tar > /dev/null\n\n 4,423.81 msec task-clock # 0.626 CPUs utilized\n 433,475 context-switches # 97.987 K/sec\n 5 cpu-migrations # 1.130 /sec\n 599 page-faults # 135.404 /sec\n 12,208,261,153 cycles # 2.760 GHz\n 6,805,401,520 instructions # 0.56 insn per cycle\n 1,273,896,027 branches # 287.964 M/sec\n 14,233,126 branch-misses # 1.12% of all branches\n\n 7.068946385 seconds time elapsed\n\n 1.106072000 seconds user\n 3.403793000 seconds sys\n\n\nperf stat src/bin/pg_basebackup/pg_basebackup -h /tmp/ -p 5440 -D - -cfast -Xnone --format=tar --manifest-checksums=CRC32C > /dev/null\n\n 4,324.64 msec task-clock # 0.640 CPUs utilized\n 433,306 context-switches # 100.195 K/sec\n 3 cpu-migrations # 0.694 /sec\n 598 page-faults # 138.277 /sec\n 11,952,475,908 cycles # 2.764 GHz\n 6,816,888,845 instructions # 0.57 insn per cycle\n 1,275,949,455 branches # 295.042 M/sec\n 13,721,376 branch-misses # 1.08% of all branches\n\n 6.760321433 seconds time elapsed\n\n 1.113256000 seconds user\n 3.302907000 seconds sys\n\nperf stat src/bin/pg_basebackup/pg_basebackup -h /tmp/ -p 5440 -D - -cfast -Xnone --format=tar --no-manifest > /dev/null\n\n 3,925.38 msec task-clock # 0.823 CPUs utilized\n 257,467 context-switches # 65.590 K/sec\n 4 cpu-migrations # 1.019 /sec\n 552 page-faults # 140.624 /sec\n 11,577,054,842 cycles # 2.949 GHz\n 5,933,731,797 instructions # 0.51 insn per cycle\n 1,108,784,719 branches # 282.466 M/sec\n 11,867,511 branch-misses # 1.07% of all branches\n\n 4.770347012 seconds time elapsed\n\n 1.002521000 seconds user\n 2.991769000 seconds sys\n\n\nI'd not call 7.06->4.77 or 6.76->4.77 \"virtually free\".\n\n\nAnd this is actually *under*selling the cost - we waste a lot of cycles due to\nbad buffering decisions. Once we fix that, the cost differential increases\nfurther.\n\n\n> Anyway, would you really want a backup without a manifest? How would you\n> know something is missing? In particular, for page incremental how do you\n> know something is new (but not WAL logged) if there is no manifest? Is the\n> plan to just recopy anything not WAL logged with each incremental?\n\nShrug. If you just want to create a new standby by copying the primary, I\ndon't think creating and then validating the manifest buys you much. 
Long term\nbackups are a different story, particularly if data files are stored\nindividually, rather than in a single checksummed file.\n\n\n> > Also, for external backups, there's no manifest...\n>\n> There certainly is a manifest for many external backup solutions. Not having\n> a manifest is just running with scissors, backup-wise.\n\nYou mean that you have an external solution gin up a backup manifest? I fail\nto see how that's relevant here?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 21 Nov 2023 09:59:18 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 11/20/23 16:37, Andres Freund wrote:\n> \n> On 2023-11-20 11:11:13 -0500, Robert Haas wrote:\n>> I think we need more votes to make a change this big. I have a\n>> concern, which I think I've expressed before, that we keep whacking\n>> around the backup APIs, and that has a cost which is potentially\n>> larger than the benefits.\n> \n> +1. The amount of whacking around in this area has been substantial, and it's\n> hard for operators to keep up. And realistically, with data sizes today, the\n> pressure to do basebackups with disk snapshots etc is not going to shrink.\n\nTrue enough, but disk snapshots aren't really backups in themselves, in \nmost scenarios, because they reside on the same storage as the cluster. \nOf course, snapshots can be exported, but that's also expensive.\n\nI see snapshots as an adjunct to backups -- a safe backup offsite \nsomewhere for DR and snapshots for day to day operations. Even so, \nmanaging snapshots as backups is harder than people think. It is easy to \nget wrong and end up with silent corruption.\n\n> Leaving that concern aside, I am still on the fence about this proposal. I\n> think it does decrease the chance of getting things wrong in the\n> streaming-basebackup case. But for external backups, it seems almost\n> universally worse (with the exception of the torn pg_control issue, that we\n> also can address otherwise):\n\nWhy universally worse? The software stores pg_control instead of backup \nlabel. The changes to pg_basebackup were pretty trivial and the changes \nto external backup are pretty much the same, at least in my limited \nsample of one.\n\nAnd I don't believe we have a satisfactory solution to the torn \npg_control issue yet. Certainly it has not been committed and Thomas has \nshown enthusiasm for this approach, to the point of hoping it could be \nback patched (it can't).\n\n> It doesn't reduce the risk of getting things wrong, you can still omit placing\n> a file into the data directory and get silent corruption as a consequence. In\n> addition, it's harder to see when looking at a base backup whether the process\n> was right or not, because now the good and bad state look the same if you just\n> look on the filesystem level!\n\nThis is one of the reasons I thought writing just the first 512 bytes of \npg_control would be valuable. It would give an easy indicator that \npg_control came from a backup. Michael was not in favor of conflating \nthat change with this patch -- but I still think it's a good idea.\n\n> Then there's the issue of making ad-hoc debugging harder by not having a\n> human readable file with information anymore, including when looking at the\n> history, via backup_label.old.\n\nYeah, you'd need to use pg_controldata instead. 
But as Michael has \nsuggested, we could also write backup_label as backup_info so there is \nhuman-readable information available.\n\n> Given that, I wonder if what we should do is to just add a new field to\n> pg_control that says \"error out if backup_label does not exist\", that we set\n> when creating a streaming base backup\n\nI'm not in favor of a change only accessible to pg_basebackup or \nexternal software that can manipulate pg_control.\n\nRegards,\n-David", "msg_date": "Tue, 21 Nov 2023 14:13:58 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 11/21/23 13:59, Andres Freund wrote:\n> \n> On 2023-11-21 13:41:15 -0400, David Steele wrote:\n>> On 11/20/23 16:41, Andres Freund wrote:\n>>>\n>>> On 2023-11-20 15:56:19 -0400, David Steele wrote:\n>>>> I understand this is an option -- but does it need to be? What is the\n>>>> benefit of excluding the manifest?\n>>>\n>>> It's not free to create the manifest, particularly if checksums are enabled.\n>>\n>> It's virtually free, even with the basic CRCs.\n> \n> Huh?\n\n<snip>\n\n> I'd not call 7.06->4.77 or 6.76->4.77 \"virtually free\".\n\nOK, but how does that look with compression -- to a remote location? \nUncompressed backup to local storage doesn't seem very realistic. With \ngzip compression we measure SHA1 checksums at about 5% of total CPU. \nObviously that goes up with zstd or lz4, but parallelism helps offset \nthat cost, at least in clock time.\n\nI can't overstate how valuable checksums are in finding corruption, \nespecially in long-lived backups.\n\n>> Anyway, would you really want a backup without a manifest? How would you\n>> know something is missing? In particular, for page incremental how do you\n>> know something is new (but not WAL logged) if there is no manifest? Is the\n>> plan to just recopy anything not WAL logged with each incremental?\n> \n> Shrug. If you just want to create a new standby by copying the primary, I\n> don't think creating and then validating the manifest buys you much. Long term\n> backups are a different story, particularly if data files are stored\n> individually, rather than in a single checksummed file.\n\nFine, but you are probably not using page incremental if just using \npg_basebackup to create a standby. With page incremental, at least one \nof the backups will already exist, which argues for a manifest.\n\n>>> Also, for external backups, there's no manifest...\n>>\n>> There certainly is a manifest for many external backup solutions. Not having\n>> a manifest is just running with scissors, backup-wise.\n> \n> You mean that you have an external solution gin up a backup manifest? 
I fail\n> to see how that's relevant here?\n\nJust saying that for external backups there *is* often a manifest and it \nis a good thing to have.\n\nRegards,\n-David", "msg_date": "Tue, 21 Nov 2023 14:48:59 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "Hi,\n\nOn 2023-11-21 14:48:59 -0400, David Steele wrote:\n> > I'd not call 7.06->4.77 or 6.76->4.77 \"virtually free\".\n> \n> OK, but how does that look with compression\n\nWith compression it's obviously somewhat different - but that part is done in\nparallel, potentially on a different machine with client side compression,\nwhereas I think right now the checksumming is single-threaded, on the server\nside.\n\nWith parallel server side compression, it's still 20% slower with the default\nchecksumming than none. With client side it's 15%.\n\n\n> -- to a remote location?\n\nI think this one unfortunately makes checksums a bigger issue, not a smaller\none. The network interaction piece is single-threaded, adding another\nsignificant use of CPU onto the same thread means that you are hit harder by\nusing a substantial amount of CPU for checksumming in the same thread.\n\nOnce you go beyond the small instances, you have plenty network bandwidth in\ncloud environments. We top out well before the network on bigger instances.\n\n\n> Uncompressed backup to local storage doesn't seem very realistic. With gzip\n> compression we measure SHA1 checksums at about 5% of total CPU.\n\nIMO using gzip is basically infeasible for non-toy sized databases today. I\nthink we're doing our users a disservice by defaulting to it in a bunch of\nplaces. Even if another default exposes them to difficulty due to potentially\nusing a different compiled binary with fewer supported compression methods -\nthat's gonna be very rare in practice.\n\n\n> I can't overstate how valuable checksums are in finding corruption,\n> especially in long-lived backups.\n\nI agree! But I think we need faster checksum algorithms or a faster\nimplementation of the existing ones. And probably default to something faster\nonce we have it.
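\n\n(That choice is already per-backup today, e.g.:\n\n\tpg_basebackup -D /path/to/backup --manifest-checksums=CRC32C\n\tpg_basebackup -D /path/to/backup --manifest-checksums=SHA256\n\nwith CRC32C being the cheap default and the SHA-2 family the expensive\nend -- just to show where a faster default would slot in.)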
\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 21 Nov 2023 12:00:18 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 11/21/23 16:00, Andres Freund wrote:\n> Hi,\n> \n> On 2023-11-21 14:48:59 -0400, David Steele wrote:\n>>> I'd not call 7.06->4.77 or 6.76->4.77 \"virtually free\".\n>>\n>> OK, but how does that look with compression\n> \n> With compression it's obviously somewhat different - but that part is done in\n> parallel, potentially on a different machine with client side compression,\n> whereas I think right now the checksumming is single-threaded, on the server\n> side.\n\nAh, yes, that's certainly a bottleneck.\n\n> With parallel server side compression, it's still 20% slower with the default\n> checksumming than none. With client side it's 15%.\n\nYeah, that still seems a lot. But to a large extent it sounds like a \nlimitation of the current implementation.\n\n>> -- to a remote location?\n> \n> I think this one unfortunately makes checksums a bigger issue, not a smaller\n> one. The network interaction piece is single-threaded, adding another\n> significant use of CPU onto the same thread means that you are hit harder by\n> using a substantial amount of CPU for checksumming in the same thread.\n> \n> Once you go beyond the small instances, you have plenty network bandwidth in\n> cloud environments. We top out well before the network on bigger instances.\n> \n>> Uncompressed backup to local storage doesn't seem very realistic. With gzip\n>> compression we measure SHA1 checksums at about 5% of total CPU.\n> \n> IMO using gzip is basically infeasible for non-toy sized databases today. I\n> think we're doing our users a disservice by defaulting to it in a bunch of\n> places. Even if another default exposes them to difficulty due to potentially\n> using a different compiled binary with fewer supported compression methods -\n> that's gonna be very rare in practice.\n\nYeah, I don't use gzip anymore, but there are still some platforms that \ndo not provide zstd (at least not easily) and lz4 compresses less. One \nthing people do seem to have is a lot of cores.\n\n>> I can't overstate how valuable checksums are in finding corruption,\n>> especially in long-lived backups.\n> \n> I agree! But I think we need faster checksum algorithms or a faster\n> implementation of the existing ones. And probably default to something faster\n> once we have it.\n\nWe've been using xxHash to generate checksums for our block-level \nincremental and it is seriously fast, written by the same guy who did \nzstd and lz4.\n\nRegards,\n-David", "msg_date": "Tue, 21 Nov 2023 16:08:35 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "Greetings,\n\n* David Steele ([email protected]) wrote:\n> On 11/21/23 12:41, Andres Freund wrote:\n> > Sure. They also receive a backup_label today. If an external solution forgets\n> > to replace pg_control copied as part of the filesystem copy, they won't get an\n> > error after the removal of backup_label, just like they don't get one today if\n> > they don't put backup_label in the data directory. Given that users don't do\n> > the right thing with backup_label today, why can we rely on them doing the\n> > right thing with pg_control?\n> \n> I think reliable backup software does the right thing with backup_label, but\n> if the user starts getting errors on recovery they then decide to remove\n> backup_label. I know we can't do much about bad backup software, but we can\n> at least make this a bit more resistant to user error after the fact.\n> \n> It doesn't help that one of our hints suggests removing backup_label. In\n> highly automated systems, the user might not even know they just restored\n> from a backup. They are only in the loop because the restore failed and they\n> are trying to figure out what is going wrong. When they remove backup_label\n> the cluster comes up just fine. Victory!\n\nYup, this is exactly the issue.\n\n> This is the scenario I've seen most often -- not the backup/restore process\n> getting it wrong but the user removing backup_label on their own initiative.\n> And because it yields such a positive result, at least initially, they\n> remember in the future that the thing to do is to remove backup_label\n> whenever they see the error.\n> \n> If they only have pg_control, then their only choice is to get it right or\n> run pg_resetwal. 
Most users have no knowledge of pg_resetwal so it will take\n> them longer to get there. Also, I think that tool makes it pretty clear that\n> corruption will result and the only thing to do is a logical dump and\n> restore after using it.\n\nAgreed.\n\n> There are plenty of ways a user can mess things up. What I'd like to prevent\n> is the appearance of everything being OK when in fact they have corrupted\n> their cluster. That's the situation we have now with backup_label. Is this\n> new solution perfect? No, but I do think it checks several boxes, and is a\n> worthwhile improvement.\n\n+1.\n\nAs for the complaint about 'operators' having issues with the changes\nwe've been making in this area- where are those people complaining,\nexactly? Who are they? I feel like we keep getting this kind of\npush-back in this area from folks on this list but not from actual\nbackup software authors; all the complaints seem to either be \nspeculative or unattributed pass-through from someone else.\n\nWhat would really be helpful would be hearing from these individuals\ndirectly as to what the issues are with the changes, such that perhaps\nwe can do things better in the future to avoid whatever the issue is\nthey're having with the changes. Simply saying we shouldn't make\nchanges in this area isn't workable and the constant push-back is\nactively discouraging to folks trying to make improvements. Obviously\nit's a biased view, but we've not had issues making the necessary\nadjustments in pgbackrest with each release and I feel like if the\nauthors of wal-g or barman did that they would have spoken up.\n\nMaking a change as suggested which only helps pg_basebackup (and tools\nlike pgbackrest, since it's in C and can also make this particular\nchange) ends up leaving tools like wal-g and barman potentially still\nwith an easy way for users of those tools to corrupt their databases-\neven though we've not heard anything from the authors of those tools\nabout issues with the proposed change, nor have there been a lot of\ncomplaints from them about the prior changes to indicate that they'd\neven have an issue with the more involved change. Given the lack of\ncomplaints about past changes, I'd certainly rather err on the side of\nimproved safety for users than on the side of the authors of these tools\npossibly complaining.\n\nWhat these changes have done is finally break things like omnipitr\ncompletely, which hasn't been maintained in a very long time. The\nchanges in v12 broke recovery with omnipitr but not backup, and folks\nwere trying to use omnipitr as recently as with v13[1]. Certainly\nhaving a backup tool that only works for backup (fsvo works, anyway, as\nit still used exclusive backup mode meaning that a crash during a backup\nwould cause the system to not come back up after...) but doesn't work\nfor recovery isn't exactly great and I'm glad that, now, an attempt to\nuse omnipitr to perform a backup will fail. As with lots of other areas\nof PG, folks need to read the release notes and potentially update their\ncode for new major versions. 
If anything, the backup area is less of an\nissue for this because the authors of the backup tools are able to make\nthe change (and who are often the ones pushing for these changes) and\nthe end-user isn't impacted at all.\n\nMuch the same can be said for wal-e, with users still trying to use it\neven long after it was stated to be obsolete (the Obsolescence Notice[2]\nwas added in February 2022, though it hadn't been maintained for a while\nbefore that, and an issue was opened in December 2022 asking for it to\nbe updated to v15[3]...).\n\nThanks,\n\nStephen\n\n[1]: https://github.com/omniti-labs/omnipitr/issues/43\n[2]: https://github.com/wal-e/wal-e/commit/f5b3e790fe10daa098b8cbf01d836c4885dc13c7\n[3]: https://github.com/wal-e/wal-e/issues/433", "msg_date": "Sun, 26 Nov 2023 03:42:43 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Sun, Nov 26, 2023 at 3:42 AM Stephen Frost <[email protected]> wrote:\n> What would really be helpful would be hearing from these individuals\n> directly as to what the issues are with the changes, such that perhaps\n> we can do things better in the future to avoid whatever the issue is\n> they're having with the changes. Simply saying we shouldn't make\n> changes in this area isn't workable and the constant push-back is\n> actively discouraging to folks trying to make improvements. Obviously\n> it's a biased view, but we've not had issues making the necessary\n> adjustments in pgbackrest with each release and I feel like if the\n> authors of wal-g or barman did that they would have spoken up.\n\nI'm happy if people show up to comment on proposed changes, but I\nthink you're being a little bit unrealistic here. I have had to help\nplenty of people who have screwed up their backups in one way or\nanother, generally by using some home-grown script, sometimes by\nmisusing some existing backup tool. Those people are EDB customers;\nthey don't read and participate in discussions here. If they did,\nperhaps they wouldn't be paying EDB to have me and my colleagues sort\nthings out for them when it all goes wrong. I'm not trying to say that\nEDB doesn't have customers who participate in mailing list\ndiscussions, because we do, but it's a small minority, and I don't\nthink that should surprise anyone. Moreover, the people who don't\nwouldn't necessarily have the background, expertise, or *time* to\nassess specific proposals in detail. If your point is that my\nperspective on what's helpful or unhelpful is not valid because I've\nonly helped 30 people who had problems in this area, but that the\nperspective of those 30 people who were helped would be more valid,\nwell, I don't agree with that. I think your perspective and David's\nare valid precisely *because* you've worked a lot on pgbackrest and no\ndoubt interacted with lots of users; I think Andres's perspective is\nvalid precisely *because* of his experience working with the fleet at\nMicrosoft and individual customers at EDB and 2Q before that; and I\nthink my perspective is valid for the same kinds of reasons.\n\nI am more in agreement with the idea that it would be nice to hear\nfrom backup tool authors, but I think even that has limited value.\nSurely we can all agree that if the backup tool is correctly written,\nnone of this matters, because you'll make the tool do the right things\nand then you'll be fine. 
The difficulty here, and the motivation\nbehind this proposal and others like it, is that too many users fail\nto follow the procedure correctly. If we hear from the authors of\nwell-written backup tools, I expect they will tell us they can adapt\ntheir tool to whatever we do. And if we hear from the authors of\npoorly-written tools, well, I don't think their opinions would form a\ngreat basis for making decisions.\n\n> [ lengthy discussion of tools that don't work any more ]\n\nWhat confuses me here is that you seem to be arguing that we should\n*once again* make a breaking change to the backup API, but at the same\ntime you're acknowledging that there are plenty of tools out there on\nthe Internet that have gotten broken by previous rounds of changes.\nIt's only one step from there to conclude that whacking the API around\ndoes more harm than good, but you seem to reject that conclusion.\n\nPersonally, I haven't yet seen any evidence that the removal of\nexclusive backup mode made any real difference one way or the other. I\nthink I've heard about people needing to adjust code for it, but not\nabout that being a problem. I have yet to run into anyone who was\npreviously using it but, because it was deprecated, switched to doing\nsomething better and safer. Have you?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 27 Nov 2023 13:58:09 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "Greetings,\n\n* Robert Haas ([email protected]) wrote:\n> On Sun, Nov 26, 2023 at 3:42 AM Stephen Frost <[email protected]> wrote:\n> > What would really be helpful would be hearing from these individuals\n> > directly as to what the issues are with the changes, such that perhaps\n> > we can do things better in the future to avoid whatever the issue is\n> > they're having with the changes. Simply saying we shouldn't make\n> > changes in this area isn't workable and the constant push-back is\n> > actively discouraging to folks trying to make improvements. Obviously\n> > it's a biased view, but we've not had issues making the necessary\n> > adjustments in pgbackrest with each release and I feel like if the\n> > authors of wal-g or barman did that they would have spoken up.\n> \n> I'm happy if people show up to comment on proposed changes, but I\n> think you're being a little bit unrealistic here. I have had to help\n> plenty of people who have screwed up their backups in one way or\n> another, generally by using some home-grown script, sometimes by\n> misusing some existing backup tool. Those people are EDB customers;\n> they don't read and participate in discussions here. If they did,\n> perhaps they wouldn't be paying EDB to have me and my colleagues sort\n> things out for them when it all goes wrong. I'm not trying to say that\n> EDB doesn't have customers who participate in mailing list\n> discussions, because we do, but it's a small minority, and I don't\n> think that should surprise anyone. Moreover, the people who don't\n> wouldn't necessarily have the background, expertise, or *time* to\n> assess specific proposals in detail. If your point is that my\n> perspective on what's helpful or unhelpful is not valid because I've\n> only helped 30 people who had problems in this area, but that the\n> perspective of those 30 people who were helped would be more valid,\n> well, I don't agree with that. 
I think your perspective and David's\n> are valid precisely *because* you've worked a lot on pgbackrest and no\n> doubt interacted with lots of users; I think Andres's perspective is\n> valid precisely *because* of his experience working with the fleet at\n> Microsoft and individual customers at EDB and 2Q before that; and I\n> think my perspective is valid for the same kinds of reasons.\n\nI didn't mean to imply that anyone's perspective wasn't valid. I was\nsimply trying to get at the root question of: what *is* the issue with\nthe changes that are being made? If the answer to that is: we made\nthis change, which was hard for folks to deal with, and could have\nbeen avoided by doing X, then I really, really want to hear what X\nwas! If the answer is, well, the changes weren't hard, but we didn't\nlike having to make any changes at all ... then I just don't have any\nsympathy for that. People who write backup software for PG, be it\npgbackrest authors, wal-g authors, or homegrown script authors, will\nneed to adapt between major versions as we discover things that are\nbroken (such as exclusive mode, and such as the clear risk that's been\ndemonstrated of a torn copy of pg_control getting copied, resulting in\na completely invalid backup) and fix them.\n\n> I am more in agreement with the idea that it would be nice to hear\n> from backup tool authors, but I think even that has limited value.\n> Surely we can all agree that if the backup tool is correctly written,\n> none of this matters, because you'll make the tool do the right things\n> and then you'll be fine. The difficulty here, and the motivation\n> behind this proposal and others like it, is that too many users fail\n> to follow the procedure correctly. If we hear from the authors of\n> well-written backup tools, I expect they will tell us they can adapt\n> their tool to whatever we do. And if we hear from the authors of\n> poorly-written tools, well, I don't think their opinions would form a\n> great basis for making decisions.\n\nUhhh. No, I disagree with this- I'd argue that pgbackrest was broken\nuntil the most recent releases where we implemented a check to ensure\nthat the pg_control we copy has a valid PG CRC. Did we know it was\nbroken before this discussion? No, but that doesn't change the fact\nthat we certainly could have ended up copying an invalid pg_control and\nthus have an invalid backup, which even our 'pgbackrest verify' wouldn't\nhave caught because that just checks that the checksum that pgbackrest\ncalculates for every file hasn't changed since we copied it- but that\ndidn't do anything for the issue about pg_control having an invalid\ninternal checksum due to a torn write when we copied it.\n\nSo, yes, it does matter. We didn't make pgbackrest do the right thing\nin this case because we thought it was true that you couldn't get a torn\nread of pg_control; Thomas showed that wasn't true and that puts all of\nour users at risk. Thankfully it's somewhat minimal since we always copy\npg_control from the primary ... but still, it's not right, and we've\nnow taken steps to address it. Unfortunately, other tools are going to\nhave a more difficult time because they're not written in C, but we\nstill care about them, and that's why we're pushing for this change- to\nallow them to get a pretty much guaranteed valid pg_control from PG to\nstore without having to figure out how to validate it themselves.\n\n> > [ lengthy discussion of tools that don't work any more ]\n> \n> What confuses me here is that you seem to be arguing that we should\n> *once again* make a breaking change to the backup API, but at the same\n> time you're acknowledging that there are plenty of tools out there on\n> the Internet that have gotten broken by previous rounds of changes.\n\nThe broken ones aren't being maintained. Yes, I'm happy to have those\nexplicitly and clearly broken. I don't want people using outdated,\nbroken, and unmaintained tools to back up their PG databases.\n\n> It's only one step from there to conclude that whacking the API around\n> does more harm than good, but you seem to reject that conclusion.\n\nWe change the API because it objectively, clearly, addresses real issues\nthat users can run into that will cause them to have invalid backups if\nleft the way it is. That backup software authors need to adjust to this\nisn't a bad thing- it's a good thing, because we're fixing things and\nthey should be thrilled to have these issues addressed that they may not\nhave even considered.\n\n> Personally, I haven't yet seen any evidence that the removal of\n> exclusive backup mode made any real difference one way or the other. I\n> think I've heard about people needing to adjust code for it, but not\n> about that being a problem. I have yet to run into anyone who was\n> previously using it but, because it was deprecated, switched to doing\n> something better and safer. Have you?\n\nI'm glad that people haven't had a problem adjusting their code to the\nremoval of exclusive backup mode, that's good, and leaves me, again, a\nbit confused at what the issue here is about changing things- apparently\npeople don't actually have a problem with it, yet it keeps getting\nraised as an issue every time we change things in this area. I don't\nunderstand that.\n\nI'm not following the question entirely, I don't think. Most backup\ntool authors actively changed to using non-exclusive backup when\nexclusive backup mode was deprecated, certainly pgbackrest did and we've\nbeen using non-exclusive backup mode since it was available. Are you\nsaying that, because everyone moved off of it, we should have kept it?\nIn that case the answer is clearly no- omnipitr, at the least, didn't\nupdate to non-exclusive and therefore continued to run with the risk\nthat a crash during a backup would result in a cluster that wouldn't\nstart without manual intervention (an issue I've definitely heard about\na number of times, even recently) and that manual intervention (remove\nthe backup_label file) actively results in a *corrupt* cluster if the\nuser is actually restoring from a backup, which makes it a really terrible\ndirection to give someone. Here, use this hack- but only if you're 100%\ncoming back from a crash and absolutely never, ever, ever if you're\nactually restoring from a backup.\n\nThanks!\n\nStephen", "msg_date": "Tue, 28 Nov 2023 10:42:50 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Mon, 20 Nov 2023 at 06:46, Michael Paquier <[email protected]> wrote:\n>\n> (I am not exactly sure how, but we've lost pgsql-hackers on the way\n> when you sent v5. Now added back in CC with the two latest patches\n> you've proposed attached.)\n>\n> Here is a short summary of what has been missed by the lists:\n> - I've commented that the patch should not create, not show up in\n> fields returned by the SQL functions, or stream control files with a size\n> of 512B, just stick to 8kB. If this is worth changing, it should be\n> applied consistently across the board including initdb, discussed on\n> its own thread.\n> - The backup-related fields in the control file are reset at the end\n> of recovery. I've suggested to not do that to keep a trace of what\n> was happening during recovery. The latest version of the patch resets\n> the fields.\n> - With the backup_label file gone, we lose some information in the\n> backups themselves, which is not good. Instead, you have suggested an\n> approach where this data is added to the backup manifest, meaning that\n> no information would be lost, particularly useful for self-contained\n> backups. The fields planned to be added to the backup manifest are:\n> -- The start and end time of the backup, the end timestamp being\n> useful to know when stop time can be used for PITR.\n> -- The backup label.\n> I've agreed that it may be the best thing to do at this end to not\n> lose any data related to the removal of the backup_label file.\n>\n> On Sun, Nov 19, 2023 at 02:14:32PM -0400, David Steele wrote:\n> > On 11/15/23 20:03, Michael Paquier wrote:\n> >> As the label is only an informational field, the parsing added to\n> >> pg_verifybackup is not really needed because it is used nowhere in the\n> >> validation process, so keeping the logic simpler would be the way to\n> >> go IMO. This is contrary to the WAL range for example, where start\n> >> and end LSNs are used for validation with a pg_waldump command.\n> >> Robert, any comments about the addition of the label in the manifest?\n> >\n> > I'm sure Robert will comment on this when he gets the time, but for now I\n> > have backed off on passing the new info to pg_verifybackup and added\n> > start/stop time.\n>\n> FWIW, I'm OK with the bits for the backup manifest as presented. So\n> if there are no remarks and/or no objections, I'd like to apply it but\n> let's give some room to others to comment on that as there's been a gap\n> in the emails exchanged on pgsql-hackers. I hope that the summary\n> I've posted above covers everything. So let's see about doing\n> something around the middle of next week. With Thanksgiving in the\n> US, a lot of folks will not have the time to monitor what's happening\n> on this thread.\n>\n> + The end time for the backup. This is when the backup was stopped in\n> + <productname>PostgreSQL</productname> and represents the earliest time\n> + that can be used for time-based Point-In-Time Recovery.\n>\n> This one is actually a very good point. 
We'd lost this capacity with\n> the backup_label file gone without the end timestamps in the control\n> file.\n>\n> > New patches attached based on b218fbb7.\n>\n> I've noticed on the other thread the remark about being less\n> aggressive with the fields related to recovery in the control file, so\n> I assume that this patch should leave the fields be after the end of\n> recovery from the start and only rely on backupRecoveryRequired to\n> decide if the recovery should use the fields or not:\n> https://www.postgresql.org/message-id/241ccde1-1928-4ba2-a0bb-5350f7b191a8@=pgmasters.net\n>\n> + ControlFile->backupCheckPoint = InvalidXLogRecPtr;\n> ControlFile->backupStartPoint = InvalidXLogRecPtr;\n> + ControlFile->backupStartPointTLI = 0;\n> ControlFile->backupEndPoint = InvalidXLogRecPtr;\n> + ControlFile->backupFromStandby = false;\n> ControlFile->backupEndRequired = false;\n>\n> Still, I get the temptation of being consistent with the current style\n> on HEAD to reset everything, as well..\n\nCFBot shows that the patch does not apply anymore as in [1]:\n\n=== Applying patches on top of PostgreSQL commit ID\n7014c9a4bba2d1b67d60687afb5b2091c1d07f73 ===\n=== applying patch ./recovery-in-pgcontrol-v7-0001-add-info-to-manifest.patch\npatching file doc/src/sgml/backup-manifest.sgml\npatching file src/backend/backup/backup_manifest.c\npatching file src/backend/backup/basebackup.c\nHunk #1 succeeded at 238 (offset 13 lines).\nHunk #2 succeeded at 258 (offset 13 lines).\nHunk #3 succeeded at 399 (offset 17 lines).\nHunk #4 succeeded at 652 (offset 17 lines).\ncan't find file to patch at input line 219\nPerhaps you used the wrong -p or --strip option?\nThe text leading up to this was:\n--------------------------\n|diff --git a/src/bin/pg_verifybackup/parse_manifest.c\nb/src/bin/pg_verifybackup/parse_manifest.c\n|index bf0227c668..408af88e58 100644\n|--- a/src/bin/pg_verifybackup/parse_manifest.c\n|+++ b/src/bin/pg_verifybackup/parse_manifest.c\n--------------------------\nNo file to patch. Skipping patch.\n9 out of 9 hunks ignored\npatching file src/include/backup/backup_manifest.h\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_3511.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 26 Jan 2024 18:27:30 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On Fri, Jan 26, 2024 at 06:27:30PM +0530, vignesh C wrote:\n> Please post an updated version for the same.\n> \n> [1] - http://cfbot.cputube.org/patch_46_3511.log\n\nWith the recent introduction of incremental backups that depend on\nbackup_label and the rather negative feedback received, I think that\nit would be better to return this entry as RwF for now. What do you\nthink?\n--\nMichael", "msg_date": "Mon, 29 Jan 2024 08:11:32 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 1/28/24 19:11, Michael Paquier wrote:\n> On Fri, Jan 26, 2024 at 06:27:30PM +0530, vignesh C wrote:\n>> Please post an updated version for the same.\n>>\n>> [1] - http://cfbot.cputube.org/patch_46_3511.log\n> \n> With the recent introduction of incremental backups that depend on\n> backup_label and the rather negative feedback received, I think that\n> it would be better to return this entry as RwF for now. 
What do you\n> think?\n\nI've been thinking it makes little sense to update the patch. It would \nbe a lot of work with all the new changes for incremental backup and \nsince Andres and Robert appear to be very against the idea, I doubt it \nwould be worth the effort.\n\nI have withdrawn the patch.\n\nRegards,\n-David\n\n\n", "msg_date": "Sun, 28 Jan 2024 19:28:41 -0400", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 1/29/24 12:28, David Steele wrote:\n> On 1/28/24 19:11, Michael Paquier wrote:\n>> On Fri, Jan 26, 2024 at 06:27:30PM +0530, vignesh C wrote:\n>>> Please post an updated version for the same.\n>>>\n>>> [1] - http://cfbot.cputube.org/patch_46_3511.log\n>>\n>> With the recent introduction of incremental backups that depend on\n>> backup_label and the rather negative feedback received, I think that\n>> it would be better to return this entry as RwF for now.  What do you\n>> think?\n> \n> I've been thinking it makes little sense to update the patch. It would \n> be a lot of work with all the new changes for incremental backup and \n> since Andres and Robert appear to be very against the idea, I doubt it \n> would be worth the effort.\n\nI've had a new idea which may revive this patch. The basic idea is to \nkeep backup_label but also return a copy of pg_control from \npg_stop_backup(). This copy of pg_control would be safe from tears and \nhave a backupLabelRequired field set (as Andres suggested) so recovery \ncannot proceed without the backup label.\n\nSo, everything will continue to work as it does now. But, backup \nsoftware can be enhanced to write the improved pg_control that is \nguaranteed not to be torn and has protection against a missing backup label.\n\nOf course, pg_basebackup will write the new backupLabelRequired field \ninto pg_control, but this way third party software can also gain \nadvantages from the new field.\n\nThoughts?\n\nRegards,\n-David\n\n\n", "msg_date": "Sun, 10 Mar 2024 16:47:26 +1300", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "Hi,\n\nOn Sunday, March 10th, 2024 at 4:47 AM, David Steele wrote:\n> I've had a new idea which may revive this patch. The basic idea is to\n> keep backup_label but also return a copy of pg_control from\n> pg_stop_backup(). This copy of pg_control would be safe from tears and\n> have a backupLabelRequired field set (as Andres suggested) so recovery\n> cannot proceed without the backup label.\n> \n> So, everything will continue to work as it does now. But, backup\n> software can be enhanced to write the improved pg_control that is\n> guaranteed not to be torn and has protection against a missing backup label.\n> \n> Of course, pg_basebackup will write the new backupLabelRequired field\n> into pg_control, but this way third party software can also gain\n> advantages from the new field.\n\nBump on this idea.\n\nGiven the discussion in [1], even if it obviously makes sense to improve the in-core backup capabilities, the more we go in that direction, the more we'll rely on outside orchestration.\nSo IMHO it is also worth worrying about giving more leverage to such orchestration tools. In that sense, I really like the idea to extend the backup functions.\n\nMore thoughts?\n\nThanks all,\nKind Regards,\n--\nStefan FERCOT\nData Egret (https://dataegret.com)\n\n[1] https://www.postgresql.org/message-id/lwXoqQdOT9Nw1tJIx_h7WuqMKrB1YMePQY99RFTZ87H7V52mgUJaSlw2WRbcOgKNUurF1yJqX3nqtZi4hJhtd3e_XlmLsLvnEtGXY-fZPoA%3D%40protonmail.com\n\n\n", "msg_date": "Tue, 16 Apr 2024 08:55:08 +0000", "msg_from": "Stefan Fercot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" }, { "msg_contents": "On 4/16/24 18:55, Stefan Fercot wrote:\n> Hi,\n> \n> On Sunday, March 10th, 2024 at 4:47 AM, David Steele wrote:\n>> I've had a new idea which may revive this patch. The basic idea is to\n>> keep backup_label but also return a copy of pg_control from\n>> pg_stop_backup(). This copy of pg_control would be safe from tears and\n>> have a backupLabelRequired field set (as Andres suggested) so recovery\n>> cannot proceed without the backup label.\n>>\n>> So, everything will continue to work as it does now. But, backup\n>> software can be enhanced to write the improved pg_control that is\n>> guaranteed not to be torn and has protection against a missing backup label.\n>>\n>> Of course, pg_basebackup will write the new backupLabelRequired field\n>> into pg_control, but this way third party software can also gain\n>> advantages from the new field.\n> \n> Bump on this idea.\n> \n> Given the discussion in [1], even if it obviously makes sense to improve the in-core backup capabilities, the more we go in that direction, the more we'll rely on outside orchestration.\n> So IMHO it is also worth worrying about giving more leverage to such orchestration tools. In that sense, I really like the idea to extend the backup functions.\n\nI have implemented this idea and created a new thread [1] for it. \nHopefully it will address the concerns expressed in this thread.\n\nRegards,\n-David\n\n[1] \nhttps://www.postgresql.org/message-id/e2636c5d-c031-43c9-a5d6-5e5c7e4c5514%40pgmasters.net\n\n\n", "msg_date": "Fri, 17 May 2024 12:53:53 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add recovery to pg_control and remove backup_label" } ]
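
A note on the torn-read problem that drives much of the thread above: pg_control stores a CRC-32C as the last field of ControlFileData, computed over every byte that precedes it, so a copying tool can detect a torn or otherwise inconsistent read and simply retry. The following is a minimal sketch of that pattern in frontend C, not pgbackrest's actual implementation; the retry count, sleep interval, and error handling are illustrative assumptions:

```
/* Sketch: read pg_control and verify its internal CRC, retrying on a
 * mismatch, since a mismatch may be a torn read rather than corruption. */
#include "postgres_fe.h"

#include <fcntl.h>
#include <unistd.h>

#include "catalog/pg_control.h"
#include "port/pg_crc32c.h"

static bool
read_verified_control_file(const char *path, ControlFileData *ctrl)
{
	for (int attempt = 0; attempt < 5; attempt++)
	{
		int			fd = open(path, O_RDONLY | PG_BINARY, 0);
		pg_crc32c	crc;

		if (fd < 0)
			return false;
		if (read(fd, ctrl, sizeof(ControlFileData)) != sizeof(ControlFileData))
		{
			close(fd);
			return false;
		}
		close(fd);

		/* The stored CRC covers everything before the crc field itself. */
		INIT_CRC32C(crc);
		COMP_CRC32C(crc, (char *) ctrl, offsetof(ControlFileData, crc));
		FIN_CRC32C(crc);
		if (EQ_CRC32C(crc, ctrl->crc))
			return true;		/* internally consistent copy obtained */

		pg_usleep(100 * 1000);	/* 100ms: wait out a concurrent rewrite */
	}
	return false;
}
```

Having pg_stop_backup() hand back a server-verified copy of pg_control, as proposed at the end of the thread, sidesteps this dance entirely: the server provides a consistent image and the tool only has to store it.
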
[ { "msg_contents": "Hello.\n\nSome messages recently introduced by commit 29d0a77fa6 seem to deviate\nslightly from our standards.\n\n+\t\tif (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n+\t\t{\n+\t\t\tereport(ERROR,\n+\t\t\t\t\terrcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+\t\t\t\t\terrmsg(\"replication slots must not be invalidated during the upgrade\"),\n+\t\t\t\t\terrhint(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"));\n\nThe message for errhint is not a complete sentence. And errmsg is not\nin telegraph style. The first attached makes minimum changes.\n\nHowever, if allowed, I'd like to propose an alternative set of\nmessages as follows:\n\n+\t\t\t\t\terrmsg(\"replication slot is invalidated during upgrade\"),\n+\t\t\t\t\terrhint(\"Set \\\"max_slot_wal_keep_size\\\" to -1 to avoid invalidation.\"));\n\nThe second attached does this.\n\nWhat do you think about those?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 27 Oct 2023 11:57:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "A recent message added to pg_upgade" }, { "msg_contents": "On Fri, Oct 27, 2023 at 8:28 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> Hello.\n>\n> Some messages recently introduced by commit 29d0a77fa6 seem to deviate\n> slightly from our standards.\n>\n> + if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n> + {\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"replication slots must not be invalidated during the upgrade\"),\n> + errhint(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"));\n>\n> The message for errhint is not a complete sentence.\n\nYeah, the hint message should have ended with a period -\nhttps://www.postgresql.org/docs/current/error-style-guide.html#ERROR-STYLE-GUIDE-GRAMMAR-PUNCTUATION.\n\n> The second attached does this.\n>\n> What do you think about those?\n\n+ errmsg(\"replication slot is invalidated during upgrade\"),\n+ errhint(\"Set \\\"max_slot_wal_keep_size\\\" to -1 to\navoid invalidation.\"));\n }\n\nThe above errhint LGTM. How about a slightly different errmsg, like\nthe following?\n\n+ errmsg(\"cannot invalidate replication slots when\nin binary upgrade mode\"),\n+ errhint(\"Set \\\"max_slot_wal_keep_size\\\" to -1 to\navoid invalidation.\"));\n\n\".... when in binary upgrade mode\" is being used in many places.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 27 Oct 2023 08:51:47 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Fri, Oct 27, 2023 at 8:52 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Fri, Oct 27, 2023 at 8:28 AM Kyotaro Horiguchi:\n> The above errhint LGTM. How about a slightly different errmsg, like\n> the following?\n>\n> + errmsg(\"cannot invalidate replication slots when\n> in binary upgrade mode\"),\n> + errhint(\"Set \\\"max_slot_wal_keep_size\\\" to -1 to\n> avoid invalidation.\"));\n>\n> \".... 
when in binary upgrade mode\" is being used in many places.\n>\n\nBy this time slot may be already invalidated, so how about:\n\"replication slot was invalidated when in binary upgrade mode\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 27 Oct 2023 09:36:13 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Fri, Oct 27, 2023 at 1:58 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> Hello.\n>\n> Some messages recently introduced by commit 29d0a77fa6 seem to deviate\n> slightly from our standards.\n>\n> + if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n> + {\n> + ereport(ERROR,\n> + errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"replication slots must not be invalidated during the upgrade\"),\n> + errhint(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"));\n>\n> The message for errhint is not a complete sentence. And errmsg is not\n> in telegraph style. The first attached makes minimum changes.\n>\n> However, if allowed, I'd like to propose an alternative set of\n> messages as follows:\n>\n> + errmsg(\"replication slot is invalidated during upgrade\"),\n> + errhint(\"Set \\\"max_slot_wal_keep_size\\\" to -1 to avoid invalidation.\"));\n>\n> The second attached does this.\n>\n> What do you think about those?\n>\n\nIIUC the only possible way to reach this error (according to the\ncomment preceding it) is by the user overriding the GUC value (which\nwas defaulted -1) on the command line.\n\n+ /*\n+ * The logical replication slots shouldn't be invalidated as\n+ * max_slot_wal_keep_size GUC is set to -1 during the upgrade.\n+ *\n+ * The following is just a sanity check.\n+ */\n\nGiven that, I felt a more relevant msg/hint might be like:\n\nerrmsg(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"),\nerrhint(\"Do not override \\\"max_slot_wal_keep_size\\\" using command line\noptions.\"));\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 27 Oct 2023 15:06:50 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Fri, Oct 27, 2023 at 9:37 AM Peter Smith <[email protected]> wrote:\n>\n> On Fri, Oct 27, 2023 at 1:58 PM Kyotaro Horiguchi\n> <[email protected]> wrote:\n> >\n> > Hello.\n> >\n> > Some messages recently introduced by commit 29d0a77fa6 seem to deviate\n> > slightly from our standards.\n> >\n> > + if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n> > + {\n> > + ereport(ERROR,\n> > + errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > + errmsg(\"replication slots must not be invalidated during the upgrade\"),\n> > + errhint(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"));\n> >\n> > The message for errhint is not a complete sentence. And errmsg is not\n> > in telegraph style. 
The first attached makes minimum changes.\n> >\n> > However, if allowed, I'd like to propose an alternative set of\n> > messages as follows:\n> >\n> > + errmsg(\"replication slot is invalidated during upgrade\"),\n> > + errhint(\"Set \\\"max_slot_wal_keep_size\\\" to -1 to avoid invalidation.\"));\n> >\n> > The second attached does this.\n> >\n> > What do you think about those?\n> >\n>\n> IIUC the only possible way to reach this error (according to the\n> comment preceding it) is by the user overriding the GUC value (which\n> was defaulted -1) on the command line.\n>\n\nYeah, this is my understanding as well.\n\n> + /*\n> + * The logical replication slots shouldn't be invalidated as\n> + * max_slot_wal_keep_size GUC is set to -1 during the upgrade.\n> + *\n> + * The following is just a sanity check.\n> + */\n>\n> Given that, I felt a more relevant msg/hint might be like:\n>\n> errmsg(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"),\n> errhint(\"Do not override \\\"max_slot_wal_keep_size\\\" using command line\n> options.\"));\n>\n\nBut OTOH, we don't have a value of user-passed options to ensure that.\nSo, how about a slightly different message: \"This can be caused by\noverriding \\\"max_slot_wal_keep_size\\\" using command line options.\" or\nsomething along those lines? I see a somewhat similar message in the\nexisting code (errhint(\"This can be caused ...\")).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 27 Oct 2023 09:51:43 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Fri, Oct 27, 2023 at 9:36 AM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Oct 27, 2023 at 8:52 AM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Fri, Oct 27, 2023 at 8:28 AM Kyotaro Horiguchi:\n> > The above errhint LGTM. How about a slightly different errmsg, like\n> > the following?\n> >\n> > + errmsg(\"cannot invalidate replication slots when\n> > in binary upgrade mode\"),\n> > + errhint(\"Set \\\"max_slot_wal_keep_size\\\" to -1 to\n> > avoid invalidation.\"));\n> >\n> > \".... when in binary upgrade mode\" is being used in many places.\n> >\n>\n> By this time slot may be already invalidated, so how about:\n> \"replication slot was invalidated when in binary upgrade mode\"?\n\nIn this error spot, the slot is invalidated in memory but the invalidated\nstate is not persisted to disk; that happens somewhere later:\n\n else\n {\n /*\n * We hold the slot now and have already invalidated it; flush it\n * to ensure that state persists.\n *\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 27 Oct 2023 10:29:28 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Fri, Oct 27, 2023 at 9:52 AM Amit Kapila <[email protected]> wrote:\n>\n> > errmsg(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"),\n> > errhint(\"Do not override \\\"max_slot_wal_keep_size\\\" using command line\n> > options.\"));\n> >\n>\n> But OTOH, we don't have a value of user-passed options to ensure that.\n> So, how about a slightly different message: \"This can be caused by\n> overriding \\\"max_slot_wal_keep_size\\\" using command line options.\" or\n> something along those lines? 
I see a somewhat similar message in the\n> existing code (errhint(\"This can be caused ...\")).\n\nI get it. I think having errdetail explaining the possible cause of\nthe error is wanted here, something like:\n\nerrmsg(\"cannot invalidate replication slots when in binary upgrade mode\"),\nerrdetail(\"This can be caused by overriding \\\"max_slot_wal_keep_size\\\"\nusing command line options.\"));\nerrhint(\"Do not override or set \\\"max_slot_wal_keep_size\\\" to -1 .\"));\n\nThoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 27 Oct 2023 10:59:52 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "At Fri, 27 Oct 2023 09:51:43 +0530, Amit Kapila <[email protected]> wrote in \r\n> On Fri, Oct 27, 2023 at 9:37 AM Peter Smith <[email protected]> wrote:\r\n> > IIUC the only possible way to reach this error (according to the\r\n> > comment preceding it) is by the user overriding the GUC value (which\r\n> > was defaulted -1) on the command line.\r\n> >\r\n> \r\n> Yeah, this is my understanding as well.\r\n> \r\n> > + /*\r\n> > + * The logical replication slots shouldn't be invalidated as\r\n> > + * max_slot_wal_keep_size GUC is set to -1 during the upgrade.\r\n> > + *\r\n> > + * The following is just a sanity check.\r\n> > + */\r\n> >\r\n> > Given that, I felt a more relevant msg/hint might be like:\r\n> >\r\n> > errmsg(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"),\r\n> > errhint(\"Do not override \\\"max_slot_wal_keep_size\\\" using command line\r\n> > options.\"));\r\n> >\r\n> \r\n> But OTOH, we don't have a value of user-passed options to ensure that.\r\n> So, how about a slightly different message: \"This can be caused by\r\n> overriding \\\"max_slot_wal_keep_size\\\" using command line options.\" or\r\n> something along those lines? I see a somewhat similar message in the\r\n> existing code (errhint(\"This can be caused ...\")).\r\n\r\nThe suggested error message looks to me like that of the GUC\r\nmechanism. While I don't have the wider picture about the feature,\r\nmight we consider rejecting the parameter setting? With that\r\nmodification, this message can be changed to elog one.\r\n\r\nI believe it's somewhat inconsiderate to point out what shouldn't have\r\nbeen done only after a problem has occurred.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Fri, 27 Oct 2023 14:35:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On 2023-Oct-27, Kyotaro Horiguchi wrote:\n\n> @@ -1433,8 +1433,8 @@ InvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,\n> \t\t{\n> \t\t\tereport(ERROR,\n> \t\t\t\t\terrcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> -\t\t\t\t\terrmsg(\"replication slots must not be invalidated during the upgrade\"),\n> -\t\t\t\t\terrhint(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"));\n\nHmm, if I read this code right, this error is going to be thrown by the\ncheckpointer while finishing a checkpoint. Fortunately, the checkpoint\nrecord has already been written, but I don't know what would happen if\nthis is thrown while trying to write the shutdown checkpoint. Probably\nnothing terribly good.\n\nI don't think this is useful. 
If the setting is invalid during binary\nupgrade, let's prevent it from being set at all right from the start of\nthe upgrade process. In InvalidatePossiblyObsoleteSlot() we could have\njust an Assert() or elog(PANIC).\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 27 Oct 2023 10:31:54 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Fri, Oct 27, 2023 at 2:02 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Oct-27, Kyotaro Horiguchi wrote:\n>\n> > @@ -1433,8 +1433,8 @@ InvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,\n> > {\n> > ereport(ERROR,\n> > errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > - errmsg(\"replication slots must not be invalidated during the upgrade\"),\n> > - errhint(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"));\n>\n> Hmm, if I read this code right, this error is going to be thrown by the\n> checkpointer while finishing a checkpoint. Fortunately, the checkpoint\n> record has already been written, but I don't know what would happen if\n> this is thrown while trying to write the shutdown checkpoint. Probably\n> nothing terribly good.\n>\n> I don't think this is useful. If the setting is invalid during binary\n> upgrade, let's prevent it from being set at all right from the start of\n> the upgrade process.\n\nWe are already forcing the required setting\n\"max_slot_wal_keep_size=-1\" during the upgrade similar to some of the\nother settings like \"full_page_writes\". However, the user can provide\nan option for \"max_slot_wal_keep_size\" in which case it will be\noverridden. Now, I think (a) we can ensure that our setting always\ntakes precedence in this case. The other idea is (b) to parse the\nuser-provided options and check if \"max_slot_wal_keep_size\" has a\nvalue different than expected and raise an error accordingly. Or we\ncan simply (c) document the usage of max_slot_wal_keep_size in the\nupgrade. I am not sure whether it is worth complicating the code for\nthis as the user shouldn't be using such an option during the upgrade.\nSo, I think doing (a) and (c) could be simpler.\n\n>\n> In InvalidatePossiblyObsoleteSlot() we could have\n> just an Assert() or elog(PANIC).\n>\n\nYeah, we can change to either of those.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 27 Oct 2023 14:57:10 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "At Fri, 27 Oct 2023 14:57:10 +0530, Amit Kapila <[email protected]> wrote in \r\n> On Fri, Oct 27, 2023 at 2:02 PM Alvaro Herrera <[email protected]> wrote:\r\n> >\r\n> > On 2023-Oct-27, Kyotaro Horiguchi wrote:\r\n> >\r\n> > > @@ -1433,8 +1433,8 @@ InvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,\r\n> > > {\r\n> > > ereport(ERROR,\r\n> > > errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> > > - errmsg(\"replication slots must not be invalidated during the upgrade\"),\r\n> > > - errhint(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"));\r\n> >\r\n> > Hmm, if I read this code right, this error is going to be thrown by the\r\n> > checkpointer while finishing a checkpoint. Fortunately, the checkpoint\r\n> > record has already been written, but I don't know what would happen if\r\n> > this is thrown while trying to write the shutdown checkpoint. 
Probably\r\n> > nothing terribly good.\r\n> >\r\n> > I don't think this is useful. If the setting is invalid during binary\r\n> > upgrade, let's prevent it from being set at all right from the start of\r\n> > the upgrade process.\r\n> \r\n> We are already forcing the required setting\r\n> \"max_slot_wal_keep_size=-1\" during the upgrade similar to some of the\r\n> other settings like \"full_page_writes\". However, the user can provide\r\n> an option for \"max_slot_wal_keep_size\" in which case it will be\r\n> overridden. Now, I think (a) we can ensure that our setting always\r\n> takes precedence in this case. The other idea is (b) to parse the\r\n> user-provided options and check if \"max_slot_wal_keep_size\" has a\r\n> value different than expected and raise an error accordingly. Or we\r\n> can simply (c) document the usage of max_slot_wal_keep_size in the\r\n> upgrade. I am not sure whether it is worth complicating the code for\r\n> this as the user shouldn't be using such an option during the upgrade.\r\n> So, I think doing (a) and (c) could be simpler.\r\n> >\r\n> > In InvalidatePossiblyObsoleteSlot() we could have\r\n> > just an Assert() or elog(PANIC).\r\n> >\r\n> \r\n> Yeah, we can change to either of those.\r\n\r\nThis discussion seems like a bit off from my point. I suggested adding\r\na check for that setting when IsBinaryUpgraded is true at the GUC\r\nlevel as shown in the attached patch. I believe Álvaro made a similar\r\nsuggestion. While the error message is somewhat succinct, I think it\r\nis sufficient given the low possilibility of the scenario and the fact\r\nthat it cannot occur inadvertently.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center", "msg_date": "Mon, 30 Oct 2023 11:28:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Mon, Oct 30, 2023 at 7:58 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Fri, 27 Oct 2023 14:57:10 +0530, Amit Kapila <[email protected]> wrote in\n> > On Fri, Oct 27, 2023 at 2:02 PM Alvaro Herrera <[email protected]> wrote:\n> > >\n> > > On 2023-Oct-27, Kyotaro Horiguchi wrote:\n> > >\n> > > > @@ -1433,8 +1433,8 @@ InvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,\n> > > > {\n> > > > ereport(ERROR,\n> > > > errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > > - errmsg(\"replication slots must not be invalidated during the upgrade\"),\n> > > > - errhint(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"));\n> > >\n> > > Hmm, if I read this code right, this error is going to be thrown by the\n> > > checkpointer while finishing a checkpoint. Fortunately, the checkpoint\n> > > record has already been written, but I don't know what would happen if\n> > > this is thrown while trying to write the shutdown checkpoint. Probably\n> > > nothing terribly good.\n> > >\n> > > I don't think this is useful. If the setting is invalid during binary\n> > > upgrade, let's prevent it from being set at all right from the start of\n> > > the upgrade process.\n> >\n> > We are already forcing the required setting\n> > \"max_slot_wal_keep_size=-1\" during the upgrade similar to some of the\n> > other settings like \"full_page_writes\". However, the user can provide\n> > an option for \"max_slot_wal_keep_size\" in which case it will be\n> > overridden. Now, I think (a) we can ensure that our setting always\n> > takes precedence in this case. 
The other idea is (b) to parse the\r\n> user-provided options and check if \"max_slot_wal_keep_size\" has a\r\n> value different than expected and raise an error accordingly. Or we\r\n> can simply (c) document the usage of max_slot_wal_keep_size in the\r\n> upgrade. I am not sure whether it is worth complicating the code for\r\n> this as the user shouldn't be using such an option during the upgrade.\r\n> So, I think doing (a) and (c) could be simpler.\r\n> >\r\n> > In InvalidatePossiblyObsoleteSlot() we could have\r\n> > just an Assert() or elog(PANIC).\r\n> >\r\n> \r\n> Yeah, we can change to either of those.\r\n\r\nThis discussion seems a bit off from my point. I suggested adding\r\na check for that setting when IsBinaryUpgrade is true at the GUC\r\nlevel as shown in the attached patch. I believe Álvaro made a similar\r\nsuggestion. While the error message is somewhat succinct, I think it\r\nis sufficient given the low possibility of the scenario and the fact\r\nthat it cannot occur inadvertently.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center", "msg_date": "Mon, 30 Oct 2023 11:28:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Mon, Oct 30, 2023 at 7:58 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Fri, 27 Oct 2023 14:57:10 +0530, Amit Kapila <[email protected]> wrote in\n> > On Fri, Oct 27, 2023 at 2:02 PM Alvaro Herrera <[email protected]> wrote:\n> > >\n> > > On 2023-Oct-27, Kyotaro Horiguchi wrote:\n> > >\n> > > > @@ -1433,8 +1433,8 @@ InvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,\n> > > > {\n> > > > ereport(ERROR,\n> > > > errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > > - errmsg(\"replication slots must not be invalidated during the upgrade\"),\n> > > > - errhint(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"));\n> > >\n> > > Hmm, if I read this code right, this error is going to be thrown by the\n> > > checkpointer while finishing a checkpoint. Fortunately, the checkpoint\n> > > record has already been written, but I don't know what would happen if\n> > > this is thrown while trying to write the shutdown checkpoint. Probably\n> > > nothing terribly good.\n> > >\n> > > I don't think this is useful. If the setting is invalid during binary\n> > > upgrade, let's prevent it from being set at all right from the start of\n> > > the upgrade process.\n> >\n> > We are already forcing the required setting\n> > \"max_slot_wal_keep_size=-1\" during the upgrade similar to some of the\n> > other settings like \"full_page_writes\". However, the user can provide\n> > an option for \"max_slot_wal_keep_size\" in which case it will be\n> > overridden. Now, I think (a) we can ensure that our setting always\n> > takes precedence in this case. The other idea is (b) to parse the\n> > user-provided options and check if \"max_slot_wal_keep_size\" has a\n> > value different than expected and raise an error accordingly. Or we\n> > can simply (c) document the usage of max_slot_wal_keep_size in the\n> > upgrade. I am not sure whether it is worth complicating the code for\n> > this as the user shouldn't be using such an option during the upgrade.\n> > So, I think doing (a) and (c) could be simpler.\n> > >\n> > > In InvalidatePossiblyObsoleteSlot() we could have\n> > > just an Assert() or elog(PANIC).\n> > >\n> >\n> > Yeah, we can change to either of those.\n> \n> This discussion seems a bit off from my point. I suggested adding\n> a check for that setting when IsBinaryUpgrade is true at the GUC\n> level as shown in the attached patch. I believe Álvaro made a similar\n> suggestion. While the error message is somewhat succinct, I think it\n> is sufficient given the low possibility of the scenario and the fact\n> that it cannot occur inadvertently.\n>\n\nI think we can simply change that error message to assert if we want\nto go with the check hook idea of yours. BTW, can we add\nGUC_check_errdetail() with a better message as some of the other check\nfunctions use? Also, I guess we can add some comments or at least\nrefer to the existing comments to explain the reason of such a check.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 Oct 2023 08:51:18 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Monday, October 30, 2023 10:29 AM Kyotaro Horiguchi <[email protected]> wrote:\r\n> \r\n> At Fri, 27 Oct 2023 14:57:10 +0530, Amit Kapila <[email protected]>\r\n> wrote in\r\n> > On Fri, Oct 27, 2023 at 2:02 PM Alvaro Herrera <[email protected]>\r\n> wrote:\r\n> > >\r\n> > > On 2023-Oct-27, Kyotaro Horiguchi wrote:\r\n> > >\r\n> > > > @@ -1433,8 +1433,8 @@\r\n> InvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,\r\n> > > > {\r\n> > > > ereport(ERROR,\r\n> > > >\r\n> errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> > > > - errmsg(\"replication slots must not\r\n> be invalidated during the upgrade\"),\r\n> > > > -\r\n> errhint(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"));\r\n> > >\r\n> > > Hmm, if I read this code right, this error is going to be thrown by\r\n> > > the checkpointer while finishing a checkpoint. Fortunately, the\r\n> > > checkpoint record has already been written, but I don't know what\r\n> > > would happen if this is thrown while trying to write the shutdown\r\n> > > checkpoint. Probably nothing terribly good.\r\n> > >\r\n> > > I don't think this is useful. If the setting is invalid during\r\n> > > binary upgrade, let's prevent it from being set at all right from\r\n> > > the start of the upgrade process.\r\n> >\r\n> > We are already forcing the required setting\r\n> > \"max_slot_wal_keep_size=-1\" during the upgrade similar to some of the\r\n> > other settings like \"full_page_writes\". However, the user can provide\r\n> > an option for \"max_slot_wal_keep_size\" in which case it will be\r\n> > overridden. Now, I think (a) we can ensure that our setting always\r\n> > takes precedence in this case. The other idea is (b) to parse the\r\n> > user-provided options and check if \"max_slot_wal_keep_size\" has a\r\n> > value different than expected and raise an error accordingly. Or we\r\n> > can simply (c) document the usage of max_slot_wal_keep_size in the\r\n> > upgrade. I am not sure whether it is worth complicating the code for\r\n> > this as the user shouldn't be using such an option during the upgrade.\r\n> > So, I think doing (a) and (c) could be simpler.\r\n> > >\r\n> > > In InvalidatePossiblyObsoleteSlot() we could have just an Assert()\r\n> > > or elog(PANIC).\r\n> > >\r\n> >\r\n> > Yeah, we can change to either of those.\r\n> \r\n> This discussion seems a bit off from my point. I suggested adding a check\r\n> for that setting when IsBinaryUpgrade is true at the GUC level as shown in the\r\n> attached patch. I believe Álvaro made a similar suggestion. While the error\r\n> message is somewhat succinct, I think it is sufficient given the low possibility\r\n> of the scenario and the fact that it cannot occur inadvertently.\r\n\r\nThanks for the diff and I think the approach basically works.\r\n\r\nOne notable behavior of this approach is that it will reject the GUC setting even if there\r\nare no slots on the old cluster or the user sets the value to a big enough value which\r\ndoesn't cause invalidation. The behavior doesn't look bad to me, but I just mention it\r\nfor reference.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Mon, 30 Oct 2023 03:36:41 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: A recent message added to pg_upgade" }, { "msg_contents": "On Mon, Oct 30, 2023 at 8:51 AM Amit Kapila <[email protected]> wrote:\n>\n> > This discussion seems a bit off from my point. I suggested adding\n> > a check for that setting when IsBinaryUpgrade is true at the GUC\n> > level as shown in the attached patch. I believe Álvaro made a similar\n> > suggestion. While the error message is somewhat succinct, I think it\n> > is sufficient given the low possibility of the scenario and the fact\n> > that it cannot occur inadvertently.\n> >\n>\n> I think we can simply change that error message to assert if we want\n> to go with the check hook idea of yours. BTW, can we add\n> GUC_check_errdetail() with a better message as some of the other check\n> functions use? Also, I guess we can add some comments or at least\n> refer to the existing comments to explain the reason of such a check.\n\nWill the check_hook approach work correctly? I haven't checked that by\nmyself, but I see InitializeGUCOptions() getting called before\nIsBinaryUpgrade is set to true and the passed-in config options ('c')\nare parsed.\n\nIf the check_hook approach works correctly, I think we must add a test\nhitting the error in check_max_slot_wal_keep_size for the\nIsBinaryUpgrade case. And, I agree with Amit to have a detailed\nmessaging with GUC_check_errmsg/GUC_check_errdetail. Also, IMV,\nleaving the error message in InvalidatePossiblyObsoleteSlot() there\n(if required with a better wording as discussed initially in this\nthread) does no harm. 
Actually, it acts as another safety net given\n> that max_slot_wal_keep_size GUC is reloadable via SIGHUP.\n\nThe error message, which is deemed impossible to reach, adds an additional\nmessage translation. In another thread, we are discussing the\nreduction of translatable messages. Therefore, I suggest using elog()\nfor the condition at the very least. Whether it should be elog() or\nAssert() remains open for discussion, as I don't have a firm stance on\nit.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 30 Oct 2023 17:12:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Mon, Oct 30, 2023 at 1:42 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Mon, 30 Oct 2023 12:38:47 +0530, Bharath Rupireddy <[email protected]> wrote in\n> > Will the check_hook approach work correctly? I haven't checked that by\n> > myself, but I see InitializeGUCOptions() getting called before\n> > IsBinaryUpgrade is set to true and the passed-in config options ('c')\n> > are parsed.\n>\n> I'm not sure about the wanted behavior exactly, but the fact you\n> pointed out doesn't matter because the check is required after parsing the\n> command line options. On the other hand, I'm not sure about the\n> behavior where a setting in postgresql.conf is rejected.\n\nYeah. The check_hook is called even after the param is specified in\npostgresql.conf during the upgrade, so I see no problem there.\n\n> > If the check_hook approach works correctly, I think we must add a test\n> > hitting the error in check_max_slot_wal_keep_size for the\n> > IsBinaryUpgrade case. And, I agree with Amit to have a detailed\n> > messaging with GUC_check_errmsg/GUC_check_errdetail. Also, IMV,\n> > leaving the error message in InvalidatePossiblyObsoleteSlot() there\n> > (if required with a better wording as discussed initially in this\n> > thread) does no harm. Actually, it acts as another safety net given\n> > that max_slot_wal_keep_size GUC is reloadable via SIGHUP.\n>\n> The error message, which is deemed impossible to reach, adds an additional\n> message translation. In another thread, we are discussing the\n> reduction of translatable messages. Therefore, I suggest using elog()\n> for the condition at the very least. Whether it should be elog() or\n> Assert() remains open for discussion, as I don't have a firm stance on\n> it.\n\nI get it. I agree to go with just the assert because the GUC\ncheck_hook kinda tightens the screws against setting\nmax_slot_wal_keep_size to a value other than -1 during the binary\nupgrade,\n\nA few comments on your inhibit_m_s_w_k_s_during_upgrade_2.txt:\n\n1.\n\n\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 30 Oct 2023 14:31:56 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "Dear Bharath,\r\n\r\n> Will the check_hook approach work correctly?\r\n\r\nI tested by using the first version and it worked well (the setting was\r\nrejected). Please see the\r\nlog below, which records the output. The lines were copied from the server\r\nlog and show that max_slot_wal_keep_size must not be >= 0.\r\n\r\n```\r\nwaiting for server to start....2023-10-30 08:53:32.529 GMT [6903] FATAL: invalid value for parameter \"max_slot_wal_keep_size\": 1\r\n stopped waiting\r\npg_ctl: could not start serve\r\n```\r\n\r\n> I haven't checked that by\r\n> myself, but I see InitializeGUCOptions() getting called before\r\n> IsBinaryUpgrade is set to true and the passed-in config options ('c')\r\n> are parsed.\r\n\r\nI thought the key point was that user-defined options are placed after the \"-b\"\r\noption; since they are parsed after it, the check_hook could work\r\nas we expected. Thoughts?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED", "msg_date": "Mon, 30 Oct 2023 09:03:24 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: A recent message added to pg_upgade" }, { "msg_contents": "On Mon, Oct 30, 2023 at 2:31 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n\nNever mind. Previous message was accidentally sent before I finished\nwriting my comments.\n\n> Yeah. The check_hook is called even after the param is specified in\n> postgresql.conf during the upgrade, so I see no problem there.\n>\n> > The error message, which is deemed impossible to reach, adds an additional\n> > message translation. In another thread, we are discussing the\n> > reduction of translatable messages. Therefore, I suggest using elog()\n> > for the condition at the very least. Whether it should be elog() or\n> > Assert() remains open for discussion, as I don't have a firm stance on\n> > it.\n>\n> I get it. I agree to go with just the assert because the GUC\n> check_hook kinda tightens the screws against setting\n> max_slot_wal_keep_size to a value other than -1 during the binary\n> upgrade,\n\n A few comments on your inhibit_m_s_w_k_s_during_upgrade_2.txt:\n\n1.\n+ * All WAL files on the publisher node must be retained during an upgrade to\n+ * maintain connections from downstreams. While pg_upgrade explicitly sets\n\nIt's not just the publisher, anyone using logical slots. Also, no\ndownstream please. If you want, you can repurpose the comment that's\nadded by 29d0a77f.\n\n /*\n * Use max_slot_wal_keep_size as -1 to prevent the WAL removal by the\n * checkpointer process. If WALs required by logical replication slots\n * are removed, the slots are unusable. This setting prevents the\n * invalidation of slots during the upgrade. We set this option when\n * cluster is PG17 or later because logical replication slots can only be\n * migrated since then. Besides, max_slot_wal_keep_size is added in PG13.\n */\n\n2.\n At present, only logical slots really require\n+ * this.\n\nCan we remove the above comment as the code with SlotIsLogical(s)\nexplains it all?\n\n3.\n+ GUC_check_errdetail(\"\\\"max_slot_wal_keep_size\\\" must be set\nto -1 during the upgrade.\");\n+ return false;\n\nHow about we be explicit like the following, which helps users reason\nabout this restriction instead of them looking at the comments/docs?\n\n GUC_check_errcode(ERRCODE_INVALID_PARAMETER_VALUE);\n GUC_check_errmsg(\"\\\"max_slot_wal_keep_size\\\" must be set\nto -1 when in binary upgrade mode\");\n GUC_check_errdetail(\"A value of -1 prevents the removal of\nWAL required for logical slots upgrade.\");\n return false;\n\n4. I think a test case to hit the error in\n003_logical_slots.pl will help a lot here - it not only covers the code\nbut also helps demonstrate how one can reach the error.\n\n5. I think the check_hook is better defined in xlog.c, the place where\nit's actually declared and in action. IMV, there's no reason for\nit to be in slot.c as it doesn't deal with any slot-related\nvariables/structs. This also avoids an unnecessary \"utils/guc_hooks.h\"\ninclusion in slot.c.\n+bool\n+check_max_slot_wal_keep_size(int *newval, void **extra, GucSource source)\n+{\n\n6. A possible problem with this check_hook approach is that it doesn't\nlet anyone set max_slot_wal_keep_size to a value other than -1\nduring pg_upgrade even if someone doesn't have logical slots or\ndoesn't want to upgrade logical slots in which case the WAL file\ngrowth during pg_upgrade may be huge (transiently) unless the\npg_resetwal step of pg_upgrade removes it at the end.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 30 Oct 2023 14:55:01 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "At Mon, 30 Oct 2023 14:55:01 +0530, Bharath Rupireddy <[email protected]> wrote in \n> > I get it. I agree to go with just the assert because the GUC\n> > check_hook kinda tightens the screws against setting\n> > max_slot_wal_keep_size to a value other than -1 during the binary\n> > upgrade,\n\nThanks for being on the same page.\n\n> A few comments on your inhibit_m_s_w_k_s_during_upgrade_2.txt:\n> \n> 1.\n> + * All WAL files on the publisher node must be retained during an upgrade to\n> + * maintain connections from downstreams. While pg_upgrade explicitly sets\n> \n> It's not just the publisher, anyone using logical slots. Also, no\n> downstream please. If you want, you can repurpose the comment that's\n> added by 29d0a77f.\n> \n> /*\n> * Use max_slot_wal_keep_size as -1 to prevent the WAL removal by the\n> * checkpointer process. If WALs required by logical replication slots\n> * are removed, the slots are unusable. This setting prevents the\n> * invalidation of slots during the upgrade. We set this option when\n> * cluster is PG17 or later because logical replication slots can only be\n> * migrated since then. Besides, max_slot_wal_keep_size is added in PG13.\n> */\n\nIt is helpful. Thanks!\n\n> 2.\n> At present, only logical slots really require\n> + * this.\n> \n> Can we remove the above comment as the code with SlotIsLogical(s)\n> explains it all?\n\nmax_slot_wal_keep_size affects both logical and physical\nslots. Therefore, if we are interested in only one of the two types of\nslots, it's important to clarify the rationale, regardless of any\npotential extension to physical slots. I couldn't determine from that\nextensive thread whether there's a possible extension to physical slots. 
Could you\ninform me if such an extension can happen and, if not, provide the\nreason?\n\n\n> 3.\n> + GUC_check_errdetail(\"\\\"max_slot_wal_keep_size\\\" must be set\n> to -1 during the upgrade.\");\n> + return false;\n> \n> How about we be explicit like the following which helps users reason\n> about this restriction instead of them looking at the comments/docs?\n> \n> GUC_check_errcode(ERRCODE_INVALID_PARAMETER_VALUE);\n> GUC_check_errmsg(\"\"\\\"max_slot_wal_keep_size\\\" must be set\n> to -1 when in binary upgrade mode\");\n> GUC_check_errdetail(\"A value of -1 prevents the removal of\n> WAL required for logical slots upgrade.\");\n> return false;\n\n\n\nI don't quite see the reason to provide such a detailed explanation\njust for this error. Additionally, since this check is performed\nregardless of the presence or absense of logical slots, I think the\nerrdetail message might potentially confuse those whosee it. Adding\n\"binary\" looks fine as is and done in the attached.\n\n> 4. I think a test case to hit the error in the check hook in\n> 003_logical_slots.pl will help a lot here - not only covers the code\n> but also helps demonstrate how one can reach the error.\n\nYeah, of course. I was planning to add tests once the direction of the\ndiscussion became clear. I will add them in the next version.\n\n> 5. I think the check_hook is better defined in xlog.c the place where\n> it's actually being declared and in action. IMV, there's no reason for\n> it to be in slot.c as it doesn't deal with any slot related\n> variables/structs. This also avoids an unnecessary \"utils/guc_hooks.h\"\n> inclusion in slot.c.\n> +bool\n> +check_max_slot_wal_keep_size(int *newval, void **extra, GucSource source)\n> +{\n\nSounds reasonable. Moved. I simply moved it to xlog.c, but the\nfunction comment was thoroughly written only for this moved function,\nmaking it somewhat stand out..\n\n> 5. A possible problem with this check_hook approach is that it doesn't\n> let anyone setting max_slot_wal_keep_size to a value other than -1\n> during pg_ugprade even if someone doesn't have logical slots or\n> doesn't want to upgrade logical slots in which case the WAL file\n> growth during pg_upgrade may be huge (transiently) unless the\n> pg_resetwal step of pg_upgrade removes it at the end.\n\nWhile I doubt anyone wishes to set the variable to a specific value\nduring upgrade, think there are individuals who might be reluctant to\nedit the config file due to unclear reasons. While we could consider\nan alternative - checking for logical slots during binary upgrade-\nit's debatable if the effort is justified. (I haven't verified its\nfeasibility, however.)\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 31 Oct 2023 17:49:32 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Tue, Oct 31, 2023 at 2:19 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> > GUC_check_errcode(ERRCODE_INVALID_PARAMETER_VALUE);\n> > GUC_check_errmsg(\"\"\\\"max_slot_wal_keep_size\\\" must be set\n> > to -1 when in binary upgrade mode\");\n> > GUC_check_errdetail(\"A value of -1 prevents the removal of\n> > WAL required for logical slots upgrade.\");\n> > return false;\n>\n> I don't quite see the reason to provide such a detailed explanation\n> just for this error. 
Additionally, since this check is performed\n> regardless of the presence or absense of logical slots.\n\nOkay, I get it.\n\n> > 4. I think a test case to hit the error in the check hook in\n> > 003_logical_slots.pl will help a lot here - not only covers the code\n> > but also helps demonstrate how one can reach the error.\n>\n> Yeah, of course. I was planning to add tests once the direction of the\n> discussion became clear. I will add them in the next version.\n\nYes, please. The test case to hit the ERROR in\nInvalidatePossiblyObsoleteSlot() is important even if the check_hook\napproach isn't going anywhere.\n\n> function comment was thoroughly written only for this moved function,\n> making it somewhat stand out..\n\nI think that's fine.\n\n> > 5. A possible problem with this check_hook approach is that it doesn't\n> > let anyone setting max_slot_wal_keep_size to a value other than -1\n> > during pg_ugprade even if someone doesn't have logical slots or\n> > doesn't want to upgrade logical slots in which case the WAL file\n> > growth during pg_upgrade may be huge (transiently) unless the\n> > pg_resetwal step of pg_upgrade removes it at the end.\n>\n> While I doubt anyone wishes to set the variable to a specific value\n> during upgrade, think there are individuals who might be reluctant to\n> edit the config file due to unclear reasons. While we could consider\n> an alternative - checking for logical slots during binary upgrade-\n> it's debatable if the effort is justified. (I haven't verified its\n> feasibility, however.)\n\nChecking for logical slots during binary upgrade doesn't help - what\nif there are logical slots present but no upgrade is wanted (via a new\npg_uprade option)? Basically, how will the postgres server know\nwhether someone wants pg_upgrade of logical slots or not? Can we check\nif someone is overriding max_slot_wal_keep_size in pg_upgrade itself\n(via pg_settings query from the server)? If yes, if logical slots\nexist and upgrade is wanted, then disallow the upgrade if GUC is set\nto value other than -1.\n\nI believe disallowing setting max_slot_wal_keep_size to a value other\nthan -1 during binary upgrade may have serious consequences as it\nimpacts WAL retention before the pg_resetwal comes into picture as\npart of pg_upgrade.\n\nOr what if we just live with what we have right now? I mean with ERROR\nin InvalidatePossiblyObsoleteSlot().\n\nOr what if we just remove ERROR in InvalidatePossiblyObsoleteSlot or\nmake it an Assert and say do not override max_slot_wal_keep_size in\ndocs? Even if someone did override, let the pg_upgrade report the slot\nas invalidated and let the user delete the slot or decide what to do\nwith it.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 31 Oct 2023 16:00:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Tue, Oct 31, 2023 at 4:00 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> > > 5. 
A possible problem with this check_hook approach is that it doesn't\n> > > let anyone setting max_slot_wal_keep_size to a value other than -1\n> > > during pg_ugprade even if someone doesn't have logical slots or\n> > > doesn't want to upgrade logical slots in which case the WAL file\n> > > growth during pg_upgrade may be huge (transiently) unless the\n> > > pg_resetwal step of pg_upgrade removes it at the end.\n> >\n> > While I doubt anyone wishes to set the variable to a specific value\n> > during upgrade, think there are individuals who might be reluctant to\n> > edit the config file due to unclear reasons. While we could consider\n> > an alternative - checking for logical slots during binary upgrade-\n> > it's debatable if the effort is justified. (I haven't verified its\n> > feasibility, however.)\n>\n> Checking for logical slots during binary upgrade doesn't help - what\n> if there are logical slots present but no upgrade is wanted (via a new\n> pg_uprade option)? Basically, how will the postgres server know\n> whether someone wants pg_upgrade of logical slots or not? Can we check\n> if someone is overriding max_slot_wal_keep_size in pg_upgrade itself\n> (via pg_settings query from the server)? If yes, if logical slots\n> exist and upgrade is wanted, then disallow the upgrade if GUC is set\n> to value other than -1.\n>\n\nI feel we can try to extend the functionality if we really see some\nuser demand. It is not that we can't do it now but it doesn't seem\nprudent to make the functionality/code more complex than really\nrequired.\n\n> I believe disallowing setting max_slot_wal_keep_size to a value other\n> than -1 during binary upgrade may have serious consequences as it\n> impacts WAL retention before the pg_resetwal comes into picture as\n> part of pg_upgrade.\n>\n\nI don't think this is completely true because this setting will only\nimpact if there are active slots and those slots need some WAL which\nwe want to remove. This setting shouldn't be used as often as you are\nimagining.\n\n> Or what if we just live with what we have right now? I mean with ERROR\n> in InvalidatePossiblyObsoleteSlot().\n>\n> Or what if we just remove ERROR in InvalidatePossiblyObsoleteSlot or\n> make it an Assert and say do not override max_slot_wal_keep_size in\n> docs? Even if someone did override, let the pg_upgrade report the slot\n> as invalidated and let the user delete the slot or decide what to do\n> with it.\n>\n\nThe problem is this can happen in the background so it can happen at\nthe time of shutdown when all the upgrade is complete.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 31 Oct 2023 16:47:38 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "Dear Horiguchi-san,\n\nThanks for making the patch!\n\n> > 4. I think a test case to hit the error in the check hook in\n> > 003_logical_slots.pl will help a lot here - not only covers the code\n> > but also helps demonstrate how one can reach the error.\n> \n> Yeah, of course. I was planning to add tests once the direction of the\n> discussion became clear. I will add them in the next version.\n\nI tried to make the part. Feel free to include it if not yet. 
We can check the\nserver log, but I think it may be overkill.\n\nAlso, I have one comment.\n\n```\n+bool\n+check_max_slot_wal_keep_size(int *newval, void **extra, GucSource source)\n+{\n+ if (IsBinaryUpgrade && *newval != -1)\n+ {\n+ GUC_check_errdetail(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during binary upgrade mode.\");\n+ return false;\n+ }\n+ return true;\n+}\n```\n\nJust to confirm - should we check the GucSource? Based on ur requirement, it might\nbe enough we avoid overwriting while starting the server.\nPersonally current code is OK because it is simpler, but I want to hear your opinion.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED", "msg_date": "Tue, 31 Oct 2023 13:44:07 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: A recent message added to pg_upgade" }, { "msg_contents": "Hi, here are some minor review comments for the v3 patch.\n\n======\nsrc/backend/access/transam/xlog.c\n\n1. check_max_slot_wal_keep_size\n\n+/*\n+ * GUC check_hook for max_slot_wal_keep_size\n+ *\n+ * If WALs required by logical replication slots are removed, the slots are\n+ * unusable. While pg_upgrade sets this variable to -1 via the command line to\n+ * attempt to prevent such removal during binary upgrade, there are ways for\n+ * users to override it. For the sake of completing the objective, ensure that\n+ * this variable remains unchanged. See InvalidatePossiblyObsoleteSlot() and\n+ * start_postmaster() in pg_upgrade for more details.\n+ */\n\nI asked ChatGPT to suggest alternative wording for that comment, and\nit came up with something that I felt was a slight improvement.\n\nSUGGESTION\n...\nIf WALs needed by logical replication slots are deleted, these slots\nbecome inoperable. During a binary upgrade, pg_upgrade sets this\nvariable to -1 via the command line in an attempt to prevent such\ndeletions, but users have ways to override it. To ensure the\nsuccessful completion of the upgrade, it's essential to keep this\nvariable unaltered.\n...\n\n~~~\n\n2.\n+bool\n+check_max_slot_wal_keep_size(int *newval, void **extra, GucSource source)\n+{\n+ if (IsBinaryUpgrade && *newval != -1)\n+ {\n+ GUC_check_errdetail(\"\\\"max_slot_wal_keep_size\\\" must be set to -1\nduring binary upgrade mode.\");\n+ return false;\n+ }\n+ return true;\n+}\n\nSome of the other GUC_check_errdetail()'s do not include the GUC name\nin the translatable message text. Isn't that a preferred style?\n\nSUGGESTION\nGUC_check_errdetail(\"\\\"%s\\\" must be set to -1 during binary upgrade mode.\",\n \"max_slot_wal_keep_size\");\n\n======\nsrc/backend/replication/slot.c\n\n3. InvalidatePossiblyObsoleteSlot\n\n- if (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n- {\n- ereport(ERROR,\n- errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n- errmsg(\"replication slots must not be invalidated during the upgrade\"),\n- errhint(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"));\n- }\n+ Assert (!*invalidated || !SlotIsLogical(s) || !IsBinaryUpgrade);\n\nIMO new Assert became trickier to understand than the original condition. 
YMMV.\n\nSUGGESTION\nAssert(!(*invalidated && SlotIsLogical(s) && IsBinaryUpgrade));\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 1 Nov 2023 18:08:19 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "Thanks you for the comments!\n\nAt Wed, 1 Nov 2023 18:08:19 +1100, Peter Smith <[email protected]> wrote in \n> Hi, here are some minor review comments for the v3 patch.\n> \n> ======\n> src/backend/access/transam/xlog.c\n\n> I asked ChatGPT to suggest alternative wording for that comment, and\n> it came up with something that I felt was a slight improvement.\n> \n> SUGGESTION\n> ...\n> If WALs needed by logical replication slots are deleted, these slots\n> become inoperable. During a binary upgrade, pg_upgrade sets this\n> variable to -1 via the command line in an attempt to prevent such\n> deletions, but users have ways to override it. To ensure the\n> successful completion of the upgrade, it's essential to keep this\n> variable unaltered.\n> ...\n> \n> ~~~\n\nChatGPT seems to tend to generate sentences in a slightly different\nfrom our usual writing. While I tried to retain the original phrasing\nin the patch, I don't mind using the suggested version. Used as is.\n\n> 2.\n\n> + GUC_check_errdetail(\"\\\"max_slot_wal_keep_size\\\" must be set to -1\n> during binary upgrade mode.\");\n\n> Some of the other GUC_check_errdetail()'s do not include the GUC name\n> in the translatable message text. Isn't that a preferred style?\n\n> SUGGESTION\n> GUC_check_errdetail(\"\\\"%s\\\" must be set to -1 during binary upgrade mode.\",\n> \"max_slot_wal_keep_size\");\n\nI believe that that style was adopted to minimize translatable\nmessages by consolidting identical ones that only differ in variable\nnames. I see both versions in the tree. I didn't find necessity to\nadopt this approach for this specific message, especially since I'm\nskeptical about adding new messages that end with \"must be set to -1\nduring binary upgrade mode\". (pg_upgrade sets synchronous_commit,\nfsync and full_page_writes to \"off\".)\n\nHowever, some unique messages are in this style, so I'm fine with\nusing that style. Revised accordingly.\n\n> ======\n> src/backend/replication/slot.c\n> \n> 3. InvalidatePossiblyObsoleteSlot\n\n> + Assert (!*invalidated || !SlotIsLogical(s) || !IsBinaryUpgrade);\n> \n> IMO new Assert became trickier to understand than the original condition. YMMV.\n> \n> SUGGESTION\n> Assert(!(*invalidated && SlotIsLogical(s) && IsBinaryUpgrade));\n\nYeah, I also liked that style and considered using it, but I didn't\nfeel it was too hard to read in this particular case, so I ended up\nusing the current way. Just like with the point of other comments,\nI'm not particularly attached to this style. Thus if someone find it\ndifficult to read, I have no issue with changing it. 
Revised as\nsuggested.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 02 Nov 2023 11:58:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Nov 2, 2023 at 1:58 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> Thanks you for the comments!\n>\n> At Wed, 1 Nov 2023 18:08:19 +1100, Peter Smith <[email protected]> wrote in\n> > Hi, here are some minor review comments for the v3 patch.\n> >\n> > ======\n> > src/backend/access/transam/xlog.c\n>\n...\n> > 2.\n>\n> > + GUC_check_errdetail(\"\\\"max_slot_wal_keep_size\\\" must be set to -1\n> > during binary upgrade mode.\");\n>\n> > Some of the other GUC_check_errdetail()'s do not include the GUC name\n> > in the translatable message text. Isn't that a preferred style?\n>\n> > SUGGESTION\n> > GUC_check_errdetail(\"\\\"%s\\\" must be set to -1 during binary upgrade mode.\",\n> > \"max_slot_wal_keep_size\");\n>\n> I believe that that style was adopted to minimize translatable\n> messages by consolidting identical ones that only differ in variable\n> names. I see both versions in the tree. I didn't find necessity to\n> adopt this approach for this specific message, especially since I'm\n> skeptical about adding new messages that end with \"must be set to -1\n> during binary upgrade mode\". (pg_upgrade sets synchronous_commit,\n> fsync and full_page_writes to \"off\".)\n>\n> However, some unique messages are in this style, so I'm fine with\n> using that style. Revised accordingly.\n>\n\nChecking this patch yesterday prompted me to create a new thread\nquestioning the inconsistencies of the \"GUC names in messages\". In\nthat thread, Tom Lane replied and gave some background information [1]\nabout the GUC name embedding versus substitution. In hindsight, I\nthink your original message was fine as-is, but there seem to be\nexamples of every kind of style, so whatever you do would have some\nprecedent.\n\nThe patch v4 LGTM.\n\n======\n[1] https://www.postgresql.org/message-id/2758485.1698848717%40sss.pgh.pa.us\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 2 Nov 2023 14:25:53 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Nov 2, 2023 at 2:25 PM Peter Smith <[email protected]> wrote:\n>\n> On Thu, Nov 2, 2023 at 1:58 PM Kyotaro Horiguchi\n> <[email protected]> wrote:\n> >\n> > Thanks you for the comments!\n> >\n> > At Wed, 1 Nov 2023 18:08:19 +1100, Peter Smith <[email protected]> wrote in\n> > > Hi, here are some minor review comments for the v3 patch.\n> > >\n> > > ======\n> > > src/backend/access/transam/xlog.c\n> >\n> ...\n> > > 2.\n> >\n> > > + GUC_check_errdetail(\"\\\"max_slot_wal_keep_size\\\" must be set to -1\n> > > during binary upgrade mode.\");\n> >\n> > > Some of the other GUC_check_errdetail()'s do not include the GUC name\n> > > in the translatable message text. Isn't that a preferred style?\n> >\n> > > SUGGESTION\n> > > GUC_check_errdetail(\"\\\"%s\\\" must be set to -1 during binary upgrade mode.\",\n> > > \"max_slot_wal_keep_size\");\n> >\n> > I believe that that style was adopted to minimize translatable\n> > messages by consolidting identical ones that only differ in variable\n> > names. I see both versions in the tree. 
I didn't find necessity to\n> > adopt this approach for this specific message, especially since I'm\n> > skeptical about adding new messages that end with \"must be set to -1\n> > during binary upgrade mode\". (pg_upgrade sets synchronous_commit,\n> > fsync and full_page_writes to \"off\".)\n> >\n> > However, some unique messages are in this style, so I'm fine with\n> > using that style. Revised accordingly.\n> >\n>\n> Checking this patch yesterday prompted me to create a new thread\n> questioning the inconsistencies of the \"GUC names in messages\". In\n> that thread, Tom Lane replied and gave some background information [1]\n> about the GUC name embedding versus substitution. In hindsight, I\n> think your original message was fine as-is, but there seem to be\n> examples of every kind of style, so whatever you do would have some\n> precedent.\n>\n> The patch v4 LGTM.\n>\n\nTo clarify, all the current code LGTM, but the patch is still missing\na guc_hook test case, right?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 2 Nov 2023 14:32:07 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Nov 02, 2023 at 02:32:07PM +1100, Peter Smith wrote:\n> On Thu, Nov 2, 2023 at 2:25 PM Peter Smith <[email protected]> wrote:\n>> Checking this patch yesterday prompted me to create a new thread\n>> questioning the inconsistencies of the \"GUC names in messages\". In\n>> that thread, Tom Lane replied and gave some background information [1]\n>> about the GUC name embedding versus substitution. In hindsight, I\n>> think your original message was fine as-is, but there seem to be\n>> examples of every kind of style, so whatever you do would have some\n>> precedent.\n>>\n>> The patch v4 LGTM.\n> \n> To clarify, all the current code LGTM, but the patch is still missing\n> a guc_hook test case, right?\n\n-\t\tNULL, NULL, NULL\n+\t\tcheck_max_slot_wal_keep_size, NULL, NULL\n\nFWIW, I am +-0 with what you are proposing here. I don't quite get\nwhy one may want to enforce this specific GUC at upgrade. Anyway, if\nthey do, I'd be curious to hear why this is required and this patch\nwould prevent them to do so. Actually, this could be a good reason\nfor making the logical slot handling during pg_upgrade an option\nrather than a mandatory thing.\n--\nMichael", "msg_date": "Thu, 2 Nov 2023 15:02:26 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Nov 2, 2023 at 11:32 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Nov 02, 2023 at 02:32:07PM +1100, Peter Smith wrote:\n> > On Thu, Nov 2, 2023 at 2:25 PM Peter Smith <[email protected]> wrote:\n> >> Checking this patch yesterday prompted me to create a new thread\n> >> questioning the inconsistencies of the \"GUC names in messages\". In\n> >> that thread, Tom Lane replied and gave some background information [1]\n> >> about the GUC name embedding versus substitution. 
In hindsight, I\n> >> think your original message was fine as-is, but there seem to be\n> >> examples of every kind of style, so whatever you do would have some\n> >> precedent.\n> >>\n> >> The patch v4 LGTM.\n> >\n> > To clarify, all the current code LGTM, but the patch is still missing\n> > a guc_hook test case, right?\n>\n> - NULL, NULL, NULL\n> + check_max_slot_wal_keep_size, NULL, NULL\n>\n> FWIW, I am +-0 with what you are proposing here. I don't quite get\n> why one may want to enforce this specific GUC at upgrade.\n>\n\nI also can't think of a good reason to do so but OTOH, I can't imagine\nall possible scenarios. As this setting is invalid or can cause\nproblems, it seems people favor preventing it. Alvaro also voted in\nfavor of preventing it, so we are considering to proceed with it\nunless more people think otherwise.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 2 Nov 2023 14:36:09 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Nov 2, 2023 at 2:36 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Nov 2, 2023 at 11:32 AM Michael Paquier <[email protected]> wrote:\n> >\n> > On Thu, Nov 02, 2023 at 02:32:07PM +1100, Peter Smith wrote:\n> > > On Thu, Nov 2, 2023 at 2:25 PM Peter Smith <[email protected]> wrote:\n> > >> Checking this patch yesterday prompted me to create a new thread\n> > >> questioning the inconsistencies of the \"GUC names in messages\". In\n> > >> that thread, Tom Lane replied and gave some background information [1]\n> > >> about the GUC name embedding versus substitution. In hindsight, I\n> > >> think your original message was fine as-is, but there seem to be\n> > >> examples of every kind of style, so whatever you do would have some\n> > >> precedent.\n> > >>\n> > >> The patch v4 LGTM.\n> > >\n> > > To clarify, all the current code LGTM, but the patch is still missing\n> > > a guc_hook test case, right?\n> >\n> > - NULL, NULL, NULL\n> > + check_max_slot_wal_keep_size, NULL, NULL\n> >\n> > FWIW, I am +-0 with what you are proposing here. I don't quite get\n> > why one may want to enforce this specific GUC at upgrade.\n> >\n>\n> I also can't think of a good reason to do so but OTOH, I can't imagine\n> all possible scenarios. As this setting is invalid or can cause\n> problems, it seems people favor preventing it. Alvaro also voted in\n> favor of preventing it, so we are considering to proceed with it\n> unless more people think otherwise.\n>\n\nNow, that Michael also committed another similar change in commit\n7021d3b176, it is better to be consistent in both cases. So, either we\nshould have check hooks for both parameters or follow another route\nwhere we always forcibly override these parameters (which means the\nuser-provided values for these parameters will be ignored) in\npg_upgrade and document it. Yet another simple way is to simply\ndocument the current behavior. 
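For illustration only, if we chose the check-hook route, the hook for the\nother parameter would presumably just mirror the existing one. A hypothetical,\nuntested sketch, given that pg_upgrade passes max_logical_replication_workers=0:\n\nbool\ncheck_max_logical_replication_workers(int *newval, void **extra, GucSource source)\n{\n\tif (IsBinaryUpgrade && *newval != 0)\n\t{\n\t\t/* pg_upgrade relies on 0 here to keep apply workers from starting */\n\t\tGUC_check_errdetail(\"\\\"max_logical_replication_workers\\\" must be set to 0 during binary upgrade mode.\");\n\t\treturn false;\n\t}\n\treturn true;\n}\n\n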
In the future, if we see users complain\nabout this or have use cases to use these parameters during an\nupgrade, we can accordingly try to adapt the behavior.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 3 Nov 2023 07:41:20 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Fri, Nov 3, 2023 at 1:11 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Nov 2, 2023 at 2:36 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Nov 2, 2023 at 11:32 AM Michael Paquier <[email protected]> wrote:\n> > >\n> > > On Thu, Nov 02, 2023 at 02:32:07PM +1100, Peter Smith wrote:\n> > > > On Thu, Nov 2, 2023 at 2:25 PM Peter Smith <[email protected]> wrote:\n> > > >> Checking this patch yesterday prompted me to create a new thread\n> > > >> questioning the inconsistencies of the \"GUC names in messages\". In\n> > > >> that thread, Tom Lane replied and gave some background information [1]\n> > > >> about the GUC name embedding versus substitution. In hindsight, I\n> > > >> think your original message was fine as-is, but there seem to be\n> > > >> examples of every kind of style, so whatever you do would have some\n> > > >> precedent.\n> > > >>\n> > > >> The patch v4 LGTM.\n> > > >\n> > > > To clarify, all the current code LGTM, but the patch is still missing\n> > > > a guc_hook test case, right?\n> > >\n> > > - NULL, NULL, NULL\n> > > + check_max_slot_wal_keep_size, NULL, NULL\n> > >\n> > > FWIW, I am +-0 with what you are proposing here. I don't quite get\n> > > why one may want to enforce this specific GUC at upgrade.\n> > >\n> >\n> > I also can't think of a good reason to do so but OTOH, I can't imagine\n> > all possible scenarios. As this setting is invalid or can cause\n> > problems, it seems people favor preventing it. Alvaro also voted in\n> > favor of preventing it, so we are considering to proceed with it\n> > unless more people think otherwise.\n> >\n>\n> Now, that Michael also committed another similar change in commit\n> 7021d3b176, it is better to be consistent in both cases. So, either we\n\nI agree. Both patches are setting a special GUC value at the command\nline, and both of them don't want the user to somehow override that.\nSince the requirements are the same, I felt the implementations\n(regardless if they use a guc hook or something else) should also be\ndone the same way. Yesterday I posted a review comment on the other\nthread [1] (#2c) trying to express the same point about consistency.\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPsCzt%3DO3_xkyrskaZ3SMxaXoN4L5Z5CgvaGPNx3mXXxOQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 3 Nov 2023 13:33:26 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Fri, Nov 03, 2023 at 01:33:26PM +1100, Peter Smith wrote:\n> On Fri, Nov 3, 2023 at 1:11 PM Amit Kapila <[email protected]> wrote:\n>> Now, that Michael also committed another similar change in commit\n>> 7021d3b176, it is better to be consistent in both cases. So, either we\n> \n> I agree. Both patches are setting a special GUC value at the command\n> line, and both of them don't want the user to somehow override that.\n> Since the requirements are the same, I felt the implementations\n> (regardless if they use a guc hook or something else) should also be\n> done the same way. 
Yesterday I posted a review comment on the other\n> thread [1] (#2c) trying to express the same point about consistency.\n\nYeah, I certainly agree about consistency in the implementation for\nboth sides of the coin.\n\nNevertheless, I'm still +-0 on the GUC hook addition as I am wondering\nif there could be a case where one would be interested in enforcing\nthe state of the GUCs anyway, and we'd prevent entirely that. Another\nthing that we can do for max_logical_replication_workers, rather than\na GUC hook, is to add a check on IsBinaryUpgrade in\nApplyLauncherRegister(). At least that would be consistent with what\nwe do for autovacuum as the apply worker is just a bgworker.\n--\nMichael", "msg_date": "Sun, 5 Nov 2023 09:03:06 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Sun, Nov 5, 2023 at 5:33 AM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Nov 03, 2023 at 01:33:26PM +1100, Peter Smith wrote:\n> > On Fri, Nov 3, 2023 at 1:11 PM Amit Kapila <[email protected]> wrote:\n> >> Now, that Michael also committed another similar change in commit\n> >> 7021d3b176, it is better to be consistent in both cases. So, either we\n> >\n> > I agree. Both patches are setting a special GUC value at the command\n> > line, and both of them don't want the user to somehow override that.\n> > Since the requirements are the same, I felt the implementations\n> > (regardless if they use a guc hook or something else) should also be\n> > done the same way. Yesterday I posted a review comment on the other\n> > thread [1] (#2c) trying to express the same point about consistency.\n>\n> Yeah, I certainly agree about consistency in the implementation for\n> both sides of the coin.\n>\n> Nevertheless, I'm still +-0 on the GUC hook addition as I am wondering\n> if there could be a case where one would be interested in enforcing\n> the state of the GUCs anyway, and we'd prevent entirely that. Another\n> thing that we can do for max_logical_replication_workers, rather than\n> a GUC hook, is to add a check on IsBinaryUpgrade in\n> ApplyLauncherRegister().\n>\n\nDo you mean to say that if 'IsBinaryUpgrade' is true then let's not\nallow to launch launcher or apply worker? If so, I guess this won't be\nany better than prohibiting at an early stage or explicitly overriding\nthose with internal values and documenting it, at least that way we\ncan be consistent for both variables (max_logical_replication_workers\nand max_slot_wal_keep_size).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 7 Nov 2023 07:59:46 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Tue, Nov 07, 2023 at 07:59:46AM +0530, Amit Kapila wrote:\n> Do you mean to say that if 'IsBinaryUpgrade' is true then let's not\n> allow to launch launcher or apply worker? If so, I guess this won't be\n> any better than prohibiting at an early stage or explicitly overriding\n> those with internal values and documenting it, at least that way we\n> can be consistent for both variables (max_logical_replication_workers\n> and max_slot_wal_keep_size).\n\nYes, I mean to paint an extra IsBinaryUpgrade before registering the\napply worker launcher. 
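A rough sketch of the idea, untested and assuming the current shape of\nApplyLauncherRegister() in launcher.c:\n\n \tBackgroundWorker bgw;\n \n-\tif (max_logical_replication_workers == 0)\n+\t/* The launcher is not required while in binary upgrade mode. */\n+\tif (max_logical_replication_workers == 0 || IsBinaryUpgrade)\n \t\treturn;\n\n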
That would be consistent with what we do for\nautovacuum in the postmaster.\n--\nMichael", "msg_date": "Tue, 7 Nov 2023 11:41:58 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Tue, Nov 7, 2023 at 8:12 AM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Nov 07, 2023 at 07:59:46AM +0530, Amit Kapila wrote:\n> > Do you mean to say that if 'IsBinaryUpgrade' is true then let's not\n> > allow to launch launcher or apply worker? If so, I guess this won't be\n> > any better than prohibiting at an early stage or explicitly overriding\n> > those with internal values and documenting it, at least that way we\n> > can be consistent for both variables (max_logical_replication_workers\n> > and max_slot_wal_keep_size).\n>\n> Yes, I mean to paint an extra IsBinaryUpgrade before registering the\n> apply worker launcher. That would be consistent with what we do for\n> autovacuum in the postmaster.\n>\n\nBut then we don't need the hardcoded value of\nmax_logical_replication_workers as zero by pg_upgrade. I think doing\nIsBinaryUpgrade for slots won't be neat, so we anyway need to keep\nusing the special value of max_slot_wal_keep_size GUC. Though the\nhandling for both won't be the same but I guess given the situation,\nthat seems like a reasonable thing to do. If we follow that then we\ncan have this special GUC hook only for max_slot_wal_keep_size GUC.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 7 Nov 2023 16:16:21 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Tue, Nov 7, 2023 at 4:16 PM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Nov 7, 2023 at 8:12 AM Michael Paquier <[email protected]> wrote:\n> >\n> > On Tue, Nov 07, 2023 at 07:59:46AM +0530, Amit Kapila wrote:\n> > > Do you mean to say that if 'IsBinaryUpgrade' is true then let's not\n> > > allow to launch launcher or apply worker? If so, I guess this won't be\n> > > any better than prohibiting at an early stage or explicitly overriding\n> > > those with internal values and documenting it, at least that way we\n> > > can be consistent for both variables (max_logical_replication_workers\n> > > and max_slot_wal_keep_size).\n> >\n> > Yes, I mean to paint an extra IsBinaryUpgrade before registering the\n> > apply worker launcher. That would be consistent with what we do for\n> > autovacuum in the postmaster.\n> >\n>\n> But then we don't need the hardcoded value of\n> max_logical_replication_workers as zero by pg_upgrade. I think doing\n> IsBinaryUpgrade for slots won't be neat, so we anyway need to keep\n> using the special value of max_slot_wal_keep_size GUC. Though the\n> handling for both won't be the same but I guess given the situation,\n> that seems like a reasonable thing to do. 
If we follow that then we\n> can have this special GUC hook only for max_slot_wal_keep_size GUC.\n>\n\nMichael, Horiguchi-San, and others, do you have any thoughts on what\nis the best way to proceed?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 9 Nov 2023 09:53:07 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Nov 09, 2023 at 09:53:07AM +0530, Amit Kapila wrote:\n> On Tue, Nov 7, 2023 at 4:16 PM Amit Kapila <[email protected]> wrote:\n>> But then we don't need the hardcoded value of\n>> max_logical_replication_workers as zero by pg_upgrade. I think doing\n>> IsBinaryUpgrade for slots won't be neat, so we anyway need to keep\n>> using the special value of max_slot_wal_keep_size GUC. Though the\n>> handling for both won't be the same but I guess given the situation,\n>> that seems like a reasonable thing to do. If we follow that then we\n>> can have this special GUC hook only for max_slot_wal_keep_size GUC.\n> \n> Michael, Horiguchi-San, and others, do you have any thoughts on what\n> is the best way to proceed?\n\nNo problem for me to use a GUC hook for the WAL retention GUCs if you\nfeel strongly about it at the end, but I'd rather use an approach\nbased on IsBinaryUpgrade for the logical worker launcher to be\nconsistent with autovacuum (where there's also an argument to refactor\nit to use a bgworker registration, centralizing the checks on\nIsBinaryUpgrade for all bgworkers, but that would be material for a\ndifferent thread, if there's interest in doing that).\n\nThe two situations we are trying to prevent (slot invalidation and\nbgworker launch) can be triggered under different contexts, so they\ndon't have to use the same mechanisms to prevent what should not\nhappen during an upgrade.\n--\nMichael", "msg_date": "Thu, 9 Nov 2023 13:54:54 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Nov 9, 2023 at 3:55 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Nov 09, 2023 at 09:53:07AM +0530, Amit Kapila wrote:\n> > On Tue, Nov 7, 2023 at 4:16 PM Amit Kapila <[email protected]> wrote:\n> >> But then we don't need the hardcoded value of\n> >> max_logical_replication_workers as zero by pg_upgrade. I think doing\n> >> IsBinaryUpgrade for slots won't be neat, so we anyway need to keep\n> >> using the special value of max_slot_wal_keep_size GUC. Though the\n> >> handling for both won't be the same but I guess given the situation,\n> >> that seems like a reasonable thing to do. 
If we follow that then we\n> >> can have this special GUC hook only for max_slot_wal_keep_size GUC.\n> >\n> > Michael, Horiguchi-San, and others, do you have any thoughts on what\n> > is the best way to proceed?\n>\n> No problem for me to use a GUC hook for the WAL retention GUCs if you\n> feel strongly about it at the end, but I'd rather use an approach\n> based on IsBinaryUpgrade for the logical worker launcher to be\n> consistent with autovacuum (where there's also an argument to refactor\n> it to use a bgworker registration, centralizing the checks on\n> IsBinaryUpgrade for all bgworkers, but that would be material for a\n> different thread, if there's interest in doing that).\n>\n> The two situations we are trying to prevent (slot invalidation and\n> bgworker launch) can be triggered under different contexts, so they\n> don't have to use the same mechanisms to prevent what should not\n> happen during an upgrade.\n> --\n\nHaving a GUC hook for the \"max_slot_wal_keep_size\" seemed OK to me. If\nthe user overrides a GUC value (admittedly, maybe there is no reason\nwhy they would want to) then at least the hook will give an error,\nrather than us silently overwriting the user's value with -1.\n\nSo, patch v4 LGTM, except it is better to include a test case.\n\n~\n\nMeanwhile, if preventing the apply worker launch is considered better\nto be implemented differently in ApplyLauncherRegister, then so be it.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 9 Nov 2023 17:04:28 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "At Thu, 9 Nov 2023 09:53:07 +0530, Amit Kapila <[email protected]> wrote in \n> Michael, Horiguchi-San, and others, do you have any thoughts on what\n> is the best way to proceed?\n\nAs I previously mentioned, I believe that if rejection is to be the\ncourse of action, it would be best to proceed with it sooner rather\nthan later. On the other hand, I am concerned about the need for users\nto perform extra steps depending on the source cluster\nconrfiguration. Therefore, another possible approach could be to\nsimply ignore the given settings in the assignment hook rather than\nrejecting by the check hook, and forcibuly apply -1.\n\nWhat do you think about this third approach?\n\nI haven't checked this with pg_upgrade, but a standalone postmaster\nwould emit the following messages.\n\n> postgres -b -c max_slot_wal_keep_size=-1\n> LOG: \"max_slot_wal_keep_size\" is foced to set to -1 during binary upgrade mode.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 09 Nov 2023 15:10:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Nov 9, 2023 at 11:40 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Thu, 9 Nov 2023 09:53:07 +0530, Amit Kapila <[email protected]> wrote in\n> > Michael, Horiguchi-San, and others, do you have any thoughts on what\n> > is the best way to proceed?\n>\n> As I previously mentioned, I believe that if rejection is to be the\n> course of action, it would be best to proceed with it sooner rather\n> than later. On the other hand, I am concerned about the need for users\n> to perform extra steps depending on the source cluster\n> conrfiguration. 
Therefore, another possible approach could be to\nsimply ignore the given settings in the assignment hook rather than\nrejecting by the check hook, and forcibly apply -1.\n\nWhat do you think about this third approach?\n\nI haven't checked this with pg_upgrade, but a standalone postmaster\nwould emit the following messages.\n\n> postgres -b -c max_slot_wal_keep_size=-1\n> LOG: \"max_slot_wal_keep_size\" is forcibly set to -1 during binary upgrade mode.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 09 Nov 2023 15:10:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Nov 9, 2023 at 11:40 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Thu, 9 Nov 2023 09:53:07 +0530, Amit Kapila <[email protected]> wrote in\n> > Michael, Horiguchi-San, and others, do you have any thoughts on what\n> > is the best way to proceed?\n>\n> As I previously mentioned, I believe that if rejection is to be the\n> course of action, it would be best to proceed with it sooner rather\n> than later. On the other hand, I am concerned about the need for users\n> to perform extra steps depending on the source cluster\n> conrfiguration. Therefore, another possible approach could be to\n> simply ignore the given settings in the assignment hook rather than\n> rejecting by the check hook, and forcibly apply -1.\n>\n> What do you think about this third approach?\n>\n\nI have also proposed that as one of the alternatives but didn't get\nmany votes. And, I think if the user is passing a special value of\nmax_slot_wal_keep_size during the upgrade, it has to be a special\ncase, and rejecting it upfront doesn't seem unreasonable to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 9 Nov 2023 12:00:59 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "At Thu, 9 Nov 2023 12:00:59 +0530, Amit Kapila <[email protected]> wrote in \n> I have also proposed that as one of the alternatives but didn't get\n> many votes. And, I think if the user is passing a special value of\n> max_slot_wal_keep_size during the upgrade, it has to be a special\n> case, and rejecting it upfront doesn't seem unreasonable to me.\n\nOops. Sorry, and understood.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 09 Nov 2023 15:42:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Nov 09, 2023 at 05:04:28PM +1100, Peter Smith wrote:\n> Having a GUC hook for the \"max_slot_wal_keep_size\" seemed OK to me. If\n> the user overrides a GUC value (admittedly, maybe there is no reason\n> why they would want to) then at least the hook will give an error,\n> rather than us silently overwriting the user's value with -1.\n> \n> So, patch v4 LGTM, except it is better to include a test case.\n\nWhere's this v4? I may be missing something, but it does not seem to be\nattached to this thread..\n--\nMichael", "msg_date": "Thu, 9 Nov 2023 16:08:38 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Nov 9, 2023 at 12:38 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Nov 09, 2023 at 05:04:28PM +1100, Peter Smith wrote:\n> > Having a GUC hook for the \"max_slot_wal_keep_size\" seemed OK to me. If\n> > the user overrides a GUC value (admittedly, maybe there is no reason\n> > why they would want to) then at least the hook will give an error,\n> > rather than us silently overwriting the user's value with -1.\n> >\n> > So, patch v4 LGTM, except it is better to include a test case.\n>\n> Where's this v4?\n>\n\nI think it is in an email[1]. 
I can take care of this unless we see\nsome opposition to this idea.\n\n[1] - https://www.postgresql.org/message-id/20231102.115834.1012152975995247837.horikyota.ntt%40gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 9 Nov 2023 13:12:54 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Nov 09, 2023 at 01:12:54PM +0530, Amit Kapila wrote:\n> I think it is in an email[1].\n\nNoted.\n\n> I can take care of this unless we see some opposition to this idea.\n\nThanks!\n--\nMichael", "msg_date": "Thu, 9 Nov 2023 16:52:32 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "Dear Horiguchi-san, hackers,\n\n> Thanks you for the comments!\n\nThanks for updating the patch!\nI'm not sure it is intentional, but you might miss my post...I suggested to add a\ntestcase.\n\nI attached the updated version which is almost the same as Horiguchi-san's one,\nbut has a test. How do you think? Do you have other idea for testing?\n\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED", "msg_date": "Thu, 9 Nov 2023 08:20:40 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: A recent message added to pg_upgade" }, { "msg_contents": "On 2023-Nov-02, Kyotaro Horiguchi wrote:\n\n> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> index b541be8eec..46833f6ecd 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -2063,6 +2063,29 @@ check_wal_segment_size(int *newval, void **extra, GucSource source)\n> \treturn true;\n> }\n> \n> +/*\n> + * GUC check_hook for max_slot_wal_keep_size\n> + *\n> + * If WALs needed by logical replication slots are deleted, these slots become\n> + * inoperable. During a binary upgrade, pg_upgrade sets this variable to -1 via\n> + * the command line in an attempt to prevent such deletions, but users have\n> + * ways to override it. To ensure the successful completion of the upgrade,\n> + * it's essential to keep this variable unaltered. See\n> + * InvalidatePossiblyObsoleteSlot() and start_postmaster() in pg_upgrade for\n> + * more details.\n> + */\n> +bool\n> +check_max_slot_wal_keep_size(int *newval, void **extra, GucSource source)\n> +{\n> +\tif (IsBinaryUpgrade && *newval != -1)\n> +\t{\n> +\t\tGUC_check_errdetail(\"\\\"%s\\\" must be set to -1 during binary upgrade mode.\",\n> +\t\t\t\"max_slot_wal_keep_size\");\n> +\t\treturn false;\n> +\t}\n> +\treturn true;\n> +}\n\nOne sentence in that comment reads weird. I'd do this:\n\ns/To ensure the ... 
unaltered/This check callback ensures the value is\nnot overridden by the user/\n\n\n> diff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c\n> index 99823df3c7..5c3d2b1082 100644\n> --- a/src/backend/replication/slot.c\n> +++ b/src/backend/replication/slot.c\n> @@ -1424,18 +1424,12 @@ InvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,\n> \t\tSpinLockRelease(&s->mutex);\n> \n> \t\t/*\n> -\t\t * The logical replication slots shouldn't be invalidated as\n> -\t\t * max_slot_wal_keep_size GUC is set to -1 during the upgrade.\n> -\t\t *\n> -\t\t * The following is just a sanity check.\n> +\t\t * check_max_slot_wal_keep_size() ensures max_slot_wal_keep_size is set\n> +\t\t * to -1, so, slot invalidation for logical slots shouldn't happen\n> +\t\t * during an upgrade. At present, only logical slots really require\n> +\t\t * this.\n> \t\t */\n> -\t\tif (*invalidated && SlotIsLogical(s) && IsBinaryUpgrade)\n> -\t\t{\n> -\t\t\tereport(ERROR,\n> -\t\t\t\t\terrcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> -\t\t\t\t\terrmsg(\"replication slots must not be invalidated during the upgrade\"),\n> -\t\t\t\t\terrhint(\"\\\"max_slot_wal_keep_size\\\" must be set to -1 during the upgrade\"));\n> -\t\t}\n> +\t\tAssert (!(*invalidated && SlotIsLogical(s) && IsBinaryUpgrade));\n\nI think it's worth adding a comment here, pointing to\ncheck_old_cluster_for_valid_slots() verifying that no\nalready-invalidated slots exist before the upgrade starts.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 9 Nov 2023 11:39:21 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Nov 9, 2023 at 4:09 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Nov-02, Kyotaro Horiguchi wrote:\n>\n> > diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> > index b541be8eec..46833f6ecd 100644\n> > --- a/src/backend/access/transam/xlog.c\n> > +++ b/src/backend/access/transam/xlog.c\n> > @@ -2063,6 +2063,29 @@ check_wal_segment_size(int *newval, void **extra, GucSource source)\n> > return true;\n> > }\n> >\n> > +/*\n> > + * GUC check_hook for max_slot_wal_keep_size\n> > + *\n> > + * If WALs needed by logical replication slots are deleted, these slots become\n> > + * inoperable. During a binary upgrade, pg_upgrade sets this variable to -1 via\n> > + * the command line in an attempt to prevent such deletions, but users have\n> > + * ways to override it. To ensure the successful completion of the upgrade,\n> > + * it's essential to keep this variable unaltered. See\n> > + * InvalidatePossiblyObsoleteSlot() and start_postmaster() in pg_upgrade for\n> > + * more details.\n> > + */\n> > +bool\n> > +check_max_slot_wal_keep_size(int *newval, void **extra, GucSource source)\n> > +{\n> > + if (IsBinaryUpgrade && *newval != -1)\n> > + {\n> > + GUC_check_errdetail(\"\\\"%s\\\" must be set to -1 during binary upgrade mode.\",\n> > + \"max_slot_wal_keep_size\");\n> > + return false;\n> > + }\n> > + return true;\n> > +}\n>\n> One sentence in that comment reads weird. I'd do this:\n>\n> s/To ensure the ... unaltered/This check callback ensures the value is\n> not overridden by the user/\n>\n\nThese comments appear mostly repetitive to what is already mentioned\nin start_postmaster(). So, I have changed those referred to already\nwritten comments, and slightly adjusted the comments at another place.\nSee attached. 
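For reference, the slot.c side now reads roughly as follows (paraphrasing\nhere; the attached patch has the exact wording):\n\n\t\t/*\n\t\t * See the comments atop check_max_slot_wal_keep_size() and\n\t\t * start_postmaster() in pg_upgrade for why logical slots must not be\n\t\t * invalidated during a binary upgrade.\n\t\t */\n\t\tAssert(!(*invalidated && SlotIsLogical(s) && IsBinaryUpgrade));\n\n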
Personally, I don't see the need for a test for this, so I have\nremoved it, but I can add it back if you or others think so.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 9 Nov 2023 18:58:28 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On 2023-Nov-09, Amit Kapila wrote:\n\n> These comments appear mostly repetitive to what is already mentioned\n> in start_postmaster(). So, I have changed those referred to already\n> written comments, and slightly adjusted the comments at another place.\n> See attached.\n\nI'd still rather mention check_old_cluster_for_valid_slots() just above\nthe Assert() in InvalidatePossiblyObsoleteSlot(). It looks too bare to\nme otherwise.\n\n> Personally, I don't see the need for a test for this, so I have removed\n> it, but I can add it back if you or others think so.\n\nI'm neutral on having a test for this. I'm not sure this is easy to\nbreak unintentionally. OTOH the test is cheap, since it only has to run\npg_upgrade itself and not, say, another initdb. On the (as Robert says)\nthird hand, would we have tests for each possible GUC that we'd like not\nto be changed during pg_upgrade? I suspect not, which suggests we don't\nwant this one either.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte\"\n(Ijon Tichy en Viajes, Stanislaw Lem)", "msg_date": "Thu, 9 Nov 2023 15:24:23 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Nov 09, 2023 at 04:52:32PM +0900, Michael Paquier wrote:\n> Thanks!\n\nAlso, please see the patch about switching the logirep launcher to\nrely on IsBinaryUpgrade to prevent its startup. Any thoughts about\nthat?\n--\nMichael", "msg_date": "Fri, 10 Nov 2023 11:20:41 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Fri, Nov 10, 2023 at 7:50 AM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Nov 09, 2023 at 04:52:32PM +0900, Michael Paquier wrote:\n> > Thanks!\n>\n> Also, please see the patch about switching the logirep launcher to\n> rely on IsBinaryUpgrade to prevent its startup. Any thoughts about\n> that?\n>\n\nPreventing these\n+ * processes from starting while upgrading avoids any activity on the new\n+ * cluster before the physical files are put in place, which could cause\n+ * corruption on the new cluster upgrading to.\n\nI don't think this comment is correct because there won't be any apply\nactivity on the new cluster as after restoration subscriptions should\nbe disabled. On the old cluster, I think one problem is that the\norigins may move forward after we copy them which can cause data\ninconsistency issues. The other is that we may not prefer to generate\nadditional data and WAL during the upgrade. 
Also, I am not completely\nsure about using the word 'corruption' in this context.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 10 Nov 2023 15:27:25 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Fri, Nov 10, 2023 at 03:27:25PM +0530, Amit Kapila wrote:\n> I don't think this comment is correct because there won't be any apply\n> activity on the new cluster as after restoration subscriptions should\n> be disabled. On the old cluster, I think one problem is that the\n> origins may move forward after we copy them which can cause data\n> inconsistency issues. The other is that we may not prefer to generate\n> additional data and WAL during the upgrade. Also, I am not completely\n> sure about using the word 'corruption' in this context.\n\nWhat is your suggestion here? Would it be better to just aim for\nsimplicity and just say that we don't want it to run because \"it can\nlead to some inconsistent behaviors\"?\n--\nMichael", "msg_date": "Sat, 11 Nov 2023 09:16:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Sat, Nov 11, 2023 at 5:46 AM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Nov 10, 2023 at 03:27:25PM +0530, Amit Kapila wrote:\n> > I don't think this comment is correct because there won't be any apply\n> > activity on the new cluster as after restoration subscriptions should\n> > be disabled. On the old cluster, I think one problem is that the\n> > origins may move forward after we copy them which can cause data\n> > inconsistency issues. The other is that we may not prefer to generate\n> > additional data and WAL during the upgrade. Also, I am not completely\n> > sure about using the word 'corruption' in this context.\n>\n> What is your suggestion here? Would it be better to just aim for\n> simplicity and just say that we don't want it to run because \"it can\n> lead to some inconsistent behaviors\"?\n>\n\nI think we can be specific about logical replication stuff. I have not\ndone any study on autovacuum behavior related to this, so we can\nupdate about it separately if required. I could think of something\nlike the following:\n\n- /* Use -b to disable autovacuum. */\n+ /*\n+ * Use -b to disable autovacuum and logical replication launcher\n+ * (effective in PG17 or later for the latter).\n+ *\n+ * Logical replication workers can stream data during the\nupgrade which can\n+ * cause replication origins to move forward after we have copied them.\n+ * It can cause the system to request the data which is already present\n+ * in the new cluster.\n+ */\n\nNow, ideally, such a comment change makes more sense along with the\nmain patch, so either we can go without this comment change or\nprobably wait till the main patch is ready and merge just before it or\nalong with it. I am fine either way.\n\nBTW, it is not clear to me another part of the comment \"... for the\nlatter\" in the proposed wording. Is there any typo there or am I\nmissing something?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 13 Nov 2023 08:45:12 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Mon, Nov 13, 2023 at 08:45:12AM +0530, Amit Kapila wrote:\n> I think we can be specific about logical replication stuff. 
I have not\n> done any study on autovacuum behavior related to this, so we can\n> update about it separately if required.\n\nAutovacuum, as far as I recall, could decide to do some work before\nfiles are physically copied from the old to the new cluster,\ncorrupting the new cluster. Per 76dd09bbec89:\n\n+ * If we have lost the autovacuum launcher, try to start a new one.\n+ * We don't want autovacuum to run in binary upgrade mode because\n+ * autovacuum might update relfrozenxid for empty tables before\n+ * the physical files are put in place.\n\n> - /* Use -b to disable autovacuum. */\n> + /*\n> + * Use -b to disable autovacuum and logical replication launcher\n> + * (effective in PG17 or later for the latter).\n> + *\n> + * Logical replication workers can stream data during the\n> upgrade which can\n> + * cause replication origins to move forward after we have copied them.\n> + * It can cause the system to request the data which is already present\n> + * in the new cluster.\n> + */\n> \n> Now, ideally, such a comment change makes more sense along with the\n> main patch, so either we can go without this comment change or\n> probably wait till the main patch is ready and merge just before it or\n> along with it. I am fine either way.\n\nAnother location would be to document that stuff directly in\nlauncher.c where the check for IsBinaryUpgrade would be added. You\nare right that it makes little sense to document that now, so how\nabout:\n1) keeping pg_upgrade.c minimal, say:\n- /* Use -b to disable autovacuum. */\n+ /*\n+ * Use -b to disable autovacuum and logical replication\n+ * launcher (in 17~).\n+ */\nWith a removal of the comment block related to\nmax_logical_replication_workers=0?\n2) Document that in ApplyLauncherRegister() as part of the main patch\nfor the subscribers?\n\n> BTW, it is not clear to me another part of the comment \"... for the\n> latter\" in the proposed wording. Is there any typo there or am I\n> missing something?\n\nThe \"latter\" refers to the logirep launcher here, as -b would affect\nit only in 17~ with the patch I sent previously.\n--\nMichael", "msg_date": "Mon, 13 Nov 2023 13:49:39 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Mon, Nov 13, 2023 at 10:19 AM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Nov 13, 2023 at 08:45:12AM +0530, Amit Kapila wrote:\n> > I think we can be specific about logical replication stuff. I have not\n> > done any study on autovacuum behavior related to this, so we can\n> > update about it separately if required.\n>\n> Autovacuum, as far as I recall, could decide to do some work before\n> files are physically copied from the old to the new cluster,\n> corrupting the new cluster. Per 76dd09bbec89:\n>\n> + * If we have lost the autovacuum launcher, try to start a new one.\n> + * We don't want autovacuum to run in binary upgrade mode because\n> + * autovacuum might update relfrozenxid for empty tables before\n> + * the physical files are put in place.\n>\n> > - /* Use -b to disable autovacuum. 
*/\n> > + /*\n> > + * Use -b to disable autovacuum and logical replication launcher\n> > + * (effective in PG17 or later for the latter).\n> > + *\n> > + * Logical replication workers can stream data during the\n> > upgrade which can\n> > + * cause replication origins to move forward after we have copied them.\n> > + * It can cause the system to request the data which is already present\n> > + * in the new cluster.\n> > + */\n> >\n> > Now, ideally, such a comment change makes more sense along with the\n> > main patch, so either we can go without this comment change or\n> > probably wait till the main patch is ready and merge just before it or\n> > along with it. I am fine either way.\n>\n> Another location would be to document that stuff directly in\n> launcher.c where the check for IsBinaryUpgrade would be added. You\n> are right that it makes little sense to document that now, so how\n> about:\n> 1) keeping pg_upgrade.c minimal, say:\n> - /* Use -b to disable autovacuum. */\n> + /*\n> + * Use -b to disable autovacuum and logical replication\n> + * launcher (in 17~).\n> + */\n> With a removal of the comment block related to\n> max_logical_replication_workers=0?\n> 2) Document that in ApplyLauncherRegister() as part of the main patch\n> for the subscribers?\n>\n\nI am fine with this but there is no harm in doing this before or along\nwith the main patch. As of now, I don't see any problem but as the\nmain patch is still under review, so thought we could even wait for\nthe patch to become \"Ready For Committer\".\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 15 Nov 2023 07:58:06 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Wed, Nov 15, 2023 at 07:58:06AM +0530, Amit Kapila wrote:\n> I am fine with this but there is no harm in doing this before or along\n> with the main patch. As of now, I don't see any problem but as the\n> main patch is still under review, so thought we could even wait for\n> the patch to become \"Ready For Committer\".\n\nWFM to wait until the other patch is ready before doing something\nhere.\n--\nMichael", "msg_date": "Wed, 15 Nov 2023 11:30:41 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Wed, Nov 15, 2023 at 07:58:06AM +0530, Amit Kapila wrote:\n> I am fine with this but there is no harm in doing this before or along\n> with the main patch. As of now, I don't see any problem but as the\n> main patch is still under review, so thought we could even wait for\n> the patch to become \"Ready For Committer\".\n\nMy apologies for the delay.\n\nNow that 9a17be1e244a is in the tree, please find attached a patch to\nrestrict the startup of the launcher using IsBinaryUpgrade in\nApplyLauncherRegister(), with adjustments to the surrounding comments.\n\nWas there anything else you wanted to be covered and/or updated?\n--\nMichael", "msg_date": "Wed, 10 Jan 2024 13:41:03 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Wed, Jan 10, 2024 at 10:11 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Nov 15, 2023 at 07:58:06AM +0530, Amit Kapila wrote:\n> > I am fine with this but there is no harm in doing this before or along\n> > with the main patch. 
As of now, I don't see any problem but as the\n> > main patch is still under review, so thought we could even wait for\n> > the patch to become \"Ready For Committer\".\n>\n> My apologies for the delay.\n>\n> Now that 9a17be1e244a is in the tree, please find attached a patch to\n> restrict the startup of the launcher using IsBinaryUpgrade in\n> ApplyLauncherRegister(), with adjustments to the surrounding comments.\n>\n\n- if (max_logical_replication_workers == 0)\n+ /*\n+ * The logical replication launcher is disabled during binary upgrades,\n+ * as logical replication workers can stream data during the upgrade\n+ * which can cause replication origins to move forward after we have\n+ * copied them. It can cause the system to request the data which is\n+ * already present in the new cluster.\n+ */\n+ if (max_logical_replication_workers == 0 || IsBinaryUpgrade)\n\nThis comment is not very clear to me. The first part of the sentence\ncan't apply to the new cluster as after the upgrade, subscriptions\nwill be disabled and the second part talks about requesting the wrong\ndata in the new cluster. As per my understanding, the problem here is\nthat, on the old cluster, the origins may move forward after we copy\nthem and then we copy physical files. Now, in the new cluster when we\ntry to request the data, it will be already present.\n\n> Was there anything else you wanted to be covered and/or updated?\n>\n\nNo, only this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 10 Jan 2024 18:02:12 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Wed, Jan 10, 2024 at 06:02:12PM +0530, Amit Kapila wrote:\n> - if (max_logical_replication_workers == 0)\n> + /*\n> + * The logical replication launcher is disabled during binary upgrades,\n> + * as logical replication workers can stream data during the upgrade\n> + * which can cause replication origins to move forward after we have\n> + * copied them. It can cause the system to request the data which is\n> + * already present in the new cluster.\n> + */\n> + if (max_logical_replication_workers == 0 || IsBinaryUpgrade)\n> \n> This comment is not very clear to me. The first part of the sentence\n> can't apply to the new cluster as after the upgrade, subscriptions\n> will be disabled and the second part talks about requesting the wrong\n> data in the new cluster. As per my understanding, the problem here is\n> that, on the old cluster, the origins may move forward after we copy\n> them and then we copy physical files. Now, in the new cluster when we\n> try to request the data, it will be already present.\n\nAs far as I understand your complaint is about being more precise\nabout where the workers could run when we do an upgrade. 
My patch\ncovers the reason why it would be a problem, and I agree that it could\nbe more detailed.\n\nHence, how about something like that:\n\"The logical replication launcher is disabled during binary upgrades,\nas a logical replication workers running on the cluster upgrading from\ncould cause replication origins to move forward after they are copied\nto the cluster upgrading to, creating potentially conflicts with the\nphysical files copied over.\" \n\nIf you have a better suggestion, feel free.\n--\nMichael", "msg_date": "Thu, 11 Jan 2024 12:37:58 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Jan 11, 2024 at 9:08 AM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Jan 10, 2024 at 06:02:12PM +0530, Amit Kapila wrote:\n> > - if (max_logical_replication_workers == 0)\n> > + /*\n> > + * The logical replication launcher is disabled during binary upgrades,\n> > + * as logical replication workers can stream data during the upgrade\n> > + * which can cause replication origins to move forward after we have\n> > + * copied them. It can cause the system to request the data which is\n> > + * already present in the new cluster.\n> > + */\n> > + if (max_logical_replication_workers == 0 || IsBinaryUpgrade)\n> >\n> > This comment is not very clear to me. The first part of the sentence\n> > can't apply to the new cluster as after the upgrade, subscriptions\n> > will be disabled and the second part talks about requesting the wrong\n> > data in the new cluster. As per my understanding, the problem here is\n> > that, on the old cluster, the origins may move forward after we copy\n> > them and then we copy physical files. Now, in the new cluster when we\n> > try to request the data, it will be already present.\n>\n> As far as I understand your complaint is about being more precise\n> about where the workers could run when we do an upgrade. My patch\n> covers the reason why it would be a problem, and I agree that it could\n> be more detailed.\n>\n> Hence, how about something like that:\n> \"The logical replication launcher is disabled during binary upgrades,\n> as a logical replication workers running on the cluster upgrading from\n> could cause replication origins to move forward after they are copied\n> to the cluster upgrading to, creating potentially conflicts with the\n> physical files copied over.\"\n>\n\nLooks better. One minor nitpick: /potentially/potential\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 11 Jan 2024 11:25:44 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Jan 11, 2024 at 11:25:44AM +0530, Amit Kapila wrote:\n> On Thu, Jan 11, 2024 at 9:08 AM Michael Paquier <[email protected]> wrote:\n>> Hence, how about something like that:\n>> \"The logical replication launcher is disabled during binary upgrades,\n>> as a logical replication workers running on the cluster upgrading from\n>> could cause replication origins to move forward after they are copied\n>> to the cluster upgrading to, creating potentially conflicts with the\n>> physical files copied over.\"\n> \n> Looks better. One minor nitpick: /potentially/potential\n\nSure, WFM. 
Let's wait a bit and see if others have more comments to\noffer.\n--\nMichael", "msg_date": "Thu, 11 Jan 2024 15:04:42 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On 2024-Jan-11, Michael Paquier wrote:\n\n> Hence, how about something like that:\n> \"The logical replication launcher is disabled during binary upgrades,\n> as a logical replication workers running on the cluster upgrading from\n> could cause replication origins to move forward after they are copied\n> to the cluster upgrading to, creating potentially conflicts with the\n> physical files copied over.\" \n\n\"The logical replication launcher is disabled during binary upgrades, to\navoid logical replication workers running on the source cluster. That\nwould cause replication origins to move forward after having been copied\nto the target cluster, potentially creating conflicts with the copied\ndata files.\"\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Linux transformó mi computadora, de una `máquina para hacer cosas',\nen un aparato realmente entretenido, sobre el cual cada día aprendo\nalgo nuevo\" (Jaime Salinas)\n\n\n", "msg_date": "Thu, 11 Jan 2024 13:04:18 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> \"The logical replication launcher is disabled during binary upgrades, to\n> avoid logical replication workers running on the source cluster. That\n> would cause replication origins to move forward after having been copied\n> to the target cluster, potentially creating conflicts with the copied\n> data files.\"\n\n\"avoid logical replication workers running\" still seems like shaky\ngrammar. Perhaps s/avoid/avoid having/, or write \"to prevent logical\nreplication workers from running ...\".\n\nAlso perhaps s/would/could/.\n\nOtherwise +1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Jan 2024 10:01:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" }, { "msg_contents": "On Thu, Jan 11, 2024 at 10:01:16AM -0500, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> \"The logical replication launcher is disabled during binary upgrades, to\n>> avoid logical replication workers running on the source cluster. That\n>> would cause replication origins to move forward after having been copied\n>> to the target cluster, potentially creating conflicts with the copied\n>> data files.\"\n> \n> \"avoid logical replication workers running\" still seems like shaky\n> grammar. Perhaps s/avoid/avoid having/, or write \"to prevent logical\n> replication workers from running ...\".\n\nAfter sleeping on it, your last suggestion sounds better to me, so\nI've incorporated that with Alvaro's wording (also cleaner than what I\nhave posted), and applied the patch on HEAD.\n\n> Also perhaps s/would/could/.\n\nYep.\n--\nMichael", "msg_date": "Fri, 12 Jan 2024 08:37:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A recent message added to pg_upgade" } ]
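The thread above hinges on two mechanisms: a GUC check_hook that pins max_slot_wal_keep_size to -1 while IsBinaryUpgrade is set, and a pre-flight check (check_old_cluster_for_valid_slots()) that refuses to upgrade a cluster whose logical slots have already been invalidated. A minimal sketch of roughly what that pre-flight check verifies, expressed as a query one could run on the old cluster (the healthy/lost states come from pg_replication_slots.wal_status):

SELECT slot_name, plugin, two_phase, wal_status
FROM pg_replication_slots
WHERE slot_type = 'logical';

-- A slot reporting wal_status = 'lost' has already lost WAL it needs and
-- would make the upgrade unsafe; 'reserved' is the healthy state.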
[ { "msg_contents": "hi.\nThe test seems to assume the following sql query should return zero row.\nbut it does not. I don't know much about the \"relreplident\" column.\n\nhttps://git.postgresql.org/cgit/postgresql.git/tree/src/test/regress/expected/type_sanity.out#n499\ndemo: https://dbfiddle.uk/QFM88S2e\n\ntest1=# \\dt\nDid not find any relations.\ntest1=# SELECT c1.oid, c1.relname, relkind, relpersistence, relreplident\nFROM pg_class as c1\nWHERE relkind NOT IN ('r', 'i', 'S', 't', 'v', 'm', 'c', 'f', 'p') OR\n relpersistence NOT IN ('p', 'u', 't') OR\n relreplident NOT IN ('d', 'n', 'f', 'i');\n oid | relname | relkind | relpersistence | relreplident\n-----+---------+---------+----------------+--------------\n(0 rows)\n\ntest1=# CREATE TABLE test_partition (\n id int4range,\n valid_at daterange,\n name text,\n CONSTRAINT test_partition_uq1 UNIQUE (id, valid_at )\n) PARTITION BY LIST (id);\nCREATE TABLE\ntest1=# SELECT c1.oid, c1.relname, relkind, relpersistence, relreplident\nFROM pg_class as c1\nWHERE relkind NOT IN ('r', 'i', 'S', 't', 'v', 'm', 'c', 'f', 'p') OR\n relpersistence NOT IN ('p', 'u', 't') OR\n relreplident NOT IN ('d', 'n', 'f', 'i');\n oid | relname | relkind | relpersistence | relreplident\n---------+--------------------+---------+----------------+--------------\n 1034304 | test_partition_uq1 | I | p | n\n(1 row)\n\ntest1=# select version();\n version\n--------------------------------------------------------------------\n PostgreSQL 16beta1 on x86_64-linux, compiled by gcc-11.3.0, 64-bit\n(1 row)\n\n\n", "msg_date": "Fri, 27 Oct 2023 11:45:44 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "maybe a type_sanity. sql bug" }, { "msg_contents": "On Fri, Oct 27, 2023 at 11:45:44AM +0800, jian he wrote:\n> The test seems to assume the following sql query should return zero row.\n> but it does not. I don't know much about the \"relreplident\" column.\n\nThis is not about relreplident here, that refers to a relation's\nreplica identity.\n\n> test1=# SELECT c1.oid, c1.relname, relkind, relpersistence, relreplident\n> FROM pg_class as c1\n> WHERE relkind NOT IN ('r', 'i', 'S', 't', 'v', 'm', 'c', 'f', 'p') OR\n> relpersistence NOT IN ('p', 'u', 't') OR\n> relreplident NOT IN ('d', 'n', 'f', 'i');\n> oid | relname | relkind | relpersistence | relreplident\n> -----+---------+---------+----------------+--------------\n> (0 rows)\n\nThe problem is about relkind, as 'I' refers to a partitioned index.\nThat is a legal value in pg_class.relkind, but we forgot to list it in\nthis test.\n--\nMichael", "msg_date": "Fri, 27 Oct 2023 14:10:54 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maybe a type_sanity. sql bug" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Fri, Oct 27, 2023 at 11:45:44AM +0800, jian he wrote:\n>> The test seems to assume the following sql query should return zero row.\n>> but it does not. I don't know much about the \"relreplident\" column.\n\n> The problem is about relkind, as 'I' refers to a partitioned index.\n> That is a legal value in pg_class.relkind, but we forgot to list it in\n> this test.\n\nYeah, in principle this check should allow any permissible relkind\nvalue. In practice, because it runs so early in the regression tests,\nthere's not many values present. 
I added a quick check and found that\ntype_sanity only sees these values:\n \n -- **************** pg_class ****************\n -- Look for illegal values in pg_class fields\n+select distinct relkind from pg_class order by 1;\n+ relkind \n+---------\n+ i\n+ r\n+ t\n+ v\n+(4 rows)\n+\n SELECT c1.oid, c1.relname\n FROM pg_class as c1\n WHERE relkind NOT IN ('r', 'i', 'S', 't', 'v', 'm', 'c', 'f', 'p') OR\n\nWe've had some prior discussions about moving type_sanity, opr_sanity\netc to run later when there's a wider variety of objects present.\nI'm not sure about that being a great idea though --- for example,\nthere's a test that creates an intentionally incomplete opclass\nand even leaves it around for pg_dump stress testing. That'd\nprobably annoy opr_sanity if it ran after that one.\n\nThe original motivation for type_sanity and friends was mostly\nto detect mistakes in the hand-rolled initial catalog contents,\nand for that purpose it's fine if they run early. Some of what\nthey check is now redundant with genbki.pl I think.\n\nAnyway, we should fix this if only for clarity's sake.\nI do not feel a need to back-patch though.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 27 Oct 2023 21:44:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maybe a type_sanity. sql bug" }, { "msg_contents": "On Fri, Oct 27, 2023 at 09:44:30PM -0400, Tom Lane wrote:\n> Anyway, we should fix this if only for clarity's sake.\n> I do not feel a need to back-patch though.\n\nAgreed. Thanks for the commit.\n--\nMichael", "msg_date": "Sat, 28 Oct 2023 20:01:46 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maybe a type_sanity. sql bug" }, { "msg_contents": "Looking around, I found three other minor issues; patch attached.\n\nI am not sure the pg_class \"relam\" description part is correct, since\npartitioned indexes (relkind \"I\") also have the access method, but no\nstorage.\n\"\nIf this is a table or an index, the access method used (heap, B-tree,\nhash, etc.); otherwise zero (zero occurs for sequences, as well as\nrelations without storage, such as views)\n\"", "msg_date": "Sat, 11 Nov 2023 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: maybe a type_sanity. sql bug" }, { "msg_contents": "On Sat, Nov 11, 2023 at 08:00:00AM +0800, jian he wrote:\n> I am not sure the pg_class \"relam\" description part is correct, since\n> partitioned indexes (relkind \"I\") also have the access method, but no\n> storage.\n> \"\n> If this is a table or an index, the access method used (heap, B-tree,\n> hash, etc.); otherwise zero (zero occurs for sequences, as well as\n> relations without storage, such as views)\n> \"\n\nThis should be adjusted as well in the docs, IMO. I would propose\nsomething slightly more complicated:\n\"\nIf this is a table, index, materialized view or partitioned index, the\naccess method used (heap, B-tree, hash, etc.); otherwise zero (zero\noccurs for sequences, as well as relations without storage, like\nviews).\n\"\n\n> diff --git a/src/test/regress/sql/type_sanity.sql b/src/test/regress/sql/type_sanity.sql\n> index a546ba89..6d806941 100644\n> --- a/src/test/regress/sql/type_sanity.sql\n> +++ b/src/test/regress/sql/type_sanity.sql\n\nAhah, nice catches. I'll go adjust that on HEAD like the other one\nyou pointed out. 
Just note that materialized views have a relam\ndefined, so the first comment you have changed is not completely\ncorrect.\n--\nMichael", "msg_date": "Sat, 11 Nov 2023 09:38:53 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maybe a type_sanity. sql bug" }, { "msg_contents": "On Sat, Nov 11, 2023 at 09:38:53AM +0900, Michael Paquier wrote:\n> On Sat, Nov 11, 2023 at 08:00:00AM +0800, jian he wrote:\n>> diff --git a/src/test/regress/sql/type_sanity.sql b/src/test/regress/sql/type_sanity.sql\n>> index a546ba89..6d806941 100644\n>> --- a/src/test/regress/sql/type_sanity.sql\n>> +++ b/src/test/regress/sql/type_sanity.sql\n> \n> Ahah, nice catches. I'll go adjust that on HEAD like the other one\n> you pointed out. Just note that materialized views have a relam\n> defined, so the first comment you have changed is not completely\n> correct.\n\nFixed all that with a9f19c1349c2 for now.\n--\nMichael", "msg_date": "Sun, 12 Nov 2023 10:09:02 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maybe a type_sanity. sql bug" } ]
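The fix discussed above amounts to adding 'I' (partitioned index) to the allowed relkind list in type_sanity. A minimal sketch of the corrected check, using the same query shape as the original report:

SELECT c1.oid, c1.relname, relkind, relpersistence, relreplident
FROM pg_class AS c1
WHERE relkind NOT IN ('r', 'i', 'S', 't', 'v', 'm', 'c', 'f', 'p', 'I') OR
      relpersistence NOT IN ('p', 'u', 't') OR
      relreplident NOT IN ('d', 'n', 'f', 'i');

-- With 'I' in the list, the partitioned index test_partition_uq1 from the
-- reproduction no longer shows up as a false positive.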
[ { "msg_contents": "Hi,\n\nI as far as I can tell, pg_dump does not dup the ‘run_as_owner` setting for a subscription.\n\nShould it? Should I submit a patch? It seems pretty trivial to fix if anyone else is working on it.\n\nSent from Mail for Windows\n\n\nHi, I as far as I can tell, pg_dump does not dup the ‘run_as_owner` setting for a subscription.Should it? Should I submit a patch? It seems pretty trivial to fix if anyone else is working on it. Sent from Mail for Windows", "msg_date": "Fri, 27 Oct 2023 15:25:30 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump not dumping the run_as_owner setting from version 16?" }, { "msg_contents": "Further to this: it seems that `Alter Subscription X Set(Run_As_Owner=True);` has no influence on the `subrunasowner` column of pg_subscriptions.\n\nSent from Mail for Windows\n\nFrom: Philip Warner\nSent: Friday, 27 October 2023 3:26 PM\nTo: [email protected]\nSubject: pg_dump not dumping the run_as_owner setting from version 16?\n\nHi,\n\nI as far as I can tell, pg_dump does not dup the ‘run_as_owner` setting for a subscription.\n\nShould it? Should I submit a patch? It seems pretty trivial to fix if anyone else is working on it.\n\nSent from Mail for Windows\n\n\n\nFurther to this: it seems that `Alter Subscription X Set(Run_As_Owner=True);` has no influence on the `subrunasowner` column of pg_subscriptions. Sent from Mail for Windows From: Philip WarnerSent: Friday, 27 October 2023 3:26 PMTo: [email protected]: pg_dump not dumping the run_as_owner setting from version 16? Hi, I as far as I can tell, pg_dump does not dup the ‘run_as_owner` setting for a subscription.Should it? Should I submit a patch? It seems pretty trivial to fix if anyone else is working on it. Sent from Mail for Windows", "msg_date": "Fri, 27 Oct 2023 18:05:30 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pg_dump not dumping the run_as_owner setting from version 16?" }, { "msg_contents": "On Fri, 2023-10-27 at 18:05 +1100, Philip Warner wrote:\n> I as far as I can tell, pg_dump does not dup the ‘run_as_owner` setting for a subscription.\n> \n> Should it? Should I submit a patch? It seems pretty trivial to fix if anyone else is working on it.\n\nYes, it certainly should. That is an omission in 482675987b.\nGo ahead and write a fix!\n\n\n> Further to this: it seems that `Alter Subscription X Set(Run_As_Owner=True);`\n> has no influence on the `subrunasowner` column of pg_subscriptions.\n\nThis seems to have been fixed in f062cddafe.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 27 Oct 2023 09:52:31 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump not dumping the run_as_owner setting from version 16?" }, { "msg_contents": "> > I as far as I can tell, pg_dump does not dup the ‘run_as_owner` setting for a subscription.\n> > \n> > Should it? Should I submit a patch? It seems pretty trivial to fix if anyone else is working on it.\n> \n> Yes, it certainly should. That is an omission in 482675987b.\n> Go ahead and write a fix!\n\nPlease find attached a patch for pg_dump to honour the setting of `run_as_owner`; I believe that effective pre-16 behavious was to run as owner, so I have set the flag to ‘t’ for pre-16 versions. 
Please let me know if you would prefer the opposite.\n\n\n> > Further to this: it seems that `Alter Subscription X Set(Run_As_Owner=True);`\n> > has no influence on the `subrunasowner` column of pg_subscriptions.\n> \n> This seems to have been fixed in f062cddafe.\n\nYes, I can confirm that in the current head `pg_subscriptions` reflects the setting correctly.", "msg_date": "Sat, 28 Oct 2023 19:03:13 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pg_dump not dumping the run_as_owner setting from version 16?" }, { "msg_contents": "...patch actually attached this time...\n\n> > As far as I can tell, pg_dump does not dump the `run_as_owner` setting for a subscription.\n> > \n> > Should it? Should I submit a patch? It seems pretty trivial to fix if anyone else is working on it.\n> \n> Yes, it certainly should.  That is an omission in 482675987b.\n> Go ahead and write a fix!\n\nPlease find attached a patch for pg_dump to honour the setting of `run_as_owner`; I believe that the effective pre-16 behaviour was to run as owner, so I have set the flag to 't' for pre-16 versions. Please let me know if you would prefer the opposite.\n\n\n> > Further to this: it seems that `Alter Subscription X Set(Run_As_Owner=True);`\n> > has no influence on the `subrunasowner` column of pg_subscriptions.\n> \n> This seems to have been fixed in f062cddafe.\n\nYes, I can confirm that in the current head `pg_subscriptions` reflects the setting correctly.", "msg_date": "Sat, 28 Oct 2023 19:54:18 +1100", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": true, "msg_subject": "RE: pg_dump not dumping the run_as_owner setting from version 16?" }, { "msg_contents": "Philip Warner <[email protected]> writes:\n> Please find attached a patch for pg_dump to honour the setting of `run_as_owner`; I believe that the effective pre-16 behaviour was to run as owner, so I have set the flag to 't' for pre-16 versions. Please let me know if you would prefer the opposite.\n\nI think that's the correct choice. Fix pushed, thanks.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 29 Oct 2023 12:57:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump not dumping the run_as_owner setting from version 16?" } ]
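A minimal sketch of how the omission could be observed end to end; the subscription name, publication name, and connection string are illustrative only:

CREATE SUBSCRIPTION sub1
    CONNECTION 'host=publisher dbname=src'
    PUBLICATION pub1
    WITH (connect = false, run_as_owner = true);

SELECT subname, subrunasowner FROM pg_subscription;

-- Before the fix, the CREATE SUBSCRIPTION command emitted by pg_dump carried
-- no run_as_owner option, so subrunasowner silently reverted to its default
-- of false after a dump/restore cycle.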
[ { "msg_contents": "Hello.\n\nI found the following message recently introduced in pg_upgrade:\n\n>\t\tpg_log(PG_VERBOSE, \"slot_name: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\",\n>\t\t\t slot_info->slotname,\n>\t\t\t slot_info->plugin,\n>\t\t\t slot_info->two_phase ? \"true\" : \"false\");\n\nIf the labels correspond to the struct member names, the first label\nought to be \"slotname\". If not, all labels of this type, including\nthose adjucent, should have a more natural spelling.\n\nWhat do you think about this?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 27 Oct 2023 14:20:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "pg_upgrade's object listing" }, { "msg_contents": "On Friday, October 27, 2023 1:21 PM Kyotaro Horiguchi <[email protected]> wrote:\n> \n> Hello.\n> \n> I found the following message recently introduced in pg_upgrade:\n> \n> >\t\tpg_log(PG_VERBOSE, \"slot_name: \\\"%s\\\", plugin: \\\"%s\\\",\n> two_phase: %s\",\n> >\t\t\t slot_info->slotname,\n> >\t\t\t slot_info->plugin,\n> >\t\t\t slot_info->two_phase ? \"true\" : \"false\");\n> \n> If the labels correspond to the struct member names, the first label ought to be\n> \"slotname\". If not, all labels of this type, including those adjucent, should have a\n> more natural spelling.\n> \n> What do you think about this?\n\nThanks for reporting. But I am not sure if rename to slotname or others will be an\nimprovement. I think we don't have a rule to make the output the same as struct\nfield. Existing message also don't follow it[1]. So, the current message looks\nOK to me.\n\n[1]\n pg_log(PG_VERBOSE, \"relname: \\\"%s.%s\\\", reloid: %u, reltblspace: \\\"%s\\\"\",\n rel_arr->rels[relnum].nspname,\n rel_arr->rels[relnum].relname,\n rel_arr->rels[relnum].reloid,\n rel_arr->rels[relnum].tablespace);\n\nBest Regards,\nHou zj\n\n\n", "msg_date": "Fri, 27 Oct 2023 05:56:31 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: pg_upgrade's object listing" }, { "msg_contents": "At Fri, 27 Oct 2023 05:56:31 +0000, \"Zhijie Hou (Fujitsu)\" <[email protected]> wrote in \n> On Friday, October 27, 2023 1:21 PM Kyotaro Horiguchi <[email protected]> wrote:\n> > \n> > Hello.\n> > \n> > I found the following message recently introduced in pg_upgrade:\n> > \n> > >\t\tpg_log(PG_VERBOSE, \"slot_name: \\\"%s\\\", plugin: \\\"%s\\\",\n> > two_phase: %s\",\n> > >\t\t\t slot_info->slotname,\n> > >\t\t\t slot_info->plugin,\n> > >\t\t\t slot_info->two_phase ? \"true\" : \"false\");\n> > \n> > If the labels correspond to the struct member names, the first label ought to be\n> > \"slotname\". If not, all labels of this type, including those adjucent, should have a\n> > more natural spelling.\n> > \n> > What do you think about this?\n> \n> Thanks for reporting. But I am not sure if rename to slotname or others will be an\n> improvement. I think we don't have a rule to make the output the same as struct\n> field. Existing message also don't follow it[1]. So, the current message looks\n> OK to me.\n> \n> [1]\n> pg_log(PG_VERBOSE, \"relname: \\\"%s.%s\\\", reloid: %u, reltblspace: \\\"%s\\\"\",\n> rel_arr->rels[relnum].nspname,\n> rel_arr->rels[relnum].relname,\n> rel_arr->rels[relnum].reloid,\n> rel_arr->rels[relnum].tablespace);\n\nThanks for sharing your perspectie. I share similar sentiments. The\ninitial question arose during the message translation. 
For the\nsubsequent one, I opted not to translate the labels as they looked to\nbe member names. From this viewpoint, \"slot_name\" is rather ambiguous.\n\nIf there's no interest in modifying it, I will retain the original\nlabels in translated messages, and that should suffice.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 27 Oct 2023 15:18:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade's object listing" }, { "msg_contents": "Hi,\n\nOn 2023-Oct-27, Kyotaro Horiguchi wrote:\n\n> I found the following message recently introduced in pg_upgrade:\n> \n> >\t\tpg_log(PG_VERBOSE, \"slot_name: \\\"%s\\\", plugin: \\\"%s\\\", two_phase: %s\",\n> >\t\t\t slot_info->slotname,\n> >\t\t\t slot_info->plugin,\n> >\t\t\t slot_info->two_phase ? \"true\" : \"false\");\n> \n> If the labels correspond to the struct member names, the first label\n> ought to be \"slotname\". If not, all labels of this type, including\n> those adjacent, should have a more natural spelling.\n> \n> What do you think about this?\n\nI think this shouldn't be a translatable message in the first place.\n\nLooking at the wording of other messages in pg_upgrade --verbose, it\ndoesn't look like any of it is intended for user consumption. I mean,\nlook at this monstrosity\n\n\t\tpg_log(PG_VERBOSE, \"relname: \\\"%s.%s\\\", reloid: %u, reltblspace: \\\"%s\\\"\",\n\nBefore 249d74394500 it used to be even more hideous. This message comes\nstraight from the initial pg_upgrade commit in 2010, c2e9b2f28818, where\nit was a debug message. We seem to have promoted it to a verbose\nmessage (commit 717f6d60859c) for no particular reason and without\ncareful consideration.\n\nI honestly doubt that this sort of message is in any way useful, other\nthan for program debugging. Maybe listing databases and perhaps slots\nin verbose mode is OK, but tables? I don't think so.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I'm impressed how quickly you are fixing this obscure issue. I came from \nMS SQL and it would be hard for me to put into words how much of a better job\nyou all are doing on [PostgreSQL].\"\n Steve Midgley, http://archives.postgresql.org/pgsql-sql/2008-08/msg00000.php", "msg_date": "Fri, 27 Oct 2023 10:44:17 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade's object listing" }, { "msg_contents": "> On 27 Oct 2023, at 10:44, Alvaro Herrera <[email protected]> wrote:\n\n> I honestly doubt that this sort of message is in any way useful, other\n> than for program debugging. Maybe listing databases and perhaps slots\n> in verbose mode is OK, but tables? I don't think so.\n\nOutputting this in verbose mode is unlikely to help in regular usage and\ninstead risks drowning out other output. It would be more useful to be able to\nspecify a logfile for objects and keep verbose output for more informational\nand actionable messages.\n\n--\nDaniel Gustafsson", "msg_date": "Fri, 27 Oct 2023 10:50:02 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade's object listing" } ]
[ { "msg_contents": "Hi, hackers!\n\nI recently encountered strange behavior when, after running the \ncreate_ms.sql test, I ran the last query from this test. In general, the \nplayback looks like this:\n\n\\i src/test/regress/sql/create_misc.sql\n\nI added Assert(0) in create_sort_plan() before calling \ncreate_plan_recurse and restarted postgres. After that I run query:\n\nSELECT relname, reltoastrelid <> 0 AS has_toast_table\n    FROM pg_class\n    WHERE oid::regclass IN ('a_star', 'c_star')\n    ORDER BY 1;\n\nI found Invalid_path in cheapest_startup_path:\n\n  (gdb) p *(IndexPath *)((SortPath \n*)best_path)->subpath->parent->cheapest_startup_path\n$12 = {path = {type = T_Invalid, pathtype = T_Invalid, parent = \n0x7f7f7f7f7f7f7f7f, pathtarget = 0x7f7f7f7f7f7f7f7f, param_info = \n0x7f7f7f7f7f7f7f7f,\n     parallel_aware = 127, parallel_safe = 127, parallel_workers = \n2139062143 <tel:2139062143>, rows = 1.3824172084878715e+306, \nstartup_cost = 1.3824172084878715e+306,\n     total_cost = 1.3824172084878715e+306, pathkeys = \n0x7f7f7f7f7f7f7f7f}, indexinfo = 0x7f7f7f7f7f7f7f7f, indexclauses = \n0x7f7f7f7f7f7f7f7f,\n   indexorderbys = 0x7f7f7f7f7f7f7f7f, indexorderbycols = \n0x7f7f7f7f7f7f7f7f, indexscandir = 2139062143 <tel:2139062143>, \nindextotalcost = 1.3824172084878715e+306,\n   indexselectivity = 1.3824172084878715e+306}\n\n(gdb) p (IndexPath *)((SortPath \n*)best_path)->subpath->parent->cheapest_startup_path\n$11 = (IndexPath *) 0x555febc66160\n\nI found that this beginning since creation upperrel (fetch_upper_rel \nfunction):\n\n/* primary planning entry point (may recurse for subqueries) */  root = \nsubquery_planner(glob, parse, NULL,        false, tuple_fraction);  /* \nSelect best Path and turn it into a Plan */ * final_rel = \nfetch_upper_rel(root, UPPERREL_FINAL, NULL);*  best_path = \nget_cheapest_fractional_path(final_rel, tuple_fraction);\nRed Heart\n\n(gdb) p *(IndexPath *)((SortPath *)final_rel->cheapest_total_path \n)->subpath->parent->cheapest_startup_path\n$15 = {path = {type = T_Invalid, pathtype = T_Invalid, parent = \n0x7f7f7f7f7f7f7f7f, pathtarget = 0x7f7f7f7f7f7f7f7f, param_info = \n0x7f7f7f7f7f7f7f7f,\n     parallel_aware = 127, parallel_safe = 127, parallel_workers = \n2139062143 <tel:2139062143>, rows = 1.3824172084878715e+306, \nstartup_cost = 1.3824172084878715e+306,\n     total_cost = 1.3824172084878715e+306, pathkeys = \n0x7f7f7f7f7f7f7f7f}, indexinfo = 0x7f7f7f7f7f7f7f7f, indexclauses = \n0x7f7f7f7f7f7f7f7f,\n   indexorderbys = 0x7f7f7f7f7f7f7f7f, indexorderbycols = \n0x7f7f7f7f7f7f7f7f, indexscandir = 2139062143 <tel:2139062143>, \nindextotalcost = 1.3824172084878715e+306,\n   indexselectivity = 1.3824172084878715e+306}\n(gdb) p (IndexPath *)((SortPath *)final_rel->cheapest_total_path \n)->subpath->parent->cheapest_startup_path\n$16 = (IndexPath *) 0x555febc66160\n\nI know it doesn't cause a crash anywhere, but can anybody explain me \nwhat's going on here and why Invalid Path appears?\n\n-- \nRegards,\nAlena Rybakina\n\n\n\n\n\n\nHi, hackers!\nI recently encountered strange behavior when, after running the\n create_ms.sql test, I ran the last query from this test. In\n general, the playback looks like this:\n\n\\i src/test/regress/sql/create_misc.sql\nI added Assert(0) in create_sort_plan() before calling\n create_plan_recurse and restarted postgres. 
", "msg_date": "Fri, 27 Oct 2023 11:53:12 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": true, "msg_subject": "Invalid Path with UpperRel" } ]
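A condensed version of the reproduction described above, runnable from a source-tree psql session against an assert-enabled build (the tables a_star and c_star are created by create_misc.sql):

\i src/test/regress/sql/create_misc.sql

SELECT relname, reltoastrelid <> 0 AS has_toast_table
FROM pg_class
WHERE oid::regclass IN ('a_star', 'c_star')
ORDER BY 1;

-- With an Assert(0) planted in create_sort_plan(), a debugger stopped at the
-- assertion can inspect cheapest_startup_path as shown in the gdb output above.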
[ { "msg_contents": "In contrib/btree_gin, leftmostvalue_interval() does this:\n\nleftmostvalue_interval(void)\n{\n Interval *v = palloc(sizeof(Interval));\n\n v->time = DT_NOBEGIN;\n v->day = 0;\n v->month = 0;\n return IntervalPGetDatum(v);\n}\n\nwhich is a long way short of the minimum possible interval value.\n\nAs a result, a < or <= query using a GIN index on an interval column\nmay miss values. For example:\n\nCREATE EXTENSION btree_gin;\nCREATE TABLE foo (a interval);\nINSERT INTO foo VALUES ('-1000000 years');\nCREATE INDEX foo_idx ON foo USING gin (a);\n\nSET enable_seqscan = off;\nSELECT * FROM foo WHERE a < '1 year';\n a\n---\n(0 rows)\n\nAttached is a patch fixing this by setting all the fields to their\nminimum values, which is guaranteed to be less than any other\ninterval.\n\nNote that this doesn't affect the contents of the index itself, so\nreindexing is not necessary.\n\nRegards,\nDean", "msg_date": "Fri, 27 Oct 2023 10:26:53 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "btree_gin: Incorrect leftmost interval value" }, { "msg_contents": "On 27/10/2023 12:26, Dean Rasheed wrote:\n> In contrib/btree_gin, leftmostvalue_interval() does this:\n> \n> leftmostvalue_interval(void)\n> {\n> Interval *v = palloc(sizeof(Interval));\n> \n> v->time = DT_NOBEGIN;\n> v->day = 0;\n> v->month = 0;\n> return IntervalPGetDatum(v);\n> }\n> \n> which is a long way short of the minimum possible interval value.\n\nGood catch!\n\n> Attached is a patch fixing this by setting all the fields to their\n> minimum values, which is guaranteed to be less than any other\n> interval.\n\nLGTM. I wish extractQuery could return \"leftmost\" more explicitly, so \nthat we didn't need to construct these leftmost values. But I don't \nthink that's supported by the current extractQuery interface.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 27 Oct 2023 15:42:52 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: btree_gin: Incorrect leftmost interval value" }, { "msg_contents": "On Fri, Oct 27, 2023 at 2:57 PM Dean Rasheed <[email protected]> wrote:\n>\n> In contrib/btree_gin, leftmostvalue_interval() does this:\n>\n> leftmostvalue_interval(void)\n> {\n> Interval *v = palloc(sizeof(Interval));\n>\n> v->time = DT_NOBEGIN;\n> v->day = 0;\n> v->month = 0;\n> return IntervalPGetDatum(v);\n> }\n>\n> which is a long way short of the minimum possible interval value.\n>\n> As a result, a < or <= query using a GIN index on an interval column\n> may miss values. For example:\n>\n> CREATE EXTENSION btree_gin;\n> CREATE TABLE foo (a interval);\n> INSERT INTO foo VALUES ('-1000000 years');\n> CREATE INDEX foo_idx ON foo USING gin (a);\n>\n> SET enable_seqscan = off;\n> SELECT * FROM foo WHERE a < '1 year';\n> a\n> ---\n> (0 rows)\n>\n> Attached is a patch fixing this by setting all the fields to their\n> minimum values, which is guaranteed to be less than any other\n> interval.\n\nShould we change this to call INTERVAL_NOBEGIN() to be added by\ninfinite interval patches? It's the same effect but looks similar to\nleftmostvalue_timestamp/float8 etc. It will need to wait for the\ninfinite interval patches to commit but I guess, the wait won't be too\nlong and the outcome will be better. 
I can include this change in\nthose patches.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 27 Oct 2023 18:26:14 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: btree_gin: Incorrect leftmost interval value" }, { "msg_contents": "On Fri, 27 Oct 2023 at 13:56, Ashutosh Bapat\n<[email protected]> wrote:\n>\n> Should we change this to call INTERVAL_NOBEGIN() to be added by\n> infinite interval patches?\n>\n\nGiven that this is a bug that can lead to incorrect query results, I\nplan to back-patch it, and INTERVAL_NOBEGIN() wouldn't make sense in\nback-branches that don't have infinite intervals.\n\nRegards,\nDean\n\n\n", "msg_date": "Fri, 27 Oct 2023 14:05:21 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: btree_gin: Incorrect leftmost interval value" }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n> On Fri, 27 Oct 2023 at 13:56, Ashutosh Bapat\n> <[email protected]> wrote:\n>> Should we change this to call INTERVAL_NOBEGIN() to be added by\n>> infinite interval patches?\n\n> Given that this is a bug that can lead to incorrect query results, I\n> plan to back-patch it, and INTERVAL_NOBEGIN() wouldn't make sense in\n> back-branches that don't have infinite intervals.\n\nAgreed. When/if the infinite interval patch lands, it could update\nthis function to use the macro.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Oct 2023 10:27:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: btree_gin: Incorrect leftmost interval value" } ]
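A minimal sketch confirming the behaviour change from the reproduction case above; with the corrected leftmost value, the GIN index scan finds the large negative interval:

CREATE EXTENSION btree_gin;
CREATE TABLE foo (a interval);
INSERT INTO foo VALUES ('-1000000 years');
CREATE INDEX foo_idx ON foo USING gin (a);

SET enable_seqscan = off;
SELECT * FROM foo WHERE a < '1 year';
--        a
-- ----------------
--  -1000000 years
-- (1 row)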
[ { "msg_contents": "Hi All,\nReading [1] I have been experimenting with behaviour of identity\ncolumns and serial column in case of partitioned tables. My\nobservations related to serial column can be found at [2]. This email\nis about identity column behaviour with partitioned tables. Serial and\nidentity columns have sufficiently different behaviour and\nimplementation to have separate discussions on their behaviour. I\ndon't want to mix this with [1] since that thread is about replacing\nserial with identity. The discussion in this and [2] will be useful to\ndrive [1] forward.\n\nBehaviour 1\n=========\nIf a partitioned table has an identity column, the partitions do not\ninherit identity property.\n#create table tpart (a int generated always as identity primary key,\n src varchar) partition by range(a);\n#create table t_p1 partition of tpart for values from (1) to (3);\n#\\d tpart\n Partitioned table \"public.tpart\"\n Column | Type | Collation | Nullable | Default\n--------+-------------------+-----------+----------+------------------------------\n a | integer | | not null | generated always\nas identity\n src | character varying | | |\nPartition key: RANGE (a)\nIndexes:\n \"tpart_pkey\" PRIMARY KEY, btree (a)\nNumber of partitions: 2 (Use \\d+ to list them.)\n\n#\\d t_p1\n Table \"public.t_p1\"\n Column | Type | Collation | Nullable | Default\n--------+-------------------+-----------+----------+---------\n a | integer | | not null |\n src | character varying | | |\nPartition of: tpart FOR VALUES FROM (1) TO (3)\nIndexes:\n \"t_p1_pkey\" PRIMARY KEY, btree (a)\n\nNotice that the default column of t_p1. This means that a direct\nINSERT into partition will fail if it does not specify value for the\nidentity column. As a consequence such a value may conflict with an\nexisting value or a future value of the identity column. In the\nexample, the identity column is a primary key and also a partition\nkey, thus the conflict would result in an error. But when it's not a\npartition key (and hence a primary key), it will just allow those\nconflicting values.\n\nBehaviour 2\n=========\nIf a table being attached as a partition to a partitioned table and\nboth of them have column with same name as identity column, the ATTACH\nsucceeds and allow both tables to use different sequences.\n#create table t_p5 (a int primary key, b int generated always as\nidentity, src varchar);\n#alter table tpart attach partition t_p5 for values from (7) to (9);\n#\\d t_p5\n Table \"public.t_p5\"\n Column | Type | Collation | Nullable | Default\n--------+-------------------+-----------+----------+------------------------------\n a | integer | | not null |\n b | integer | | not null | generated always\nas identity\n src | character varying | | |\nPartition of: tpart FOR VALUES FROM (7) TO (9)\nIndexes:\n \"t_p5_pkey\" PRIMARY KEY, btree (a)\n\nAs a consequence a direct INSERT into the partition will result in a\nvalue for identity column which conflicts with an existing value or a\nfuture value in the partitioned table. Again, if the identity column\nin a primary key (and partition key), the conflicting INSERT will\nfail. But otherwise, the conflicting values will go unnoticed.\n\nI consulted Vik Fearing, offline, about SQL standard's take on\nidentity columns in partitioned table. SQL standard does not specify\npartitioned tables as a separate entity. Thus a partitioned table is\nat par with a regular table. 
Hence an identity column in a partitioned\ntable should share the same identity space across all the partitions.\n\nBehaviour 3\n=========\nWe allow an identity column to be added to a partitioned table which\ndoesn't have partitions, but we disallow adding an identity column to a\npartitioned table which has partitions.\n#create table tpart (a int primary key,\n src varchar) partition by range(a);\n#create table t_p1 partition of tpart for values from (1) to (3);\n#alter table tpart add column b int generated always as identity;\nERROR: cannot recursively add identity column to table that has child tables\n\nI don't understand why that is prohibited. If we allow partitions to\nbe added to a partitioned table with an identity column, we should allow\nan identity column to be added to a partitioned table with partitions.\n\nBehaviour 4\n=========\nEven though we don't allow an identity column to be added to a\npartitioned table with partitions, we allow an existing column to be\nconverted to an identity column in such a table.\n\n#create table tpart (a int primary key,\n src varchar) partition by range(a);\n#create table t_p1 partition of tpart for values from (1) to (3);\n#create table t_p2 partition of tpart for values from (3) to (5);\n#alter table tpart alter column a add generated always as identity;\n\n#\\d tpart\n Partitioned table \"public.tpart\"\n Column | Type | Collation | Nullable | Default\n--------+-------------------+-----------+----------+------------------------------\n a | integer | | not null | generated always\nas identity\n src | character varying | | |\nPartition key: RANGE (a)\nIndexes:\n \"tpart_pkey\" PRIMARY KEY, btree (a)\nNumber of partitions: 2 (Use \\d+ to list them.)\n\n#\\d t_p1\n Table \"public.t_p1\"\n Column | Type | Collation | Nullable | Default\n--------+-------------------+-----------+----------+---------\n a | integer | | not null |\n src | character varying | | |\nPartition of: tpart FOR VALUES FROM (1) TO (3)\nIndexes:\n \"t_p1_pkey\" PRIMARY KEY, btree (a)\n\nBehaviours 3 and 4 conflict with each other.\n\nI think we should fix these anomalies as follows\n1. Allow identity columns to be added to the partitioned table\nirrespective of whether they have partitions or not.\n2. Propagate identity property to partitions.\n3. Use the same underlying sequence for getting default value of an\nidentity column when INSERTing directly in a partition.\n4. Disallow attaching a partition with identity column.\n\n1 will fix inconsistencies in Behaviour 3 and 4. 2 and 3 will fix\nanomalies in Behaviour 1. 4 will fix Behaviour 2.\n
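\nFor illustration, here is a minimal psql sketch (hypothetical session, added by the editor, not from the original mail) of the behaviour proposals 2 and 3 aim for:\n\n#create table tpart (a int generated always as identity primary key,\n src varchar) partition by range(a);\n#create table t_p1 partition of tpart for values from (1) to (3);\n-- with proposal 2, \\d t_p1 should now show \"generated always as identity\"\n-- for column a; with proposal 3, a direct INSERT draws from the same\n-- sequence as an INSERT through the parent:\n#insert into tpart (src) values ('via parent'); -- gets a = 1, routed to t_p1\n#insert into t_p1 (src) values ('via partition'); -- gets a = 2, not 1\n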
\nNote on point 3: The current implementation uses pg_depend to find the\nsequence associated with the identity column. We don't necessarily\nneed to add dependencies for individual partitions though. Instead we\ncould use the dependency on the partitioned table itself. I haven't\nchecked the feasibility of this option, but it makes things simpler esp.\nfor DETACH and DROP of a partition.\n\nNote on point 4: The proposal again simplifies DETACH and DROP. If we\ndecide to somehow coalesce the identity columns of the partition and the\npartitioned table, it would make DETACH and DROP complex. Also if the\nidentity column of the partition being attached is not an identity column\nin the partitioned table, INSERT on the partitioned table would fail in the\ncase of ALWAYS. Of course the risk is we will break backward compatibility.\nBut given that the current behaviour is quite erroneous, I doubt if\nthere are users relying on this behaviour.\n\nThoughts?\n\n\n[1] https://www.postgresql.org/message-id/flat/70be435b-05db-06f2-7c01-9bb8ee2fccce%40enterprisedb.com\n[2] https://www.postgresql.org/message-id/CAExHW5toAsjc7uwSeSzX6sgvktFxsv7pd606zP6DnTX7Y6O4jg@mail.gmail.com\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 27 Oct 2023 17:02:11 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "partitioning and identity column" }, { "msg_contents": "On 27.10.23 13:32, Ashutosh Bapat wrote:\n> I think we should fix these anomalies as follows\n> 1. Allow identity columns to be added to the partitioned table\n> irrespective of whether they have partitions or not.\n> 2. Propagate identity property to partitions.\n> 3. Use the same underlying sequence for getting default value of an\n> identity column when INSERTing directly in a partition.\n> 4. Disallow attaching a partition with identity column.\n> \n> 1 will fix inconsistencies in Behaviour 3 and 4. 2 and 3 will fix\n> anomalies in Behaviour 1. 4 will fix Behaviour 2.\n\nThis makes sense to me.\n\nNote, here is a writeup about the behavior of generated columns with \npartitioning: \nhttps://www.postgresql.org/docs/devel/ddl-generated-columns.html. It \nwould be useful if we documented the behavior of identity columns \nsimilarly. (I'm not saying the behavior has to match.)\n\nOne thing that's not clear to me is what should happen if you have a \npartitioned table with an identity column and you try to attach a \npartition that has its own identity definition for that column. I \nsuppose we shouldn't allow that. (The equivalent case for generated \ncolumns is allowed.)\n\n\n\n", "msg_date": "Mon, 13 Nov 2023 11:21:43 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On Mon, Nov 13, 2023 at 3:51 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 27.10.23 13:32, Ashutosh Bapat wrote:\n> > I think we should fix these anomalies as follows\n> > 1. Allow identity columns to be added to the partitioned table\n> > irrespective of whether they have partitions or not.\n> > 2. Propagate identity property to partitions.\n> > 3. Use the same underlying sequence for getting default value of an\n> > identity column when INSERTing directly in a partition.\n> > 4. Disallow attaching a partition with identity column.\n> >\n> > 1 will fix inconsistencies in Behaviour 3 and 4. 2 and 3 will fix\n> > anomalies in Behaviour 1. 4 will fix Behaviour 2.\n>\n> This makes sense to me.\n>\n> Note, here is a writeup about the behavior of generated columns with\n> partitioning:\n> https://www.postgresql.org/docs/devel/ddl-generated-columns.html. It\n> would be useful if we documented the behavior of identity columns\n> similarly. (I'm not saying the behavior has to match.)\n\nYes. Will add the documentation while working on the code.\n\n>\n> One thing that's not clear to me is what should happen if you have a\n> partitioned table with an identity column and you try to attach a\n> partition that has its own identity definition for that column. I\n> suppose we shouldn't allow that.\n\nThat's point 4 above. 
We shouldn't allow that case.\n\n> (The equivalent case for generated\n> columns is allowed.)\n>\n\nThere might be some weird behaviour because of that, like virtual\ncolumns from the same partition reporting different values based on\nthe table used in the SELECT clause, OR stored generated columns having\ndifferent values for two rows with the same underlying columns just\nbecause they were INSERTed into different tables (partitioned vs\npartition). That may have some impact on logical replication. I\nhaven't tested this myself. Maybe a topic for a separate thread.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 16 Nov 2023 16:23:25 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On Mon, Nov 13, 2023 at 3:51 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 27.10.23 13:32, Ashutosh Bapat wrote:\n> > I think we should fix these anomalies as follows\n> > 1. Allow identity columns to be added to the partitioned table\n> > irrespective of whether they have partitions or not.\n> > 2. Propagate identity property to partitions.\n> > 3. Use the same underlying sequence for getting default value of an\n> > identity column when INSERTing directly in a partition.\n> > 4. Disallow attaching a partition with identity column.\n> >\n> > 1 will fix inconsistencies in Behaviour 3 and 4. 2 and 3 will fix\n> > anomalies in Behaviour 1. 4 will fix Behaviour 2.\n>\n> This makes sense to me.\n\nPFA WIP patches implementing/fixing identity support for partitioned\ntables as outlined above.\n\nA partitioned table is a single relation and thus an identity column\nof a partitioned table should use the same identity space across all\nthe partitions. This means that the sequence underlying the identity\ncolumn will be shared by all the partitions of a partitioned table and\nthe column will have the same identity properties across all the\npartitions. Thus\n1. When a new partition is added or a table is attached as a\npartition, it inherits the identity column along with the underlying\nsequence from the partitioned table. It can not have an identity\ncolumn of its own.\n2. Since a partition never had its own identity column, when detaching\na partition, it will lose the identity property of any column that had\nit. If it were to retain the identity property, it could not use the\nunderlying sequence. That's not possible anyway.\n
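\nAs a sketch of rule 2 (hypothetical session, added by the editor, assuming the WIP patches behave as described; table names follow the earlier examples):\n\n#alter table tpart detach partition t_p1;\n#\\d t_p1\n-- column \"a\" keeps its data and its NOT NULL constraint, but no longer\n-- shows \"generated always as identity\", since the underlying sequence\n-- still belongs to tpart\n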
\nThis is different from the way we treat identity in inheritance.\nChildren in an inheritance hierarchy are independent enough to have\nseparate identity columns and sequences of their own. So the above\ndiscussion applies only to partitioned tables. The patches too deal\nonly with partitioned tables.\n\nAt this point I am looking for opinions on the above rules and whether\nthe implementation is on the right track.\n\nThe work consists of many small code changes. In order to make clear which\ncode change is associated with which SQL, each patch has the test changes\ntogether with the associated code change. Each patch has a commit message explaining\nthe changes in detail (and sometimes repeating the above rules again,\nsorry for the repetition). These patches will be merged into a single\npatch or a couple of patches at most. Here's what each patch does\n\n0001 - change to get_partition_ancestors() prologue. Can be reviewed\nand committed independent of other patches.\n\n0002 - A new partition inherits identity column and uses the\nunderlying sequence for direct INSERTs\n\n0004 - An attached partition inherits identity property and uses the\nunderlying sequence for direct INSERTs. When inheriting the identity\nproperty it should also inherit the NOT NULL constraint, but that's a\nTODO in this patch. We expect matching NOT NULL constraints to be\npresent in the partition being attached. I am not sure whether we want\nto add NOT NULL constraints automatically for an identity column. We\nrequire a NOT NULL constraint to be present when adding identity\nproperty to a column. 
The behavior in the patch seems to be consistent\n> with this.\n\nI think it makes sense that the NOT NULL constraint must be added \nmanually before attaching is allowed.\n\n\n\n", "msg_date": "Thu, 21 Dec 2023 12:02:02 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On Thu, Dec 21, 2023 at 4:32 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 19.12.23 11:47, Ashutosh Bapat wrote:\n> > At this point I am looking for opinions on the above rules and whether\n> > the implementation is on the right track.\n>\n> This looks on the right track to me.\n\nThanks.\n\n>\n> > 0001 - change to get_partition_ancestors() prologue. Can be reviewed\n> > and committed independent of other patches.\n>\n> I committed that.\n\nThanks.\n\n>\n> > 0004 - An attached partition inherits identity property and uses the\n> > underlying sequence for direct INSERTs. When inheriting the identity\n> > property it should also inherit the NOT NULL constraint, but that's a\n> > TODO in this patch. We expect matching NOT NULL constraints to be\n> > present in the partition being attached. I am not sure whether we want\n> > to add NOT NULL constraints automatically for an identity column. We\n> > require a NOT NULL constraint to be present when adding identity\n> > property to a column. The behavior in the patch seems to be consistent\n> > with this.\n>\n> I think it makes sense that the NOT NULL constraint must be added\n> manually before attaching is allowed.\n>\nOk. I have modified the test case to add NOT NULL constraint.\n\nHere's complete patch-set.\n0001 - fixes unrelated documentation style - can be committed\nindependently OR ignored\n0002 - adds an Assert in related code - can be independently committed\n\nOn Mon, Nov 13, 2023 at 3:51 PM Peter Eisentraut <[email protected]> wrote:\n> Note, here is a writeup about the behavior of generated columns with\n> partitioning:\n> https://www.postgresql.org/docs/devel/ddl-generated-columns.html. It\n> would be useful if we documented the behavior of identity columns\n> similarly. (I'm not saying the behavior has to match.)\n0003 - addresses this request\n\n0004 - 0011 - each patch contains code changes and SQL testing those\nchanges for ease of review. Each patch has commit message that\ndescribes the changes and rationale, if any, behind those changes.\n0012 - test changes\n0013 - expected output change because of code changes\nAll these patches should be committed as a single commit finally.\nPlease let me know when I can squash those all together. We may commit\n0003 separately or along with 0004-0013.\n\n0014 and 0015 - pg_dump/restore and pg_upgrade tests. But these\npatches are not expected to be committed for the reasons explained in\nthe commit message. Since identity columns of a partitioned table are\nnot marked as such in partitions in the older version, I tested their\nupgrade from PG 14 through the changes in 0015. pg_dumpall_14.out\ncontains the dump file from PG 14 I used for this testing.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Tue, 9 Jan 2024 19:40:33 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On 09.01.24 15:10, Ashutosh Bapat wrote:\n> Here's complete patch-set.\n\nLooks good! 
Committed.\n\n\n\n", "msg_date": "Tue, 16 Jan 2024 19:59:59 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On Wed, Jan 17, 2024 at 12:30 AM Peter Eisentraut <[email protected]> wrote:\n>\n> On 09.01.24 15:10, Ashutosh Bapat wrote:\n> > Here's complete patch-set.\n>\n> Looks good! Committed.\n>\n\nThanks a lot Peter.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 17 Jan 2024 11:06:33 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On 17.01.24 06:36, Ashutosh Bapat wrote:\n> On Wed, Jan 17, 2024 at 12:30 AM Peter Eisentraut <[email protected]> wrote:\n>>\n>> On 09.01.24 15:10, Ashutosh Bapat wrote:\n>>> Here's complete patch-set.\n>>\n>> Looks good! Committed.\n>>\n> \n> Thanks a lot Peter.\n\nI found another piece of code that might need updating, or at least the \ncomment.\n\nIn MergeAttributes(), in the part that merges the specified column \ndefinitions into the inherited ones, it says\n\n /*\n * Identity is never inherited. The new column can have an\n * identity definition, so we always just take that one.\n */\n def->identity = newdef->identity;\n\nThis is still correct for regular inheritance, but not for partitioning. \n I think for partitioning, this is not reachable because you can't \nspecify identity information when you create a partition(?). So maybe \nsomething like\n\n if (newdef->identity)\n {\n Assert(!is_partioning);\n /*\n * Identity is never inherited. The new column can have an\n * identity definition, so we always just take that one.\n */\n def->identity = newdef->identity;\n }\n\nThoughts?\n\n\n", "msg_date": "Mon, 22 Jan 2024 13:02:30 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On Mon, Jan 22, 2024 at 5:32 PM Peter Eisentraut <[email protected]> wrote:\n>\n> I found another piece of code that might need updating, or at least the\n> comment.\n>\n> In MergeAttributes(), in the part that merges the specified column\n> definitions into the inherited ones, it says\n>\n> /*\n> * Identity is never inherited. The new column can have an\n> * identity definition, so we always just take that one.\n> */\n> def->identity = newdef->identity;\n>\n> This is still correct for regular inheritance, but not for partitioning.\n> I think for partitioning, this is not reachable because you can't\n> specify identity information when you create a partition(?). So maybe\n> something like\n\nYou may specify the information when creating a partition, but it will\ncause an error. We have tests in identity.sql for the same (look for\npitest1_pfail).\n\n>\n> if (newdef->identity)\n> {\n> Assert(!is_partioning);\n> /*\n> * Identity is never inherited. The new column can have an\n> * identity definition, so we always just take that one.\n> */\n> def->identity = newdef->identity;\n> }\n>\n> Thoughts?\n\nThat code block already has Assert(!is_partition) at line 3085. I\nthought that Assert is enough.\n\nThere's another thing I found. The file isn't using\ncheck_stack_depth() in the function which traverse inheritance\nhierarchies. This isn't just a problem of the identity related\nfunction but most of the functions in that file. 
Do you think it's\nworth fixing it?\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 22 Jan 2024 17:53:19 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On 22.01.24 13:23, Ashutosh Bapat wrote:\n>> if (newdef->identity)\n>> {\n>> Assert(!is_partioning);\n>> /*\n>> * Identity is never inherited. The new column can have an\n>> * identity definition, so we always just take that one.\n>> */\n>> def->identity = newdef->identity;\n>> }\n>>\n>> Thoughts?\n> \n> That code block already has Assert(!is_partition) at line 3085. I\n> thought that Assert is enough.\n\nOk. Maybe just rephrase that comment somehow then?\n\n> There's another thing I found. The file isn't using\n> check_stack_depth() in the function which traverse inheritance\n> hierarchies. This isn't just a problem of the identity related\n> function but most of the functions in that file. Do you think it's\n> worth fixing it?\n\nI suppose the number of inheritance levels is usually not a problem for \nstack depth?\n\n\n\n", "msg_date": "Mon, 22 Jan 2024 19:59:32 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On Tue, Jan 23, 2024 at 12:29 AM Peter Eisentraut <[email protected]> wrote:\n>\n> On 22.01.24 13:23, Ashutosh Bapat wrote:\n> >> if (newdef->identity)\n> >> {\n> >> Assert(!is_partioning);\n> >> /*\n> >> * Identity is never inherited. The new column can have an\n> >> * identity definition, so we always just take that one.\n> >> */\n> >> def->identity = newdef->identity;\n> >> }\n> >>\n> >> Thoughts?\n> >\n> > That code block already has Assert(!is_partition) at line 3085. I\n> > thought that Assert is enough.\n>\n> Ok. Maybe just rephrase that comment somehow then?\n\nPlease see refactoring patches attached to [1]. Refactoring that way\nmakes it unnecessary to mention \"regular inheritance\" in each comment.\nYet I have included a modified version of the comment in that patch\nset.\n\n>\n> > There's another thing I found. The file isn't using\n> > check_stack_depth() in the function which traverse inheritance\n> > hierarchies. This isn't just a problem of the identity related\n> > function but most of the functions in that file. Do you think it's\n> > worth fixing it?\n>\n> I suppose the number of inheritance levels is usually not a problem for\n> stack depth?\n>\n\nPractically it should not. I would rethink the application design if\nit requires so many inheritance or partition levels. But functions in\noptimizer like try_partitionwise_join() and set_append_rel_size() call\n\n/* Guard against stack overflow due to overly deep inheritance tree. */\ncheck_stack_depth();\n\nI am fine if we want to skip this.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 24 Jan 2024 12:04:43 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On Wed, Jan 24, 2024 at 12:04 PM Ashutosh Bapat\n<[email protected]> wrote:\n> >\n> > Ok. Maybe just rephrase that comment somehow then?\n>\n> Please see refactoring patches attached to [1]. Refactoring that way\n> makes it unnecessary to mention \"regular inheritance\" in each comment.\n> Yet I have included a modified version of the comment in that patch\n> set.\n\nSorry forgot to add the reference. 
Here it is.\n\n[1] https://www.postgresql.org/message-id/CAExHW5vz7A-skzt05=4frFx9-VPjfjK4jKQZT7ufRNh4J7=xmQ@mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 24 Jan 2024 12:32:30 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "Hello Ashutosh,\n\n24.01.2024 09:34, Ashutosh Bapat wrote:\n>\n>>> There's another thing I found. The file isn't using\n>>> check_stack_depth() in the function which traverse inheritance\n>>> hierarchies. This isn't just a problem of the identity related\n>>> function but most of the functions in that file. Do you think it's\n>>> worth fixing it?\n>> I suppose the number of inheritance levels is usually not a problem for\n>> stack depth?\n>>\n> Practically it should not. I would rethink the application design if\n> it requires so many inheritance or partition levels. But functions in\n> optimizer like try_partitionwise_join() and set_append_rel_size() call\n>\n> /* Guard against stack overflow due to overly deep inheritance tree. */\n> check_stack_depth();\n>\n> I am fine if we want to skip this.\n\nI've managed to reach stack overflow inside ATExecSetIdentity() with\nthe following script:\n(echo \"CREATE TABLE tp0 (a int PRIMARY KEY,\n         b int GENERATED ALWAYS AS IDENTITY) PARTITION BY RANGE (a);\";\nfor ((i=1;i<=80000;i++)); do\n   echo \"CREATE TABLE tp$i PARTITION OF tp$(( $i - 1 ))\n          FOR VALUES FROM ($i) TO (1000000) PARTITION BY RANGE (a);\";\ndone;\necho \"ALTER TABLE tp0 ALTER COLUMN b SET GENERATED BY DEFAULT;\") | psql >psql.log\n\n(with max_locks_per_transaction = 400 in the config)\n\nIt runs about 15 minutes for me and ends with:\nProgram terminated with signal SIGSEGV, Segmentation fault.\n\n#0  0x000055a8ced20de9 in LWLockAcquire (lock=0x7faec200b900, mode=mode@entry=LW_EXCLUSIVE) at lwlock.c:1169\n1169    {\n(gdb) bt\n#0  0x000055a8ced20de9 in LWLockAcquire (lock=0x7faec200b900, mode=mode@entry=LW_EXCLUSIVE) at lwlock.c:1169\n#1  0x000055a8cea0342d in WALInsertLockAcquire () at xlog.c:1389\n#2  XLogInsertRecord (rdata=0x55a8cf1ccee8 <hdr_rdt>, fpw_lsn=fpw_lsn@entry=1261347512, flags=0 '\\000', \nnum_fpi=num_fpi@entry=0, topxid_included=false) at xlog.c:817\n#3  0x000055a8cea1396e in XLogInsert (rmid=rmid@entry=11 '\\v', info=<optimized out>) at xloginsert.c:524\n#4  0x000055a8ce9c1541 in _bt_insertonpg (rel=0x7faeb8478c98, heaprel=0x7faecf63d378, \nitup_key=itup_key@entry=0x55a8d5064678, buf=3210, cbuf=cbuf@entry=0, stack=stack@entry=0x55a8d1063d08, \nitup=0x55a8d5064658, itemsz=16,\n     newitemoff=<optimized out>, postingoff=0, split_only_page=<optimized out>) at nbtinsert.c:1389\n#5  0x000055a8ce9bf9a7 in _bt_doinsert (rel=<optimized out>, rel@entry=0x7faeb8478c98, itup=<optimized out>, \nitup@entry=0x55a8d5064658, checkUnique=<optimized out>, checkUnique@entry=UNIQUE_CHECK_YES, indexUnchanged=<optimized out>,\n     heapRel=<optimized out>, heapRel@entry=0x7faecf63d378) at nbtinsert.c:260\n#6  0x000055a8ce9c92ad in btinsert (rel=0x7faeb8478c98, values=<optimized out>, isnull=<optimized out>, \nht_ctid=0x55a8d50643cc, heapRel=0x7faecf63d378, checkUnique=UNIQUE_CHECK_YES, indexUnchanged=<optimized out>,\n     indexInfo=<optimized out>) at nbtree.c:205\n#7  0x000055a8cea41391 in CatalogIndexInsert (indstate=indstate@entry=0x55a8d0fc03e8, heapTuple=<optimized out>, \nheapTuple@entry=0x55a8d50643c8, updateIndexes=<optimized out>) at indexing.c:170\n#8  0x000055a8cea4172c in CatalogTupleUpdate 
(heapRel=heapRel@entry=0x7faecf63d378, otid=0x55a8d50643cc, \ntup=tup@entry=0x55a8d50643c8) at indexing.c:324\n#9  0x000055a8ceb18173 in ATExecSetIdentity (rel=0x7faeab1288a8, colName=colName@entry=0x55a8d0fbc2b8 \"b\", \ndef=def@entry=0x55a8d1063918, lockmode=lockmode@entry=8, recurse=true, recursing=<optimized out>) at tablecmds.c:8307\n#10 0x000055a8ceb18251 in ATExecSetIdentity (rel=0x7faeab127f28, colName=colName@entry=0x55a8d0fbc2b8 \"b\", \ndef=def@entry=0x55a8d1063918, lockmode=lockmode@entry=8, recurse=true, recursing=<optimized out>) at tablecmds.c:8337\n#11 0x000055a8ceb18251 in ATExecSetIdentity (rel=0x7faeab1275a8, colName=colName@entry=0x55a8d0fbc2b8 \"b\", \ndef=def@entry=0x55a8d1063918, lockmode=lockmode@entry=8, recurse=true, recursing=<optimized out>) at tablecmds.c:8337\n#12 0x000055a8ceb18251 in ATExecSetIdentity (rel=0x7faeab126c28, colName=colName@entry=0x55a8d0fbc2b8 \"b\", \ndef=def@entry=0x55a8d1063918, lockmode=lockmode@entry=8, recurse=true, recursing=<optimized out>) at tablecmds.c:8337\n...\n\nFunctions ATExecAddIdentity() and ATExecDropIdentity() are recursive too,\nso I think they can be exploited as well.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 15 Feb 2024 21:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On Thu, Feb 15, 2024 at 11:30 PM Alexander Lakhin <[email protected]> wrote:\n>\n> Hello Ashutosh,\n>\n> 24.01.2024 09:34, Ashutosh Bapat wrote:\n> >\n> >>> There's another thing I found. The file isn't using\n> >>> check_stack_depth() in the function which traverse inheritance\n> >>> hierarchies. This isn't just a problem of the identity related\n> >>> function but most of the functions in that file. Do you think it's\n> >>> worth fixing it?\n> >> I suppose the number of inheritance levels is usually not a problem for\n> >> stack depth?\n> >>\n> > Practically it should not. I would rethink the application design if\n> > it requires so many inheritance or partition levels. But functions in\n> > optimizer like try_partitionwise_join() and set_append_rel_size() call\n> >\n> > /* Guard against stack overflow due to overly deep inheritance tree. 
*/\n> > check_stack_depth();\n> >\n> > I am fine if we want to skip this.\n>\n> I've managed to reach stack overflow inside ATExecSetIdentity() with\n> the following script:\n> (echo \"CREATE TABLE tp0 (a int PRIMARY KEY,\n> b int GENERATED ALWAYS AS IDENTITY) PARTITION BY RANGE (a);\";\n> for ((i=1;i<=80000;i++)); do\n> echo \"CREATE TABLE tp$i PARTITION OF tp$(( $i - 1 ))\n> FOR VALUES FROM ($i) TO (1000000) PARTITION BY RANGE (a);\";\n> done;\n> echo \"ALTER TABLE tp0 ALTER COLUMN b SET GENERATED BY DEFAULT;\") | psql >psql.log\n>\n> (with max_locks_per_transaction = 400 in the config)\n>\n> It runs about 15 minutes for me and ends with:\n> Program terminated with signal SIGSEGV, Segmentation fault.\n>\n> #0 0x000055a8ced20de9 in LWLockAcquire (lock=0x7faec200b900, mode=mode@entry=LW_EXCLUSIVE) at lwlock.c:1169\n> 1169 {\n> (gdb) bt\n> #0 0x000055a8ced20de9 in LWLockAcquire (lock=0x7faec200b900, mode=mode@entry=LW_EXCLUSIVE) at lwlock.c:1169\n> #1 0x000055a8cea0342d in WALInsertLockAcquire () at xlog.c:1389\n> #2 XLogInsertRecord (rdata=0x55a8cf1ccee8 <hdr_rdt>, fpw_lsn=fpw_lsn@entry=1261347512, flags=0 '\\000',\n> num_fpi=num_fpi@entry=0, topxid_included=false) at xlog.c:817\n> #3 0x000055a8cea1396e in XLogInsert (rmid=rmid@entry=11 '\\v', info=<optimized out>) at xloginsert.c:524\n> #4 0x000055a8ce9c1541 in _bt_insertonpg (rel=0x7faeb8478c98, heaprel=0x7faecf63d378,\n> itup_key=itup_key@entry=0x55a8d5064678, buf=3210, cbuf=cbuf@entry=0, stack=stack@entry=0x55a8d1063d08,\n> itup=0x55a8d5064658, itemsz=16,\n> newitemoff=<optimized out>, postingoff=0, split_only_page=<optimized out>) at nbtinsert.c:1389\n> #5 0x000055a8ce9bf9a7 in _bt_doinsert (rel=<optimized out>, rel@entry=0x7faeb8478c98, itup=<optimized out>,\n> itup@entry=0x55a8d5064658, checkUnique=<optimized out>, checkUnique@entry=UNIQUE_CHECK_YES, indexUnchanged=<optimized out>,\n> heapRel=<optimized out>, heapRel@entry=0x7faecf63d378) at nbtinsert.c:260\n> #6 0x000055a8ce9c92ad in btinsert (rel=0x7faeb8478c98, values=<optimized out>, isnull=<optimized out>,\n> ht_ctid=0x55a8d50643cc, heapRel=0x7faecf63d378, checkUnique=UNIQUE_CHECK_YES, indexUnchanged=<optimized out>,\n> indexInfo=<optimized out>) at nbtree.c:205\n> #7 0x000055a8cea41391 in CatalogIndexInsert (indstate=indstate@entry=0x55a8d0fc03e8, heapTuple=<optimized out>,\n> heapTuple@entry=0x55a8d50643c8, updateIndexes=<optimized out>) at indexing.c:170\n> #8 0x000055a8cea4172c in CatalogTupleUpdate (heapRel=heapRel@entry=0x7faecf63d378, otid=0x55a8d50643cc,\n> tup=tup@entry=0x55a8d50643c8) at indexing.c:324\n> #9 0x000055a8ceb18173 in ATExecSetIdentity (rel=0x7faeab1288a8, colName=colName@entry=0x55a8d0fbc2b8 \"b\",\n> def=def@entry=0x55a8d1063918, lockmode=lockmode@entry=8, recurse=true, recursing=<optimized out>) at tablecmds.c:8307\n> #10 0x000055a8ceb18251 in ATExecSetIdentity (rel=0x7faeab127f28, colName=colName@entry=0x55a8d0fbc2b8 \"b\",\n> def=def@entry=0x55a8d1063918, lockmode=lockmode@entry=8, recurse=true, recursing=<optimized out>) at tablecmds.c:8337\n> #11 0x000055a8ceb18251 in ATExecSetIdentity (rel=0x7faeab1275a8, colName=colName@entry=0x55a8d0fbc2b8 \"b\",\n> def=def@entry=0x55a8d1063918, lockmode=lockmode@entry=8, recurse=true, recursing=<optimized out>) at tablecmds.c:8337\n> #12 0x000055a8ceb18251 in ATExecSetIdentity (rel=0x7faeab126c28, colName=colName@entry=0x55a8d0fbc2b8 \"b\",\n> def=def@entry=0x55a8d1063918, lockmode=lockmode@entry=8, recurse=true, recursing=<optimized out>) at tablecmds.c:8337\n> ...\n>\n> Functions ATExecAddIdentity() and 
ATExecDropIdentity() are recursive too,\n> so I think they can be exploited as well.\n\nnot just Identity related functions, but many other functions in\ntablecmds.c have that problem as I mentioned earlier.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 19 Feb 2024 17:47:34 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "Hello Ashutosh,\n\n19.02.2024 15:17, Ashutosh Bapat wrote:\n>\n>> Functions ATExecAddIdentity() and ATExecDropIdentity() are recursive too,\n>> so I think they can be exploited as well.\n> not just Identity related functions, but many other functions in\n> tablecmds.c have that problem as I mentioned earlier.\n>\n\nCould you please name functions, which you suspect, for me to recheck them?\nPerhaps we should consider fixing all of such functions, in light of\nb0f7dd915 and d57b7cc33...\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 19 Feb 2024 18:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On Mon, Feb 19, 2024 at 8:30 PM Alexander Lakhin <[email protected]> wrote:\n>\n> Hello Ashutosh,\n>\n> 19.02.2024 15:17, Ashutosh Bapat wrote:\n> >\n> >> Functions ATExecAddIdentity() and ATExecDropIdentity() are recursive too,\n> >> so I think they can be exploited as well.\n> > not just Identity related functions, but many other functions in\n> > tablecmds.c have that problem as I mentioned earlier.\n> >\n>\n> Could you please name functions, which you suspect, for me to recheck them?\n> Perhaps we should consider fixing all of such functions, in light of\n> b0f7dd915 and d57b7cc33...\n\nLooks like the second commit has fixed all other places I knew except\nIdentity related functions. So worth fixing identity related functions\ntoo. I see\ndropconstraint_internal() has two calls to check_stack_depth() back to\nback. The second one is not needed?\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 20 Feb 2024 10:27:49 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "20.02.2024 07:57, Ashutosh Bapat wrote:\n>> Could you please name functions, which you suspect, for me to recheck them?\n>> Perhaps we should consider fixing all of such functions, in light of\n>> b0f7dd915 and d57b7cc33...\n> Looks like the second commit has fixed all other places I knew except\n> Identity related functions. So worth fixing identity related functions\n> too. I see\n> dropconstraint_internal() has two calls to check_stack_depth() back to\n> back. The second one is not needed?\n\nYeah, that's funny. 
It looks like such a double protection emerged\nbecause Alvaro protected the function (in b0f7dd915), which was waiting for\nadding check_stack_depth() in the other thread (resulted in d57b7cc33).\n\nThank you for spending time on this!\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 20 Feb 2024 09:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On Tue, Feb 20, 2024 at 8:00 AM Alexander Lakhin <[email protected]> wrote:\n> 20.02.2024 07:57, Ashutosh Bapat wrote:\n> >> Could you please name functions, which you suspect, for me to recheck them?\n> >> Perhaps we should consider fixing all of such functions, in light of\n> >> b0f7dd915 and d57b7cc33...\n> > Looks like the second commit has fixed all other places I knew except\n> > Identity related functions. So worth fixing identity related functions\n> > too. I see\n> > dropconstraint_internal() has two calls to check_stack_depth() back to\n> > back. The second one is not needed?\n>\n> Yeah, that's funny. It looks like such a double protection emerged\n> because Alvaro protected the function (in b0f7dd915), which was waiting for\n> adding check_stack_depth() in the other thread (resulted in d57b7cc33).\n>\n> Thank you for spending time on this!\n\nThank you, I removed the second check.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 21 Feb 2024 02:51:20 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "Hello Ashutosh and Peter,\n\n16.01.2024 21:59, Peter Eisentraut wrote:\n> On 09.01.24 15:10, Ashutosh Bapat wrote:\n>> Here's complete patch-set.\n>\n> Looks good!  Committed.\n>\n\nPlease take a look at a new error case introduced by 699586315:\nCREATE TABLE tbl1 (a int PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY)\n   PARTITION BY LIST (a);\nCREATE TABLE tbl2 PARTITION OF tbl1 DEFAULT;\n\nCREATE TABLE tbl3 (LIKE tbl2 INCLUDING IDENTITY);\nERROR:  no owned sequence found\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 26 Apr 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "Thanks Alexander for the report.\n\nOn Fri, Apr 26, 2024 at 5:30 PM Alexander Lakhin <[email protected]>\nwrote:\n\n> Hello Ashutosh and Peter,\n>\n> 16.01.2024 21:59, Peter Eisentraut wrote:\n> > On 09.01.24 15:10, Ashutosh Bapat wrote:\n> >> Here's complete patch-set.\n> >\n> > Looks good! Committed.\n> >\n>\n> Please take a look at a new error case introduced by 699586315:\n> CREATE TABLE tbl1 (a int PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY)\n> PARTITION BY LIST (a);\n> CREATE TABLE tbl2 PARTITION OF tbl1 DEFAULT;\n>\n> CREATE TABLE tbl3 (LIKE tbl2 INCLUDING IDENTITY);\n> ERROR: no owned sequence found\n>\n\nI don't think creating a table like a partition is common or even useful.\nUsually it would be created from the partitioned table. But if we consider\nthat to be a use case, I think the error is expected since a partition\ndoesn't have its own identity; it shares it with the partitioned table.\nMaybe we could give a better message. But I will look into this and fix it\nif the solution makes sense.\n\nDo you want to track this in open items?\n
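\nFor what it's worth, a hedged sketch (added here for illustration, not part of the original mail) of the route that should work: clone the partitioned table, which is where the identity actually lives:\n\nCREATE TABLE tbl3 (LIKE tbl1 INCLUDING IDENTITY);\n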
\n-- \nBest Wishes,\nAshutosh Bapat\n", "msg_date": "Fri, 26 Apr 2024 18:27:58 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "26.04.2024 15:57, Ashutosh Bapat wrote:\n> Thanks Alexander for the report.\n>\n> On Fri, Apr 26, 2024 at 5:30 PM Alexander Lakhin <[email protected]> wrote:\n>\n>\n> CREATE TABLE tbl3 (LIKE tbl2 INCLUDING IDENTITY);\n> ERROR:  no owned sequence found\n>\n>\n> I don't think creating a table like a partition is common or even useful. Usually it would be created from the\n> partitioned table. But if we consider that to be a use case, I think the error is expected since a partition doesn't have its\n> own identity; it shares it with the partitioned table. Maybe we could give a better message. But I will look into this\n> and fix it if the solution makes sense.\n\nMaybe it's uncommon, but it's allowed, so users may want to\nCREATE TABLE sometable (LIKE partX INCLUDING ALL), for example, if the\npartition has a somewhat different structure. And thinking about how such\na restriction could be described in the docs, I would prefer to avoid this\nerror at the implementation level.\n\n>\n> Do you want to track this in open items?\n>\n\nIf you are inclined to fix this behavior, I would add this item.\n\nBest regards,\nAlexander\n
And thinking about how\n such\n a restriction could be described in the docs, I would prefer to\n avoid this\n error at the implementation level.\n\n\n\n\n\n\nDo you want to track this in open items?\n\n\n\n\n\n\n If you are inclined to fix this behavior,  I would add this item.\n\n Best regards,\n Alexander", "msg_date": "Fri, 26 Apr 2024 21:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "Hello Ashutosh,\n\n26.04.2024 21:00, Alexander Lakhin wrote:\n> 26.04.2024 15:57, Ashutosh Bapat wrote:\n>> Thanks Alexander for the report.\n>>\n>> On Fri, Apr 26, 2024 at 5:30 PM Alexander Lakhin <[email protected]> wrote:\n>>\n>>\n>> CREATE TABLE tbl3 (LIKE tbl2 INCLUDING IDENTITY);\n>> ERROR:  no owned sequence found\n>>\n>>\n>\n>>\n>> Do you want to track this in open items?\n>>\n>\n> If you are inclined to fix this behavior,  I would add this item.\n\n\nPlease look also at another script, which produces the same error:\nCREATE TABLE tbl1 (a int GENERATED BY DEFAULT AS IDENTITY, b text)\n    PARTITION BY LIST (b);\nCREATE TABLE tbl2 PARTITION OF tbl1 DEFAULT;\n\nALTER TABLE tbl1 ALTER COLUMN a SET DATA TYPE bigint;\nERROR:  no owned sequence found\n\n(On 699586315~1, it executes successfully and changes the data type of the\nidentity column and it's sequence.)\n\nBest regards,\nAlexander\n\n\n\n\n\nHello Ashutosh,\n\n 26.04.2024 21:00, Alexander Lakhin wrote:\n\n\n\n26.04.2024 15:57, Ashutosh Bapat\n wrote:\n\n\n\n\nThanks Alexander for the report.\n\n\n\nOn Fri, Apr 26, 2024 at\n 5:30 PM Alexander Lakhin <[email protected]>\n wrote:\n\n\n CREATE TABLE tbl3 (LIKE tbl2 INCLUDING IDENTITY);\n ERROR:  no owned sequence found\n\n\n\n\n\n\n\n\n\n\n\nDo you want to track this in open items?\n\n\n\n\n\n\n If you are inclined to fix this behavior,  I would add this item.\n\n\n\n Please look also at another script, which produces the same error:\n CREATE TABLE tbl1 (a int GENERATED BY DEFAULT AS IDENTITY, b text)\n    PARTITION BY LIST (b);\n CREATE TABLE tbl2 PARTITION OF tbl1 DEFAULT;\n\n ALTER TABLE tbl1 ALTER COLUMN a SET DATA TYPE bigint;\n ERROR:  no owned sequence found\n\n (On 699586315~1, it executes successfully and changes the data type\n of the\n identity column and it's sequence.)\n\n Best regards,\n Alexander", "msg_date": "Sat, 27 Apr 2024 18:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "27.04.2024 18:00, Alexander Lakhin wrote:\n>\n> Please look also at another script, which produces the same error:\n\nI've discovered yet another problematic case:\nCREATE TABLE tbl1 (a int GENERATED ALWAYS AS IDENTITY, b text)\n     PARTITION BY LIST (a);\nCREATE TABLE tbl2 (b text, a int NOT NULL);\nALTER TABLE tbl1 ATTACH PARTITION tbl2 DEFAULT;\n\nINSERT INTO tbl2 DEFAULT VALUES;\nERROR:  no owned sequence found\n\nThough it works with tbl2(a int NOT NULL, b text)...\nTake a look at this too, please.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 28 Apr 2024 09:59:44 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On Sun, Apr 28, 2024 at 12:29 PM Alexander Lakhin <[email protected]>\nwrote:\n\n> 27.04.2024 18:00, Alexander Lakhin wrote:\n> >\n> > Please look also at another script, which produces the same error:\n>\n> I've discovered yet another problematic 
case:\n> CREATE TABLE tbl1 (a int GENERATED ALWAYS AS IDENTITY, b text)\n> PARTITION BY LIST (a);\n> CREATE TABLE tbl2 (b text, a int NOT NULL);\n> ALTER TABLE tbl1 ATTACH PARTITION tbl2 DEFAULT;\n>\n> INSERT INTO tbl2 DEFAULT VALUES;\n> ERROR: no owned sequence found\n>\n> Though it works with tbl2(a int NOT NULL, b text)...\n> Take a look at this too, please.\n>\n\nThanks Alexander for the report.\n\nPFA patch which fixes all the three problems.\n\nI had not fixed getIdentitySequence() to fetch identity sequence associated\nwith the partition because I thought it would be better to fail with an\nerror when it's not used correctly. But these bugs show 1. the error is\nmisleading and unwanted 2. there are more places where adding that logic\nto getIdentitySequence() makes sense. Fixed the function in these patches.\nNow callers like transformAlterTableStmt have to be careful not to call the\nfunction on a partition.\n\nI have examined all the callers of getIdentitySequence() and they seem to\nbe fine. The code related to SetIdentity, DropIdentity is not called for\npartitions, errors for which are thrown elsewhere earlier.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Tue, 30 Apr 2024 16:29:11 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On Tue, Apr 30, 2024 at 04:29:11PM +0530, Ashutosh Bapat wrote:\n> PFA patch which fixes all the three problems.\n\nPlease note that this was not tracked as an open item, so I have added\none referring to the failures reported by Alexander.\n--\nMichael", "msg_date": "Wed, 1 May 2024 12:53:31 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "> On Tue, Apr 30, 2024 at 04:29:11PM +0530, Ashutosh Bapat wrote:\n> On Sun, Apr 28, 2024 at 12:29 PM Alexander Lakhin <[email protected]>\n> wrote:\n>\n> > 27.04.2024 18:00, Alexander Lakhin wrote:\n> > >\n> > > Please look also at another script, which produces the same error:\n> >\n> > I've discovered yet another problematic case:\n> > CREATE TABLE tbl1 (a int GENERATED ALWAYS AS IDENTITY, b text)\n> > PARTITION BY LIST (a);\n> > CREATE TABLE tbl2 (b text, a int NOT NULL);\n> > ALTER TABLE tbl1 ATTACH PARTITION tbl2 DEFAULT;\n> >\n> > INSERT INTO tbl2 DEFAULT VALUES;\n> > ERROR: no owned sequence found\n> >\n> > Though it works with tbl2(a int NOT NULL, b text)...\n> > Take a look at this too, please.\n> >\n>\n> Thanks Alexander for the report.\n>\n> PFA patch which fixes all the three problems.\n>\n> I had not fixed getIdentitySequence() to fetch identity sequence associated\n> with the partition because I thought it would be better to fail with an\n> error when it's not used correctly. But these bugs show 1. the error is\n> misleading and unwanted 2. there are more places where adding that logic\n> to getIdentitySequence() makes sense. Fixed the function in these patches.\n> Now callers like transformAlterTableStmt have to be careful not to call the\n> function on a partition.\n>\n> I have examined all the callers of getIdentitySequence() and they seem to\n> be fine. The code related to SetIdentity, DropIdentity is not called for\n> partitions, errors for which are thrown elsewhere earlier.\n\nThanks for the fix.\n\nI had a quick look, it covers the issues mentioned above in the thread.\nFew nitpicks/questions:\n\n* I think it makes sense to verify if the ptup is valid. 
This approach\n would fail if the target column of the root partition is marked as\n attisdropped.\n\n Oid\n -getIdentitySequence(Oid relid, AttrNumber attnum, bool missing_ok)\n +getIdentitySequence(Relation rel, AttrNumber attnum, bool missing_ok)\n {\n\n [...]\n\n +\t\trelid = llast_oid(ancestors);\n +\t\tptup = SearchSysCacheAttName(relid, attname);\n +\t\tattnum = ((Form_pg_attribute) GETSTRUCT(ptup))->attnum;\n\n* getIdentitySequence is used in build_column_default, which in turn\n often appears in loops over table attributes. AFAICT it means that the\n same root partition search will be repeated multiple times in such\n situations if there is more than one identity. I assume the\n performance impact of this repetition is negligible?\n\n* Maybe a silly question, but since I'm not aware about all the details\n here, I'm curious -- the approach of mapping attributes of a partition\n to the root partition attributes, how robust is it? I guess there is\n no way that the root partition column will be not what is expected,\n e.g. due to some sort of concurrency?\n\n\n", "msg_date": "Sat, 4 May 2024 22:13:19 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On Sun, May 5, 2024 at 1:43 AM Dmitry Dolgov <[email protected]> wrote:\n\n> > On Tue, Apr 30, 2024 at 04:29:11PM +0530, Ashutosh Bapat wrote:\n> > On Sun, Apr 28, 2024 at 12:29 PM Alexander Lakhin <[email protected]>\n> > wrote:\n> >\n> > > 27.04.2024 18:00, Alexander Lakhin wrote:\n> > > >\n> > > > Please look also at another script, which produces the same error:\n> > >\n> > > I've discovered yet another problematic case:\n> > > CREATE TABLE tbl1 (a int GENERATED ALWAYS AS IDENTITY, b text)\n> > > PARTITION BY LIST (a);\n> > > CREATE TABLE tbl2 (b text, a int NOT NULL);\n> > > ALTER TABLE tbl1 ATTACH PARTITION tbl2 DEFAULT;\n> > >\n> > > INSERT INTO tbl2 DEFAULT VALUES;\n> > > ERROR: no owned sequence found\n> > >\n> > > Though it works with tbl2(a int NOT NULL, b text)...\n> > > Take a look at this too, please.\n> > >\n> >\n> > Thanks Alexander for the report.\n> >\n> > PFA patch which fixes all the three problems.\n> >\n> > I had not fixed getIdentitySequence() to fetch identity sequence\n> associated\n> > with the partition because I thought it would be better to fail with an\n> > error when it's not used correctly. But these bugs show 1. the error is\n> > misleading and unwanted 2. there are more places where adding that logic\n> > to getIdentitySequence() makes sense. Fixed the function in these\n> patches.\n> > Now callers like transformAlterTableStmt have to be careful not to call\n> the\n> > function on a partition.\n> >\n> > I have examined all the callers of getIdentitySequence() and they seem to\n> > be fine. The code related to SetIdentity, DropIdentity is not called for\n> > partitions, errors for which are thrown elsewhere earlier.\n>\n> Thanks for the fix.\n>\n> I had a quick look, it covers the issues mentioned above in the thread.\n> Few nitpicks/questions:\n>\n> * I think it makes sense to verify if the ptup is valid. This approach\n> would fail if the target column of the root partition is marked as\n> attisdropped.\n>\n\nThe column is searched by name which is derived from attno of child\npartition. So it has to exist in the root partition. If it doesn't\nsomething is seriously wrong. Do you have a reproducer? We may want to add\nAssert(HeapTupleIsValid(ptup)) just in case. 
But it seems unnecessary to me.\n\n\n>\n>\n> Oid\n> -getIdentitySequence(Oid relid, AttrNumber attnum, bool missing_ok)\n> +getIdentitySequence(Relation rel, AttrNumber attnum, bool missing_ok)\n> {\n>\n> [...]\n>\n> + relid = llast_oid(ancestors);\n> + ptup = SearchSysCacheAttName(relid, attname);\n> + attnum = ((Form_pg_attribute) GETSTRUCT(ptup))->attnum;\n>\n> * getIdentitySequence is used in build_column_default, which in turn\n> often appears in loops over table attributes. AFAICT it means that the\n> same root partition search will be repeated multiple times in such\n> situations if there is more than one identity. I assume the\n> performance impact of this repetition is negligible?\n>\n\nI thought having multiple identity columns would be rare and hence avoided\nmaking the code complex. Otherwise we would have to get the root partition\nsomewhere in the caller hierarchy separately, moving the logic much farther\napart. Usually the ancestor entries will be somewhere in the cache.\n\n\n>\n> * Maybe a silly question, but since I'm not aware about all the details\n> here, I'm curious -- the approach of mapping attributes of a partition\n> to the root partition attributes, how robust is it? I guess there is\n> no way that the root partition column will be not what is expected,\n> e.g. due to some sort of concurrency?\n>\n\nAny such thing would require a lock on the partition relation in question,\nwhich is locked before passing rel around. So it shouldn't happen.\n
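\nTo make the name-based mapping concrete, in the 28 Apr reproducer column \"a\" has attnum 2 in tbl2 but attnum 1 in tbl1, which is why the fix looks the attribute up by name in the root rather than reusing the child's attnum. A plain catalog query (added here for illustration) shows the mismatch:\n\nSELECT attrelid::regclass, attname, attnum\nFROM pg_attribute\nWHERE attrelid IN ('tbl1'::regclass, 'tbl2'::regclass)\n  AND attname = 'a';\n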
\n-- \nBest Wishes,\nAshutosh Bapat\n", "msg_date": "Mon, 6 May 2024 18:52:41 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "> On Mon, May 06, 2024 at 06:52:41PM +0530, Ashutosh Bapat wrote:\n> On Sun, May 5, 2024 at 1:43 AM Dmitry Dolgov <[email protected]> wrote:\n> > I had a quick look, it covers the issues mentioned above in the thread.\n> > Few nitpicks/questions:\n> >\n> > * I think it makes sense to verify if the ptup is valid. This approach\n> > would fail if the target column of the root partition is marked as\n> > attisdropped.\n> >\n>\n> The column is searched by name which is derived from attno of child\n> partition. So it has to exist in the root partition. If it doesn't\n> something is seriously wrong. Do you have a reproducer? We may want to add\n> Assert(HeapTupleIsValid(ptup)) just in case. But it seems unnecessary to me.\n\nSure, normally it should work. I don't have any particular situation in\nmind, when attisdropped might be set on a root partition, but obviously\nsetting it manually crashes this path. Consider it mostly as suggestion\nfor a more defensive implementation \"just in case\".\n\n> > Oid\n> > -getIdentitySequence(Oid relid, AttrNumber attnum, bool missing_ok)\n> > +getIdentitySequence(Relation rel, AttrNumber attnum, bool missing_ok)\n> > {\n> >\n> > [...]\n> >\n> > + relid = llast_oid(ancestors);\n> > + ptup = SearchSysCacheAttName(relid, attname);\n> > + attnum = ((Form_pg_attribute) GETSTRUCT(ptup))->attnum;\n> >\n> > * getIdentitySequence is used in build_column_default, which in turn\n> > often appears in loops over table attributes. AFAICT it means that the\n> > same root partition search will be repeated multiple times in such\n> > situations if there is more than one identity. 
I assume the\n> > performance impact of this repetition is negligible?\n> >\n>\n> I thought having multiple identity columns would be rare and hence avoided\n> making the code complex. Otherwise we would have to fetch the root partition\n> somewhere separately in the caller hierarchy, moving the logic much farther\n> apart. Usually the ancestor entries will be somewhere in the cache.\n\nYeah, agree, it's reasonable to expect that the case with multiple\nidentity columns will be rare.\n\n\n", "msg_date": "Mon, 6 May 2024 16:01:33 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "On 30.04.24 12:59, Ashutosh Bapat wrote:\n> PFA patch which fixes all the three problems.\n\nI have committed this patch. Thanks all.\n\n\n", "msg_date": "Tue, 7 May 2024 23:04:51 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and identity column" }, { "msg_contents": "Thanks a lot Peter.\n\nOn Wed, May 8, 2024 at 2:34 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 30.04.24 12:59, Ashutosh Bapat wrote:\n> > PFA patch which fixes all the three problems.\n>\n> I have committed this patch. Thanks all.\n>\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n", "msg_date": "Fri, 10 May 2024 02:02:08 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning and identity column" } ]
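A minimal end-to-end check of the fixed behaviour, as a sketch only: the tables follow Alexander's reproducer quoted above, and the expected outcomes assume a server with the committed fix.

CREATE TABLE tbl1 (a int GENERATED ALWAYS AS IDENTITY, b text)
    PARTITION BY LIST (a);
CREATE TABLE tbl2 (b text, a int NOT NULL);
ALTER TABLE tbl1 ATTACH PARTITION tbl2 DEFAULT;

-- With the fix, the identity sequence is resolved via the root
-- partitioned table, so this no longer fails with
-- "ERROR: no owned sequence found".
INSERT INTO tbl2 DEFAULT VALUES;

-- Inserting through the parent draws from the same sequence.
INSERT INTO tbl1 (b) VALUES ('x');
SELECT a, b FROM tbl1 ORDER BY a;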
[ { "msg_contents": "Hi hackers!\n\nWe run a large number of PostgreSQL clusters in our production. They differ\nby versions (we have 11-16 pg), load, amount of data, schema, etc. From\ntime to time, postgresql corruption happens. It says\nERROR,XX001,\"missing chunk number 0 for toast value 18767319 in\npg_toast_2619\",,,,,,\"vacuum full ;\"\n\nin logs. The missing chunk number is almost always equal to zero, while\nother values vary. There are no known patterns which trigger this issue.\nMoreover, if trying to rerun the VACUUM statement against relations from a\nlog message, it succeeds all the time. So, we just ignore these errors.\nMaybe it is just some weird data race?\n\nWe don't know how to trigger this problem, or why it occurs. I'm not asking\nyou to resolve this issue, but to help with debugging. What can we do to\ndeduce failure reasons? Maybe we can add more logging somewhere (we can\ndeploy a special patched PostgreSQL version everywhere), to have more\ninformation about the issue, when it happens next time?\n", "msg_date": "Fri, 27 Oct 2023 17:19:27 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": true, "msg_subject": "Annoying corruption in PostgreSQL." }, { "msg_contents": "\n\nOn 10/27/23 14:19, Kirill Reshke wrote:\n> Hi hackers!\n> \n> We run a large amount of PostgreSQL clusters in our production. They\n> differ by versions (we have 11-16 pg), load, amount of data, schema,\n> etc. From time to time, postgresql corruption happens. It says\n> ERROR,XX001,\"missing chunk number 0 for toast value 18767319 in\n> pg_toast_2619\",,,,,,\"vacuum full ;\"\n> \n> in logs. the missing chunk number  almost every is equal to zero, while\n> other values vary. There are no known patterns, which triggers this\n> issue. Moreover, if trying to rerun the VACUUM statement against\n> relations from a log message, it succeeds all the time.  So, we just\n> ignore these errors. Maybe it is just some wierd data race?\n> \n> We don't know how to trigger this problem, or why it occurs. I'm not\n> asking you to resolve this issue, but to help with debugging. What can\n> we do to deduct failure reasons? 
Maybe we can add more logging somewhere\n> (we can deploy a special patched PostgreSQL version everywhere), to have\n> more information about the issue, when it happens next time?\n> \n\nFor starters, it'd be good to know something about the environment, and\nstuff that'd tell us if there's some possible pattern:\n\n1) Which exact PG versions are you observing these errors on?\n\n2) In the error example you shared it's pg_toast_2619, which is the\nTOAST table for pg_statistic (probably). Is it always this relation? Or\nwhat relations you noticed this for?\n\n3) What kind of commands are triggering this? In the example it seems to\nbe vacuum full. Did you see it for other commands too? People generally\ndon't do VACUUM FULL very often, particularly not in environments with\nconcurrent activity.\n\nConsidering you don't know what's causing this, or what to look for, I\nthink it might be interesting to use pg_waldump, and investigate what\nhappened to the page containing the TOAST chunk and to the page\nreferencing it. Do you have physical backups and ability to do PITR?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 27 Oct 2023 20:28:09 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Annoying corruption in PostgreSQL." }, { "msg_contents": "Sorry, it seems that I replied only to Tomas, so forwarding the message.\n---------- Forwarded message ---------\nFrom: Kirill Reshke <[email protected]>\nDate: Sat, 28 Oct 2023 at 02:06\nSubject: Re: Annoying corruption in PostgreSQL.\nTo: Tomas Vondra <[email protected]>\n\n\nHi Tomas!\n\nThanks for the explanation!\n\n1) 11 to 15. This week there were 14.9 and 12.16 reproductions. Two weeks\nago there were 15.4 and 11.21 repros. Unfortunately, there is no info about\nrepros which are a month old or more, but I found in our work chats that\nthere was a repro on PostgreSQL 13 in April, minor version unknown.\nOverall, we have observed this issue for over a year on all pgdg supported\nversions.\n\n2) Searching our bug tracker I have found:\n\n1. missing chunk number 0 for toast value 592966012 in pg_toast_563953150\n(some user relation)\n2. missing chunk number 0 for toast value 18019714 in pg_toast_17706963\n(some user relation)\n3. missing chunk number 0 for toast value 52677740 in pg_toast_247794\n\nSo, this is not always pg_catalog. These TOAST tables belonged to some\nuser relations.\n\n3) It is always about VACUUM FULL (FREEZE/VERBOSE/ANALYZE) / autovacuum.\n\nWe have physical backups and we can PITR. But restoring a cluster to some\npoint in the past is a bit of a different task: we need our client's\napproval for these operations, since we are a Managed DBs Cloud Provider.\nWill try to ask someone.\n\nBest regards\n\n\nOn Fri, 27 Oct 2023 at 23:28, Tomas Vondra <[email protected]>\nwrote:\n\n>\n>\n> On 10/27/23 14:19, Kirill Reshke wrote:\n> > Hi hackers!\n> >\n> > We run a large amount of PostgreSQL clusters in our production. They\n> > differ by versions (we have 11-16 pg), load, amount of data, schema,\n> > etc. From time to time, postgresql corruption happens. It says\n> > ERROR,XX001,\"missing chunk number 0 for toast value 18767319 in\n> > pg_toast_2619\",,,,,,\"vacuum full ;\"\n> >\n> > in logs. the missing chunk number almost every is equal to zero, while\n> > other values vary. There are no known patterns, which triggers this\n> > issue. 
Moreover, if trying to rerun the VACUUM statement against\n> > relations from a log message, it succeeds all the time. So, we just\n> > ignore these errors. Maybe it is just some wierd data race?\n> >\n> > We don't know how to trigger this problem, or why it occurs. I'm not\n> > asking you to resolve this issue, but to help with debugging. What can\n> > we do to deduct failure reasons? Maybe we can add more logging somewhere\n> > (we can deploy a special patched PostgreSQL version everywhere), to have\n> > more information about the issue, when it happens next time?\n> >\n>\n> For starters, it'd be good to know something about the environment, and\n> stuff that'd tell us if there's some possible pattern:\n>\n> 1) Which exact PG versions are you observing these errors on?\n>\n> 2) In the error example you shared it's pg_toast_2619, which is the\n> TOAST table for pg_statistic (probably). Is it always this relation? Or\n> what relations you noticed this for?\n>\n> 3) What kind of commands are triggering this? In the example it seems to\n> be vacuum full. Did you see it for other commands too? People generally\n> don't do VACUUM FULL very often, particularly not in environments with\n> concurrent activity.\n>\n> Considering you don't know what's causing this, or what to look for, I\n> think it might be interesting to use pg_waldump, and investigate what\n> happened to the page containing the TOAST chunk and to the page\n> referencing it. Do you have physical backups and ability to do PITR?\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n", "msg_date": "Sat, 28 Oct 2023 02:10:49 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Annoying corruption in PostgreSQL." }, { "msg_contents": "On 10/27/23 23:10, Kirill Reshke wrote:\n> \n> Sorry, seems that i replied only to Tomas, so forwarding message.\n> ---------- Forwarded message ---------\n> From: *Kirill Reshke* <[email protected]\n> <mailto:[email protected]>>\n> Date: Sat, 28 Oct 2023 at 02:06\n> Subject: Re: Annoying corruption in PostgreSQL.\n> To: Tomas Vondra <[email protected]\n> <mailto:[email protected]>>\n> \n> \n> Hi Tomas!\n> \n> Thanks for the explanation!\n> \n> 1) 11 to 15. This week there were 14.9 and 12.16 reproductions. Two\n> weeks ago there was 15.4 and 11.21 repro. Unfortunately, there is no\n> info about repro which were month old or more, but I found in our work\n> chats that there was repro on PostgreSQL 13 in April, a minor version\n> unknown. Overall, we observed this issue for over a year on all pgdg\n> supported versions.\n> \n> 2) Searching out bug tracker i have found:\n> \n> 1. missing chunk number 0 for toast value 592966012 in\n> pg_toast_563953150 (some user relation)\n> 2. missing chunk number 0 for toast value 18019714 in\n> pg_toast_17706963 (some user relation)\n> 3. missing chunk number 0 for toast value 52677740 in pg_toast_247794\n> \n> So, this is not always pg_catalog. There toast tables were toast to some\n> user relations.\n> \n\nOK.\n\n> 3) It is always about VACUUM FULL (FREEZE/VERBOSE/ANALYZE) / autovacuum.\n> \n\nHmm, so it's always one of these VACUUM processes complaining?\n\n> We have physical backups and we can PITR. 
But restoring a cluster to\n> some point in the past is a bit of a different task: we need our\n> client's approval for these operations, since we are a Managed DBs Cloud\n> Provider. Will try to ask someone.\n> \n\nThat's what I'd try, to get some sense of what state the vacuum saw,\nwhat were the transactions modifying the TOAST + parent table doing,\netc, how much stuff the transactions did, if maybe there are some\naborts, that sort of thing. Hard to try reproducing this without any\nknowledge of the workload. The WAL might tell us some of this.\n\nHow often do you actually see this issue? Once or twice a week?\n\nAre you using some extensions that might interfere with this?\n\nAnd you mentioned you're running a large number of clusters - are those\nrunning similar workloads, or are they unrelated?\n\nActually, can you elaborate on why you are running VACUUM FULL etc.? That\ngenerally should not be necessary, so maybe we can learn something about\nyour workload.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 27 Oct 2023 23:32:20 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Annoying corruption in PostgreSQL." }, { "msg_contents": "Greetings,\n\nPlease don't top-post on these lists.\n\n* Kirill Reshke ([email protected]) wrote:\n> We have physical backups and we can PITR. But restoring a cluster to some\n> point in the past is a bit of a different task: we need our client's\n> approval for these operations, since we are a Managed DBs Cloud Provider.\n> Will try to ask someone.\n\nDo you have page-level checksums enabled for these PG instances?\n\nAre you able to see if these clusters which are reporting the corruption\nhave been restored in the past from a backup? What are you using to\nperform your backups and perform your restores?\n\nAre you able to see if these clusters have ever crashed and come back up\nafterwards by doing WAL replay?\n\nWhere I'm heading with these questions is essentially: I suspect either\nyour backup/restore procedure is broken or you're running on a system\nthat doesn't properly fsync data. Or possibly both.\n\nOh, and you should probably have checksums enabled.\n\nThanks,\n\nStephen", "msg_date": "Mon, 30 Oct 2023 10:46:11 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Annoying corruption in PostgreSQL." } ]
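Following the suggestions above, a first-pass triage might look like the sketch below. This is illustrative only: the TOAST relation and chunk id are taken from the error messages quoted in this thread, and the pg_waldump/PITR part of the investigation is not shown.

-- Which table does the complaining TOAST relation belong to?
SELECT oid::regclass AS parent_table
FROM pg_class
WHERE reltoastrelid = 'pg_toast.pg_toast_2619'::regclass;

-- Is the chunk really gone, or was it only transiently invisible
-- at the time of the error?
SELECT chunk_id, chunk_seq
FROM pg_toast.pg_toast_2619
WHERE chunk_id = 18767319
ORDER BY chunk_seq;

-- Per Stephen's point: are data checksums enabled on this cluster?
SHOW data_checksums;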
[ { "msg_contents": "Colleagues,\n\nI've encountered the following problem compiling PostgreSQL 15.4 with the just\nreleased Ubuntu 23.10.\n\nI'm compiling postgres with --with-system-tzdata, and then the regression\ntest sysviews fails with the following diff:\n\n\n--- /home/test/pg-tests/postgresql-15.4/src/test/regress/expected/sysviews.out\t2023-10-26 19:06:02.000000000 +0000\n+++ /home/test/pg-tests/postgresql-15.4/src/test/regress/results/sysviews.out\t2023-10-27 07:10:22.214698986 +0000\n@@ -147,23 +147,14 @@\n (1 row)\n \n select count(distinct utc_offset) >= 24 as ok from pg_timezone_abbrevs;\n- ok \n----\n- t\n-(1 row)\n-\n+ERROR: time zone \"Pacific/Enderbury\" not recognized\n+DETAIL: This time zone name appears in the configuration file for time zone abbreviation \"phot\".\n\n\nwith more such errors following.\n\nInvestigation shows that this timezone was declared deprecated long ago,\nand eventually disappeared from the tzdata package in Ubuntu\neven as a symlink to Pacific/Kanton (which is equivalent).\n\nBut this timezone is present in src/timezone/tznames/Default, so this\nerror message appears any time one accesses pg_timezone_abbrevs,\nregardless of whether the Pacific region is included in the results or not.\n\nMaybe Enderbury should be replaced by Kanton in\nsrc/timezone/tznames/Default and src/timezone/tznames/Pacific.txt?\n\n-- \n Victor Wagner <[email protected]>\n\n\n", "msg_date": "Fri, 27 Oct 2023 15:20:49 +0300", "msg_from": "Victor Wagner <[email protected]>", "msg_from_op": true, "msg_subject": "Enderbury Island disappeared from timezone database" }, { "msg_contents": "Victor Wagner <[email protected]> writes:\n> I've encountered following problem compiling PostgreSQL 15.4 with just\n> released Ubuntu 23.10.\n\n> I'm compiling postgres with --with-system-tzdata and then regression\n> test sysviews fails with following diff:\n> +ERROR: time zone \"Pacific/Enderbury\" not recognized\n> +DETAIL: This time zone name appears in the configuration file for time zone abbreviation \"phot\".\n\nHmph. Pacific/Enderbury is still defined according to tzdata 2023c,\nwhich is the latest release:\n\n$ grep Enderbury src/timezone/data/tzdata.zi\nL Pacific/Kanton Pacific/Enderbury\n\nDid Ubuntu decide to remove *all* backzone links from their data?\nOr just that one? Either way, I think they're going to get a tsunami\nof pushback pretty quickly. People like their obsolete zone names.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Oct 2023 10:25:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enderbury Island disappeared from timezone database" }, { "msg_contents": "On Fri, 27 Oct 2023 10:25:57 -0400\nTom Lane <[email protected]> wrote:\n\n> Victor Wagner <[email protected]> writes:\n> > I've encountered following problem compiling PostgreSQL 15.4 with\n> > just released Ubuntu 23.10. \n> \n> > I'm compiling postgres with --with-system-tzdata and then regression\n> > test sysviews fails with following diff:\n> > +ERROR: time zone \"Pacific/Enderbury\" not recognized\n> > +DETAIL: This time zone name appears in the configuration file for\n> > time zone abbreviation \"phot\". \n> \n> Hmph. Pacific/Enderbury is still defined according to tzdata 2023c,\n> which is the latest release:\n> \n> $ grep Enderbury src/timezone/data/tzdata.zi\n> L Pacific/Kanton Pacific/Enderbury\n>\n> Did Ubuntu decide to remove *all* backzone links from their data?\n> Or just that one? Either way, I think they're going to get a tsunami\n> of pushback pretty quickly. 
People like their obsolete zone names.\n\nThey split tzdata packages into tzdata and tzdata-legacy (just for\nthose who like obsolete zone names), and into the latter went 121 links,\nnot counting the \"right\" subdirectory. It is actually a Debian unstable\nfeature that got imported into Ubuntu. But my\ntest machines with debian testing do not use --with-system-tzdata, so\nI had not noticed this earlier.\n\nIt has the following entry in the changelog:\n\ntzdata (2023c-8) unstable; urgency=medium\n\n * Update Dutch debconf translation.\n Thanks to Frans Spiesschaert <[email protected]>\n (Closes: #1041278)\n * Ship only timezones in tzdata that follow the current rules of\n geographical\n region (continent or ocean) and city name. Move all legacy timezone\n symlinks\n (that are upgraded during package update) to tzdata-legacy. This\n includes\n dropping the special handling for US/* timezones. (Closes: #1040997)\n\n -- Benjamin Drung <[email protected]> Mon, 07 Aug 2023 15:02:14 +0200\n\nI.e. they move obsolete timezones into a separate package just for people\nwho like them.\n\nDescription of that package ends with:\n\n This package also contains legacy timezone symlinks that are not\n following\n the current rule of using the geographical region (continent or ocean)\n and\n city name.\n .\n You do not need this package if you are unsure.\n\nReally, I think that if at least some distributions don't like these\nnames, it is better to have postgres pass its regression tests without\nthese names as well as with them.\n\n\n\n\n\n\n\n-- \n Victor Wagner <[email protected]>\n\n\n", "msg_date": "Fri, 27 Oct 2023 18:00:51 +0300", "msg_from": "Victor Wagner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Enderbury Island disappeared from timezone database" }, { "msg_contents": "Victor Wagner <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> Did Ubuntu decide to remove *all* backzone links from their data?\n>> Or just that one? Either way, I think they're going to get a tsunami\n>> of pushback pretty quickly. People like their obsolete zone names.\n\n> They split tzdata packages into tzdata and tzdata-legacy (just for\n> those who like obsolete zone names), and into the latter went 121 links,\n> not counting the \"right\" subdirectory.\n\nFun. I bet that breaks more than just Pacific/Enderbury.\nCan you try changing that entry to Pacific/Kanton, and repeat?\nAnd then check the non-Default timezonesets lists too?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Oct 2023 11:17:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enderbury Island disappeared from timezone database" }, { "msg_contents": "On Fri, 27 Oct 2023 11:17:03 -0400\nTom Lane <[email protected]> wrote:\n\n> Victor Wagner <[email protected]> writes:\n> > Tom Lane <[email protected]> wrote: \n> >> Did Ubuntu decide to remove *all* backzone links from their data?\n> >> Or just that one? Either way, I think they're going to get a\n> >> tsunami of pushback pretty quickly. People like their obsolete\n> >> zone names. \n> \n> > They split tzdata packages into tzdata and tzdata-legacy (just for\n> > those who like obsolete zone names), and into the latter went 121\n> > links, not counting the \"right\" subdirectory. \n> \n> Fun. I bet that breaks more than just Pacific/Enderbury.\n> Can you try changing that entry to Pacific/Kanton, and repeat?\n\nI did. No more problems. \n\nI.e. I've invoked\n\nsed -i 's/Enderbury/Kanton/' $prefix/share/timezonesets/* \n\nand reran the tests. 
No failures.\n\nIt seems that Pacific/Enderbury was the only obsolete name which found\nits way into the abbreviations list.\n\n\n> And then check the non-Default timezonesets lists too?\n\nEnderbury appears in two files in the timezonesets - Default\nand Pacific.txt.\n\n-- \n Victor Wagner <[email protected]>\n\n\n", "msg_date": "Fri, 27 Oct 2023 19:19:09 +0300", "msg_from": "Victor Wagner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Enderbury Island disappeared from timezone database" }, { "msg_contents": "Victor Wagner <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> Fun. I bet that breaks more than just Pacific/Enderbury.\n>> Can you try changing that entry to Pacific/Kanton, and repeat?\n\n> I did. No more problems. \n> I.e. I've invoked\n> sed -i 's/Enderbury/Kanton/' $prefix/share/timezonesets/* \n> and reran the tests. No failures.\n\nI was concerned about the non-Default timezonesets too, but\nhaving now spun up a copy of Ubuntu 23.10 I see that those\nwork fine once Default is fixed. So indeed this is the only\nzone causing us problems. That's probably because only a\nrelatively small fraction of the timezonesets entries depend\nexplicitly on named zones --- most of them are just numeric\nUTC offsets.\n\nAnyway, looking into the tzdata NEWS file I found\n\nRelease 2021b - 2021-09-24 16:23:00 -0700\n\n Rename Pacific/Enderbury to Pacific/Kanton. When we added\n Enderbury in 1993, we did not know that it is uninhabited and that\n Kanton (population two dozen) is the only inhabited location in\n that timezone. The old name is now a backward-compatibility link.\n\nThis means that if we substitute Kanton for Enderbury, things\nwill work fine against tzdata 2021b or later, but will fail in\nthe reverse way against older tzdata sets. Do we want to\nbet that everybody in the world has up-to-date tzdata installed?\nI guess the contract for using --with-system-tzdata is that it's\nup to you to maintain that, but still I don't like the odds.\n\nThe alternative I'm wondering about is whether to just summarily\nremove the PHOT entry from timezonesets/Default. It's a made-up\nzone abbreviation in the first place, and per the above NEWS entry,\nthere's only a couple dozen people in the world who might even\nbe candidates to consider using it. It seems highly likely that\nnobody would care if we just dropped it from the Default list.\n(We could keep the Pacific.txt entry, although re-pointing it\nto Pacific/Kanton seems advisable.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Oct 2023 14:00:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enderbury Island disappeared from timezone database" }, { "msg_contents": "On Fri, 27 Oct 2023 14:00:38 -0400\nTom Lane <[email protected]> wrote:\n\n> This means that if we substitute Kanton for Enderbury, things\n> will work fine against tzdata 2021b or later, but will fail in\n> the reverse way against older tzdata sets. Do we want to\n> bet that everybody in the world has up-to-date tzdata installed?\n\nYou are right. When nightly builds came, they showed problems with\nPacific/Kanton in\nDebian 10, 11 and Ubuntu 20.04 (we no longer test Ubuntu 18.04, as its\n5-year support period has ended). 
\n\nI haven't applied the 'fix' to rpm-based distributions, because none of\nthem, as far as I'm aware, splits tzdata into two packages.\n\n> I guess the contract for using --with-system-tzdata is that it's\n> up to you to maintain that, but still I don't like the odds.\n> \n> The alternative I'm wondering about is whether to just summarily\n> remove the PHOT entry from timezonesets/Default. It's a made-up\n> zone abbreviation in the first place, and per the above NEWS entry,\n> there's only a couple dozen people in the world who might even\n> be candidates to consider using it. It seems highly likely that\n> nobody would care if we just dropped it from the Default list.\n> (We could keep the Pacific.txt entry, although re-pointing it\n> to Pacific/Kanton seems advisable.)\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n\n-- \n Victor Wagner <[email protected]>\n\n\n", "msg_date": "Sat, 28 Oct 2023 16:48:29 +0300", "msg_from": "Victor Wagner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Enderbury Island disappeared from timezone database" }, { "msg_contents": "Victor Wagner <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> This means that if we substitute Kanton for Enderbury, things\n>> will work fine against tzdata 2021b or later, but will fail in\n>> the reverse way against older tzdata sets. Do we want to\n>> bet that everybody in the world has up-to-date tzdata installed?\n\n> You are right. When nightly builds came, they showed problems with\n> Pacific/Kanton in\n> Debian 10, 11 and Ubuntu 20.04 (we no longer test Ubuntu 18.04, as its\n> 5-year support period has ended). \n\nOK. Let's just remove the PHOT entry then. It's not like it's\nhard to make a custom abbreviation list, in case there's actually\nsomebody out there who needs it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Oct 2023 11:28:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enderbury Island disappeared from timezone database" } ]
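For anyone verifying their own --with-system-tzdata build against this, a quick probe along the lines of the failing regression test might be (a sketch; the first query is the one from the sysviews test quoted above):

select count(distinct utc_offset) >= 24 as ok from pg_timezone_abbrevs;

-- Does the installed tzdata still resolve the legacy spelling,
-- and the current one?
SELECT name FROM pg_timezone_names
WHERE name IN ('Pacific/Enderbury', 'Pacific/Kanton');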
[ { "msg_contents": "Hi hackers,\n\nI found that 'make update-po' for PGXS does not work.\nEven if I execute 'make update-po', xx.po.new is not generated.\nI have not tested the meson build system, but I post this tentatively.\n\nI attached a patch and a test set.\n'update-po' tries to find *.po files in $top_srcdir, but there are no po files in the PGXS case because $top_srcdir is the install directory.\n\nPlease check it.\n\nThank you.\n\nBest Regards\nRyo Matsumura", "msg_date": "Fri, 27 Oct 2023 19:07:36 +0000", "msg_from": "\"Ryo Matsumura (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Bug fix: update-po for PGXS does not work" }, { "msg_contents": "On 2023-Oct-27, Ryo Matsumura (Fujitsu) wrote:\n\n> Hi hackers,\n> \n> I found that 'make update-po' for PGXS does not work.\n> Even if I execute 'make update-po', xx.po.new is not generated.\n> I have not tested the meson build system, but I post this tentatively.\n> \n> I attached a patch and a test set.\n> 'update-po' tries to find *.po files in $top_srcdir, but there are no po files in the PGXS case because $top_srcdir is the install directory.\n\nThanks. I think you have the order of the ifdef nest backwards; even in\nthe PGXS case we should have \"ALL_LANGUAGES = $(AVAIL_LANGUAGES)\" unless\nwe're making update-po. Here I present it the other way around.\n\nRegards\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"World domination is proceeding according to plan\" (Andrew Morton)", "msg_date": "Tue, 20 Aug 2024 20:56:54 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bug fix: update-po for PGXS does not work" } ]
[ { "msg_contents": "As this change showed a really good improvement in my micro-benchmark, I\ndecided to give it a try on something that's a bit more of a real world\nbenchmark: PostgreSQL.\n\nHence, I'm also Cc'ing the postgres maintainers in case they are interested.\nTo bring them up to speed, here's what's been going on:\n\n A new feature is being discussed to add NEED_RESCHED_LAZY that will allow us\n (the kernel developers) to remove CONFIG_PREEMPT_NONE and\n CONFIG_PREEMPT_VOLUNTARY, and move that to user space runtime switches. The\n idea is that when the scheduler tick goes off and it's time to schedule out a\n SCHED_OTHER task, user space can have the option of not doing that in the\n kernel and waiting until it enters back into user space before scheduling\n (simulating PREEMPT_NONE). The scheduler tick will set NEED_RESCHED_LAZY\n (instead of NEED_RESCHED which always schedules when possible), and that bit\n will only be looked at when exiting back into user space, where it will\n perform the schedule if it is set.\n\n My idea is to extend this into user space as well. Using the restartable\n sequences infrastructure (https://lwn.net/Articles/697979/) that maps memory\n between kernel and user space for threads, I'll use two bits (or one bit and a\n counter, but that's for later, I'll just discuss the current implementation).\n\n bit 0: is set by user space to tell the kernel that it's in a critical\n section.\n\n bit 1: is set by the kernel telling user space that it granted it a bit more\n time and that it should call back into the kernel with any system call\n (sched_yield() or gettid()), when it is out of its critical section.\n\n Bit 1 will never be set if bit 0 is not set (Note, there's talk about making\n bit 0 the one set by the kernel, or use a different word entirely to allow\n the rest of the bits to be used as a counter for nested critical sections).\n\n Now when returning back to user space, if the critical section bit (or\n counter) is set, then it will not call schedule when NEED_RESCHED_LAZY is set.\n Note that it will still always call schedule on NEED_RESCHED. This gives user\n space one more tick (1 ms with 1000 HZ kernel config, to 4 ms with 250 HZ\n kernel config). When user space is done with its critical section, it should\n check the bit that can be set by the kernel to see if it should then schedule.\n\n If the user side bit is not cleared after a tick, then the kernel will set\n NEED_RESCHED which will force a schedule no matter where user space happens to\n be. Note, this could also hurt that task in that the scheduler will take away\n the eligibility of that task to balance out the amount of extra time the task\n ran for, not to mention, the forced schedule could now land in a critical\n section.\n\nBack in 2014 at the Linux Collaboration Summit in Napa Valley I had a nice\nconversation with Robert Haas about user space spin locks. He told me how they\nare used in PostgreSQL where futexes did not meet their needs. This\nconversation kicked off the idea about implementing user space adaptive spin\nlocks (which last year in Belgium, I asked André Almeida to implement -\nhttps://lwn.net/Articles/931789/).\n\nEven though user space adaptive spinning would greatly help out the contention\nof a lock, there's still the issue of a lock owner being preempted which would\ncause all those waiters to also go into the kernel and delay access to the\ncritical sections. In the real time kernel, this was solved by\nNEED_RESCHED_LAZY:\n
In the real time kernel, this was solved by\nNEED_RESCHED_LAZY:\n\n https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git/tree/patches/preempt-lazy-support.patch?h=v5.9-rc2-rt1-patches\n\nNow Thomas has proposed using a similar solution to solve the PREEMPT_NONE /\nVOLUNTARY issue.\n\n https://lore.kernel.org/lkml/87cyyfxd4k.ffs@tglx/\n\nWhich now has a prototype in the rt-devel tree:\n\n https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git/tree/patches/PREEMPT_AUTO.patch?h=v6.6-rc6-rt10-patches\n\nFor which I applied to v6.6-rc4 (my current working branch), and applied the\nsolution I explained above (POC with debugging still in it):\n\n https://lore.kernel.org/all/[email protected]/\n\nNow I downloaded the latest postgres from:\n\n https://github.com/postgres/postgres.git\n\nAnd built sha: 26f988212eada9c586223cbbf876c7eb455044d9\n\nAfter installing it, I rebooted the machine with the updated kernel (requires\nCONFIG_PREEMPT_AUTO being set), and ran the unmodified version of postgres\npgbench test:\n\n pgbench -c 100 -T 300 -j 8 -S -n\n\nI ran it 16 times and looked at the transactions per second counter (tps).\nNote, I threw out the first run as it had horrible numbers probably due to\neverything in cold cache (memory and file system).\n\nThen I applied the below patch, did a make clean, make install, rebooted the\nbox again and ran the test for another 16 times (again, the first run was\nhorrible).\n\nHere are the results of the tests: I only used the 15 runs after the first run\nfor comparisons.\n\nWithout the patched postgres executable:\n\n First run:\n tps = 72573.188203 (without initial connection time)\n\n 15 runs:\n tps = 74315.731978 (without initial connection time)\n tps = 74448.130108 (without initial connection time)\n tps = 74662.246100 (without initial connection time)\n tps = 73124.961311 (without initial connection time)\n tps = 74653.611878 (without initial connection time)\n tps = 74765.296134 (without initial connection time)\n tps = 74497.066104 (without initial connection time)\n tps = 74541.664031 (without initial connection time)\n tps = 74595.032066 (without initial connection time)\n tps = 74545.876793 (without initial connection time)\n tps = 74762.560651 (without initial connection time)\n tps = 74528.657018 (without initial connection time)\n tps = 74814.700753 (without initial connection time)\n tps = 74687.980967 (without initial connection time)\n tps = 74973.185122 (without initial connection time)\n\nWith the patched postgres executable:\n\n First run:\n tps = 73562.005970 (without initial connection time)\n\n 15 runs:\n tps = 74560.101322 (without initial connection time)\n tps = 74711.177071 (without initial connection time)\n tps = 74551.093281 (without initial connection time)\n tps = 74559.452628 (without initial connection time)\n tps = 74737.604361 (without initial connection time)\n tps = 74521.606019 (without initial connection time)\n tps = 74870.859166 (without initial connection time)\n tps = 74545.423471 (without initial connection time)\n tps = 74805.939815 (without initial connection time)\n tps = 74665.240730 (without initial connection time)\n tps = 74701.479550 (without initial connection time)\n tps = 74897.154079 (without initial connection time)\n tps = 74879.687067 (without initial connection time)\n tps = 74792.563116 (without initial connection time)\n tps = 74852.101317 (without initial connection time)\n\nWithout the patch:\n\n Average: 74527.7800\n Std Dev: 420.6304\n\nWith the patch:\n 
Average: 74710.0988\n Std Dev: 136.7250\n\nNotes about my setup. I ran this on one of my older test boxes (pretty much the\nlast of the bare metal machines I test on, as now I do most on VMs, but did not\nwant to run these tests on VMs).\n\nIt's a 4 core (2-way hyperthreaded) machine, for a total of 8 CPUs:\n\n Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz\n\n 32G of RAM.\n\nPostgres has several types of locks. I first applied extend()/unextend() to\nonly the raw spin locks, but that didn't make much difference. When I traced\nit, I found that the time slice seldom landed when one of these spin locks\nwas held. 10 or so during a 5 minute run (I added writes to the kernel\ntracing buffer via trace_marker to know when the locks were held, and also\nrecorded sched_switch to see if it was ever preempted). So I expanded the\nusage to the \"light weight locks\" (lwlocks), which are similar to the spin\nlocks but do some complex backing off. Basically a heuristic spin. Anyway,\nafter adding the logic to these locks, it definitely made a difference. Not a\nhuge one, but it was noticeable beyond the noise. I can imagine that if this\nwas implemented on a machine with many more CPUs than 8, it would make an even\nbigger difference.\n\nI also had to rerun my tests because I left some kernel config options enabled\nthat affected performance. I didn't want that to skew the results. But the\nresults were similar, except that with the slower kernel, the worst\nperformance with the patch was better than the best performance without it.\nNot by much, but still. After removing the slowdown, that was no longer the\ncase. But I noticed that with the patch, the standard deviation was much\nsmaller than without the patch. I'm guessing that without the patch it depends\non how the scheduler interacts with the locking much more, and a good run\nwithout the patch was just the test being \"lucky\" that it didn't get preempted\nas much in a critical section.\n\nI personally think the smaller standard deviation is a win as it makes the\ndatabase run with a more deterministic behavior.\n\nAnyway, I'd really like to know what others think about this, and perhaps they\ncan run this on their own testing infrastructure. All the code is available to\nreproduce. If you want to reproduce it like I did: check out the latest Linus\ntree (or do what I have, which is v6.6-rc4), apply the above mentioned\nPREEMPT_AUTO.patch, then apply my kernel patch. Select CONFIG_PREEMPT_AUTO and\nbuild the kernel. Don't worry about that big banner that is on the kernel\nconsole at boot up telling you that you are running a debug kernel. You are\nrunning one, and it's because I still have a couple of trace_printk()s in it.\n\nRun your postgres tests with and without the patch and let me know if there's a\ndifference. Only if you think it's worth it. 
Let me know if you don't ;-)\n\n-- Steve\n\n[ The biggest part of the below change is adding in the standard rseq_abi.h header ]\n\n\ndiff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c\nindex 315a78cda9..652d3a5560 100644\n--- a/src/backend/storage/lmgr/lwlock.c\n+++ b/src/backend/storage/lmgr/lwlock.c\n@@ -89,11 +89,12 @@\n #include \"storage/spin.h\"\n #include \"utils/memutils.h\"\n \n+#include \"storage/rseq-abi.h\"\n+\n #ifdef LWLOCK_STATS\n #include \"utils/hsearch.h\"\n #endif\n \n-\n /* We use the ShmemLock spinlock to protect LWLockCounter */\n extern slock_t *ShmemLock;\n \n@@ -841,6 +842,8 @@ LWLockAttemptLock(LWLock *lock, LWLockMode mode)\n \t\t\t\tdesired_state += LW_VAL_SHARED;\n \t\t}\n \n+\t\tif (lock_free)\n+\t\t\textend();\n \t\t/*\n \t\t * Attempt to swap in the state we are expecting. If we didn't see\n \t\t * lock to be free, that's just the old value. If we saw it as free,\n@@ -863,9 +866,14 @@ LWLockAttemptLock(LWLock *lock, LWLockMode mode)\n #endif\n \t\t\t\treturn false;\n \t\t\t}\n-\t\t\telse\n+\t\t\telse {\n+\t\t\t\tif (lock_free)\n+\t\t\t\t\tunextend();\n \t\t\t\treturn true;\t/* somebody else has the lock */\n+\t\t\t}\n \t\t}\n+\t\tif (lock_free)\n+\t\t\tunextend();\n \t}\n \tpg_unreachable();\n }\n@@ -1868,6 +1876,7 @@ LWLockRelease(LWLock *lock)\n \t\tLWLockWakeup(lock);\n \t}\n \n+\tunextend();\n \t/*\n \t * Now okay to allow cancel/die interrupts.\n \t */\ndiff --git a/src/backend/storage/lmgr/s_lock.c b/src/backend/storage/lmgr/s_lock.c\nindex 327ac64f7c..c22310cfe3 100644\n--- a/src/backend/storage/lmgr/s_lock.c\n+++ b/src/backend/storage/lmgr/s_lock.c\n@@ -55,6 +55,8 @@\n #include \"storage/s_lock.h\"\n #include \"utils/wait_event.h\"\n \n+#include \"storage/rseq-abi.h\"\n+\n #define MIN_SPINS_PER_DELAY 10\n #define MAX_SPINS_PER_DELAY 1000\n #define NUM_DELAYS\t\t\t1000\n@@ -66,7 +68,6 @@ slock_t\t\tdummy_spinlock;\n \n static int\tspins_per_delay = DEFAULT_SPINS_PER_DELAY;\n \n-\n /*\n * s_lock_stuck() - complain about a stuck spinlock\n */\n@@ -94,6 +95,8 @@ s_lock(volatile slock_t *lock, const char *file, int line, const char *func)\n {\n \tSpinDelayStatus delayStatus;\n \n+\tunextend();\n+\n \tinit_spin_delay(&delayStatus, file, line, func);\n \n \twhile (TAS_SPIN(lock))\n@@ -102,6 +105,7 @@ s_lock(volatile slock_t *lock, const char *file, int line, const char *func)\n \t}\n \n \tfinish_spin_delay(&delayStatus);\n+\textend();\n \n \treturn delayStatus.delays;\n }\ndiff --git a/src/include/storage/rseq-abi.h b/src/include/storage/rseq-abi.h\nnew file mode 100644\nindex 0000000000..b858cf1d6f\n--- /dev/null\n+++ b/src/include/storage/rseq-abi.h\n@@ -0,0 +1,174 @@\n+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */\n+#ifndef _RSEQ_ABI_H\n+#define _RSEQ_ABI_H\n+\n+/*\n+ * rseq-abi.h\n+ *\n+ * Restartable sequences system call API\n+ *\n+ * Copyright (c) 2015-2022 Mathieu Desnoyers <[email protected]>\n+ */\n+\n+#include <ctype.h>\n+#include <asm/types.h>\n+\n+enum rseq_abi_cpu_id_state {\n+\tRSEQ_ABI_CPU_ID_UNINITIALIZED\t\t\t= -1,\n+\tRSEQ_ABI_CPU_ID_REGISTRATION_FAILED\t\t= -2,\n+};\n+\n+enum rseq_abi_flags {\n+\tRSEQ_ABI_FLAG_UNREGISTER = (1 << 0),\n+};\n+\n+enum rseq_abi_cs_flags_bit {\n+\tRSEQ_ABI_CS_FLAG_NO_RESTART_ON_PREEMPT_BIT\t= 0,\n+\tRSEQ_ABI_CS_FLAG_NO_RESTART_ON_SIGNAL_BIT\t= 1,\n+\tRSEQ_ABI_CS_FLAG_NO_RESTART_ON_MIGRATE_BIT\t= 2,\n+};\n+\n+enum rseq_abi_cs_flags {\n+\tRSEQ_ABI_CS_FLAG_NO_RESTART_ON_PREEMPT\t=\n+\t\t(1U << 
RSEQ_ABI_CS_FLAG_NO_RESTART_ON_PREEMPT_BIT),\n+\tRSEQ_ABI_CS_FLAG_NO_RESTART_ON_SIGNAL\t=\n+\t\t(1U << RSEQ_ABI_CS_FLAG_NO_RESTART_ON_SIGNAL_BIT),\n+\tRSEQ_ABI_CS_FLAG_NO_RESTART_ON_MIGRATE\t=\n+\t\t(1U << RSEQ_ABI_CS_FLAG_NO_RESTART_ON_MIGRATE_BIT),\n+};\n+\n+/*\n+ * struct rseq_abi_cs is aligned on 4 * 8 bytes to ensure it is always\n+ * contained within a single cache-line. It is usually declared as\n+ * link-time constant data.\n+ */\n+struct rseq_abi_cs {\n+\t/* Version of this structure. */\n+\t__u32 version;\n+\t/* enum rseq_abi_cs_flags */\n+\t__u32 flags;\n+\t__u64 start_ip;\n+\t/* Offset from start_ip. */\n+\t__u64 post_commit_offset;\n+\t__u64 abort_ip;\n+} __attribute__((aligned(4 * sizeof(__u64))));\n+\n+/*\n+ * struct rseq_abi is aligned on 4 * 8 bytes to ensure it is always\n+ * contained within a single cache-line.\n+ *\n+ * A single struct rseq_abi per thread is allowed.\n+ */\n+struct rseq_abi {\n+\t/*\n+\t * Restartable sequences cpu_id_start field. Updated by the\n+\t * kernel. Read by user-space with single-copy atomicity\n+\t * semantics. This field should only be read by the thread which\n+\t * registered this data structure. Aligned on 32-bit. Always\n+\t * contains a value in the range of possible CPUs, although the\n+\t * value may not be the actual current CPU (e.g. if rseq is not\n+\t * initialized). This CPU number value should always be compared\n+\t * against the value of the cpu_id field before performing a rseq\n+\t * commit or returning a value read from a data structure indexed\n+\t * using the cpu_id_start value.\n+\t */\n+\t__u32 cpu_id_start;\n+\t/*\n+\t * Restartable sequences cpu_id field. Updated by the kernel.\n+\t * Read by user-space with single-copy atomicity semantics. This\n+\t * field should only be read by the thread which registered this\n+\t * data structure. Aligned on 32-bit. Values\n+\t * RSEQ_CPU_ID_UNINITIALIZED and RSEQ_CPU_ID_REGISTRATION_FAILED\n+\t * have a special semantic: the former means \"rseq uninitialized\",\n+\t * and latter means \"rseq initialization failed\". This value is\n+\t * meant to be read within rseq critical sections and compared\n+\t * with the cpu_id_start value previously read, before performing\n+\t * the commit instruction, or read and compared with the\n+\t * cpu_id_start value before returning a value loaded from a data\n+\t * structure indexed using the cpu_id_start value.\n+\t */\n+\t__u32 cpu_id;\n+\t/*\n+\t * Restartable sequences rseq_cs field.\n+\t *\n+\t * Contains NULL when no critical section is active for the current\n+\t * thread, or holds a pointer to the currently active struct rseq_cs.\n+\t *\n+\t * Updated by user-space, which sets the address of the currently\n+\t * active rseq_cs at the beginning of assembly instruction sequence\n+\t * block, and set to NULL by the kernel when it restarts an assembly\n+\t * instruction sequence block, as well as when the kernel detects that\n+\t * it is preempting or delivering a signal outside of the range\n+\t * targeted by the rseq_cs. Also needs to be set to NULL by user-space\n+\t * before reclaiming memory that contains the targeted struct rseq_cs.\n+\t *\n+\t * Read and set by the kernel. Set by user-space with single-copy\n+\t * atomicity semantics. This field should only be updated by the\n+\t * thread which registered this data structure. 
Aligned on 64-bit.\n+\t */\n+\tunion {\n+\t\t__u64 ptr64;\n+\n+\t\t/*\n+\t\t * The \"arch\" field provides architecture accessor for\n+\t\t * the ptr field based on architecture pointer size and\n+\t\t * endianness.\n+\t\t */\n+\t\tstruct {\n+#ifdef __LP64__\n+\t\t\t__u64 ptr;\n+#elif defined(__BYTE_ORDER) ? (__BYTE_ORDER == __BIG_ENDIAN) : defined(__BIG_ENDIAN)\n+\t\t\t__u32 padding;\t\t/* Initialized to zero. */\n+\t\t\t__u32 ptr;\n+#else\n+\t\t\t__u32 ptr;\n+\t\t\t__u32 padding;\t\t/* Initialized to zero. */\n+#endif\n+\t\t} arch;\n+\t} rseq_cs;\n+\n+\t/*\n+\t * Restartable sequences flags field.\n+\t *\n+\t * This field should only be updated by the thread which\n+\t * registered this data structure. Read by the kernel.\n+\t * Mainly used for single-stepping through rseq critical sections\n+\t * with debuggers.\n+\t *\n+\t * - RSEQ_ABI_CS_FLAG_NO_RESTART_ON_PREEMPT\n+\t * Inhibit instruction sequence block restart on preemption\n+\t * for this thread.\n+\t * - RSEQ_ABI_CS_FLAG_NO_RESTART_ON_SIGNAL\n+\t * Inhibit instruction sequence block restart on signal\n+\t * delivery for this thread.\n+\t * - RSEQ_ABI_CS_FLAG_NO_RESTART_ON_MIGRATE\n+\t * Inhibit instruction sequence block restart on migration for\n+\t * this thread.\n+\t */\n+\t__u32 flags;\n+\n+\t/*\n+\t * Restartable sequences node_id field. Updated by the kernel. Read by\n+\t * user-space with single-copy atomicity semantics. This field should\n+\t * only be read by the thread which registered this data structure.\n+\t * Aligned on 32-bit. Contains the current NUMA node ID.\n+\t */\n+\t__u32 node_id;\n+\n+\t/*\n+\t * Restartable sequences mm_cid field. Updated by the kernel. Read by\n+\t * user-space with single-copy atomicity semantics. This field should\n+\t * only be read by the thread which registered this data structure.\n+\t * Aligned on 32-bit. 
Contains the current thread's concurrency ID\n+\t * (allocated uniquely within a memory map).\n+\t */\n+\t__u32 mm_cid;\n+\n+\t__u32 cr_flags;\n+\t/*\n+\t * Flexible array member at end of structure, after last feature field.\n+\t */\n+\tchar end[];\n+} __attribute__((aligned(4 * sizeof(__u64))));\n+\n+#endif /* _RSEQ_ABI_H */\ndiff --git a/src/include/storage/s_lock.h b/src/include/storage/s_lock.h\nindex c9fa84cc43..999a056c6f 100644\n--- a/src/include/storage/s_lock.h\n+++ b/src/include/storage/s_lock.h\n@@ -96,6 +96,10 @@\n #ifndef S_LOCK_H\n #define S_LOCK_H\n \n+#define _GNU_SOURCE\n+#include <unistd.h>\n+#include \"storage/rseq-abi.h\"\n+\n #ifdef FRONTEND\n #error \"s_lock.h may not be included from frontend code\"\n #endif\n@@ -131,6 +135,53 @@\n *----------\n */\n \n+extern ptrdiff_t __rseq_offset;\n+extern unsigned int __rseq_size;\n+\n+static inline unsigned clear_extend(volatile unsigned *ptr)\n+{\n+\tunsigned ret;\n+\n+\tasm volatile(\"andb %b1,%0\"\n+\t\t : \"+m\" (*(volatile char *)ptr)\n+\t\t : \"iq\" (~0x1)\n+\t\t : \"memory\");\n+\n+\tret = *ptr;\n+\t*ptr = 0;\n+\n+\treturn ret;\n+}\n+\n+static inline void extend(void)\n+{\n+\tstruct rseq_abi *rseq_ptr;\n+\n+\tif (!__rseq_size)\n+\t\treturn;\n+\n+\trseq_ptr = (void *)((unsigned long)__builtin_thread_pointer() + __rseq_offset);\n+\trseq_ptr->cr_flags = 1;\n+}\n+\n+static inline void unextend(void)\n+{\n+\tstruct rseq_abi *rseq_ptr;\n+\tunsigned prev;\n+\n+\tif (!__rseq_size)\n+\t\treturn;\n+\n+\trseq_ptr = (void *)((unsigned long)__builtin_thread_pointer() + __rseq_offset);\n+\n+\tprev = clear_extend(&rseq_ptr->cr_flags);\n+\tif (prev & 2) {\n+\t\tgettid();\n+\t}\n+}\n+\n+#define __S_UNLOCK(lock) do { S_UNLOCK(lock); unextend(); } while (0)\n+#define __S_LOCK(lock) do { extend(); S_LOCK(lock); } while (0)\n \n #ifdef __i386__\t\t/* 32-bit i386 */\n #define HAS_TEST_AND_SET\ndiff --git a/src/include/storage/spin.h b/src/include/storage/spin.h\nindex 5d809cc980..4230025748 100644\n--- a/src/include/storage/spin.h\n+++ b/src/include/storage/spin.h\n@@ -59,9 +59,9 @@\n \n #define SpinLockInit(lock)\tS_INIT_LOCK(lock)\n \n-#define SpinLockAcquire(lock) S_LOCK(lock)\n+#define SpinLockAcquire(lock) __S_LOCK(lock)\n \n-#define SpinLockRelease(lock) S_UNLOCK(lock)\n+#define SpinLockRelease(lock) __S_UNLOCK(lock)\n \n #define SpinLockFree(lock)\tS_LOCK_FREE(lock)\n \n\n\n", "msg_date": "Fri, 27 Oct 2023 17:52:16 -0400", "msg_from": "Steven Rostedt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [POC][RFC][PATCH v2] sched: Extended Scheduler Time Slice" } ]
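For anyone wanting to compare their own runs, the summary statistics above can be re-derived from the per-run numbers with a quick SQL aggregate. A sketch: the values below are the 15 "without the patch" runs, and stddev_samp (sample standard deviation) reproduces the Std Dev reported above.

SELECT round(avg(tps), 4) AS average,
       round(stddev_samp(tps), 4) AS std_dev
FROM unnest(ARRAY[
    74315.731978, 74448.130108, 74662.246100, 73124.961311,
    74653.611878, 74765.296134, 74497.066104, 74541.664031,
    74595.032066, 74545.876793, 74762.560651, 74528.657018,
    74814.700753, 74687.980967, 74973.185122
]::numeric[]) AS t(tps);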
[ { "msg_contents": "Hello\n\nThe following was tested in a PostgreSQL (16) database. In my opinion queries based on Information_schema.views sometimes give unexpected results.\n\nCREATE TABLE Dept(deptno SMALLINT NOT NULL,\ndname VARCHAR(50) NOT NULL,\nCONSTRAINT pk_dept PRIMARY KEY (deptno));\n\nCREATE TABLE Emp(empno INTEGER NOT NULL,\nename VARCHAR(50) NOT NULL,\ndeptno SMALLINT NOT NULL,\nCONSTRAINT pk_emp PRIMARY KEY (empno),\nCONSTRAINT fk_emp_dept FOREIGN KEY (deptno) REFERENCES Dept(deptno) ON UPDATE CASCADE);\n\nCREATE VIEW emps AS SELECT *\nFROM Dept INNER JOIN Emp USING (deptno);\n\nUPDATE Emps SET ename=Upper(ename);\n/*ERROR: cannot update view \"emps\"\nDETAIL: Views that do not select from a single table or view are not automatically updatable.\nHINT: To enable updating the view, provide an INSTEAD OF UPDATE trigger or an unconditional ON UPDATE DO INSTEAD rule.*/\n\nSELECT table_schema AS schema, table_name AS view, is_updatable, is_insertable_into\nFROM Information_schema.views\nWHERE table_name='emps';\n\n/*is_updatable=NO and is_insertable_into=NO*/\n\nCREATE OR REPLACE RULE emps_insert AS ON INSERT\nTO Emps\nDO INSTEAD NOTHING;\n\n/*After that: is_insertable_into=YES*/\n\nCREATE OR REPLACE RULE emps_update AS ON UPDATE\nTO Emps\nDO INSTEAD NOTHING;\n\n/*After that: is_updatable=NO*/\n\nCREATE OR REPLACE RULE emps_delete AS ON DELETE\nTO Emps\nDO INSTEAD NOTHING;\n\n/*After that: is_updatable=YES*/\n\n1. Indeed, now I can execute INSERT/UPDATE/DELETE against the view without getting an error. However, I still cannot change the data in the database through the views.\n2. is_updatable=YES only after I add both UPDATE and DELETE DO INSTEAD NOTHING rules.\n\nMy question is: are 1 and 2 the expected behaviour or is there a mistake in the implementation of the information_schema view?\n\nBest regards\nErki Eessaar\n", "msg_date": "Sat, 28 Oct 2023 09:27:33 +0000", "msg_from": "Erki Eessaar <[email protected]>", "msg_from_op": true, "msg_subject": "Issues with Information_schema.views" }, { "msg_contents": "On Sat, Oct 28, 2023 at 5:27 PM Erki Eessaar <[email protected]> wrote:\n>\n> Hello\n>\n>\n> /*After that: is_updatable=YES*/\n>\n> 1. Indeed, now I can execute INSERT/UPDATE/DELETE against the view without getting an error. However, I still cannot change the data in the database through the views.\n\nhttps://www.postgresql.org/docs/current/sql-createview.html\n\"\nA more complex view that does not satisfy all these conditions is\nread-only by default: the system will not allow an insert, update, or\ndelete on the view. You can get the effect of an updatable view by\ncreating INSTEAD OF triggers on the view, which must convert attempted\ninserts, etc. on the view into appropriate actions on other tables.\nFor more information see CREATE TRIGGER. Another possibility is to\ncreate rules (see CREATE RULE), but in practice triggers are easier to\nunderstand and use correctly.\n\"\nYou CAN get the effect of an updatable view. But you need to make the\nrule/triggers correct.\n\nThe following RULE can get the expected result.\n CREATE OR REPLACE RULE emps_update AS ON UPDATE\n TO Emps\n DO INSTEAD UPDATE emp SET\n empno = NEW.empno,\n ename = NEW.ename,\n deptno = NEW.deptno;\nYou can also look at src/test/regress/sql/triggers.sql,\nsrc/test/regress/sql/rules.sql for more test cases.\n\n\n", "msg_date": "Sat, 28 Oct 2023 18:38:16 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues with Information_schema.views" }, { "msg_contents": "Hello\r\n\r\nThank you! I know that.\r\n\r\nDO INSTEAD NOTHING rules on updatable views could be used as a way to implement a WITH READ ONLY constraint (one can define such a constraint in Oracle). However, one could accidentally add such a rule to a non-updatable view as well.\r\n\r\nI tried to construct a system-catalog based query to find database rules that are unnecessary. Thus, for testing purposes I added a DO INSTEAD NOTHING rule to an already non-updatable view and was a bit surprised that an INFORMATION_SCHEMA-based check showed that the view had become updatable. A possible reasoning is that I can update the view without getting an error. However, I still cannot change data in base tables.\r\n\r\nSecondly, the rule you demonstrated does not alone change IS_UPDATABLE value to YES. 
I have to create two rules:\n\n CREATE OR REPLACE RULE emps_update AS ON UPDATE\n TO Emps\n DO INSTEAD UPDATE emp SET\n empno = NEW.empno,\n ename = NEW.ename,\n deptno = NEW.deptno;\n\n CREATE OR REPLACE RULE emps_delete AS ON DELETE\n TO Emps\n DO INSTEAD DELETE FROM Emp WHERE empno=OLD.empno;\n\nMy question is - is all of this the intended behaviour by the implementers?\n\nBest regards\nErki Eessaar\n\n________________________________\nFrom: jian he <[email protected]>\nSent: Saturday, October 28, 2023 13:38\nTo: Erki Eessaar <[email protected]>\nCc: [email protected] <[email protected]>\nSubject: Re: Issues with Information_schema.views\n\nOn Sat, Oct 28, 2023 at 5:27 PM Erki Eessaar <[email protected]> wrote:\n>\n> Hello\n>\n>\n> /*After that: is_updatable=YES*/\n>\n> 1. Indeed, now I can execute INSERT/UPDATE/DELETE against the view without getting an error. However, I still cannot change the data in the database through the views.\n\nhttps://www.postgresql.org/docs/current/sql-createview.html\n\"\nA more complex view that does not satisfy all these conditions is\nread-only by default: the system will not allow an insert, update, or\ndelete on the view. You can get the effect of an updatable view by\ncreating INSTEAD OF triggers on the view, which must convert attempted\ninserts, etc. on the view into appropriate actions on other tables.\nFor more information see CREATE TRIGGER. Another possibility is to\ncreate rules (see CREATE RULE), but in practice triggers are easier to\nunderstand and use correctly.\n\"\nYou CAN get the effect of an updateable view. But you need to make the\nrule/triggers correct.\n\nthe following RULE can get the expected result.\n CREATE OR REPLACE RULE emps_update AS ON UPDATE\n TO Emps\n DO INSTEAD UPDATE emp SET\n empno = NEW.empno,\n ename = NEW.ename,\n deptno = NEW.deptno;\nyou can also look at src/test/regress/sql/triggers.sql,\nsrc/test/regress/sql/rules.sql for more test cases.\n", "msg_date": "Sun, 29 Oct 2023 08:05:34 +0000", "msg_from": "Erki Eessaar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issues with Information_schema.views" }, { "msg_contents": "On Sun, Oct 29, 2023 at 4:05 PM Erki Eessaar <[email protected]> wrote:\n>\n> Hello\n>\n> Thank you! I know that.\n>\n>\n> Secondly, the rule you demonstrated does not alone change IS_UPDATABLE value to YES. 
I have to create two rules:\n>\n> CREATE OR REPLACE RULE emps_update AS ON UPDATE\n> TO Emps\n> DO INSTEAD UPDATE emp SET\n> empno = NEW.empno,\n> ename = NEW.ename,\n> deptno = NEW.deptno;\n>\n> CREATE OR REPLACE RULE emps_delete AS ON DELETE\n> TO Emps\n> DO INSTEAD DELETE FROM Emp WHERE empno=OLD.empno;\n>\n> My question is - is all of this the intended behaviour by the implementers?\n>\n> Best regards\n> Erki Eessaar\n>\n\nper test, it's the expected behavior.\nhttps://git.postgresql.org/cgit/postgresql.git/tree/src/test/regress/expected/updatable_views.out#n569\nhttps://git.postgresql.org/cgit/postgresql.git/tree/src/test/regress/expected/updatable_views.out#n603\nhttps://git.postgresql.org/cgit/postgresql.git/tree/src/test/regress/expected/updatable_views.out#n637\n\nyou need CREATE RULE AS ON DELETE and CREATE RULE AS ON UPDATE to\nmark the view as is_updatable.\n\n\n", "msg_date": "Sun, 29 Oct 2023 19:53:29 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues with Information_schema.views" }, { "msg_contents": "Erki Eessaar <[email protected]> writes:\n> My question is - is all of this the intended behaviour by the implementers?\n\nYes, I'd say so. If you are expecting that the is_updatable flag\nwill check to see if the behavior provided by the view's rules\ncorresponds to something that a human would call a corresponding\nupdate of the view's output, you're out of luck. There's a little\nissue called the halting problem. So the actual check just looks\nto see if there's unconditional DO INSTEAD rules of the appropriate\ntypes, and doesn't probe into what those rules do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 29 Oct 2023 10:30:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issues with Information_schema.views" } ]
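The behaviour Tom Lane describes in the closing message is visible in the backend's own definition of information_schema.views: both flags are derived from the bitmask returned by the built-in function pg_relation_is_updatable(), and the is_updatable column requires the UPDATE and DELETE bits at the same time, which is why the view only reported YES once both unconditional rules existed. Below is a minimal sketch of the same check run directly against the catalogs; pg_relation_is_updatable() and the masks 20 and 8 match the information_schema source, while the query wrapped around them is illustrative rather than taken from the thread:

    -- bit 4 = UPDATE, bit 8 = INSERT, bit 16 = DELETE, so 20 = UPDATE|DELETE
    SELECT c.relname AS view,
           pg_relation_is_updatable(c.oid, false) & 20 = 20 AS is_updatable,
           pg_relation_is_updatable(c.oid, false) & 8 = 8 AS is_insertable_into
    FROM pg_class c
    WHERE c.relkind = 'v' AND c.relname = 'emps';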
[ { "msg_contents": "Hi,\n\nHow about adding code indent checks (like what BF member koel has) to\nthe SanityCheck CI task? This helps catch indentation problems way\nbefore things are committed so that developers can find them out in\ntheir respective CI runs and lets developers learn the postgres code\nindentation stuff. It also saves committers time - git log | grep\n'koel' | wc -l gives me 11 commits and git log | grep 'indentation' |\nwc -l gives me 97. Most, if not all of these commits went into fixing\ncode indentation problems that could have been reported a bit early\nand fixed by developers/patch authors themselves.\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 30 Oct 2023 12:50:44 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "On Mon, Oct 30, 2023 at 12:50 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Hi,\n>\n> How about adding code indent checks (like what BF member koel has) to\n> the SanityCheck CI task? This helps catch indentation problems way\n> before things are committed so that developers can find them out in\n> their respective CI runs and lets developers learn the postgres code\n> indentation stuff. It also saves committers time - git log | grep\n> 'koel' | wc -l gives me 11 commits and git log | grep 'indentation' |\n> wc -l gives me 97. Most, if not all of these commits went into fixing\n> code indentation problems that could have been reported a bit early\n> and fixed by developers/patch authors themselves.\n>\n> Thoughts?\n\nCurrently some of the code indentation issues that pgindent reports\nare caught after the code gets committed either via the buildfarm\nmember koel or when someone runs pgindent before the code release.\nThese indentation issues are then fixed mostly by the committers in\nseparate commits. This is taking away the development time and causing\nback and forth emails in mailing lists.\n\nAs of this writing, git log | grep 'koel' | wc -l gives 13 commits and\ngit log | grep 'indentation' | wc -l gives 100 commits (all may not be\nrelated to code indentation per se). Almost all of these commits went\ninto fixing code indentation problems that could have been reported a\nbit early and fixed by developers/patch authors themselves.\n\nThe attached patch adds a new cirrus-ci task with minimal resources\n(with 1 CPU and ccache 150MB) that fails when pgindent is not happy\nwith any of the changes. This helps catch code indentation issues in\ndevelopment phase way before things get committed. This step can kick\nin developers cirrus-ci runs in their own accounts if cirrus-ci is\nenabled in their development repos. Otherwise, it can also be enabled\nto kick in cfbot runs (I've not explored this yet).\n\nIf we don't want this new task to drain the free credits/run time that\ncirrus-ci offers, one possible way is to cook the code indentation\ncheck into either SanityCheck or CompilerWarnings task to save on the\nresources. 
If at all an indentation check like this is needed, we can\nthink of adding pgperltidy check as well.\n\nHere's a development branch to see the new task in action\nhttps://github.com/BRupireddy2/postgres/tree/add_code_indentation_check_to_cirrus_ci\n- an intentional pgindent failure is detected by the new task where\nthe diff is uploaded as an artifact -\nhttps://api.cirrus-ci.com/v1/artifact/task/6127561344811008/indentation/pgindent.diffs.\n\nThoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 8 Dec 2023 17:09:31 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Add code indentation check to cirrus-ci (was Re: Add BF member\n koel-like indentation checks to SanityCheck CI)" }, { "msg_contents": "Hi,\n\nYou may want to check out the WIP patch [1] about adding meson targets\nto run pgindent by Andres.\n\n[1] https://www.postgresql.org/message-id/20231019044907.ph6dw637loqg3lqk%40awork3.anarazel.de\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Thu, 14 Dec 2023 11:49:04 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add code indentation check to cirrus-ci (was Re: Add BF member\n koel-like indentation checks to SanityCheck CI)" }, { "msg_contents": "On Mon, Oct 30, 2023 at 2:21 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Hi,\n>\n> How about adding code indent checks (like what BF member koel has) to\n> the SanityCheck CI task? This helps catch indentation problems way\n> before things are committed so that developers can find them out in\n> their respective CI runs and lets developers learn the postgres code\n> indentation stuff. It also saves committers time - git log | grep\n> 'koel' | wc -l gives me 11 commits and git log | grep 'indentation' |\n> wc -l gives me 97. Most, if not all of these commits went into fixing\n> code indentation problems that could have been reported a bit early\n> and fixed by developers/patch authors themselves.\n>\n> Thoughts?\n\nThere are three possible avenues here:\n\n1) Accept that there are going to be wrong indents committed sometimes\n2) Write off buildfarm member koel as an experiment, remove it, and do\nreindents periodically. After having been \"trained\", committers will\nstill make an effort to indent, and failure to do so won't be a\nhouse-on-fire situation.\n3) The current proposal\n\nNumber three is the least attractive option -- it makes everyone's\ndevelopment more demanding, with more CI failures where it's not\nhelpful. If almost everything shows red in CI, that's too much noise,\nand a red sanity check will just start to be ignored. I don't indent\nduring most of development, and don't intend to start. Inexperienced\ndevelopers will think they have to jump through more hoops in order to\nget acceptance, making submissions more difficult, with no\ncorresponding upside for them. Also, imagine a CF manager sending 100\nemails complaining about indentation.\n\nSo, -1 from me.\n\n\n", "msg_date": "Tue, 9 Jan 2024 16:00:45 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": " From the previous thread on this issue. I think the following proposal\nseemed like it had the most buy-in from committers. 
But so far nobody\nimplemented it:\n\nOn Wed, 18 Oct 2023 at 16:07, Robert Haas <[email protected]> wrote:\n>\n> On Wed, Oct 18, 2023 at 3:21 AM Peter Eisentraut <[email protected]> wrote:\n> > On 18.10.23 06:40, David Rowley wrote:\n> > > I agree that it's not nice to add yet another way of breaking the\n> > > buildfarm and even more so when the committer did make check-world\n> > > before committing. We have --enable-tap-tests, we could have\n> > > --enable-indent-checks and have pgindent check the code is correctly\n> > > indented during make check-world. Then just not have\n> > > --enable-indent-checks in CI.\n> >\n> > This approach seems like a good improvement, even independent of\n> > everything else we might do about this. Making it easier to use and\n> > less likely to be forgotten. Also, this way, non-committer contributors\n> > can opt-in, if they want to earn bonus points.\n>\n> Yeah. I'm not going to push anything that doesn't pass make\n> check-world, so this is appealing in that something that I'm already\n> doing would (or could be configured to) catch this problem also.\n\nSource: https://www.postgresql.org/message-id/flat/CA%2BTgmobWXtSciC6hahE0J5w01D6Z3LPv9ctb5Ty_ory4m-NiXQ%40mail.gmail.com#c0534b1ad7ac4ef301cd431e8a222e6c\n\n(CC Tristan since he was making changes to pgindent recently, and so I\nhad pinged him off-list on this exact topic before)\n\n\n", "msg_date": "Tue, 9 Jan 2024 18:59:34 +0100", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "On Tue Jan 9, 2024 at 3:00 AM CST, John Naylor wrote:\n> On Mon, Oct 30, 2023 at 2:21 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > How about adding code indent checks (like what BF member koel has) to\n> > the SanityCheck CI task? This helps catch indentation problems way\n> > before things are committed so that developers can find them out in\n> > their respective CI runs and lets developers learn the postgres code\n> > indentation stuff. It also saves committers time - git log | grep\n> > 'koel' | wc -l gives me 11 commits and git log | grep 'indentation' |\n> > wc -l gives me 97. Most, if not all of these commits went into fixing\n> > code indentation problems that could have been reported a bit early\n> > and fixed by developers/patch authors themselves.\n> >\n> > Thoughts?\n>\n> There are three possible avenues here:\n>\n> 1) Accept that there are going to be wrong indents committed sometimes\n> 2) Write off buildfarm member koel as an experiment, remove it, and do\n> reindents periodically. After having been \"trained\", committers will\n> still make an effort to indent, and failure to do so won't be a\n> house-on-fire situation.\n> 3) The current proposal\n>\n> Number three is the least attractive option -- it makes everyone's\n> development more demanding, with more CI failures where it's not\n> helpful. If almost everything shows red in CI, that's too much noise,\n> and a red sanity check will just start to be ignored.\n\nYou can't ignore something that has to be required. We could tell \ncommitters that they shouldn't commit patches that don't pass pgindent, \nwhich might even be the current case; I'm not sure.\n\n> I don't indent during most of development, and don't intend to start.\n\nCould you expand on why you don't? 
I could understand as you're writing, \nbut I would think formatting on save, might be useful.\n\n> Inexperienced developers will think they have to jump through more\n> hoops in order to get acceptance, making submissions more difficult,\n> with no corresponding upside for them.\n\nThe modern developer is well accustomed to code formatting/linting \nrequirements. Languages that attract the average developer like Rust, \nPython, and JavaScript all have well-known tools that many projects use \nlike rustfmt, black, or prettier.\n\n> Also, imagine a CF manager sending 100 emails complaining about indentation.\n> So, -1 from me.\n\nYes, this would be annoying for a CF manager, and for that reason, \nI would agree with your assessment. But I think this issue speaks more \nto how tooling around Postgres hacking works in general. For instance, \nif we look at something like SourceHut, they send emails from their CI \nto the patchset it tested, which gives submitters pretty immediate \nfeedback about whether their patch meets all the contributing \nrequirements. See the aerc-devel mailing list for an example[1].\n\nI don't want to diminish the thankless work that goes into maintaining \nthe current tooling however. These aren't easy problems to solve, and \nI know most people would rather hack on Postgres than cfbot, etc. Thanks \nfor keeping the Postgres lights on!\n\nI think the current proposal is good if the development experience \naround pgindent was better. I've tried to help with this. I created \na VSCode extension[0], which developers can use to auto-format Postgres \nand extension source code if set up properly. My next plan is to \nintegrate pgindent into a Neovim workflow for myself, that I can maybe \npackage into a plugin for others. I'd also like to get to the suggestion \nthat Jelte sent about adding pgindent checks to check-world. In Meson, \nI will add a run_target() for it too. If we can lower the burden of \nrunning pgindent, the more chances that people will actually use it!\n\nProjects of similarly large scope like LLVM manage to gate pull requests \non code formatting requirements, so it is definitely in the realm of \npossibility. Unfortunately for Postgres, we are fighting an uphill \nbattle where life isn't as simple as opening a PR and GitHub Actions \ntells you pretty quickly if your code isn't formatted properly. We don't \neven run CI on all patches that get submitted to the list. They have to \nbe added to the commitfests. I know part of this is to save resources, \nbut maybe we could start manually running CI on patches on the list by \nCCing cfbot or something. Just an idea.\n\nPerhaps the hardest thing to change is the culture of Postgres \ndevelopment. If we can't all agree that we want formatted code, then \nthere is no point in anything that I discussed.\n\n[0]: https://marketplace.visualstudio.com/items?itemName=tristan957.postgres-hacker\n[1]: https://lists.sr.ht/~rjarry/aerc-devel/patches/48415\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 09 Jan 2024 13:20:37 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "On Tue, Jan 9, 2024 at 2:20 PM Tristan Partin <[email protected]> wrote:\n> > I don't indent during most of development, and don't intend to start.\n>\n> Could you expand on why you don't? 
I could understand as you're writing,\n> but I would think formatting on save, might be useful.\n\nJohn might have his own answer to this, but here's mine: it's a pain\nin the rear end. By the time I'm getting close to committing something\nI try to ensure that everything I'm posting is indented. But for early\nversions of work it adds a lot of paper-pushing with little\ncorresponding benefit. I've been doing this long enough that my\nnatural coding style is close to what pgindent would produce, but\nfiguring out how many tab stops are needed after a variable name to\nmake the result agree with pgindent's sentiments is not something I\ncan do reliably.\n\n> Perhaps the hardest thing to change is the culture of Postgres\n> development. If we can't all agree that we want formatted code, then\n> there is no point in anything that I discussed.\n\nI think we're basically committed to that at this point, and long have\nbeen. Before koel started grumping, people would periodically pgindent\nparticular files because if you wanted to indent your new patch, you\nhad to run pgindent on the file and then back out the changes that\nwere due to the preexisting file contents rather than your patch. That\nwas maddening in its own way. The new system is annoying a slightly\ndifferent set of people for a slightly different set of reasons, but\neverybody understands that in the end, it's all gonna get pgindented.\n\nI also agree with you that the culture of Postgres development is hard\nto change. This is the only OSS project that I've ever worked on, and\nI still do it the same way I worked on code 30 years ago, except now I\nuse git instead of cvs. I can't imagine how we could modernize some of\nour development practices without causing unacceptable collateral\ndamage, but maybe there's a way, and for sure the way we do things\naround here is pretty far out of the 2023 mainstream. That's something\nwe should be grappling with, somehow.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Jan 2024 15:49:38 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "On Tue Jan 9, 2024 at 2:49 PM CST, Robert Haas wrote:\n> On Tue, Jan 9, 2024 at 2:20 PM Tristan Partin <[email protected]> wrote:\n> > > I don't indent during most of development, and don't intend to start.\n> >\n> > Could you expand on why you don't? I could understand as you're writing,\n> > but I would think formatting on save, might be useful.\n>\n> John might have his own answer to this, but here's mine: it's a pain\n> in the rear end. By the time I'm getting close to committing something\n> I try to ensure that everything I'm posting is indented. But for early\n> versions of work it adds a lot of paper-pushing with little\n> corresponding benefit. I've been doing this long enough that my\n> natural coding style is close to what pgindent would produce, but\n> figuring out how many tab stops are needed after a variable name to\n> make the result agree with pgindent's sentiments is not something I\n> can do reliably.\n\nInteresting that you think this way. I generally setup format on save in \nmy editors and never think about things again. I agree that the indents \nafter variables is the hardest thing to internalize!\n\n> > Perhaps the hardest thing to change is the culture of Postgres\n> > development. 
If we can't all agree that we want formatted code, then\n> > there is no point in anything that I discussed.\n>\n> I think we're basically committed to that at this point, and long have\n> been. Before koel started grumping, people would periodically pgindent\n> particular files because if you wanted to indent your new patch, you\n> had to run pgindent on the file and then back out the changes that\n> were due to the preexisting file contents rather than your patch. That\n> was maddening in its own way. The new system is annoying a slightly\n> different set of people for a slightly different set of reasons, but\n> everybody understands that in the end, it's all gonna get pgindented.\n\nI've seen this in the git-blame-ignore-revs file. Good to know the \nhistorical context.\n\n> I also agree with you that the culture of Postgres development is hard\n> to change. This is the only OSS project that I've ever worked on, and\n> I still do it the same way I worked on code 30 years ago, except now I\n> use git instead of cvs. I can't imagine how we could modernize some of\n> our development practices without causing unacceptable collateral\n> damage, but maybe there's a way, and for sure the way we do things\n> around here is pretty far out of the 2023 mainstream. That's something\n> we should be grappling with, somehow.\n\nI'm just a newcomer, but I have had some ideas that _don't_ involve \nleaving the mailing list paradigm behind, but I will leave those for \nanother day and another thread :). Perhaps it is worth a talk at \na conference sometime.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 09 Jan 2024 15:08:19 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Jan 9, 2024 at 2:20 PM Tristan Partin <[email protected]> wrote:\n>>> I don't indent during most of development, and don't intend to start.\n\n>> Could you expand on why you don't? I could understand as you're writing,\n>> but I would think formatting on save, might be useful.\n\n> John might have his own answer to this, but here's mine: it's a pain\n> in the rear end. By the time I'm getting close to committing something\n> I try to ensure that everything I'm posting is indented. But for early\n> versions of work it adds a lot of paper-pushing with little\n> corresponding benefit. I've been doing this long enough that my\n> natural coding style is close to what pgindent would produce, but\n> figuring out how many tab stops are needed after a variable name to\n> make the result agree with pgindent's sentiments is not something I\n> can do reliably.\n\nFWIW, I rely on Emacs C mode during initial development, and while\nit's not far off from what pgindent does there are certain things it\ndoesn't match (notably, alignment of variable declarations). I just\ndon't worry about that at that stage. Once I have something that's\nturning over, I'll pgindent it before final review and showing it to\nother people. That's mostly because I've been reading Postgres code\nfor so long that anything that isn't pgindented looks subtly wrong,\nso reviewing it annoys my hindbrain. But trying to match pgindent's\nrules by hand, in an editor that doesn't provide help for that, is not\nworth the mental effort.\n\n>> Perhaps the hardest thing to change is the culture of Postgres\n>> development. 
If we can't all agree that we want formatted code, then\n>> there is no point in anything that I discussed.\n\n> I think we're basically committed to that at this point, and long have\n> been.\n\nAgreed. What's at stake here is not whether the final product will\nbe pgindented, but when that happens and who's responsible for making\nit happen. We're trying to switch from \"fix it once a year or so\"\nto \"make sure it's right at the point of commit\", which is a problem\nfor committers who up to now weren't in the habit of automatically\npgindenting. I don't think it's time to give up on the project of\nchanging those habits; and if we do give up, the answer surely must\nnot be to push the problem further upstream. Occasional contributors\nare even less likely to be able to cope with this.\n\nIn short, I don't think that putting this into CI is the answer.\nPutting it into committers' standard workflow is a better idea,\nif we can get all the committers on board with that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jan 2024 16:20:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "> On 9 Jan 2024, at 22:20, Tom Lane <[email protected]> wrote:\n\n> In short, I don't think that putting this into CI is the answer.\n> Putting it into committers' standard workflow is a better idea,\n> if we can get all the committers on board with that.\n\n+many\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 9 Jan 2024 22:42:20 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "On 1/9/24 3:20 PM, Tom Lane wrote:\n> In short, I don't think that putting this into CI is the answer.\n> Putting it into committers' standard workflow is a better idea,\n> if we can get all the committers on board with that.\n\nFWIW, that's the approach that go takes - not only for committing to go \nitself, but it is *strongly* recommended[1] that anyone writing any code \nin go makes running `go fmt` a standard part of their workflow. In my \nexperience, it makes collaborating noticably easier because you never \nneed to worry about formatting differences. FYI, vim makes this easy via \nvim-autoformat[2] (which also supports line-by-line formatting if the \nformat tool allows it); presumably any modern editor has similar support.\n\n1: Literally 3rd item at https://go.dev/doc/effective_go\n2: https://github.com/vim-autoformat/vim-autoformat\n-- \nJim Nasby, Data Architect, Austin TX\n\n\n\n", "msg_date": "Tue, 9 Jan 2024 15:50:14 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "On Tue, Jan 9, 2024 at 4:42 PM Daniel Gustafsson <[email protected]> wrote:\n> > On 9 Jan 2024, at 22:20, Tom Lane <[email protected]> wrote:\n> > In short, I don't think that putting this into CI is the answer.\n> > Putting it into committers' standard workflow is a better idea,\n> > if we can get all the committers on board with that.\n>\n> +many\n\nI think we need to do that, too, but the question is how. The best\nsuggestion I've heard so far was to make it part of the build, or part\nof the test suite, so that if you don't do it, some part of what you\nwere going to do anyway actually fails. 
That avoids making it an extra\nstep that you have to remember separately. We have an absolutely\ninsane number of things-you-must-always-remember-to-do.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Jan 2024 17:15:10 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> I think we need to do that, too, but the question is how. The best\n> suggestion I've heard so far was to make it part of the build, or part\n> of the test suite, so that if you don't do it, some part of what you\n> were going to do anyway actually fails. That avoids making it an extra\n> step that you have to remember separately. We have an absolutely\n> insane number of things-you-must-always-remember-to-do.\n\nI thought we had a consensus that there should be a way to enable\nrunning it as part of check-world, or some related target. But\nnobody's written that patch yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jan 2024 17:27:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "On Wed, Jan 10, 2024 at 2:20 AM Tristan Partin <[email protected]> wrote:\n>\n> On Tue Jan 9, 2024 at 3:00 AM CST, John Naylor wrote:\n> > I don't indent during most of development, and don't intend to start.\n>\n> Could you expand on why you don't? I could understand as you're writing,\n> but I would think formatting on save, might be useful.\n\nOff the top of my head, I like to use '//' comments as quick notes to\nmyself that stand out from normal code comments, and I'm in the habit\nof putting debug print statements flush against the left margin so\nthey're really obvious. Both of these would be wiped out by pgindent.\n\n\n", "msg_date": "Wed, 10 Jan 2024 13:14:25 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "John Naylor <[email protected]> writes:\n> Off the top of my head, I like to use '//' comments as quick notes to\n> myself that stand out from normal code comments, and I'm in the habit\n> of putting debug print statements flush against the left margin so\n> they're really obvious. Both of these would be wiped out by pgindent.\n\n+1. I do both of those things, partly because pgindent would reformat\nthem so that it'd be obvious if I forgot to remove them. 
(Yes, I\nlook at the diffs pgindent wants to make...)\n\nSo that leads to the conclusion that I wouldn't want an automatic\npgindent check to happen during \"make all\" or \"make check\", because\nI want those things to succeed before I consider pgindent'ing.\nMaybe it's okay to include it as part of check-world, but I'm\nnot quite sure about that either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Jan 2024 01:25:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "On Wed, Jan 10, 2024 at 01:25:36AM -0500, Tom Lane wrote:\n> John Naylor <[email protected]> writes:\n>> Off the top of my head, I like to use '//' comments as quick notes to\n>> myself that stand out from normal code comments, and I'm in the habit\n>> of putting debug print statements flush against the left margin so\n>> they're really obvious. Both of these would be wiped out by pgindent.\n> \n> +1. I do both of those things, partly because pgindent would reformat\n> them so that it'd be obvious if I forgot to remove them. (Yes, I\n> look at the diffs pgindent wants to make...)\n\nI don't do the debug stuff on the left margin even if I force my way\nwith custom elogs when running regression tests as it is quicker than\nusing a coverage report. I also take notes while reviewing or\nimplementing things with the '//' comments, so seeing these gone after\na check run would be sad.\n\n> So that leads to the conclusion that I wouldn't want an automatic\n> pgindent check to happen during \"make all\" or \"make check\", because\n> I want those things to succeed before I consider pgindent'ing.\n> Maybe it's okay to include it as part of check-world, but I'm\n> not quite sure about that either.\n\nAnother possibility would be to hide the test behind a PG_TEST_EXTRA.\n--\nMichael", "msg_date": "Wed, 10 Jan 2024 16:10:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Wed, Jan 10, 2024 at 01:25:36AM -0500, Tom Lane wrote:\n>> So that leads to the conclusion that I wouldn't want an automatic\n>> pgindent check to happen during \"make all\" or \"make check\", because\n>> I want those things to succeed before I consider pgindent'ing.\n>> Maybe it's okay to include it as part of check-world, but I'm\n>> not quite sure about that either.\n\n> Another possibility would be to hide the test behind a PG_TEST_EXTRA.\n\nYeah. I'm not quite sure what's a good way to make this work, but\nit seems like having \"make check-world\" always invoke it would not\nbe desirable. 
Making that conditional on an environment variable\nsetting could be a better idea, perhaps?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Jan 2024 02:24:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "On Wed, Jan 10, 2024 at 12:54 PM Tom Lane <[email protected]> wrote:\n>\n> Michael Paquier <[email protected]> writes:\n> > On Wed, Jan 10, 2024 at 01:25:36AM -0500, Tom Lane wrote:\n> >> So that leads to the conclusion that I wouldn't want an automatic\n> >> pgindent check to happen during \"make all\" or \"make check\", because\n> >> I want those things to succeed before I consider pgindent'ing.\n> >> Maybe it's okay to include it as part of check-world, but I'm\n> >> not quite sure about that either.\n>\n> > Another possibility would be to hide the test behind a PG_TEST_EXTRA.\n>\n> Yeah. I'm not quite sure what's a good way to make this work, but\n> it seems like having \"make check-world\" always invoke it would not\n> be desirable. Making that conditional on an environment variable\n> setting could be a better idea, perhaps?\n\nIt's easy to miss setting the environment variable and eventually end\nup with code incompatible with pgindent committed. IMO, running\npgindent in at least one of the CI systems if not all (either as part of\ntask SanityCheck or task Linux - Debian Bullseye - Autoconf) helps\ncatch things early on in CF bot runs itself. This saves committers\ntime but at the cost of free run-time that cirrus-ci provides.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 Jan 2024 13:14:58 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "Bharath Rupireddy <[email protected]> writes:\n> On Wed, Jan 10, 2024 at 12:54 PM Tom Lane <[email protected]> wrote:\n>> Yeah. I'm not quite sure what's a good way to make this work, but\n>> it seems like having \"make check-world\" always invoke it would not\n>> be desirable. Making that conditional on an environment variable\n>> setting could be a better idea, perhaps?\n\n> It's easy to miss setting the environment variable and eventually end\n> up with code incompatible with pgindent committed.\n\nWell, we expect committers to know what they're doing. I'm not\nquite suggesting that committers add \"export I_AM_A_PG_COMMITTER=1\"\nin their ~/.profile and then have the Makefiles check that to decide\nwhat tests are invoked by \"make check-world\" ... but it doesn't\nseem like a totally untenable idea, either.\n\n> IMO, running\n> pgindent in at least one of the CI systems if not all (either as part of\n> task SanityCheck or task Linux - Debian Bullseye - Autoconf) helps\n> catch things early on in CF bot runs itself. This saves committers\n> time but at the cost of free run-time that cirrus-ci provides.\n\nBut that puts the burden of pgindent-cleanliness onto initial patch\nsubmitters, which I think is the wrong thing for reasons mentioned\nupthread. 
We want to enforce this at commit into the master repo, but\nI fear enforcing it earlier will drive novice contributors away for\nno very good reason.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Jan 2024 02:58:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "On Wed Jan 10, 2024 at 1:58 AM CST, Tom Lane wrote:\n> Bharath Rupireddy <[email protected]> writes:\n> > IMO, running the\n> > pgindent in at least one of the CI systems if not all (either as part\n> > task SyanityCheck or task Linux - Debian Bullseye - Autoconf) help\n> > catches things early on in CF bot runs itself. This saves committers\n> > time but at the cost of free run-time that cirrus-ci provides.\n>\n> But that puts the burden of pgindent-cleanliness onto initial patch\n> submitters, which I think is the wrong thing for reasons mentioned\n> upthread. We want to enforce this at commit into the master repo, but\n> I fear enforcing it earlier will drive novice contributors away for\n> no very good reason.\n\nIf we are worried about turning away novice contributors, there are much \nbigger fish to fry than worrying if we will turn them away due to \nrequiring code to be formatted a certain way. Like I said earlier, \nformatters are pretty common tools to be using these days. go fmt, deno \nfmt, rustfmt, prettier, black, clang-format, uncrustify, etc.\n\nCode formatting requirements are more likely to turn someone away from \ncontributing if code reviews are spent making numerous comments. \nLuckily, we can just say \"run this command line incantation of \npgindent,\" which in the grand scheme of things is easy compared to all \nthe other things you have to be aware of to contribute to Postgres.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 10 Jan 2024 10:39:30 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add BF member koel-like indentation checks to SanityCheck CI" }, { "msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nlike there was some CFbot test failure last time it was run [2].\nPlease have a look and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4691/\n[2] https://cirrus-ci.com/task/5033191522697216\n\nKind Regards,\nPeter Smith.\n\n\n", "msg_date": "Mon, 22 Jan 2024 14:19:51 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add code indentation check to cirrus-ci (was Re: Add BF member\n koel-like indentation checks to SanityCheck CI)" }, { "msg_contents": "\nOn 2024-01-21 Su 22:19, Peter Smith wrote:\n> 2024-01 Commitfest.\n>\n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> like there was some CFbot test failure last time it was run [2].\n> Please have a look and post an updated version if necessary.\n>\n\nI don't think there's a consensus that we want this. 
It should probably \nbe returned with feedback.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 22 Jan 2024 10:18:41 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add code indentation check to cirrus-ci (was Re: Add BF member\n koel-like indentation checks to SanityCheck CI)" }, { "msg_contents": "On Mon, Jan 22, 2024 at 8:48 PM Andrew Dunstan <[email protected]> wrote:\n>\n>\n> On 2024-01-21 Su 22:19, Peter Smith wrote:\n> > 2024-01 Commitfest.\n> >\n> > Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> > like there was some CFbot test failure last time it was run [2].\n> > Please have a look and post an updated version if necessary.\n> >\n>\n> I don't think there's a consensus that we want this. It should probably\n> be returned with feedback.\n\nI've withdrawn the CF entry as the idea is being discussed in a\nseparate thread -\nhttps://www.postgresql.org/message-id/20231019044907.ph6dw637loqg3lqk%40awork3.anarazel.de\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 24 Jan 2024 07:51:38 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add code indentation check to cirrus-ci (was Re: Add BF member\n koel-like indentation checks to SanityCheck CI)" } ]
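For context on what the gate debated in this thread actually has to do: the check koel performs, and that the make/meson-target proposals would run earlier in the workflow, reduces to asking whether pgindent would change anything in the tree. A rough sketch of such a step as a shell gate follows; --show-diff and --silent-diff are real pgindent switches, but the assumption of an already-built pg_bsd_indent on PATH and the exact paths and file names here are illustrative:

    # run from the top of the source tree; fail when reindenting is needed
    if ! src/tools/pgindent/pgindent --silent-diff . ; then
        src/tools/pgindent/pgindent --show-diff . > pgindent.diffs
        echo "pgindent differences found, see pgindent.diffs" >&2
        exit 1
    fi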
[ { "msg_contents": "Hi,\n\nAfter 96f052613f3, we have below 6 types of parameter for \npg_stat_reset_shared().\n\n \"archiver\", \"bgwriter\", \"checkpointer\", \"io\", \"recovery_prefetch\", \n\"wal\"\n\nHow about adding a new option 'all' to delete all targets above?\n\nI imagine there are cases where people want to initialize all of them at \nthe same time in addition to initializing one at a time.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n", "msg_date": "Mon, 30 Oct 2023 16:35:19 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "At Mon, 30 Oct 2023 16:35:19 +0900, torikoshia <[email protected]> wrote in \n> Hi,\n> \n> After 96f052613f3, we have below 6 types of parameter for\n> pg_stat_reset_shared().\n> \n> \"archiver\", \"bgwriter\", \"checkpointer\", \"io\", \"recovery_prefetch\",\n> \"wal\"\n> \n> How about adding a new option 'all' to delete all targets above?\n> \n> I imagine there are cases where people want to initialize all of them\n> at the same time in addition to initializing one at a time.\n\nFWIW, I fairly often wanted it, but forgot about that:p\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 30 Oct 2023 17:17:13 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Mon, Oct 30, 2023 at 1:47 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Mon, 30 Oct 2023 16:35:19 +0900, torikoshia <[email protected]> wrote in\n> > Hi,\n> >\n> > After 96f052613f3, we have below 6 types of parameter for\n> > pg_stat_reset_shared().\n> >\n> > \"archiver\", \"bgwriter\", \"checkpointer\", \"io\", \"recovery_prefetch\",\n> > \"wal\"\n> >\n> > How about adding a new option 'all' to delete all targets above?\n> >\n> > I imagine there are cases where people want to initialize all of them\n> > at the same time in addition to initializing one at a time.\n>\n> FWIW, I fairly often wanted it, but forgot about that:p\n\nIsn't calling pg_stat_reset_shared() for all stats types helping you\nout? Is there any problem with it? Can you be more specific about the\nuse-case?\n\nIMV, I don't see any point for adding another pseudo (rather\nnon-existent) shared stats target which might confuse users - it's\neasy to specify pg_stat_reset_shared('all'); to clear things out when\nsomeone actually doesn't want to reset all - an accidental usage of\nthe 'all' option will reset all shared memory stats.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 30 Oct 2023 14:15:53 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "At Mon, 30 Oct 2023 14:15:53 +0530, Bharath Rupireddy <[email protected]> wrote in \n> > > I imagine there are cases where people want to initialize all of them\n> > > at the same time in addition to initializing one at a time.\n> >\n> > FWIW, I fairly often wanted it, but forgot about that:p\n> \n> Isn't calling pg_stat_reset_shared() for all stats types helping you\n> out? Is there any problem with it? Can you be more specific about the\n> use-case?\n\nOh. Sorry, it's my bad. 
pg_stat_reset_shared() is sufficient.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 31 Oct 2023 11:59:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Mon, Oct 30, 2023 at 5:46 PM Bharath Rupireddy \n<[email protected]> wrote:\n\nThanks for the comments!\n\n> Isn't calling pg_stat_reset_shared() for all stats types helping you\n> out? Is there any problem with it? Can you be more specific about the\n> use-case?\n\nYes, calling pg_stat_reset_shared() for all stats types can do what I \nwanted to do.\nBut calling it with 6 different parameters seems tiresome and I thought \nit would be convenient to have a parameter to delete all cluster-wide \nstatistics at once.\n\nI may be wrong, but I imagine that it's more common to want to delete \nall of the statistics for an entire cluster rather than just a portion \nof it.\n\n\n> IMV, I don't see any point for adding another pseudo (rather\n> non-existent) shared stats target which might confuse users - it's\n> easy to specify pg_stat_reset_shared('all'); to clear things out when\n> someone actually doesn't want to reset all - an accidental usage of\n> the 'all' option will reset all shared memory stats.\n\nI once considered changing the pg_stat_reset_shared() to delete all \nstats when called without parameters like pg_stat_statements_reset(), \nbut gave it up since it can confuse users as you described.\n\nI was hoping that the need to specify 'all' would remind users that the \ntarget can be specified individually.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n", "msg_date": "Tue, 31 Oct 2023 16:26:18 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Tue, Oct 31, 2023 at 04:26:18PM +0900, torikoshia wrote:\n> Yes, calling pg_stat_reset_shared() for all stats types can do what I wanted\n> to do.\n> But calling it with 6 different parameters seems tiresome and I thought it\n> would be convenient to have a parameter to delete all cluster-wide\n> statistics at once.\n> \n> I may be wrong, but I imagine that it's more common to want to delete all of\n> the statistics for an entire cluster rather than just a portion of it.\n\nIf more flexibility is wanted in this function, could it be an option\nto consider a flavor like pg_stat_reset_shared(text[]), where it is\npossible to specify a list of shared stats types to reset? 
Perhaps\nthere are no real use cases for it, just wanted to mention it anyway\nregarding the fact that it could have benefits to refactor this code\nto use a bitwise operator for its internals with bit flags for each\ntype.\n--\nMichael", "msg_date": "Wed, 1 Nov 2023 07:54:46 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Wed, Nov 1, 2023 at 4:24 AM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Oct 31, 2023 at 04:26:18PM +0900, torikoshia wrote:\n> > Yes, calling pg_stat_reset_shared() for all stats types can do what I wanted\n> > to do.\n> > But calling it with 6 different parameters seems tiresome and I thought it\n> > would be convenient to have a parameter to delete all cluster-wide\n> > statistics at once.\n> >\n> > I may be wrong, but I imagine that it's more common to want to delete all of\n> > the statistics for an entire cluster rather than just a portion of it.\n>\n> If more flexibility is wanted in this function, could it be an option\n> to consider a flavor like pg_stat_reset_shared(text[]), where it is\n> possible to specify a list of shared stats types to reset? Perhaps\n> there are no real use cases for it, just wanted to mention it anyway\n> regarding the fact that it could have benefits to refactor this code\n> to use a bitwise operator for its internals with bit flags for each\n> type.\n\nI don't see a strong reason to introduce yet-another API when someone\ncan just call things in a loop. I could recollect a recent analogy - a\nproposal to have a way to define multiple custom wait events with a\nsingle function call instead of callers defining in a loop didn't draw\nmuch interest.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 3 Nov 2023 00:55:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Thu, 2 Nov 2023 at 20:26, Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Nov 1, 2023 at 4:24 AM Michael Paquier <[email protected]> wrote:\n> >\n> > On Tue, Oct 31, 2023 at 04:26:18PM +0900, torikoshia wrote:\n> > > Yes, calling pg_stat_reset_shared() for all stats types can do what I wanted\n> > > to do.\n> > > But calling it with 6 different parameters seems tiresome and I thought it\n> > > would be convenient to have a parameter to delete all cluster-wide\n> > > statistics at once.\n> > >\n> > > I may be wrong, but I imagine that it's more common to want to delete all of\n> > > the statistics for an entire cluster rather than just a portion of it.\n> >\n> > If more flexibility is wanted in this function, could it be an option\n> > to consider a flavor like pg_stat_reset_shared(text[]), where it is\n> > possible to specify a list of shared stats types to reset? Perhaps\n> > there are no real use cases for it, just wanted to mention it anyway\n> > regarding the fact that it could have benefits to refactor this code\n> > to use a bitwise operator for its internals with bit flags for each\n> > type.\n>\n> I don't see a strong reason to introduce yet-another API when someone\n> can just call things in a loop. 
I could recollect a recent analogy - a\n> proposal to have a way to define multiple custom wait events with a\n> single function call instead of callers defining in a loop didn't draw\n> much interest.\n\nKnowing that your metrics have a shared starting point can be quite\nvaluable, as it allows you to do some math that would otherwise be\nmuch less accurate when working with stats over a short amount of\ntime. I've not used these stats systems much myself, but skew between\nmetrics caused by different reset points can be difficult to detect\nand debug, so I think an atomic call to reset all these stats could be\nworth implementing.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 2 Nov 2023 21:17:09 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "Hi,\n\nOn 2023-11-03 00:55:00 +0530, Bharath Rupireddy wrote:\n> On Wed, Nov 1, 2023 at 4:24 AM Michael Paquier <[email protected]> wrote:\n> >\n> > On Tue, Oct 31, 2023 at 04:26:18PM +0900, torikoshia wrote:\n> > > Yes, calling pg_stat_reset_shared() for all stats types can do what I wanted\n> > > to do.\n> > > But calling it with 6 different parameters seems tiresome and I thought it\n> > > would be convenient to have a parameter to delete all cluster-wide\n> > > statistics at once.\n> > > I may be wrong, but I imagine that it's more common to want to delete all of\n> > > the statistics for an entire cluster rather than just a portion of it.\n\nYes, agreed. However, I'd suggest adding pg_stat_reset_shared(), instead of\npg_stat_reset_shared('all') for this purpose.\n\n\n> > If more flexibility is wanted in this function, could it be an option\n> > to consider a flavor like pg_stat_reset_shared(text[]), where it is\n> > possible to specify a list of shared stats types to reset? Perhaps\n> > there are no real use cases for it, just wanted to mention it anyway\n> > regarding the fact that it could have benefits to refactor this code\n> > to use a bitwise operator for its internals with bit flags for each\n> > type.\n\nI don't think there is much point in such an API - if the caller actually\nwants to delete individual stats, it's not too hard to loop.\n\nBut most of the time resetting individual stats doesn't make sense. IMO it was\na mistake to ever add the ability. But that ship has sailed.\n\n\n> I don't see a strong reason to introduce yet-another API when someone\n> can just call things in a loop.\n\nI don't agree at all. That requires callers to know the set of possible values\nthat stats need to be reset for - which has grown over time. But nearly all\nthe time the right thing to do is to reset *all* shared stats, not just some.\n\n> I could recollect a recent analogy - a\n> proposal to have a way to define multiple custom wait events with a\n> single function call instead of callers defining in a loop didn't draw\n> much interest.\n\nThat's not analogous - in your example the caller by definition knows the set\nof wait events it wants to create. Introducing a batch API wouldn't change\nthat. 
But in case of resetting all stats the caller does *not* inherently\nknow the set of stats types.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 3 Nov 2023 18:49:05 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "Thanks all for the comments!\n\nOn Fri, Nov 3, 2023 at 5:17 AM Matthias van de Meent \n<[email protected]> wrote:\n> Knowing that your metrics have a shared starting point can be quite\n> valuable, as it allows you to do some math that would otherwise be\n> much less accurate when working with stats over a short amount of\n> time. I've not used these stats systems much myself, but skew between\n> metrics caused by different reset points can be difficult to detect\n> and debug, so I think an atomic call to reset all these stats could be\n> worth implementing.\n\nSince each stats, except wal_prefetch was reset acquiring LWLock, \nattached PoC patch makes the call atomic by using these LWlocks.\n\nIf this is the right direction, I'll try to make wal_prefetch also take \nLWLock.\n\nOn 2023-11-04 10:49, Andres Freund wrote:\n\n> Yes, agreed. However, I'd suggest adding pg_stat_reset_shared(), \n> instead of\n> pg_stat_reset_shared('all') for this purpose.\n\nIn the attached PoC patch the shared statistics are reset by calling \npg_stat_reset_shared() not pg_stat_reset_shared('all').\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation", "msg_date": "Mon, 06 Nov 2023 15:09:28 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Sat, Nov 4, 2023 at 7:19 AM Andres Freund <[email protected]> wrote:\n>\n> On 2023-11-03 00:55:00 +0530, Bharath Rupireddy wrote:\n> > On Wed, Nov 1, 2023 at 4:24 AM Michael Paquier <[email protected]> wrote:\n> > >\n> > > On Tue, Oct 31, 2023 at 04:26:18PM +0900, torikoshia wrote:\n> > > > Yes, calling pg_stat_reset_shared() for all stats types can do what I wanted\n> > > > to do.\n> > > > But calling it with 6 different parameters seems tiresome and I thought it\n> > > > would be convenient to have a parameter to delete all cluster-wide\n> > > > statistics at once.\n> > > > I may be wrong, but I imagine that it's more common to want to delete all of\n> > > > the statistics for an entire cluster rather than just a portion of it.\n>\n> Yes, agreed. However, I'd suggest adding pg_stat_reset_shared(), instead of\n> pg_stat_reset_shared('all') for this purpose.\n\nAn overloaded function seems a better choice than another target\n'all'. I'm all +1 for it as others seem to concur with the idea of\nhaving something to reset all shared stats.\n\n> > > If more flexibility is wanted in this function, could it be an option\n> > > to consider a flavor like pg_stat_reset_shared(text[]), where it is\n> > > possible to specify a list of shared stats types to reset? 
Perhaps\n> > > there are no real use cases for it, just wanted to mention it anyway\n> > > regarding the fact that it could have benefits to refactor this code\n> > > to use a bitwise operator for its internals with bit flags for each\n> > > type.\n>\n> I don't think there is much point in such an API - if the caller actually\n> wants to delete individual stats, it's not too hard to loop.\n\n-1 for text[] version.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 6 Nov 2023 11:48:52 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Mon, Nov 6, 2023 at 11:39 AM torikoshia <[email protected]> wrote:\n>\n> Thanks all for the comments!\n>\n> On Fri, Nov 3, 2023 at 5:17 AM Matthias van de Meent\n> <[email protected]> wrote:\n> > Knowing that your metrics have a shared starting point can be quite\n> > valuable, as it allows you to do some math that would otherwise be\n> > much less accurate when working with stats over a short amount of\n> > time. I've not used these stats systems much myself, but skew between\n> > metrics caused by different reset points can be difficult to detect\n> > and debug, so I think an atomic call to reset all these stats could be\n> > worth implementing.\n>\n> Since each stats, except wal_prefetch was reset acquiring LWLock,\n> attached PoC patch makes the call atomic by using these LWlocks.\n>\n> If this is the right direction, I'll try to make wal_prefetch also take\n> LWLock.\n\n+ // Acquire LWLocks\n+ LWLock *locks[] = {&stats_archiver->lock, &stats_bgwriter->lock,\n+ &stats_checkpointer->lock, &stats_wal->lock};\n+\n+ for (int i = 0; i < STATS_SHARED_NUM_LWLOCK; i++)\n+ LWLockAcquire(locks[i], LW_EXCLUSIVE);\n+\n+ for (int i = 0; i < BACKEND_NUM_TYPES; i++)\n+ {\n+ LWLock *bktype_lock = &stats_io->locks[i];\n+ LWLockAcquire(bktype_lock, LW_EXCLUSIVE);\n+ }\n\nWell, that's a total of ~17 LWLocks this new function takes to make\nthe stats reset atomic. I'm not sure if this atomicity is worth the\neffort which can easily be misused - what if someone runs something\nlike SELECT pg_stat_reset_shared() FROM generate_series(1,\n100000....n); to cause heavy lock acquisition and release cycles?\n\nIMV, atomicity is not something that applies for the stats reset\noperation because stats are approximate numbers by nature after all.\nIf the pg_stat_reset_shared() resets stats for only a bunch of stats\ntypes and fails, it's the basic application programming style that\nwhen a query fails it's the application that needs to have a retry\nmechanism. FWIW, the atomicity doesn't apply today if someone wants to\nreset stats in a loop for all stats types.\n\n> On 2023-11-04 10:49, Andres Freund wrote:\n>\n> > Yes, agreed. 
However, I'd suggest adding pg_stat_reset_shared(),\n> > instead of\n> > pg_stat_reset_shared('all') for this purpose.\n>\n> In the attached PoC patch the shared statistics are reset by calling\n> pg_stat_reset_shared() not pg_stat_reset_shared('all').\n\nSome quick comments:\n\n1.\n+/*\n+pg_stat_reset_shared_all(PG_FUNCTION_ARGS)\n+{\n+ pgstat_reset_shared_all();\n+ PG_RETURN_VOID();\n+}\n\nIMO, simpler way is to move the existing code in\npg_stat_reset_shared() to a common internal function like\npgstat_reset_shared(char *target) and the pg_stat_reset_shared_all()\ncan just loop over all the stats targets.\n\n2.\n+{ oid => '8000',\n+ descr => 'statistics: reset collected statistics shared across the cluster',\n+ proname => 'pg_stat_reset_shared', provolatile => 'v', prorettype => 'void',\n+ proargtypes => '', prosrc => 'pg_stat_reset_shared_all' },\n\nWhy a new function consuming the oid? Why can't we just do the trick\nof proisstrict => 'f' and if (PG_ARGISNULL(0)) { reset all stats} else\n{reset specified stats kind} like the pg_stat_reset_slru()?\n\n3. I think the new reset all stats function must also consider\nresetting all SLRU stats, no?\n /* stats for fixed-numbered objects */\n PGSTAT_KIND_ARCHIVER,\n PGSTAT_KIND_BGWRITER,\n PGSTAT_KIND_CHECKPOINTER,\n PGSTAT_KIND_IO,\n PGSTAT_KIND_SLRU,\n PGSTAT_KIND_WAL,\n\n4. I think the new reset all stats function must also consider\nresetting recovery_prefetch.\n\n5.\n+ If no argument is specified, reset all these views at once.\n+ Note current patch is WIP and pg_stat_recovery_prefetch is not reset.\n </para>\n\nHow about \"If the argument is NULL, all counters shown in all of these\nviews are reset.\"?\n\n6. Add a test case to cover the code in stats.sql.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 6 Nov 2023 14:00:13 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "At Mon, 6 Nov 2023 14:00:13 +0530, Bharath Rupireddy <[email protected]> wrote in \r\n> On Mon, Nov 6, 2023 at 11:39 AM torikoshia <[email protected]> wrote:\r\n> > Since each stats, except wal_prefetch was reset acquiring LWLock,\r\n> > attached PoC patch makes the call atomic by using these LWlocks.\r\n> >\r\n> > If this is the right direction, I'll try to make wal_prefetch also take\r\n> > LWLock.\r\n\r\nI must say, I have reservations about this approach. The main concern\r\nis the duplication of reset code, which has been efficiently\r\nencapsulated for individual targets, into this location. This practice\r\ndegrades the maintainability and clarity of the code.\r\n\r\n> Well, that's a total of ~17 LWLocks this new function takes to make\r\n> the stats reset atomic. I'm not sure if this atomicity is worth the\r\n> effort which can easily be misused - what if someone runs something\r\n> like SELECT pg_stat_reset_shared() FROM generate_series(1,\r\n> 100000....n); to cause heavy lock acquisition and release cycles?\r\n...\r\n> IMV, atomicity is not something that applies for the stats reset\r\n> operation because stats are approximate numbers by nature after all.\r\n> If the pg_stat_reset_shared() resets stats for only a bunch of stats\r\n> types and fails, it's the basic application programming style that\r\n> when a query fails it's the application that needs to have a retry\r\n> mechanism. 
FWIW, the atomicity doesn't apply today if someone wants to\r\n> reset stats in a loop for all stats types.\r\n\r\nThe infrequent use of this feature, coupled with the fact that there\r\nis no inherent need for these counters to be reset simultaneously,\r\nleads me to think that there is little point in centralizing the\r\nlocks.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Wed, 08 Nov 2023 10:08:42 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Wed, Nov 08, 2023 at 10:08:42AM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 6 Nov 2023 14:00:13 +0530, Bharath Rupireddy <[email protected]> wrote in \n> I must say, I have reservations about this approach. The main concern\n> is the duplication of reset code, which has been efficiently\n> encapsulated for individual targets, into this location. This practice\n> degrades the maintainability and clarity of the code.\n\n+{ oid => '8000',\nThis OID pick had me smile.\n\n>> IMV, atomicity is not something that applies for the stats reset\n>> operation because stats are approximate numbers by nature after all.\n>> If the pg_stat_reset_shared() resets stats for only a bunch of stats\n>> types and fails, it's the basic application programming style that\n>> when a query fails it's the application that needs to have a retry\n>> mechanism. FWIW, the atomicity doesn't apply today if someone wants to\n>> reset stats in a loop for all stats types.\n> \n> The infrequent use of this feature, coupled with the fact that there\n> is no inherent need for these counters to be reset simultaneously,\n> leads me to think that there is little point in centralizing the\n> locks.\n\nEach stat listed with fixed_amount has meaning taken in isolation, so\nI don't see why this patch has to be that complicated. I'd expect one\ncode path that just calls a series of pgstat_reset_of_kind(). There\ncould be an argument for a new routine in pgstat.c that loops over the\npgstat_kind_infos and triggers the callbacks where .fixed_amount is \nset, but that's less transparent than the other approach. The reset\ntime should be consistent across all the calls as we rely on\nGetCurrentTimestamp().\n--\nMichael", "msg_date": "Wed, 8 Nov 2023 10:26:32 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "Hi,\n\nOn 2023-11-06 14:00:13 +0530, Bharath Rupireddy wrote:\n> On Mon, Nov 6, 2023 at 11:39 AM torikoshia <[email protected]> wrote:\n> >\n> > Thanks all for the comments!\n> >\n> > On Fri, Nov 3, 2023 at 5:17 AM Matthias van de Meent\n> > <[email protected]> wrote:\n> > > Knowing that your metrics have a shared starting point can be quite\n> > > valuable, as it allows you to do some math that would otherwise be\n> > > much less accurate when working with stats over a short amount of\n> > > time. 
I've not used these stats systems much myself, but skew between\n> > > metrics caused by different reset points can be difficult to detect\n> > > and debug, so I think an atomic call to reset all these stats could be\n> > > worth implementing.\n> >\n> > Since each stats, except wal_prefetch was reset acquiring LWLock,\n> > attached PoC patch makes the call atomic by using these LWlocks.\n> >\n> > If this is the right direction, I'll try to make wal_prefetch also take\n> > LWLock.\n> \n> + // Acquire LWLocks\n> + LWLock *locks[] = {&stats_archiver->lock, &stats_bgwriter->lock,\n> + &stats_checkpointer->lock, &stats_wal->lock};\n> +\n> + for (int i = 0; i < STATS_SHARED_NUM_LWLOCK; i++)\n> + LWLockAcquire(locks[i], LW_EXCLUSIVE);\n> +\n> + for (int i = 0; i < BACKEND_NUM_TYPES; i++)\n> + {\n> + LWLock *bktype_lock = &stats_io->locks[i];\n> + LWLockAcquire(bktype_lock, LW_EXCLUSIVE);\n> + }\n> \n> Well, that's a total of ~17 LWLocks this new function takes to make\n> the stats reset atomic. I'm not sure if this atomicity is worth the\n> effort which can easily be misused - what if someone runs something\n> like SELECT pg_stat_reset_shared() FROM generate_series(1,\n> 100000....n); to cause heavy lock acquisition and release cycles?\n\nYea, this seems like an *extremely* bad idea to me. Without careful analysis\nit could very well cause deadlocks.\n\n\n> IMV, atomicity is not something that applies for the stats reset\n> operation because stats are approximate numbers by nature after all.\n> If the pg_stat_reset_shared() resets stats for only a bunch of stats\n> types and fails, it's the basic application programming style that\n> when a query fails it's the application that needs to have a retry\n> mechanism. FWIW, the atomicity doesn't apply today if someone wants to\n> reset stats in a loop for all stats types.\n\nYea. Additionally it's not really atomic regardless of the lwlocks, due to\nvarious processes all accumulating in local counters first, and only\noccasionally updating the shared data. So even after holding all the locks at\nthe same time, the shared stats would still not actually represent a truly\natomic state.\n\n\n> 2.\n> +{ oid => '8000',\n> + descr => 'statistics: reset collected statistics shared across the cluster',\n> + proname => 'pg_stat_reset_shared', provolatile => 'v', prorettype => 'void',\n> + proargtypes => '', prosrc => 'pg_stat_reset_shared_all' },\n> \n> Why a new function consuming the oid? Why can't we just do the trick\n> of proisstrict => 'f' and if (PG_ARGISNULL(0)) { reset all stats} else\n> {reset specified stats kind} like the pg_stat_reset_slru()?\n\nIt's not like oids are a precious resource. It's a more confusing API to have\nto have to specify a NULL as an argument than not having to do so. 
If we\nreally want to avoid a separate oid, a more sensible path would be to add a\ndefault argument to pg_stat_reset_slru() (by doing a CREATE OR REPLACE in\nsystem_functions.sql).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Nov 2023 20:13:31 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "Hi,\n\nOn 2023-11-08 10:08:42 +0900, Kyotaro Horiguchi wrote:\n> At Mon, 6 Nov 2023 14:00:13 +0530, Bharath Rupireddy <[email protected]> wrote in \n> > On Mon, Nov 6, 2023 at 11:39 AM torikoshia <[email protected]> wrote:\n> > > Since each stats, except wal_prefetch was reset acquiring LWLock,\n> > > attached PoC patch makes the call atomic by using these LWlocks.\n> > >\n> > > If this is the right direction, I'll try to make wal_prefetch also take\n> > > LWLock.\n> \n> I must say, I have reservations about this approach. The main concern\n> is the duplication of reset code, which has been efficiently\n> encapsulated for individual targets, into this location. This practice\n> degrades the maintainability and clarity of the code.\n\nYes, as-is this seems to evolve the code in precisely the wrong direction. We\nwant less central awareness of different types of stats, not more. The\nproposed new code is far longer than the current pg_stat_reset(), despite\ndoing something conceptually simpler.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Nov 2023 20:18:30 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Wed, Nov 8, 2023 at 9:43 AM Andres Freund <[email protected]> wrote:\n>\n> > 2.\n> > +{ oid => '8000',\n> > + descr => 'statistics: reset collected statistics shared across the cluster',\n> > + proname => 'pg_stat_reset_shared', provolatile => 'v', prorettype => 'void',\n> > + proargtypes => '', prosrc => 'pg_stat_reset_shared_all' },\n> >\n> > Why a new function consuming the oid? Why can't we just do the trick\n> > of proisstrict => 'f' and if (PG_ARGISNULL(0)) { reset all stats} else\n> > {reset specified stats kind} like the pg_stat_reset_slru()?\n>\n> It's not like oids are a precious resource. It's a more confusing API to have\n> to have to specify a NULL as an argument than not having to do so. If we\n> really want to avoid a separate oid, a more sensible path would be to add a\n> default argument to pg_stat_reset_slru() (by doing a CREATE OR REPLACE in\n> system_functions.sql).\n\n+1. Attached the patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 8 Nov 2023 14:15:24 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Wed, 8 Nov 2023 at 05:13, Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2023-11-06 14:00:13 +0530, Bharath Rupireddy wrote:\n> > Well, that's a total of ~17 LWLocks this new function takes to make\n> > the stats reset atomic. I'm not sure if this atomicity is worth the\n> > effort which can easily be misused - what if someone runs something\n> > like SELECT pg_stat_reset_shared() FROM generate_series(1,\n> > 100000....n); to cause heavy lock acquisition and release cycles?\n>\n> Yea, this seems like an *extremely* bad idea to me. 
Without careful analysis\n> it could very well cause deadlocks.\n\nI didn't realize that it'd take 17 LWLocks to reset those stats; I\nthought it was one shared system using the same lock, or a very\nlimited set of locks. Acquiring 17 locks is quite likely not worth the\nchance of having to wait for some stats lock or another and thus\ngenerating 'bubbles' in other stats gathering pipelines.\n\n> > IMV, atomicity is not something that applies for the stats reset\n> > operation because stats are approximate numbers by nature after all.\n> > If the pg_stat_reset_shared() resets stats for only a bunch of stats\n> > types and fails, it's the basic application programming style that\n> > when a query fails it's the application that needs to have a retry\n> > mechanism. FWIW, the atomicity doesn't apply today if someone wants to\n> > reset stats in a loop for all stats types.\n>\n> Yea. Additionally it's not really atomic regardless of the lwlocks, due to\n> various processes all accumulating in local counters first, and only\n> occasionally updating the shared data. So even after holding all the locks at\n> the same time, the shared stats would still not actually represent a truly\n> atomic state.\n\nGood points that I hadn't thought much about yet. I agree that atomic\nreset isn't worth implementing in this stats system - it's too costly\nand complex to do it correctly.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 8 Nov 2023 14:17:00 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Wed, Nov 08, 2023 at 02:15:24PM +0530, Bharath Rupireddy wrote:\n> On Wed, Nov 8, 2023 at 9:43 AM Andres Freund <[email protected]> wrote:\n>> It's not like oids are a precious resource. It's a more confusing API to have\n>> to have to specify a NULL as an argument than not having to do so. If we\n>> really want to avoid a separate oid, a more sensible path would be to add a\n>> default argument to pg_stat_reset_slru() (by doing a CREATE OR REPLACE in\n>> system_functions.sql).\n> \n> +1. Attached the patch.\n>\n> -- Test that multiple SLRUs are reset when no specific SLRU provided to reset function\n> -SELECT pg_stat_reset_slru(NULL);\n> +SELECT pg_stat_reset_slru();\n\nFor the SLRU part, why not.\n\nHmm. What's the final plan for pg_stat_reset_shared(), then? An\nequivalent that calls a series of pgstat_reset_of_kind()?\n--\nMichael", "msg_date": "Thu, 9 Nov 2023 08:58:48 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On 2023-11-09 08:58, Michael Paquier wrote:\n> On Wed, Nov 08, 2023 at 02:15:24PM +0530, Bharath Rupireddy wrote:\n>> On Wed, Nov 8, 2023 at 9:43 AM Andres Freund <[email protected]> \n>> wrote:\n>>> It's not like oids are a precious resource. It's a more confusing API \n>>> to have\n>>> to have to specify a NULL as an argument than not having to do so. If \n>>> we\n>>> really want to avoid a separate oid, a more sensible path would be to \n>>> add a\n>>> default argument to pg_stat_reset_slru() (by doing a CREATE OR \n>>> REPLACE in\n>>> system_functions.sql).\n>> \n>> +1. 
Attached the patch.\n>> \n>> -- Test that multiple SLRUs are reset when no specific SLRU provided \n>> to reset function\n>> -SELECT pg_stat_reset_slru(NULL);\n>> +SELECT pg_stat_reset_slru();\n> \n> For the SLRU part, why not.\n> \n> Hmm. What's the final plan for pg_stat_reset_shared(), then? An\n> equivalent that calls a series of pgstat_reset_of_kind()?\n\nSorry for the late reply and thanks for the feedback, everyone.\n\nAs per your 1st suggestion, I think \"calls a series of \npgstat_reset_of_kind()\" would be enough.\n\nI am a little concerned that the reset time is not the same and \nthat GetCurrentTimestamp() is called multiple times, but I think it \nwould be acceptable because the function is probably not used that often \nand the reset time is not atomic in practice.\n\nI'll attach the patch.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n", "msg_date": "Thu, 09 Nov 2023 10:10:39 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Thu, Nov 09, 2023 at 10:10:39AM +0900, torikoshia wrote:\n> I am a little concerned that the reset time is not the same and that\n> GetCurrentTimestamp() is called multiple times, but I think it would be\n> acceptable because the function is probably not used that often and the\n> reset time is not atomic in practice.\n\nArf, right. I misremembered that this is just a clock_timestamp() so\nthat's not transaction-resilient. Anyway, my take is that this is not\na big deal in practice compared to the usability of the wrapper.\n--\nMichael", "msg_date": "Thu, 9 Nov 2023 10:25:18 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "Hi,\n\nOn 2023-11-09 10:25:18 +0900, Michael Paquier wrote:\n> On Thu, Nov 09, 2023 at 10:10:39AM +0900, torikoshia wrote:\n> > I am a little concerned that the reset time is not the same and that\n> > GetCurrentTimestamp() is called multiple times, but I think it would be\n> > acceptable because the function is probably not used that often and the\n> > reset time is not atomic in practice.\n> \n> Arf, right. I misremembered that this is just a clock_timestamp() so\n> that's not transaction-resilient. Anyway, my take is that this is not\n> a big deal in practice compared to the usability of the wrapper.\n\nIt seems inconsequential cost-wise. Resetting stats is way more expensive than\na few timestamp determinations. Correctness wise it actually seems *better* to\nrecord the timestamps more granularly, after all, that moves them closer to\nthe time the individual kind of stats is reset.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Nov 2023 17:29:52 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Thu, Nov 09, 2023 at 10:10:39AM +0900, torikoshia wrote:\n> I'll attach the patch.\nAttached.\n\nOn Mon, Nov 6, 2023 at 5:30 PM Bharath Rupireddy\n> 3. 
I think the new reset all stats function must also consider\n> resetting all SLRU stats, no?\n> /* stats for fixed-numbered objects */\n> PGSTAT_KIND_ARCHIVER,\n> PGSTAT_KIND_BGWRITER,\n> PGSTAT_KIND_CHECKPOINTER,\n> PGSTAT_KIND_IO,\n> PGSTAT_KIND_SLRU,\n> PGSTAT_KIND_WAL,\n\nPGSTAT_KIND_SLRU cannot be reset by pg_stat_reset_shared(), so I feel \nuncomfortable to delete it altogether.\nIt might be better after pg_stat_reset_shared() has been modified to \ntake 'slru' as an argument, though.\n\n\nOn Wed, Nov 8, 2023 at 1:13 PM Andres Freund <[email protected]> wrote:\n> It's not like oids are a precious resource. It's a more confusing API \n> to have\n> to have to specify a NULL as an argument than not having to do so. If \n> we\n> really want to avoid a separate oid, a more sensible path would be to \n> add a\n> default argument to pg_stat_reset_slru() (by doing a CREATE OR REPLACE \n> in\n> system_functions.sql).\n\nCurrently proisstrict is true and pg_stat_reset_shared() returns null \nwithout doing any work.\nI thought it would be better to reset statistics even when null is \nspecified so that users are not confused with the behavior of \npg_stat_reset_slru().\nAttached patch added pg_stat_reset_shared() in system_functions.sql \nmainly for this reason.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation", "msg_date": "Thu, 09 Nov 2023 13:50:34 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Thu, Nov 09, 2023 at 01:50:34PM +0900, torikoshia wrote:\n> PGSTAT_KIND_SLRU cannot be reset by pg_stat_reset_shared(), so I feel\n> uncomfortable to delete it altogether.\n> It might be better after pg_stat_reset_shared() has been modified to take\n> 'slru' as an argument, though.\n\nNot sure how to feel about that, TBH, but I would not include SLRUs\nhere if we have already a separate function.\n\n> I thought it would be better to reset statistics even when null is specified\n> so that users are not confused with the behavior of pg_stat_reset_slru().\n> Attached patch added pg_stat_reset_shared() in system_functions.sql mainly\n> for this reason.\n\nI'm OK with doing what your patch does, aka do the work if the value\nis NULL or if there's no argument given.\n\n- Resets some cluster-wide statistics counters to zero, depending on the\n+ Resets cluster-wide statistics counters to zero, depending on the \n\nThis does not need to change, aka SLRUs are for example still global\nand not included here.\n\n+ If the argument is NULL or not specified, all counters shown in all\n+ of these views are reset.\n\nMissing a <literal> markup around NULL. I know, we're not consistent\nabout that, either, but if we are tweaking the area let's be right at\nleast. 
Perhaps \"all the counters from the views listed above are\nreset\"?\n--\nMichael", "msg_date": "Thu, 9 Nov 2023 16:28:08 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On 2023-11-09 16:28, Michael Paquier wrote:\nThanks for your review.\nAttached v2 patch.\n\n> On Thu, Nov 09, 2023 at 01:50:34PM +0900, torikoshia wrote:\n>> PGSTAT_KIND_SLRU cannot be reset by pg_stat_reset_shared(), so I feel\n>> uncomfortable to delete it all together.\n>> It might be better after pg_stat_reset_shared() has been modified to \n>> take\n>> 'slru' as an argument, though.\n> \n> Not sure how to feel about that, TBH, but I would not include SLRUs\n> here if we have already a separate function.\n\nIMHO I agree with you.\n\n>> I thought it would be better to reset statistics even when null is \n>> specified\n>> so that users are not confused with the behavior of \n>> pg_stat_reset_slru().\n>> Attached patch added pg_stat_reset_shared() in system_functions.sql \n>> mainly\n>> for this reason.\n> \n> I'm OK with doing what your patch does, aka do the work if the value\n> is NULL or if there's no argument given.\n> \n> - Resets some cluster-wide statistics counters to zero, \n> depending on the\n> + Resets cluster-wide statistics counters to zero, depending on \n> the\n> \n> This does not need to change, aka SLRUs are for example still global\n> and not included here.\n> \n> + If the argument is NULL or not specified, all counters shown \n> in all\n> + of these views are reset.\n> \n> Missing a <literal> markup around NULL. I know, we're not consistent\n> about that, either, but if we are tweaking the area let's be right at\n> least. Perhaps \"all the counters from the views listed above are\n> reset\"?\n> --\n> Michael\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation", "msg_date": "Fri, 10 Nov 2023 12:33:50 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Fri, Nov 10, 2023 at 12:33:50PM +0900, torikoshia wrote:\n> On 2023-11-09 16:28, Michael Paquier wrote:\n>> Not sure how to feel about that, TBH, but I would not include SLRUs\n>> here if we have already a separate function.\n> \n> IMHO I agree with you.\n\nThe comments added could be better grammatically, but basically LGTM.\nI'll take care of that if there are no objections.\n--\nMichael", "msg_date": "Fri, 10 Nov 2023 13:15:50 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "Hi, \n\nOn November 8, 2023 11:28:08 PM PST, Michael Paquier <[email protected]> wrote:\n>On Thu, Nov 09, 2023 at 01:50:34PM +0900, torikoshia wrote:\n>> PGSTAT_KIND_SLRU cannot be reset by pg_stat_reset_shared(), so I feel\n>> uncomfortable to delete it all together.\n>> It might be better after pg_stat_reset_shared() has been modified to take\n>> 'slru' as an argument, though.\n>\n>Not sure how to feel about that, TBH, but I would not include SLRUs\n>here if we have already a separate function.\n>\n>> I thought it would be better to reset statistics even when null is specified\n>> so that users are not confused with the behavior of pg_stat_reset_slru().\n>> Attached patch added pg_stat_reset_shared() in system_functions.sql mainly\n>> for this reason.\n>\n>I'm OK with doing what your patch does, aka do the 
work if the value\n>is NULL or if there's no argument given.\n>\n>- Resets some cluster-wide statistics counters to zero, depending on the\n>+ Resets cluster-wide statistics counters to zero, depending on the \n>\n>This does not need to change, aka SLRUs are for example still global\n>and not included here.\n>\n>+ If the argument is NULL or not specified, all counters shown in all\n>+ of these views are reset.\n>\n>Missing a <literal> markup around NULL. I know, we're not consistent\n>about that, either, but if we are tweaking the area let's be right at\n>least. Perhaps \"all the counters from the views listed above are\n>reset\"?\n\nI see no reason to not include slrus. We should never have added the ability to reset them individually, particularly not without a use case - I couldn't find one skimming some discussion. And what's the point in not allowing to reset them via pg_stat_reset_shared()?\n\nGreetings,\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 09 Nov 2023 20:18:28 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On 2023-11-10 13:18, Andres Freund wrote:\n> Hi,\n> \n> On November 8, 2023 11:28:08 PM PST, Michael Paquier\n> <[email protected]> wrote:\n>> On Thu, Nov 09, 2023 at 01:50:34PM +0900, torikoshia wrote:\n>>> PGSTAT_KIND_SLRU cannot be reset by pg_stat_reset_shared(), so I feel\n>>> uncomfortable to delete it all together.\n>>> It might be better after pg_stat_reset_shared() has been modified to \n>>> take\n>>> 'slru' as an argument, though.\n>> \n>> Not sure how to feel about that, TBH, but I would not include SLRUs\n>> here if we have already a separate function.\n>> \n>>> I thought it would be better to reset statistics even when null is \n>>> specified\n>>> so that users are not confused with the behavior of \n>>> pg_stat_reset_slru().\n>>> Attached patch added pg_stat_reset_shared() in system_functions.sql \n>>> mainly\n>>> for this reason.\n>> \n>> I'm OK with doing what your patch does, aka do the work if the value\n>> is NULL or if there's no argument given.\n>> \n>> - Resets some cluster-wide statistics counters to zero, \n>> depending on the\n>> + Resets cluster-wide statistics counters to zero, depending on \n>> the\n>> \n>> This does not need to change, aka SLRUs are for example still global\n>> and not included here.\n>> \n>> + If the argument is NULL or not specified, all counters shown \n>> in all\n>> + of these views are reset.\n>> \n>> Missing a <literal> markup around NULL. I know, we're not consistent\n>> about that, either, but if we are tweaking the area let's be right at\n>> least. Perhaps \"all the counters from the views listed above are\n>> reset\"?\n> \n> I see no reason to not include slrus. We should never have added the\n> ability to reset them individually, particularly not without a use\n> case - I couldn't find one skimming some discussion. 
And what's the\n> point in not allowing to reset them via pg_stat_reset_shared()?\n\nWhen including SLRUs, do you think it's better to add 'slrus' argument \nwhich enables pg_stat_reset_shared() to reset all SLRUs?\n\nAs described above, since SLRUs cannot be reset by \npg_stat_reset_shared(), I feel a bit uncomfortable to delete it \naltogether.\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n", "msg_date": "Fri, 10 Nov 2023 20:32:34 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Fri, Nov 10, 2023 at 01:15:50PM +0900, Michael Paquier wrote:\n> The comments added could be better grammatically, but basically LGTM.\n> I'll take care of that if there are no objections.\n\nThe documentation also needed a few tweaks (for DEFAULT and the\nargument name), so I have fixed the whole and adapted the new part of\nthe docs to that, with few little tweaks.\n--\nMichael", "msg_date": "Sun, 12 Nov 2023 16:46:51 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Fri, Nov 10, 2023 at 08:32:34PM +0900, torikoshia wrote:\n> On 2023-11-10 13:18, Andres Freund wrote:\n>> I see no reason to not include slrus. 
We should never have added the\n>>> ability to reset them individually, particularly not without a use\n>>> case - I couldn't find one skimming some discussion. And what's the\n>>> point in not allowing to reset them via pg_stat_reset_shared()?\n>> \n>> When including SLRUs, do you think it's better to add 'slrus' argument \n>> which\n>> enables pg_stat_reset_shared() to reset all SLRUs?\n> \n> I understand that Andres says that he'd be OK with a addition of a\n> 'slru' option in pg_stat_reset_shared(), as well as including SLRUs in\n> the resets if everything should be wiped.\n\nThanks, I'll make the patch.\n\n> 28cac71bd368 is around since 13~, so changing pg_stat_reset_slru() or\n> removing it could impact existing applications, so there's little\n> benefit in changing it at this stage. Let it be itself.\n\n+1.\n\n>> As described above, since SLRUs cannot be reset by \n>> pg_stat_reset_shared(), I\n>> feel a bit uncomfortable to delete it all together.\n> \n> That would be only effective if NULL is given to the function to reset\n> everything, which is OK IMO, because this is a shared stats.\n> --\n> Michael\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n", "msg_date": "Mon, 13 Nov 2023 13:15:14 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Mon, Nov 13, 2023 at 01:15:14PM +0900, torikoshia wrote:\n> I assume you have already taken this into account, but I think we should add\n> the same documentation to the below patch for pg_stat_reset_slru():\n> \n> https://www.postgresql.org/message-id/CALj2ACW4Fqc_m%2BOaavrOMEivZ5aBa24pVKvoXRTmuFECsNBfAg%40mail.gmail.com\n\nYep, the DEFAULT value and the argument name should be documented in\nmonitoring.sgml.\n--\nMichael", "msg_date": "Mon, 13 Nov 2023 14:55:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Mon, Nov 13, 2023 at 9:45 AM torikoshia <[email protected]> wrote:\n>\n> On 2023-11-12 16:46, Michael Paquier wrote:\n> > On Fri, Nov 10, 2023 at 01:15:50PM +0900, Michael Paquier wrote:\n> >> The comments added could be better grammatically, but basically LGTM.\n> >> I'll take care of that if there are no objections.\n> >\n> > The documentation also needed a few tweaks (for DEFAULT and the\n> > argument name), so I have fixed the whole and adapted the new part of\n> > the docs to that, with few little tweaks.\n>\n> Thanks!\n>\n> I assume you have already taken this into account, but I think we should\n> add the same documentation to the below patch for pg_stat_reset_slru():\n>\n> https://www.postgresql.org/message-id/CALj2ACW4Fqc_m%2BOaavrOMEivZ5aBa24pVKvoXRTmuFECsNBfAg%40mail.gmail.com\n\nModified the docs for pg_stat_reset_slru to match with that of\npg_stat_reset_shared. 
PSA v2 patch.\n\nI noticed that the commit 23c8c0c8 missed to add proargnames =>\n'{target}' in .dat file for pg_stat_reset_shared, is it intentional?\nNaming the argument in system_functions.sql is enough to be able to pass\nin named arguments like SELECT pg_stat_reset_shared(target := 'io');,\nbut is it needed in .dat file as well to keep it consistent?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 13 Nov 2023 14:07:21 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Mon, Nov 13, 2023 at 02:07:21PM +0530, Bharath Rupireddy wrote:\n> Modified the docs for pg_stat_reset_slru to match with that of\n> pg_stat_reset_shared. PSA v2 patch.\n\nThat feels consistent. Thanks.\n\n> I noticed that the commit 23c8c0c8 missed to add proargnames =>\n> '{target}' in .dat file for pg_stat_reset_shared, is it intentional?\n> Naming the argument in system_functions.sql is enough to be able to pass\n> in named arguments like SELECT pg_stat_reset_shared(target := 'io');,\n> but is it needed in .dat file as well to keep it consistent?\n\nI don't see a need to do that because, as you say, the functions are\nredefined for their default values, meaning that they'll also have\nargument names consistent with the docs. There are quite a few like\nthat in pg_proc.dat like pg_promote, pg_backup_start,\njson_populate_record, etc.\n--\nMichael", "msg_date": "Mon, 13 Nov 2023 19:31:41 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Mon, Nov 13, 2023 at 07:31:41PM +0900, Michael Paquier wrote:\n> On Mon, Nov 13, 2023 at 02:07:21PM +0530, Bharath Rupireddy wrote:\n>> Modified the docs for pg_stat_reset_slru to match with that of\n>> pg_stat_reset_shared. PSA v2 patch.\n> \n> That feels consistent. Thanks.\n\nAnd applied this one.\n--\nMichael", "msg_date": "Tue, 14 Nov 2023 09:55:14 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On 2023-11-13 13:15, torikoshia wrote:\n> On 2023-11-12 16:46, Michael Paquier wrote:\n>> On Fri, Nov 10, 2023 at 01:15:50PM +0900, Michael Paquier wrote:\n>>> The comments added could be better grammatically, but basically LGTM.\n>>> I'll take care of that if there are no objections.\n>> \n>> The documentation also needed a few tweaks (for DEFAULT and the\n>> argument name), so I have fixed the whole and adapted the new part of\n>> the docs to that, with few little tweaks.\n> \n> Thanks!\n> \n> I assume you have already taken this into account, but I think we\n> should add the same documentation to the below patch for\n> pg_stat_reset_slru():\n> \n> \n> https://www.postgresql.org/message-id/CALj2ACW4Fqc_m%2BOaavrOMEivZ5aBa24pVKvoXRTmuFECsNBfAg%40mail.gmail.com\n> \n> On 2023-11-12 16:54, Michael Paquier wrote:\n>> On Fri, Nov 10, 2023 at 08:32:34PM +0900, torikoshia wrote:\n>>> On 2023-11-10 13:18, Andres Freund wrote:\n>>>> I see no reason to not include slrus. 
And what's the\n>>>> point in not allowing to reset them via pg_stat_reset_shared()?\n>>> \n>>> When including SLRUs, do you think it's better to add 'slrus' \n>>> argument which\n>>> enables pg_stat_reset_shared() to reset all SLRUs?\n>> \n>> I understand that Andres says that he'd be OK with a addition of a\n>> 'slru' option in pg_stat_reset_shared(), as well as including SLRUs in\n>> the resets if everything should be wiped.\n> \n> Thanks, I'll make the patch.\n\nAttached patch.\n\nBTW currently the documentation explains all the arguments of \npg_stat_reset_shared() in one line and I feel it's a bit hard to read.\nAttached patch uses <itemizedlist>.\n\nWhat do you think?\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation", "msg_date": "Tue, 14 Nov 2023 22:02:32 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Tue, Nov 14, 2023 at 10:02:32PM +0900, torikoshia wrote:\n> Attached patch.\n\nYou have forgotten to update the errhint at the end of\npg_stat_reset_shared(), where \"slru\" needs to be listed :)\n\n> BTW currently the documentation explains all the arguments of\n> pg_stat_reset_shared() in one line and I feel it's a bit hard to read.\n> Attached patch uses <itemizedlist>.\n\nYes, this one is a good idea because each target works on a different\nsystem view so it becomes easier to understand what a target affects,\nso I've applied this bit, without the slru addition.\n--\nMichael", "msg_date": "Wed, 15 Nov 2023 09:47:03 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On 2023-11-15 09:47, Michael Paquier wrote:\n> On Tue, Nov 14, 2023 at 10:02:32PM +0900, torikoshia wrote:\n>> Attached patch.\n> \n> You have forgotten to update the errhint at the end of\n> pg_stat_reset_shared(), where \"slru\" needs to be listed :)\n\nOops, attached v2 patch.\n\n>> BTW currently the documentation explains all the arguments of\n>> pg_stat_reset_shared() in one line and I feel it's a bit hard to read.\n>> Attached patch uses <itemizedlist>.\n> \n> Yes, this one is a good idea because each target works on a different\n> system view so it becomes easier to understand what a target affects,\n> so I've applied this bit, without the slru addition.\n\nThanks!\n\n> --\n> Michael\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation", "msg_date": "Wed, 15 Nov 2023 11:58:38 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Wed, Nov 15, 2023 at 11:58:38AM +0900, torikoshia wrote:\n> On 2023-11-15 09:47, Michael Paquier wrote:\n>> You have forgotten to update the errhint at the end of\n>> pg_stat_reset_shared(), where \"slru\" needs to be listed :)\n> \n> Oops, attached v2 patch.\n\n+SELECT stats_reset > :'slru_reset_ts'::timestamptz FROM pg_stat_slru;\n\nA problem with these two queries is that they depend on the number of\nSLRUs set in the system while only returning a single 't' without the\ncache names each tuple is linked to. 
To keep things simple, you could\njust LIMIT 1 or aggregate through the whole set.\n\nOther than that, it looks OK.\n--\nMichael", "msg_date": "Wed, 15 Nov 2023 16:25:14 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On Wed, Nov 15, 2023 at 04:25:14PM +0900, Michael Paquier wrote:\n> Other than that, it looks OK.\n\nTweaked the queries of this one slightly, and applied. So I think\nthat we are now good for this thread. Thanks, all!\n--\nMichael", "msg_date": "Thu, 16 Nov 2023 16:48:39 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" }, { "msg_contents": "On 2023-11-16 16:48, Michael Paquier wrote:\n> On Wed, Nov 15, 2023 at 04:25:14PM +0900, Michael Paquier wrote:\n>> Other than that, it looks OK.\n> \n> Tweaked the queries of this one slightly, and applied. So I think\n> that we are now good for this thread. Thanks, all!\n\nThanks for the modification and apply!\n\n> --\n> Michael\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n", "msg_date": "Thu, 16 Nov 2023 17:12:26 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add new option 'all' to pg_stat_reset_shared()" } ]
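A short usage recap of the interface the thread above converges on, for readers skimming the archive. This sketch is derived only from what the messages themselves state (a no-argument or NULL call resets everything, individual targets and the named-argument form keep working, and pg_stat_reset_slru() gained the same default); the exact set of accepted target names in any given release is an assumption best checked against the documentation:

    -- Reset all cluster-wide (shared) statistics at once. The CREATE OR
    -- REPLACE in system_functions.sql supplies a DEFAULT of NULL, so both
    -- forms behave the same.
    SELECT pg_stat_reset_shared();
    SELECT pg_stat_reset_shared(NULL);

    -- Resetting a single stats kind still works, including by argument name.
    SELECT pg_stat_reset_shared('io');
    SELECT pg_stat_reset_shared(target := 'io');

    -- The SLRU function got the same convenience default.
    SELECT pg_stat_reset_slru();        -- all SLRU caches
    SELECT pg_stat_reset_slru('Xact');  -- one cache; the name must match
                                        -- pg_stat_slru.name (illustrative here)

As discussed above, the combined reset is deliberately not atomic: each stats kind records its own reset time via GetCurrentTimestamp(), so the stats_reset columns of the affected views can differ by a few microseconds after a single call, which the thread judges acceptable (Andres even argues the per-kind timestamps are more correct).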
[ { "msg_contents": "Under Meson, it is not very easy to see if TAP tests have been enabled \nor disabled, if you rely on the default auto setting. You either need \nto carefully study the meson setup output, or you notice, wait a minute, \ndidn't there use to be like 250 tests, not only 80?\n\nI think it would be better if we still registered the TAP tests in Meson \neven if the tap_tests option is disabled, but with a dummy command that \nregisters them as skipped. That way you get a more informative output like\n\nOk: 78\nExpected Fail: 0\nFail: 0\nUnexpected Pass: 0\nSkipped: 187\nTimeout: 0\n\nwhich is really a more accurate representation of what the test run \nactually accomplished than \"everything Ok\".\n\nSee attached patch for a possible implementation. (This uses perl as a \nhard build requirement. We are planning to do that anyway, but \nobviously other implementations, such as using python, would also be \npossible.)", "msg_date": "Mon, 30 Oct 2023 05:45:52 -0400", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Explicitly skip TAP tests under Meson if disabled" }, { "msg_contents": "Hi,\n\n> Under Meson, it is not very easy to see if TAP tests have been enabled\n> or disabled, if you rely on the default auto setting. You either need\n> to carefully study the meson setup output, or you notice, wait a minute,\n> didn't there use to be like 250 tests, not only 80?\n>\n> I think it would be better if we still registered the TAP tests in Meson\n> even if the tap_tests option is disabled, but with a dummy command that\n> registers them as skipped. That way you get a more informative output like\n>\n> Ok: 78\n> Expected Fail: 0\n> Fail: 0\n> Unexpected Pass: 0\n> Skipped: 187\n> Timeout: 0\n>\n> which is really a more accurate representation of what the test run\n> actually accomplished than \"everything Ok\".\n>\n> See attached patch for a possible implementation. (This uses perl as a\n> hard build requirement. We are planning to do that anyway, but\n> obviously other implementations, such as using python, would also be\n> possible.)\n\nI tested the patch and it works as intended.\n\nPersonally I like the change. It makes the output more explicit. In my\nuse cases not running TAP tests typically is not something I want. So\nI would appreciate being warned with a long list of bright yellow\n\"SKIP\" messages. If I really want to skip TAP tests these messages are\njust informative and don't bother me.\n\n+1\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 30 Oct 2023 16:47:14 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explicitly skip TAP tests under Meson if disabled" }, { "msg_contents": "Aleksander Alekseev <[email protected]> writes:\n> Personally I like the change. It makes the output more explicit. In my\n> use cases not running TAP tests typically is not something I want. So\n> I would appreciate being warned with a long list of bright yellow\n> \"SKIP\" messages. If I really want to skip TAP tests these messages are\n> just informative and don't bother me.\n\n+1 for counting such tests as \"skipped\" in the summary. -1 for\nemitting a message per skipped test. If I'm intentionally not\nrunning those tests, that would be very annoying noise, and\npotentially would obscure messages I actually need to see.\n\n(And about -100 for emitting such messages in yellow. 
Doesn't\nanybody who codes this stuff have a clue about vision problems?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Oct 2023 10:12:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explicitly skip TAP tests under Meson if disabled" }, { "msg_contents": "Hi Peter,\n\nYou may find value in this Meson PR[0] adding a skip keyword argument to \nMeson's test() function. From what I understand of the PR and your \nissue, they seem related. If you could provide a comment describing why \nthis is valuable to you, it would be good to help the Meson \nmaintainers understand the use case better.\n\nThanks!\n\n[0]: https://github.com/mesonbuild/meson/pull/12362\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 31 Oct 2023 11:03:12 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explicitly skip TAP tests under Meson if disabled" }, { "msg_contents": "On 30.10.23 10:12, Tom Lane wrote:\n> +1 for counting such tests as \"skipped\" in the summary. -1 for\n> emitting a message per skipped test. If I'm intentionally not\n> running those tests, that would be very annoying noise, and\n> potentially would obscure messages I actually need to see.\n\nIn my usage, those messages only show up in the logs, not during a \nnormal test run. This is similar to other skip messages, like \"skipped \non Windows\" or \"skipped because LDAP not enabled\" etc.\n\n\n\n", "msg_date": "Thu, 2 Nov 2023 09:52:14 -0400", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Explicitly skip TAP tests under Meson if disabled" }, { "msg_contents": "Hi,\n\nOn 2023-10-30 05:45:52 -0400, Peter Eisentraut wrote:\n> Under Meson, it is not very easy to see if TAP tests have been enabled or\n> disabled, if you rely on the default auto setting. You either need to\n> carefully study the meson setup output, or you notice, what a minute, didn't\n> there use to be like 250 tests, not only 80?\n> \n> I think it would be better if we still registered the TAP tests in Meson\n> even if the tap_tests option is disabled, but with a dummy command that\n> registers them as skipped. That way you get a more informative output like\n\nHm, ok. I've never felt I needed this, but I can see the point.\n\n\n> See attached patch for a possible implementation. (This uses perl as a hard\n> build requirement. We are planning to do that anyway, but obviously other\n> implementations, such as using python, would also be possible.)\n\nThere's already other hard dependencies on perl in the meson build (generating\nkwlist etc). We certainly error out if it's not available.\n\n\n> \n> - test(test_dir['name'] / onetap_p,\n> - python,\n> - kwargs: test_kwargs,\n> - args: testwrap_base + [\n> - '--testgroup', test_dir['name'],\n> - '--testname', onetap_p,\n> - '--', test_command,\n> - test_dir['sd'] / onetap,\n> - ],\n> - )\n> + if tap_tests_enabled\n> + test(test_dir['name'] / onetap_p,\n> + python,\n> + kwargs: test_kwargs,\n> + args: testwrap_base + [\n> + '--testgroup', test_dir['name'],\n> + '--testname', onetap_p,\n> + '--', test_command,\n> + test_dir['sd'] / onetap,\n> + ],\n> + )\n> + else\n> + test(test_dir['name'] / onetap_p,\n> + perl,\n> + args: ['-e', 'print \"1..0 # Skipped: TAP tests not enabled\"'],\n> + kwargs: test_kwargs)\n> + endif\n\nI'd just use a single test() invocation here, and add an argument to testwrap\nindicating that it should print out the skipped message. 
That way we a) don't\nneed two test() invocations, b) could still see the test name etc in the test\ninvocation.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 3 Nov 2023 17:51:05 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explicitly skip TAP tests under Meson if disabled" }, { "msg_contents": "On 04.11.23 01:51, Andres Freund wrote:\n> I'd just use a single test() invocation here, and add an argument to testwrap\n> indicating that it should print out the skipped message. That way we a) don't\n> need two test() invocations, b) could still see the test name etc in the test\n> invocation.\n\nIs testwrap only meant to be used with the tap protocol mode of meson's \ntest()? Otherwise, this skip option would produce different output \nfor different protocols.\n\n\n\n", "msg_date": "Mon, 6 Nov 2023 17:46:23 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Explicitly skip TAP tests under Meson if disabled" }, { "msg_contents": "Hi,\n\nOn 2023-11-06 17:46:23 +0100, Peter Eisentraut wrote:\n> On 04.11.23 01:51, Andres Freund wrote:\n> > I'd just use a single test() invocation here, and add an argument to testwrap\n> > indicating that it should print out the skipped message. That way we a) don't\n> > need two test() invocations, b) could still see the test name etc in the test\n> > invocation.\n> \n> Is testwrap only meant to be used with the tap protocol mode of meson's\n> test()? Otherwise, this skip option would produce different output for\n> different protocols.\n\nSince Daniel added tap support to pg_regress it's only used with tap. If we\nadd something else, we can add a format parameter?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 Nov 2023 14:03:08 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explicitly skip TAP tests under Meson if disabled" }, { "msg_contents": "On 04.11.23 01:51, Andres Freund wrote:\n> I'd just use a single test() invocation here, and add an argument to testwrap\n> indicating that it should print out the skipped message. That way we a) don't\n> need two test() invocations, b) could still see the test name etc in the test\n> invocation.\n\nHere is a patch that does it that way.", "msg_date": "Wed, 15 Nov 2023 11:02:19 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Explicitly skip TAP tests under Meson if disabled" }, { "msg_contents": "On 2023-11-15 11:02:19 +0100, Peter Eisentraut wrote:\n> On 04.11.23 01:51, Andres Freund wrote:\n> > I'd just use a single test() invocation here, and add an argument to testwrap\n> > indicating that it should print out the skipped message. That way we a) don't\n> > need two test() invocations, b) could still see the test name etc in the test\n> > invocation.\n> \n> Here is a patch that does it that way.\n\nWFM!\n\n\n", "msg_date": "Wed, 15 Nov 2023 08:50:08 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Explicitly skip TAP tests under Meson if disabled" } ]
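To make the design the thread above settles on concrete: meson.build keeps a single test() registration per TAP suite and, when the tap_tests option is disabled, passes an extra flag to the testwrap wrapper, which prints an empty TAP plan with a skip reason instead of running the test, so meson counts the whole file as skipped. The following self-contained Python sketch illustrates the wrapper side; the option name --skip and the surrounding argument handling are illustrative assumptions rather than a quote of the committed testwrap code:

    import argparse
    import sys

    parser = argparse.ArgumentParser()
    parser.add_argument('--skip', help='do not run the test, report it as skipped')
    # The real testwrap takes more options (--testgroup, --testname, the test
    # command itself, ...); only the part relevant to skipping is sketched here.
    args, remaining = parser.parse_known_args()

    if args.skip is not None:
        # An empty plan ("1..0") plus a reason is the TAP idiom for skipping an
        # entire test file, mirroring the perl one-liner in Peter's first patch.
        print(f'1..0 # Skipped: {args.skip}')
        sys.exit(0)

With such a flag in place, meson.build can register every TAP test unconditionally and merely vary the testwrap arguments on tap_tests_enabled, which is what produces the informative Skipped count in the test summary instead of the tests silently disappearing.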
[ { "msg_contents": "Hi, hackers!\n\nI have already written about the problem of InvalidPath [0] appearing. I \ninvestigated this and found an error in the add_path() function, when we \nreject a path, we free up the memory of the path, but do not delete \nvarious mentions of it (for example, in the ancestor of relation, as in \nthe example below).\n\nThus, we may face the problem of accessing the freed memory.\n\nI demonstrated this below using gdb when I call a query after running a \ntest in test/regression/sql/create_misc.sql:\n\n*Query:*\n\n:-- That ALTER TABLE should have added TOAST tables.\n\nSELECT relname, reltoastrelid <> 0 AS has_toast_table\n    FROM pg_class\n    WHERE oid::regclass IN ('a_star', 'c_star')\n    ORDER BY 1;\n\n--UPDATE b_star*\n--   SET a = text 'gazpacho'\n--   WHERE aa > 4;\n\nSELECT class, aa, a FROM a_star*;\n\n\n*gdb:\n*\n\n0x00007ff3f7325fda in epoll_wait (epfd=5, events=0x55bf9ee75c38, \nmaxevents=1, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30\n30  ../sysdeps/unix/sysv/linux/epoll_wait.c: No such file or directory.\n(gdb) b /home/alena/postgrespro_3/src/backend/optimizer/util/pathnode.c:620\nBreakpoint 1 at 0x55bf9cfe4c65: file pathnode.c, line 621.\n(gdb) c\nContinuing.\n\nBreakpoint 1, add_path (parent_rel=0x55bf9ef7f5c0, \nnew_path=0x55bf9ef7f4e0) at pathnode.c:621\n621     if (!IsA(new_path, IndexPath))\n(gdb) n\n622       pfree(new_path);\n(gdb) n\n624 }\n(gdb) p *new_path\n$1 = {type = T_Invalid, pathtype = T_Invalid, parent = \n0x7f7f7f7f7f7f7f7f, pathtarget = 0x7f7f7f7f7f7f7f7f,\n   param_info = 0x7f7f7f7f7f7f7f7f, parallel_aware = 127, parallel_safe \n= 127, parallel_workers = 2139062143,\n   rows = 1.3824172084878715e+306, startup_cost = \n1.3824172084878715e+306, total_cost = 1.3824172084878715e+306,\n   pathkeys = 0x7f7f7f7f7f7f7f7f}\n*(gdb) p new_path\n$2 = (Path *) 0x55bf9ef7f4e0*\n\n(gdb) p ((ProjectionPath *)((SortPath*)parent_rel->pathlist->elements \n[0].ptr_value)->subpath)->path->parent->cheapest_startup_path\n*$20 = (struct Path *) 0x55bf9ef7f4e0*\n\n(gdb) p *((ProjectionPath *)((SortPath*)parent_rel->pathlist->elements \n[0].ptr_value)->subpath)->path->parent->cheapest_startup_path\n$17 = {type = T_Invalid, pathtype = T_Invalid, parent = \n0x7f7f7f7f7f7f7f7f, pathtarget = 0x7f7f7f7f7f7f7f7f,\n   param_info = 0x7f7f7f7f7f7f7f7f, parallel_aware = 127, parallel_safe \n= 127, parallel_workers = 2139062143,\n   rows = 1.3824172084878715e+306, startup_cost = \n1.3824172084878715e+306, total_cost = 1.3824172084878715e+306,\n   pathkeys = 0x7f7f7f7f7f7f7f7f}\n\n(gdb) p (Path*)(((ProjectionPath \n*)((SortPath*)parent_rel->pathlist->elements \n[0].ptr_value)->subpath)->path->parent->pathlist->elements[1].ptr_value)\n*$21 = (Path *) 0x55bf9ef7f4e0*\n\n(gdb) p *(Path*)(((ProjectionPath \n*)((SortPath*)parent_rel->pathlist->elements \n[0].ptr_value)->subpath)->path->parent->pathlist->elements[1].ptr_value)\n$19 = {type = T_Invalid, pathtype = T_Invalid, parent = \n0x7f7f7f7f7f7f7f7f, pathtarget = 0x7f7f7f7f7f7f7f7f,\n   param_info = 0x7f7f7f7f7f7f7f7f, parallel_aware = 127, parallel_safe \n= 127, parallel_workers = 2139062143,\n   rows = 1.3824172084878715e+306, startup_cost = \n1.3824172084878715e+306, total_cost = 1.3824172084878715e+306,\n   pathkeys = 0x7f7f7f7f7f7f7f7f}\n(gdb)\n\n\nThe same problem may be in the add_partial_path() function.\n\nUnfortunately, I have not yet been able to find a problematic query with \nthe described case, but I have prepared a patch to fix this problem.\n\nWhat do you think?\n\n0. 
\nhttps://www.postgresql.org/message-id/flat/[email protected]\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional", "msg_date": "Mon, 30 Oct 2023 14:31:42 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": true, "msg_subject": "Not deleted mentions of the cleared path" }, { "msg_contents": "On Mon, Oct 30, 2023 at 5:01 PM Alena Rybakina\n<[email protected]> wrote:\n>\n> Hi, hackers!\n>\n> I have already written about the problem of InvalidPath [0] appearing. I investigated this and found an error in the add_path() function, when we reject a path, we free up the memory of the path, but do not delete various mentions of it (for example, in the ancestor of relation, as in the example below).\n>\n> Thus, we may face the problem of accessing the freed memory.\n>\n> I demonstrated this below using gdb when I call a query after running a test in test/regression/sql/create_misc.sql:\n>\n> Query:\n>\n> :-- That ALTER TABLE should have added TOAST tables.\n>\n> SELECT relname, reltoastrelid <> 0 AS has_toast_table\n> FROM pg_class\n> WHERE oid::regclass IN ('a_star', 'c_star')\n> ORDER BY 1;\n>\n> --UPDATE b_star*\n> -- SET a = text 'gazpacho'\n> -- WHERE aa > 4;\n>\n> SELECT class, aa, a FROM a_star*;\n>\n>\n> gdb:\n>\n> 0x00007ff3f7325fda in epoll_wait (epfd=5, events=0x55bf9ee75c38, maxevents=1, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30\n> 30 ../sysdeps/unix/sysv/linux/epoll_wait.c: No such file or directory.\n> (gdb) b /home/alena/postgrespro_3/src/backend/optimizer/util/pathnode.c:620\n> Breakpoint 1 at 0x55bf9cfe4c65: file pathnode.c, line 621.\n> (gdb) c\n> Continuing.\n>\n> Breakpoint 1, add_path (parent_rel=0x55bf9ef7f5c0, new_path=0x55bf9ef7f4e0) at pathnode.c:621\n> 621 if (!IsA(new_path, IndexPath))\n> (gdb) n\n> 622 pfree(new_path);\n> (gdb) n\n> 624 }\n> (gdb) p *new_path\n> $1 = {type = T_Invalid, pathtype = T_Invalid, parent = 0x7f7f7f7f7f7f7f7f, pathtarget = 0x7f7f7f7f7f7f7f7f,\n> param_info = 0x7f7f7f7f7f7f7f7f, parallel_aware = 127, parallel_safe = 127, parallel_workers = 2139062143,\n> rows = 1.3824172084878715e+306, startup_cost = 1.3824172084878715e+306, total_cost = 1.3824172084878715e+306,\n> pathkeys = 0x7f7f7f7f7f7f7f7f}\n> (gdb) p new_path\n> $2 = (Path *) 0x55bf9ef7f4e0\n\nAt this point the new_path has not been added to the parent_rel. We do\nnot set the cheapest* paths while paths are being added. The stack\ntrace will give you an idea where this is happening.\n\n>\n> (gdb) p ((ProjectionPath *)((SortPath*)parent_rel->pathlist->elements [0].ptr_value)->subpath)->path->parent->cheapest_startup_path\n> $20 = (struct Path *) 0x55bf9ef7f4e0\n\nThis looks familiar though. There was some nearby thread where Tom\nLane, if my memory serves well, provided a case where a path from\nlower rel was added to an upper rel without copying or changing its\nparent. This very much looks like that case.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 30 Oct 2023 20:06:13 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Not deleted mentions of the cleared path" }, { "msg_contents": "On Mon, Oct 30, 2023 at 7:31 PM Alena Rybakina <[email protected]>\nwrote:\n\n> I have already written about the problem of InvalidPath [0] appearing. 
I\n> investigated this and found an error in the add_path()\n> function, when we reject a path, we free up the memory of the path, but do not delete various\n> mentions of it (for example, in the ancestor of relation, as in the example\n> below).\n>\n\nI agree that what you observed is true - add_path() may free a path\nwhile it's still referenced from some lower rels.  For instance, when\ncreating ordered paths, we may use the input path unchanged without\ncopying if it's already well ordered, and it might be freed afterwards\nif it fails when competing in add_path().\n\nBut this doesn't seem to be a problem in practice.  We will not access\nthese references from the lower rels.\n\nI'm not sure if this is an issue that we need to fix, or we need to live\nwith.  But I do think it deserves some explanation in the comment of\nadd_path().\n\nThanks\nRichard", "msg_date": "Tue, 31 Oct 2023 11:25:44 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Not deleted mentions of the cleared path" }, { "msg_contents": "Hi! Thank you for the interest to this issue.\n\nOn 31.10.2023 06:25, Richard Guo wrote:\n\n> On Mon, Oct 30, 2023 at 7:31 PM Alena Rybakina <[email protected]> wrote:\n>\n> I have already written about the problem of InvalidPath [0]\n> appearing. I investigated this and found an error in the add_path()\n> function, when we reject a path, we free up the memory of the path,\n> but do not delete various mentions of it (for example, in the\n> ancestor of relation, as in the example below).\n>\n> I agree that what you observed is true - add_path() may free a path\n> while it's still referenced from some lower rels.  For instance, when\n> creating ordered paths, we may use the input path unchanged without\n> copying if it's already well ordered, and it might be freed afterwards\n> if it fails when competing in add_path().\n>\n> But this doesn't seem to be a problem in practice.  We will not access\n> these references from the lower rels.\n>\n> I'm not sure if this is an issue that we need to fix, or we need to live\n> with.  But I do think it deserves some explanation in the comment of\n> add_path().\n\nI agree that the code looks like an error, but without a real request, it \nis still difficult to identify it as a bug. I'll try to reproduce it. \nAnd yes, at least a comment is required here, and to be honest, I have \nalready faced this problem myself.\n\nOn 30.10.2023 17:36, Ashutosh Bapat wrote:\n> On Mon, Oct 30, 2023 at 5:01 PM Alena Rybakina\n> <[email protected]> wrote:\n>> Hi, hackers!\n>>\n>> I have already written about the problem of InvalidPath [0] appearing. I investigated this and found an error in the add_path() function, when we reject a path, we free up the memory of the path, but do not delete various mentions of it (for example, in the ancestor of relation, as in the example below).\n>>\n>> Thus, we may face the problem of accessing the freed memory.\n>>\n>> I demonstrated this below using gdb when I call a query after running a test in test/regression/sql/create_misc.sql:\n>>\n>> Query:\n>>\n>> :-- That ALTER TABLE should have added TOAST tables.\n>>\n>> SELECT relname, reltoastrelid <> 0 AS has_toast_table\n>> FROM pg_class\n>> WHERE oid::regclass IN ('a_star', 'c_star')\n>> ORDER BY 1;\n>>\n>> --UPDATE b_star*\n>> -- SET a = text 'gazpacho'\n>> -- WHERE aa > 4;\n>>\n>> SELECT class, aa, a FROM a_star*;\n>>\n>>\n>> gdb:\n>>\n>> 0x00007ff3f7325fda in epoll_wait (epfd=5, events=0x55bf9ee75c38, maxevents=1, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30\n>> 30 ../sysdeps/unix/sysv/linux/epoll_wait.c: No such file or directory.\n>> (gdb) b /home/alena/postgrespro_3/src/backend/optimizer/util/pathnode.c:620\n>> Breakpoint 1 at 0x55bf9cfe4c65: file pathnode.c, line 621.\n>> (gdb) c\n>> Continuing.\n>>\n>> Breakpoint 1, add_path (parent_rel=0x55bf9ef7f5c0, new_path=0x55bf9ef7f4e0) at pathnode.c:621\n>> 621 if (!IsA(new_path, IndexPath))\n>> (gdb) n\n>> 622 pfree(new_path);\n>> (gdb) n\n>> 624 }\n>> (gdb) p *new_path\n>> $1 = {type = T_Invalid, pathtype = T_Invalid, parent = 0x7f7f7f7f7f7f7f7f, pathtarget = 0x7f7f7f7f7f7f7f7f,\n>> param_info = 0x7f7f7f7f7f7f7f7f, parallel_aware = 127, parallel_safe = 127, parallel_workers = 2139062143,\n>> rows = 1.3824172084878715e+306, startup_cost = 1.3824172084878715e+306, total_cost = 1.3824172084878715e+306,\n>> pathkeys = 0x7f7f7f7f7f7f7f7f}\n>> (gdb) p new_path\n>> $2 = (Path *) 0x55bf9ef7f4e0\n> At this point the new_path has not been added to the parent_rel. We do\n> not set the cheapest* paths while paths are being added. The stack\n> trace will give you an idea where this is happening.\n>> (gdb) p ((ProjectionPath *)((SortPath*)parent_rel->pathlist->elements [0].ptr_value)->subpath)->path->parent->cheapest_startup_path\n>> $20 = (struct Path *) 0x55bf9ef7f4e0\n> This looks familiar though. There was some nearby thread where Tom\n> Lane, if my memory serves well, provided a case where a path from\n> lower rel was added to an upper rel without copying or changing its\n> parent. This very much looks like that case.\n>\nThank you, I think this might help me to find a query to reproduce it.", "msg_date": "Wed, 1 Nov 2023 23:09:25 +0300", "msg_from": "\"a.rybakina\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Not deleted mentions of the cleared path" } ]
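A standalone C sketch of the dangling-reference hazard discussed in the thread above, using simplified hypothetical types (this is not PostgreSQL source): a rejected "path" is freed inside add_path(), but another structure still holds a pointer to it, which is exactly why the freed Path shows up as T_Invalid in the gdb session.

#include <stdlib.h>

/* Hypothetical stand-ins for Path and RelOptInfo, reduced to the bits
 * needed to show the pattern. */
typedef struct Path { double total_cost; } Path;

typedef struct Rel
{
    Path *cheapest;            /* whatever add_path() has kept so far */
} Rel;

/* Keep the cheaper path; free the loser, mimicking the pfree() above. */
static void add_path(Rel *rel, Path *new_path)
{
    if (rel->cheapest != NULL && new_path->total_cost >= rel->cheapest->total_cost)
    {
        free(new_path);        /* rejected and freed here ... */
        return;                /* ... but callers may still hold a pointer */
    }
    free(rel->cheapest);
    rel->cheapest = new_path;
}

int main(void)
{
    Rel rel = { NULL };
    Path *a = malloc(sizeof(Path));
    a->total_cost = 10.0;
    add_path(&rel, a);

    Path *b = malloc(sizeof(Path));
    b->total_cost = 20.0;
    Path *kept_elsewhere = b;  /* e.g. a lower rel's pathlist entry */
    add_path(&rel, b);         /* b loses and is freed */

    /* kept_elsewhere now dangles; dereferencing it would be undefined
     * behaviour, which is what the T_Invalid contents above illustrate */
    (void) kept_elsewhere;
    free(rel.cheapest);
    return 0;
}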
[ { "msg_contents": "Hi hackers,\n\nWhile exploring the JIT support for tuple deforming process, I noticed that\none check for TTSOpsVirtual in slot_compile_deform is obsolete. Since\nvirtual tuples never need deforming and there's an assertion in\nllvm_compile_expr[1]. I simply replace it with an assertion in\nslot_compile_deform. Patch is attached.\n\n[1]\nhttps://github.com/postgres/postgres/blob/0c60e8ba80e03491b028204a19a9dca6d216df91/src/backend/jit/llvm/llvmjit_expr.c#L322\n\nBest Regards,\nXing", "msg_date": "Mon, 30 Oct 2023 22:58:49 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Remove obsolete check for TTSOpsVirtual from slot_compile_deform" } ]
[ { "msg_contents": "This is an offshoot of the \"CRC32C Parallel Computation Optimization on\nARM\" thread [0]. I intend for this to be a prerequisite patch set.\n\nPresently, for the SSE 4.2 and ARMv8 CRC instructions used in the CRC32C\ncode for WAL records, etc., we first check if the intrinsics are available\nwith the default compiler flags. If so, we only bother compiling the\nimplementation that uses those intrinsics. If not, we also check whether\nthe intrinsics are available with some extra CFLAGS, and if they are, we\ncompile both the implementation that uses the intrinsics as well as a\nfallback implementation that doesn't require any special instructions.\nThen, at runtime, we check what's available in the hardware and choose the\nappropriate CRC32C implementation.\n\nThe aforementioned other thread [0] aims to further optimize this code by\nusing another instruction that requires additional configure and/or runtime\nchecks. $SUBJECT has been in the back of my mind for a while, but given\nproposals to add further complexity to this code, I figured it might be a\ngood time to propose this simplification. Specifically, I think we\nshouldn't worry about trying to compile only the special instrinics\nversions, and instead always try to build both and choose the appropriate\none at runtime.\n\nAFAICT the trade-offs aren't too bad. With some simple testing, I see that\nthe runtime check occurs once at startup, so I don't anticipate any\nnoticeable performance impact. I suppose each process might need to do the\ncheck in EXEC_BACKEND builds, but even so, I suspect the difference is\nnegligible.\n\nI also see that the SSE 4.2 runtime check requires the CPUID instruction,\nso we wouldn't use the instrinsics for hardware that supports SSE 4.2 but\nnot CPUID. However, I'm not sure such hardware even exists. Wikipedia\nsays that CPUID was introduced in 1993 [1], and meson.build appears to omit\nthe CPUID check when determining which CRC32C implementation to use.\nFurthermore, meson.build alludes to problems with some of the CPUID-related\nchecks:\n\n\t# XXX: The configure.ac check for __cpuid() is broken, we don't copy that\n\t# here. To prevent problems due to two detection methods working, stop\n\t# checking after one.\n\nAre there any other reasons that we should try to avoid the runtime check\nwhen possible?\n\nI've attached two patches. 0001 adds a debug message to the SSE 4.2\nruntime check that matches the one already present for the ARMv8 check.\nThis message just notes whether the runtime check found that the special\nCRC instructions are available. 0002 is a first attempt at $SUBJECT. I've\ntested it on both x86 and ARM, and it seems to work as intended. You'll\nnotice that I'm still checking for the intrinsics with the default compiler\nflags first. 
I didn't see any strong reason to change this, and doing so\nallows us to avoid sending extra CFLAGS when possible.\n\nThoughts?\n\n[0] https://postgr.es/m/DB9PR08MB6991329A73923BF8ED4B3422F5DBA%40DB9PR08MB6991.eurprd08.prod.outlook.com\n[1] https://en.wikipedia.org/wiki/CPUID\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 30 Oct 2023 11:17:06 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "always use runtime checks for CRC-32C instructions" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> The aforementioned other thread [0] aims to further optimize this code by\n> using another instruction that requires additional configure and/or runtime\n> checks. $SUBJECT has been in the back of my mind for a while, but given\n> proposals to add further complexity to this code, I figured it might be a\n> good time to propose this simplification. Specifically, I think we\n> shouldn't worry about trying to compile only the special instrinics\n> versions, and instead always try to build both and choose the appropriate\n> one at runtime.\n\nOn the one hand, I agree that we need to keep the complexity from\ngetting out of hand. On the other hand, I wonder if this approach\nisn't optimizing for the wrong case. How many machines that PG 17\nwill ever be run on in production will lack SSE 4.2 (for Intel)\nor ARMv8 instructions (on the ARM side)? It seems like a shame\nto be burdening these instructions with a subroutine call for the\nbenefit of long-obsolete hardware versions. Maybe that overhead\nis negligible, but it doesn't sound like you tried to measure it.\n\nI'm not quite sure what to propose instead, though. I thought for\na little bit about a configure switch to select \"test first\" or\n\"pedal to the metal\". But in practice, package builders would\nprobably have to select the conservative \"test first\" option; and\nwe know that the vast majority of modern installations use prebuilt\npackages, so it's not clear that this answer would help many people.\n\nAnyway, I agree that the cost of a one-time-per-process probe should\nbe negligible; it's the per-use cost that I worry about. If you can\ndo some measurements proving that that worry is ill-founded, then\nI'm good with test-first.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Oct 2023 12:39:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: always use runtime checks for CRC-32C instructions" }, { "msg_contents": "On Mon, 2023-10-30 at 12:39 -0400, Tom Lane wrote:\n> It seems like a shame\n> to be burdening these instructions with a subroutine call for the\n> benefit of long-obsolete hardware versions.\n\nIt's already doing a call to pg_comp_crc32c_sse42() regardless, right?\n\nI assume you are concerned about the call going through a function\npointer? If so, is it possible that setting a flag and then branching\nwould be better?\n\nAlso, if it's a concern, should we also consider making an inlineable\nversion of pg_comp_crc32c_sse42()?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 30 Oct 2023 13:48:29 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: always use runtime checks for CRC-32C instructions" }, { "msg_contents": "On Mon, Oct 30, 2023 at 12:39:23PM -0400, Tom Lane wrote:\n> On the one hand, I agree that we need to keep the complexity from\n> getting out of hand. 
On the other hand, I wonder if this approach\n> isn't optimizing for the wrong case. How many machines that PG 17\n> will ever be run on in production will lack SSE 4.2 (for Intel)\n> or ARMv8 instructions (on the ARM side)?\n\nFor the CRC instructions in use today, I wouldn't be surprised if that\nnumber is pretty small, but for newer or optional instructions (like ARM's\nPMULL), I don't think we'll be so lucky. Even if we do feel comfortable\nassuming the presence of SSE 4.2, etc., we'll likely still need to add\nruntime checks for future optimizations.\n\n> It seems like a shame\n> to be burdening these instructions with a subroutine call for the\n> benefit of long-obsolete hardware versions. Maybe that overhead\n> is negligible, but it doesn't sound like you tried to measure it.\n\nWhen I went to measure this, I noticed that my relatively new x86 machine\nwith a relatively new version of gcc uses the runtime check. I then\nskimmed through a few dozen buildfarm machines and found that, of all x86\nand ARM machines that supported the specialized CRC instructions, only one\nARM machine did not use the runtime check. Of course, this is far from a\nscientific data point, but it seems to indicate that the runtime check is\nthe norm.\n\n(I still need to measure it.)\n\n> Anyway, I agree that the cost of a one-time-per-process probe should\n> be negligible; it's the per-use cost that I worry about. If you can\n> do some measurements proving that that worry is ill-founded, then\n> I'm good with test-first.\n\nWill do.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 30 Oct 2023 16:01:28 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: always use runtime checks for CRC-32C instructions" }, { "msg_contents": "On Mon, Oct 30, 2023 at 01:48:29PM -0700, Jeff Davis wrote:\n> I assume you are concerned about the call going through a function\n> pointer? If so, is it possible that setting a flag and then branching\n> would be better?\n> \n> Also, if it's a concern, should we also consider making an inlineable\n> version of pg_comp_crc32c_sse42()?\n\nI tested pg_waldump -z with 50M 65-byte records for the following\nimplementations on an ARM system:\n\n * slicing-by-8 : ~3.08s\n * proposed patches applied (runtime check) : ~2.44s\n * only CRC intrinsics implementation compiled : ~2.42s\n * forced inlining : ~2.38s\n\nAvoiding the runtime check produced a 0.8% improvement, and forced inlining\nproduced another 1.7% improvement. In comparison, even the runtime check\nimplementation produced a 20.8% improvement over the slicing-by-8 one.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 30 Oct 2023 22:36:01 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: always use runtime checks for CRC-32C instructions" }, { "msg_contents": "On Mon, Oct 30, 2023 at 10:36:01PM -0500, Nathan Bossart wrote:\n> I tested pg_waldump -z with 50M 65-byte records for the following\n> implementations on an ARM system:\n> \n> * slicing-by-8 : ~3.08s\n> * proposed patches applied (runtime check) : ~2.44s\n> * only CRC intrinsics implementation compiled : ~2.42s\n> * forced inlining : ~2.38s\n> \n> Avoiding the runtime check produced a 0.8% improvement, and forced inlining\n> produced another 1.7% improvement. 
In comparison, even the runtime check\n> implementation produced a 20.8% improvement over the slicing-by-8 one.\n\nAfter reflecting on these numbers for a bit, I think I'm still inclined to\ndo $SUBJECT. I considered the following:\n\n* While it would be nice to gain a couple of percentage points for existing\n hardware, I think we'll still end up doing runtime checks most of the\n time once we add support for newer instructions.\n\n* The performance improvements that the new instructions provide seem\n likely to outweigh these small regressions, especially for workloads with\n larger WAL records [0].\n\n* From my quick scan of a few dozen machines on the buildfarm, it looks\n like the runtime checks are already the norm, so the number of systems\n that would be subject to a regression from v16 to v17 should be pretty\n small, in theory. And this regression seems to be on the order of 1%\n based on the numbers above.\n\nDo folks think this is reasonable? Or should we instead try to squeeze\nevery last drop out of the current implementations by avoiding function\npointers, forcing inlining, etc.?\n\n[0] https://postgr.es/m/20231025014539.GA977906%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 31 Oct 2023 10:55:18 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: always use runtime checks for CRC-32C instructions" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Mon, Oct 30, 2023 at 10:36:01PM -0500, Nathan Bossart wrote:\n>> I tested pg_waldump -z with 50M 65-byte records for the following\n>> implementations on an ARM system:\n>> \n>> * slicing-by-8 : ~3.08s\n>> * proposed patches applied (runtime check) : ~2.44s\n>> * only CRC intrinsics implementation compiled : ~2.42s\n>> * forced inlining : ~2.38s\n>> \n>> Avoiding the runtime check produced a 0.8% improvement, and forced inlining\n>> produced another 1.7% improvement. In comparison, even the runtime check\n>> implementation produced a 20.8% improvement over the slicing-by-8 one.\n\nI find these numbers fairly concerning. If you can see a\ncouple-of-percent slowdown on a macroscopic benchmark like pg_waldump,\nthat implies that the percentage slowdown considering the CRC\noperation alone is much worse. So there may be other use-cases where\nwe would take a bigger relative hit.\n\n> * From my quick scan of a few dozen machines on the buildfarm, it looks\n> like the runtime checks are already the norm, so the number of systems\n> that would be subject to a regression from v16 to v17 should be pretty\n> small, in theory. And this regression seems to be on the order of 1%\n> based on the numbers above.\n\nI did a more thorough scrape of the buildfarm results. Of 161 animals\ncurrently reporting configure output on HEAD, we have\n\n 2 ARMv8 CRC instructions\n 36 ARMv8 CRC instructions with runtime check\n 2 LoongArch CRCC instructions\n 2 SSE 4.2\n 52 SSE 4.2 with runtime check\n 67 slicing-by-8\n\nWhile that'd seem to support your conclusion, the two using ARM CRC\n*without* a runtime check are my Apple M1 Mac animals (sifaka/indri);\nand I see the same selection on my laptop. So one platform where\nwe'd clearly be taking a regression is M-series Macs; that's a pretty\npopular platform. The two using SSE without a check are prion and\ntayra. I notice those are using gcc 11; so perhaps the default cflags\nhave changed to include -msse4.2 recently? I couldn't see much other\npattern though. 
(Scraping results attached in case anybody wants to\nlook.)\n\nReally this just reinforces my concern that doing a runtime check\nall the time is on the wrong side of history. I grant that we've\ngot to do that for anything where the availability of the instruction\nis really in serious question, but I'm not very convinced that that's\na majority situation on popular platforms.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 31 Oct 2023 15:16:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: always use runtime checks for CRC-32C instructions" }, { "msg_contents": "I wrote:\n> I did a more thorough scrape of the buildfarm results. Of 161 animals\n> currently reporting configure output on HEAD, we have\n\nOh ... take \"current\" with a grain of salt there, because I just noticed\nthat I typo'd my search condition so that it collected results from all\nsystems that reported since 2022-Oct, rather than in the last month as\nI'd intended. There are just 137 animals currently reporting.\n\nOf those, I broke down the architectures reporting using slicing-by-8:\n\n# select arch,count(*) from results where crc = 'slicing-by-8' group by 1 order by 1;\n arch | count \n--------------------+-------\n aarch64 | 1\n macppc | 1\n mips64eb; -mabi=64 | 1\n mips64el; -mabi=32 | 1\n ppc64 (power7) | 4\n ppc64 (power8) | 2\n ppc64le | 7\n ppc64le (power8) | 1\n ppc64le (power9) | 15\n riscv64 | 2\n s390x (z15) | 14\n sparc | 1\n(12 rows)\n\nThe one machine using slicing-by-8 where there might be a better\nalternative is arowana, which is CentOS 7 with a pretty ancient gcc\nversion. So I reject the idea that slicing-by-8 is an appropriate\nbaseline for comparisons. There isn't anybody who will see an\nimprovement over current behavior: in the population of interest,\njust about all platforms are using CRC instructions with or without\na runtime check.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Oct 2023 15:42:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: always use runtime checks for CRC-32C instructions" }, { "msg_contents": "On Tue, Oct 31, 2023 at 03:16:16PM -0400, Tom Lane wrote:\n> Really this just reinforces my concern that doing a runtime check\n> all the time is on the wrong side of history. I grant that we've\n> got to do that for anything where the availability of the instruction\n> is really in serious question, but I'm not very convinced that that's\n> a majority situation on popular platforms.\n\nOkay. With that in mind, I think the path forward for new instructions is\nas follows:\n\n* If the special CRC instructions can be used with the default compiler\n flags, we can only use newer instructions if they can also be used with\n the default compiler flags. (My M2 machine appears to add +crypto by\n default, so I bet your buildfarm animals would fall into this bucket.)\n* Otherwise, if the CRC instructions can be used with added flags (i.e.,\n the runtime check path), we can do a runtime check for the new\n instructions as well. 
(Most other buildfarm animals would fall into this\n bucket.)\n\nAny platform that can use the CRC instructions with default compiler flags\nbut not the new instructions wouldn't be able to take advantage of the\nproposed optimization, but it also wouldn't be subject to the small\nperformance regression.\n\nIf we wanted to further eliminate runtime checks for SSE 4.2 and ARMv8,\nthen I think things become a little trickier, as having a compiler that\nunderstands things like +crypto would mean that you're automatically\nsubject to the runtime check regression (assuming we proceed with the\nproposed optimization). An alternate approach could be to only use newer\ninstructions if they are available with the default compiler flags, but\ngiven the current state of the buildfarm, such optimizations might not get\nmuch uptake for a while.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 31 Oct 2023 14:53:31 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: always use runtime checks for CRC-32C instructions" }, { "msg_contents": "On Tue, Oct 31, 2023 at 03:42:33PM -0400, Tom Lane wrote:\n> The one machine using slicing-by-8 where there might be a better\n> alternative is arowana, which is CentOS 7 with a pretty ancient gcc\n> version. So I reject the idea that slicing-by-8 is an appropriate\n> baseline for comparisons. There isn't anybody who will see an\n> improvement over current behavior: in the population of interest,\n> just about all platforms are using CRC instructions with or without\n> a runtime check.\n\nI only included the slicing-by-8 benchmark to demonstrate that 1) the CRC\ncomputations are a big portion of that pg_waldump -z command and that 2)\nthe CRC instructions provide significant performance gains.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 31 Oct 2023 14:57:43 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: always use runtime checks for CRC-32C instructions" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> Okay. With that in mind, I think the path forward for new instructions is\n> as follows:\n\n> * If the special CRC instructions can be used with the default compiler\n> flags, we can only use newer instructions if they can also be used with\n> the default compiler flags. (My M2 machine appears to add +crypto by\n> default, so I bet your buildfarm animals would fall into this bucket.)\n> * Otherwise, if the CRC instructions can be used with added flags (i.e.,\n> the runtime check path), we can do a runtime check for the new\n> instructions as well. (Most other buildfarm animals would fall into this\n> bucket.)\n\nThis seems like a reasonable proposal.\n\n> Any platform that can use the CRC instructions with default compiler flags\n> but not the new instructions wouldn't be able to take advantage of the\n> proposed optimization, but it also wouldn't be subject to the small\n> performance regression.\n\nCheck. For now I think that's fine. If we get to a place where this\npolicy is really leaving a lot of performance on the table, we can\nrevisit it ... 
but that situation is hypothetical and may remain so.\n\n(It's worth noting also that a package builder can move the goalposts\nat will, since our idea of \"default flags\" is really whatever the user\nsays to use.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Oct 2023 16:12:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: always use runtime checks for CRC-32C instructions" }, { "msg_contents": "On Tue, Oct 31, 2023 at 04:12:40PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> Okay. With that in mind, I think the path forward for new instructions is\n>> as follows:\n> \n>> * If the special CRC instructions can be used with the default compiler\n>> flags, we can only use newer instructions if they can also be used with\n>> the default compiler flags. (My M2 machine appears to add +crypto by\n>> default, so I bet your buildfarm animals would fall into this bucket.)\n>> * Otherwise, if the CRC instructions can be used with added flags (i.e.,\n>> the runtime check path), we can do a runtime check for the new\n>> instructions as well. (Most other buildfarm animals would fall into this\n>> bucket.)\n> \n> This seems like a reasonable proposal.\n\nGreat. I think that leaves us with nothing left to do for this thread, so\nI'll withdraw it from the commitfest and move the discussion back to the\noriginal thread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 31 Oct 2023 15:38:17 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: always use runtime checks for CRC-32C instructions" }, { "msg_contents": "On Tue, Oct 31, 2023 at 03:38:17PM -0500, Nathan Bossart wrote:\n> On Tue, Oct 31, 2023 at 04:12:40PM -0400, Tom Lane wrote:\n>> This seems like a reasonable proposal.\n> \n> Great. I think that leaves us with nothing left to do for this thread, so\n> I'll withdraw it from the commitfest and move the discussion back to the\n> original thread.\n\n(Also, thanks for the discussion.)\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 31 Oct 2023 15:43:28 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: always use runtime checks for CRC-32C instructions" } ]
[ { "msg_contents": "Please find attached a patch to provide some basic ordering to the system\nviews pg_available_extensions and pg_available_extension_versions. It is\nsorely tempting to add ORDER BYs to many of the other views in that file,\nbut I understand that would be contentious as there are reasons for not\nadding an ORDER BY. However, in the case of pg_available_extensions, it's a\nvery, very small resultset, with an obvious default ordering, and extremely\nunlikely to be a part of a larger complex query. It's much more likely\npeople like myself are just doing a \"SELECT * FROM pg_available_extensions\"\nand then get annoyed at the random ordering.\n\nCheers,\nGreg", "msg_date": "Mon, 30 Oct 2023 16:07:09 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": true, "msg_subject": "Adding ordering to list of available extensions" } ]
[ { "msg_contents": "hi.\nerreport bug over partitioned table in pgrowlocks.\n\nBEGIN;\nCREATE TABLE fk_parted_pk (a int PRIMARY KEY) PARTITION BY LIST (a);\nSELECT * FROM pgrowlocks('fk_parted_pk');\nERROR: only heap AM is supported\n\nerror should be the following part:\nif (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\nereport(ERROR,\n(errcode(ERRCODE_WRONG_OBJECT_TYPE),\nerrmsg(\"\\\"%s\\\" is a partitioned table\",\nRelationGetRelationName(rel)),\nerrdetail(\"Partitioned tables do not contain rows.\")));\n\n\n", "msg_date": "Tue, 31 Oct 2023 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "small erreport bug over partitioned table pgrowlocks module" }, { "msg_contents": "On Tue, 31 Oct 2023 at 13:00, jian he <[email protected]> wrote:\n> BEGIN;\n> CREATE TABLE fk_parted_pk (a int PRIMARY KEY) PARTITION BY LIST (a);\n> SELECT * FROM pgrowlocks('fk_parted_pk');\n> ERROR: only heap AM is supported\n>\n> error should be the following part:\n> if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> ereport(ERROR,\n> (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> errmsg(\"\\\"%s\\\" is a partitioned table\",\n> RelationGetRelationName(rel)),\n> errdetail(\"Partitioned tables do not contain rows.\")));\n\nYeah. Seems that 4b8266415 didn't look closely enough at the other\nerror messages and mistakenly put the relam check first instead of\nlast.\n\nHere's a patch that puts the relam check last.\n\nDavid", "msg_date": "Tue, 31 Oct 2023 13:18:17 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: small erreport bug over partitioned table pgrowlocks module" }, { "msg_contents": "On Tue, 31 Oct 2023 at 13:18, David Rowley <[email protected]> wrote:\n> Here's a patch that puts the relam check last.\n\nI've pushed that patch.\n\nDavid\n\n\n", "msg_date": "Tue, 31 Oct 2023 16:45:36 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: small erreport bug over partitioned table pgrowlocks module" } ]
[ { "msg_contents": "Hi, all:\r\n\r\n\r\nwhen I execute simple sql,report ERROR:\r\n\r\n\r\n\r\n\r\npostgres=# CREATE TABLE test_v(id int,name varchar(30));\r\nCREATE TABLE\r\npostgres=# insert into test_v values(9,'abc'),(9,'def'),(9,'gh'), (9,'gh');\r\nINSERT 0 4\r\npostgres=# explain (costs off) select distinct (id,name,'D3Q84xpymM',123,'123') from test_v;\r\n&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;QUERY PLAN &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;\r\n-------------------------------------------------------------\r\n&nbsp;Unique\r\n&nbsp; &nbsp;-&gt; &nbsp;Sort\r\n&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Sort Key: (ROW(id, name, 'D3Q84xpymM', 123, '123'))\r\n&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;-&gt; &nbsp;Seq Scan on test_v\r\n(4 rows)\r\n\r\n\r\npostgres=# select distinct (id,name,'D3Q84xpymM',123,'123') from test_v;\r\nERROR: &nbsp;could not identify a comparison function for type unknown\r\n\r\n\r\n\r\n\r\nPostgreSQL &nbsp;could not identify 'D3Q84xpymM' and '123' datatype:\r\n\r\n\r\n\r\nwe can allow an UNKNOWN type to change it to TEXT,&nbsp;\r\nplease check my patch.\r\n\r\n\r\nThanks!", "msg_date": "Tue, 31 Oct 2023 11:15:54 +0800", "msg_from": "\"=?gb18030?B?z8LT6szs?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "fix 'ERROR: could not identify a comparison function for type\n unknown'" } ]
[ { "msg_contents": "Hi,\nFor some reason plannode.h has declared variable to hold RTIs as\nBitmapset * instead of Relids like other places. Here's patch to fix\nit. This is superficial change as Relids is typedefed to Bitmapset *.\nBuild succeeds for me and also make check passes.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Tue, 31 Oct 2023 11:41:45 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Relids instead of Bitmapset * in plannode.h" }, { "msg_contents": "Hello,\n\nOn 2023-Oct-31, Ashutosh Bapat wrote:\n\n> For some reason plannode.h has declared variable to hold RTIs as\n> Bitmapset * instead of Relids like other places. Here's patch to fix\n> it. This is superficial change as Relids is typedefed to Bitmapset *.\n> Build succeeds for me and also make check passes.\n\nI think the reason for having done it this way, was mostly to avoid\nincluding pathnodes.h in plannodes.h. Did you explore what the\nconsequences are? Starting here:\nhttps://doxygen.postgresql.org/plannodes_8h.html\n\nWhile looking at it, I noticed that tcopprot.h includes both plannodes.h\nand parsenodes.h, but there's no reason to include the latter (or at\nleast headerscheck doesn't complain after removing it), so I propose to\nremove it, per 0001 here. There's a couple of files that need to be\nrepaired for this change. windowfuncs.c is a no-brainer. However,\nhaving to edit bootstrap.h is a bit surprising -- I think before\ndac048f71ebb (\"Build all Flex files standalone\") this inclusion wasn't\nnecessary, because the .y file already includes parsenodes.h; but after\nthat commit, bootparse.h needs parsenodes.h to declare YYSTYPE, per\ncomments in bootscanner.l. Anyway, this seems a good change.\n\nI also noticed while looking that I messed up in commit 7103ebb7aae8\n(\"Add support for MERGE SQL command\") on this point, because I added\n#include parsenodes.h to plannodes.h. This is because MergeAction,\nwhich is in parsenodes.h, is also needed by some executor code. But the\nreal way to fix that is to define that struct in primnodes.h. 0002 does\nthat. (I'm forced to also move enum OverridingKind there, which is a\nbit annoying.)\n\n0003 here is your patch, which I include because of conflicts with my\n0002. After my 0002, plannodes.h is pretty slim, so I'd be hesitant to\ninclude pathnodes.h just to be able to change the Bitmapset * to Relids.\nBut on the other hand, it doesn't seem to have too bad an effect overall\n(if only because plannodes.h is included by rather few files), so +0.1\non doing this. I would be more at ease if we didn't have to include\nparsenodes.h in pathnodes.h, though, but that looks more difficult to\nachieve.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/", "msg_date": "Tue, 7 Nov 2023 12:06:28 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Relids instead of Bitmapset * in plannode.h" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2023-Oct-31, Ashutosh Bapat wrote:\n>> For some reason plannode.h has declared variable to hold RTIs as\n>> Bitmapset * instead of Relids like other places. Here's patch to fix\n>> it. 
This is superficial change as Relids is typedefed to Bitmapset *.\n>> Build succeeds for me and also make check passes.\n\n> I think the reason for having done it this way, was mostly to avoid\n> including pathnodes.h in plannodes.h.\n\nYes, I'm pretty sure that's exactly the reason, and I'm strongly\nagainst the initially-proposed patch. The include footprint of\npathnodes.h would be greatly expanded, for no real benefit.\nAmong other things, that fuzzes the distinction between planner\nmodules and non-planner modules.\n\n> While looking at it, I noticed that tcopprot.h includes both plannodes.h\n> and parsenodes.h, but there's no reason to include the latter (or at\n> least headerscheck doesn't complain after removing it), so I propose to\n> remove it, per 0001 here.\n\n0001 is ok, except check #include alphabetization.\n\n> I also noticed while looking that I messed up in commit 7103ebb7aae8\n> (\"Add support for MERGE SQL command\") on this point, because I added\n> #include parsenodes.h to plannodes.h. This is because MergeAction,\n> which is in parsenodes.h, is also needed by some executor code. But the\n> real way to fix that is to define that struct in primnodes.h. 0002 does\n> that. (I'm forced to also move enum OverridingKind there, which is a\n> bit annoying.)\n\nThis seems OK. It seems to me that parsenodes.h has been required\nby plannodes.h for a long time, but if we can decouple them, all\nthe better.\n\n> 0003 here is your patch, which I include because of conflicts with my\n> 0002.\n\nStill don't like it.\n\n> ... I would be more at ease if we didn't have to include\n> parsenodes.h in pathnodes.h, though, but that looks more difficult to\n> achieve.\n\nYeah, that dependency has been there a long time too. I'm not too\nfussed by dependencies on parsenodes.h, because anything involved\nwith either planning or execution will certainly be looking at\nquery trees too. But I don't want to add dependencies that tie\nplanning and execution together.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Nov 2023 10:24:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Relids instead of Bitmapset * in plannode.h" }, { "msg_contents": "On Tue, Nov 7, 2023 at 8:54 PM Tom Lane <[email protected]> wrote:\n>\n> Alvaro Herrera <[email protected]> writes:\n> > On 2023-Oct-31, Ashutosh Bapat wrote:\n> >> For some reason plannode.h has declared variable to hold RTIs as\n> >> Bitmapset * instead of Relids like other places. Here's patch to fix\n> >> it. This is superficial change as Relids is typedefed to Bitmapset *.\n> >> Build succeeds for me and also make check passes.\n>\n> > I think the reason for having done it this way, was mostly to avoid\n> > including pathnodes.h in plannodes.h.\n>\n> Yes, I'm pretty sure that's exactly the reason, and I'm strongly\n> against the initially-proposed patch. The include footprint of\n> pathnodes.h would be greatly expanded, for no real benefit.\n> Among other things, that fuzzes the distinction between planner\n> modules and non-planner modules.\n\nAs I mentioned in [1] the Bitmapset implementation is not space\nefficient to be used as Relids when there are thousands of partitions.\nI was assessing all usages of Bitmapset to find if there are other\nplaces where this is an issue. That's when I noticed this. At some\npoint in future (possibly quite near) when queries will involved\nthousands of relations (partitions or otherwise) we will need to\nimplement Relids in more space efficient way. 
Having all Relids usages\nof Bitmapset labelled as Relids will help us then. If we don't want to\nadd pathnodes.h to plannodes.h there will be more work to identify\nRelids usage. That shouldn't be a couple of days work, so it's ok.\n\nOther possibilities are\n1. Define Relids in bitmapset.h itself and use Relids everywhere\nBitmapset * is really Relids. Wherever Relids is used bitmapset.h must\nhave been included one or other other way. That's a bigger churn.\n\n2. Replace RTIs with Relids in the comments and add the following\ncomment somewhere near the #include section. \"The Relids members in\nvarious structures in this file have been declared as Bitmapset * to\navoid including pathnodes.h in this file. This include has greatly\nexpanded footprint for no real benefit.\".\n\n3. Do nothing right now. If and when we implement Relids as a separate\ndatastructure, it will get its own module. We will be able to place it\nsomewhere properly.\n\nI have no additional comments on other patches.\n\n[1] https://www.postgresql.org/message-id/CAExHW5s4EqY43oB%3Dne6B2%3D-xLgrs9ZGeTr1NXwkGFt2j-OmaQQ%40mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 9 Nov 2023 11:12:28 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Relids instead of Bitmapset * in plannode.h" }, { "msg_contents": "Ashutosh Bapat <[email protected]> writes:\n> On Tue, Nov 7, 2023 at 8:54 PM Tom Lane <[email protected]> wrote:\n>> Yes, I'm pretty sure that's exactly the reason, and I'm strongly\n>> against the initially-proposed patch. The include footprint of\n>> pathnodes.h would be greatly expanded, for no real benefit.\n\n> As I mentioned in [1] the Bitmapset implementation is not space\n> efficient to be used as Relids when there are thousands of partitions.\n\nTBH, I'd be very strongly against \"optimizing\" that case by adopting a\ndata structure that is less efficient for typical rangetable sizes.\nI continue to think that anybody who is using that many partitions\nis Doing It Wrong and has no excuse for thinking it'll be free.\nMoreover, the size of their relid sets is pretty unlikely to be\ntheir worst pain point.\n\nIn any case, that is a poor argument for weakening the separation\nbetween planner and executor. When and if somebody comes up with\na credible replacement for bitmapsets here, we can consider what\nwe want to do in terms of header-file organization --- but I do\nnot think including pathnodes.h into executor files will be it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 09 Nov 2023 01:08:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Relids instead of Bitmapset * in plannode.h" } ]
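A standalone sketch of why the thread calls the change purely cosmetic; the local definitions below are stand-ins for the real bitmapset.h/pathnodes.h declarations, but the thread itself states that Relids is typedefed to Bitmapset *, so a Relids-typed field and a Bitmapset *-typed field are the same type to the compiler. The difference is only documentation and header dependencies.

#include <stdio.h>

/* Stand-ins for the real declarations. */
typedef struct Bitmapset { int nwords; } Bitmapset;
typedef Bitmapset *Relids;

int main(void)
{
    Bitmapset bms = { 3 };
    Bitmapset *as_bitmapset = &bms;
    Relids     as_relids    = as_bitmapset;   /* identical types: no cast needed */
    printf("%d %d\n", as_bitmapset->nwords, as_relids->nwords);
    return 0;
}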
[ { "msg_contents": "Hi\n\nCurrently we do not allow TRUNCATE of a table when any Foreign Keys\npoint to that table.\n\nAt the same time we do allow one to delete all rows when\nsession_replication_role=replica\n\nThis causes all kinds of pain when trying to copy in large amounts of\ndata, especially at the start of logical replication set-up, as many\noptimisations to COPY require the table to be TRUNCATEd .\n\nThe main two are ability to FREEZE while copying and the skipping of\nWAL generation in case of wal_level=minimal, both of which can achieve\nsignificant benefits when data amounts are large.\n\nIs there any reason to not allow TRUNCATE when\nsession_replication_role=replica ?\n\nUnless there are any serious objections, I will send a patch to also\nallow TRUNCATE in this case.\n\n\nBest Regards\nHannu\n\n\n", "msg_date": "Tue, 31 Oct 2023 09:09:24 +0100", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Allowing TRUNCATE of FK target when session_replication_role=replica" }, { "msg_contents": "On Tue, Oct 31, 2023, at 5:09 AM, Hannu Krosing wrote:\n> Currently we do not allow TRUNCATE of a table when any Foreign Keys\n> point to that table.\n\nIt is allowed iif you *also* truncate all tables referencing it.\n\n> At the same time we do allow one to delete all rows when\n> session_replication_role=replica\n\nThat's true.\n\n> This causes all kinds of pain when trying to copy in large amounts of\n> data, especially at the start of logical replication set-up, as many\n> optimisations to COPY require the table to be TRUNCATEd .\n> \n> The main two are ability to FREEZE while copying and the skipping of\n> WAL generation in case of wal_level=minimal, both of which can achieve\n> significant benefits when data amounts are large.\n\nThe former is true but the latter is not. Logical replication requires\nwal_level = logical. That's also true for skipping FSM.\n\n> Is there any reason to not allow TRUNCATE when\n> session_replication_role=replica ?\n\nThat's basically the same proposal as [1]. That patch was rejected because it\nwas implemented in a different way that doesn't require the\nsession_replication_role = replica to bypass the FK checks.\n\nThat's basically the same proposal as [1]. That patch was rejected because it\nwas implemented in a different way that doesn't require the\nsession_replication_role = replica to bypass the FK checks.\n\nThere are at least 3 cases that can benefit from this feature:\n\n1) if your scenario includes an additional table only in the subscriber\nside that contains a foreign key to a replicated table then you will break your\nreplication like\n\nERROR: cannot truncate a table referenced in a foreign key constraint\nDETAIL: Table \"foo\" references \"bar\".\nHINT: Truncate table \"foo\" at the same time, or use TRUNCATE ... CASCADE.\nCONTEXT: processing remote data for replication origin \"pg_16406\" during\nmessage type \"TRUNCATE\" in transaction 12880, finished at 0/297FE08\n\nand you have to manually fix your replication. If we allow\nsession_replication_role = replica to bypass FK check for TRUNCATE commands, we\nwouldn't have an error. I'm not saying that it is a safe operation for logical\nreplication scenarios. 
Maybe it is not because table foo will contain invalid\nreferences to table bar and someone should fix it in the subscriber side.\nHowever, the current implementation already allows such orphan rows due to\nsession_replication_role behavior.\n\n2) truncate table at subscriber side during the initial copy. As you mentioned,\nthis feature should take advantage of the FREEZE and FSM optimizations. There\nwas a proposal a few years ago [2].\n\n3) resynchronize a table. Same advantages as item 2.\n\n> Unless there are any serious objections, I will send a patch to also\n> allow TRUNCATE in this case.\n> \n\nYou should start checking the previous proposal [1].\n\n\n[1] https://www.postgresql.org/message-id/ff835f71-3c6c-335e-4c7b-b9e1646cf3d7%402ndquadrant.it\n[2] https://www.postgresql.org/message-id/CF3B6672-2A43-4204-A60A-68F359218A9B%40endpoint.com\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/
", "msg_date": "Tue, 31 Oct 2023 13:55:42 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allowing TRUNCATE of FK target when\n session_replication_role=replica" }, { "msg_contents": "Thanks for the pointers.\n\nOne thing though re:\n> The former is true but the latter is not. Logical replication requires\n> wal_level = logical. That's also true for skipping FSM.\n\nwal_level=logical is only needed *at provider* side, at least when\nrunning pglogical.\n\nAlso, even for native logical replication it is possible to disconnect\nthe initial copy from CDC streaming, in which case again you can set\nwal_level=minimal on the target side.\n\nWill check the [1] and [2] and come back with more detailed proposal.\n\n---\nBest regards,\nHannu\n\n\n\n\nOn Tue, Oct 31, 2023 at 5:56 PM Euler Taveira <[email protected]> wrote:\n>\n> On Tue, Oct 31, 2023, at 5:09 AM, Hannu Krosing wrote:\n>\n> Currently we do not allow TRUNCATE of a table when any Foreign Keys\n> point to that table.\n>\n>\n> It is allowed iif you *also* truncate all tables referencing it.\n>\n> At the same time we do allow one to delete all rows when\n> session_replication_role=replica\n>\n>\n> That's true.\n>\n> This causes all kinds of pain when trying to copy in large amounts of\n> data, especially at the start of logical replication set-up, as many\n> optimisations to COPY require the table to be TRUNCATEd .\n>\n> The main two are ability to FREEZE while copying and the skipping of\n> WAL generation in case of wal_level=minimal, both of which can achieve\n> significant benefits when data amounts are large.\n>\n>\n> The former is true but the latter is not. Logical replication requires\n> wal_level = logical. That's also true for skipping FSM.\n>\n> Is there any reason to not allow TRUNCATE when\n> session_replication_role=replica ?\n>\n>\n> That's basically the same proposal as [1]. That patch was rejected because it\n> was implemented in a different way that doesn't require the\n> session_replication_role = replica to bypass the FK checks.\n>\n> There are at least 3 cases that can benefit from this feature:\n>\n> 1) if your scenario includes an additional table only in the subscriber\n> side that contains a foreign key to a replicated table then you will break your\n> replication like\n>\n> ERROR: cannot truncate a table referenced in a foreign key constraint\n> DETAIL: Table \"foo\" references \"bar\".\n> HINT: Truncate table \"foo\" at the same time, or use TRUNCATE ... CASCADE.\n> CONTEXT: processing remote data for replication origin \"pg_16406\" during\n> message type \"TRUNCATE\" in transaction 12880, finished at 0/297FE08\n>\n> and you have to manually fix your replication. If we allow\n> session_replication_role = replica to bypass FK check for TRUNCATE commands, we\n> wouldn't have an error. 
I'm not saying that it is a safe operation for logical\n> replication scenarios. Maybe it is not because table foo will contain invalid\n> references to table bar and someone should fix it in the subscriber side.\n> However, the current implementation already allows such orphan rows due to\n> session_replication_role behavior.\n>\n> 2) truncate table at subscriber side during the initial copy. As you mentioned,\n> this feature should take advantage of the FREEZE and FSM optimizations. There\n> was a proposal a few years ago [2].\n>\n> 3) resynchronize a table. Same advantages as item 2.\n>\n> Unless there are any serious objections, I will send a patch to also\n> allow TRUNCATE in this case.\n>\n>\n> You should start checking the previous proposal [1].\n>\n>\n> [1] https://www.postgresql.org/message-id/ff835f71-3c6c-335e-4c7b-b9e1646cf3d7%402ndquadrant.it\n> [2] https://www.postgresql.org/message-id/CF3B6672-2A43-4204-A60A-68F359218A9B%40endpoint.com\n>\n>\n> --\n> Euler Taveira\n> EDB https://www.enterprisedb.com/\n>\n\n\n", "msg_date": "Tue, 31 Oct 2023 19:21:06 +0100", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Allowing TRUNCATE of FK target when\n session_replication_role=replica" }, { "msg_contents": "On Tue, Oct 31, 2023, at 3:21 PM, Hannu Krosing wrote:\n> One thing though re:\n> > The former is true but the latter is not. Logical replication requires\n> > wal_level = logical. That's also true for skipping FSM.\n> \n> wal_level=logical is only needed *at provider* side, at least when\n> running pglogical.\n\nIt is not a requirement for the subscriber. However, it increases the\ncomplexity for a real scenario (in which you set up backup and sometimes\nadditional physical replicas) because key GUCs require a restart.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/", "msg_date": "Tue, 31 Oct 2023 18:10:15 -0300", "msg_from": "\"Euler Taveira\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Allowing TRUNCATE of FK target when\n session_replication_role=replica" } ]
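To ground the proposal in this thread, here is a minimal, hypothetical sketch of where such a bypass could sit. `SessionReplicationRole` and `SESSION_REPLICATION_ROLE_REPLICA` are existing PostgreSQL symbols from commands/trigger.h; `check_fk_targets()` is a stand-in name for the real FK-consistency check (heap_truncate_check_FKs in src/backend/catalog/heap.c), and the placement is an assumption about how the patch might look, not the actual implementation.

```c
#include "postgres.h"
#include "commands/trigger.h"	/* SessionReplicationRole */
#include "nodes/pg_list.h"

/* Stand-in for PostgreSQL's real FK check on TRUNCATE targets. */
extern void check_fk_targets(List *rels);

static void
truncate_fk_guard(List *rels)
{
	/*
	 * Hypothetical placement: skip the FK dependency check when replaying
	 * replicated changes, mirroring how row-level FK triggers are already
	 * suppressed when session_replication_role = replica.  Without the
	 * bypass, the check raises the "cannot truncate a table referenced in
	 * a foreign key constraint" error quoted above.
	 */
	if (SessionReplicationRole != SESSION_REPLICATION_ROLE_REPLICA)
		check_fk_targets(rels);
}
```

As Euler's item 1 points out, skipping the check can leave orphan rows in referencing tables on the subscriber, which is the same trade-off the existing replica-mode trigger suppression already makes.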
[ { "msg_contents": "Hello hackers,\n\nCommit 3f1ce97 refactored XLog record access macros, but missed in a few places. I fixed this, and patch is attached.\n\n--\nYuhang Qiu", "msg_date": "Tue, 31 Oct 2023 17:22:50 +0800", "msg_from": "=?utf-8?B?6YKx5a6H6Iiq?= <[email protected]>", "msg_from_op": true, "msg_subject": "Simplify xlogreader.c with XLogRec* macros" }, { "msg_contents": "On Tue, Oct 31, 2023 at 5:23 PM 邱宇航 <[email protected]> wrote:\n>\n> Hello hackers,\n>\n> Commit 3f1ce97 refactored XLog record access macros, but missed in a few places. I fixed this, and patch is attached.\n>\n> --\n> Yuhang Qiu\n>\n>\n>\n\n@@ -2036,8 +2035,8 @@ RestoreBlockImage(XLogReaderState *record, uint8\nblock_id, char *page)\n char *ptr;\n PGAlignedBlock tmp;\n\n- if (block_id > record->record->max_block_id ||\n- !record->record->blocks[block_id].in_use)\n+ if (block_id > XLogRecMaxBlockId(record) ||\n+ !XLogRecGetBlock(record, block_id)->in_use)\n\nI thought these can also be rewrite to:\n\nif (!XLogRecHasBlockRef(record, block_id))\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Tue, 31 Oct 2023 18:24:51 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simplify xlogreader.c with XLogRec* macros" }, { "msg_contents": "> @@ -2036,8 +2035,8 @@ RestoreBlockImage(XLogReaderState *record, uint8\n> block_id, char *page)\n> char *ptr;\n> PGAlignedBlock tmp;\n> \n> - if (block_id > record->record->max_block_id ||\n> - !record->record->blocks[block_id].in_use)\n> + if (block_id > XLogRecMaxBlockId(record) ||\n> + !XLogRecGetBlock(record, block_id)->in_use)\n> \n> I thought these can also be rewrite to:\n> \n> if (!XLogRecHasBlockRef(record, block_id))\n\nOops, I missed that. New version is attached.\n\n--\nYuhang Qiu", "msg_date": "Tue, 31 Oct 2023 18:42:21 +0800", "msg_from": "=?utf-8?B?6YKx5a6H6Iiq?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simplify xlogreader.c with XLogRec* macros" }, { "msg_contents": "On Tue, Oct 31, 2023 at 4:12 PM 邱宇航 <[email protected]> wrote:\n>\n> >\n> > I thought these can also be rewrite to:\n> >\n> > if (!XLogRecHasBlockRef(record, block_id))\n>\n> Oops, I missed that. New version is attached.\n\n+1. Indeed a reasonable change. The attached v2 patch LGTM.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 3 Nov 2023 00:01:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simplify xlogreader.c with XLogRec* macros" }, { "msg_contents": "On Fri, Nov 3, 2023 at 12:01 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Tue, Oct 31, 2023 at 4:12 PM 邱宇航 <[email protected]> wrote:\n> >\n> > >\n> > > I thought these can also be rewrite to:\n> > >\n> > > if (!XLogRecHasBlockRef(record, block_id))\n> >\n> > Oops, I missed that. New version is attached.\n>\n> +1. Indeed a reasonable change. The attached v2 patch LGTM.\n\nThis patch basically uses the macros introduced by commit 3f1ce97 [1]\nmore extensively. I don't see a CF entry added for this patch. 
Please\nadd one if not added.\n\n[1]\ncommit 3f1ce973467a0d285961bf2f99b11d06e264e2c1\nAuthor: Thomas Munro <[email protected]>\nDate: Fri Mar 18 17:45:04 2022 +1300\n\n Add circular WAL decoding buffer, take II.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 27 Nov 2023 16:50:03 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simplify xlogreader.c with XLogRec* macros" } ]
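For readers without the tree at hand, the macros applied in the thread above are thin wrappers over the decoded-record fields. Their definitions in access/xlogreader.h, as introduced by commit 3f1ce97, are approximately the following (quoted from memory of that header, so treat the exact spelling and whitespace as indicative rather than authoritative):

```c
/* Approximate definitions from src/include/access/xlogreader.h. */
#define XLogRecGetBlock(decoder, i) (&(decoder)->record->blocks[(i)])
#define XLogRecMaxBlockId(decoder) ((decoder)->record->max_block_id)
#define XLogRecHasBlockRef(decoder, block_id)			\
	(((decoder)->record->max_block_id >= (block_id)) &&	\
	 ((decoder)->record->blocks[block_id].in_use))
```

Since XLogRecHasBlockRef already combines the max_block_id bound check with the in_use test, the two-clause condition shown in the RestoreBlockImage hunk collapses into the single `if (!XLogRecHasBlockRef(record, block_id))` that Junwang suggested.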
[ { "msg_contents": "Hi hackers,\n\nThere is a failure with 't/003_logical_slots.pl' test during the\nupgrade. The failure is intermittent and observed in the Windows\nenvironment.\n\nDetails-\nTest - pg_upgrade/t/003_logical_slots.pl\nResult -\nt/003_logical_slots.pl .. 5/?\n# Failed test 'run of pg_upgrade of old cluster'\n# at t/003_logical_slots.pl line 165.\nt/003_logical_slots.pl .. 10/?\n# Failed test 'check the slot exists on new cluster'\n# at t/003_logical_slots.pl line 171.\n# got: ''\n# expected: 'regress_sub|t'\n# Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 25 just after 11.\nt/003_logical_slots.pl .. Dubious, test returned 25 (wstat 6400, 0x1900)\nFailed 2/11 subtests\n\nTest Summary Report\n-------------------\nt/003_logical_slots.pl (Wstat: 6400 (exited 25) Tests: 11 Failed: 2)\n Failed tests: 10-11\n Non-zero exit status: 25\n Parse errors: No plan found in TAP output\nFiles=1, Tests=11, 32 wallclock secs ( 0.03 usr + 0.01 sys = 0.05 CPU)\nResult: FAIL\n\nlog attached - 'regress_log_003_logical_slots'.\n\nThe failure cause is -\nno data was returned by command\n\"\"D:/Project/pg1/postgres/tmp_install/bin/pg_resetwal\" -V\"\ncheck for \"D:/Project/pg1/postgres/tmp_install/bin/pg_resetwal\"\nfailed: cannot execute\n\nFailure, exiting\n[16:24:21.144](6.275s) not ok 10 - run of pg_upgrade of old cluster\n\nIf the same command is run manually, it succeeds -\n\n>\"D:/Project/pg1/postgres/tmp_install/bin/pg_resetwal\" -V\npg_resetwal (PostgreSQL) 17devel\n\nThe same test failure (intermittent) is also seen with different\ncommands like pg_ctl and pg_dump as failure cause while retrieving\nversion -\nEx -\nno data was returned by command\n\"\"D:/Project/pg1/postgres/tmp_install/bin/pg_dump\" -V\"\ncheck for \"D:/Project/pg1/postgres/tmp_install/bin/pg_dump\" failed:\ncannot execute\n\nFailure, exiting\n[16:08:50.444](7.434s) not ok 10 - run of pg_upgrade of old cluster\n\nHas anyone come across this issue? I am not sure what is the issue here.\nAny thoughts?\n\nThanks,\nNisha Moond", "msg_date": "Tue, 31 Oct 2023 16:53:00 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": true, "msg_subject": "Intermittent failure with t/003_logical_slots.pl test on windows" }, { "msg_contents": "On Tue, Oct 31, 2023 at 4:53 PM Nisha Moond <[email protected]> wrote:\n>\n> There is a failure with 't/003_logical_slots.pl' test during the\n> upgrade. The failure is intermittent and observed in the Windows\n> environment.\n>\n\nHow did you reach the conclusion that it is only for\n't/003_logical_slots.pl'? I see that the failure is while pg_upgrade\ninternally running pg_resetwal -V command to check the version which\ndoesn't seem to be directly related to the newly added test or code.\n\n> Details-\n> Test - pg_upgrade/t/003_logical_slots.pl\n> Result -\n> t/003_logical_slots.pl .. 5/?\n> # Failed test 'run of pg_upgrade of old cluster'\n> # at t/003_logical_slots.pl line 165.\n> t/003_logical_slots.pl .. 10/?\n> # Failed test 'check the slot exists on new cluster'\n> # at t/003_logical_slots.pl line 171.\n> # got: ''\n> # expected: 'regress_sub|t'\n> # Tests were run but no plan was declared and done_testing() was not seen.\n> # Looks like your test exited with 25 just after 11.\n> t/003_logical_slots.pl .. 
Dubious, test returned 25 (wstat 6400, 0x1900)\n> Failed 2/11 subtests\n>\n> Test Summary Report\n> -------------------\n> t/003_logical_slots.pl (Wstat: 6400 (exited 25) Tests: 11 Failed: 2)\n> Failed tests: 10-11\n> Non-zero exit status: 25\n> Parse errors: No plan found in TAP output\n> Files=1, Tests=11, 32 wallclock secs ( 0.03 usr + 0.01 sys = 0.05 CPU)\n> Result: FAIL\n>\n> log attached - 'regress_log_003_logical_slots'.\n>\n> The failure cause is -\n> no data was returned by command\n> \"\"D:/Project/pg1/postgres/tmp_install/bin/pg_resetwal\" -V\"\n> check for \"D:/Project/pg1/postgres/tmp_install/bin/pg_resetwal\"\n> failed: cannot execute\n>\n> Failure, exiting\n> [16:24:21.144](6.275s) not ok 10 - run of pg_upgrade of old cluster\n>\n> If the same command is run manually, it succeeds -\n>\n\nCan you add some LOGs in pg_resetwal to find out if the command has\nperformed appropriately?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 31 Oct 2023 17:51:07 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intermittent failure with t/003_logical_slots.pl test on windows" }, { "msg_contents": "On Tue, 31 Oct 2023 at 17:51, Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Oct 31, 2023 at 4:53 PM Nisha Moond <[email protected]> wrote:\n> >\n> > There is a failure with 't/003_logical_slots.pl' test during the\n> > upgrade. The failure is intermittent and observed in the Windows\n> > environment.\n> >\n>\n> How did you reach the conclusion that it is only for\n> 't/003_logical_slots.pl'? I see that the failure is while pg_upgrade\n> internally running pg_resetwal -V command to check the version which\n> doesn't seem to be directly related to the newly added test or code.\n\nI also felt it is not related to the 003_logical_slots test; I felt\nthe problem might be because of the pipe_read_line function:\n....\npipe_read_line(char *cmd, char *line, int maxsize)\n{\nFILE *pgver;\n\nfflush(NULL);\n\nerrno = 0;\nif ((pgver = popen(cmd, \"r\")) == NULL)\n{\nperror(\"popen failure\");\nreturn NULL;\n}\n\nerrno = 0;\nif (fgets(line, maxsize, pgver) == NULL)\n...\n\nFew others are also facing this problem with similar code like in:\nhttps://stackoverflow.com/questions/15882799/fgets-returning-error-for-file-returned-by-popen\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 31 Oct 2023 18:11:48 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intermittent failure with t/003_logical_slots.pl test on windows" }, { "msg_contents": "Dear Nisha,\r\n\r\n> \r\n> The failure cause is -\r\n> no data was returned by command\r\n> \"\"D:/Project/pg1/postgres/tmp_install/bin/pg_resetwal\" -V\"\r\n> check for \"D:/Project/pg1/postgres/tmp_install/bin/pg_resetwal\"\r\n> failed: cannot execute\r\n> \r\n> Failure, exiting\r\n> [16:24:21.144](6.275s) not ok 10 - run of pg_upgrade of old cluster\r\n\r\nI thought it was not related to the feature. I suspect pg_upgrade read the\r\ncommand result before it was really executed.\r\n\r\nFirst of all, the stack trace until the system call _popen() is as follows. \r\n\r\n```\r\ncheck_exec()\r\npipe_read_line()\r\npopen()\r\npgwin32_popen()\r\n_popen() // process was forked and command would be executed\r\n```\r\n\r\nI read the MS docs, which say that _popen executes specified commands asynchronously [1].\r\n \r\n> The _popen function creates a pipe. 
It then asynchronously executes a spawned\r\n> copy of the command processor, and uses command as the command line.\r\n \r\n\r\nYour failure meant that the binary was found but its output was not found by fgets().\r\nSo I thought that the forked process had not executed the command yet at that time. Thoughts?\r\n\r\n[1]: https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/popen-wpopen?view=msvc-170\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Wed, 1 Nov 2023 00:36:41 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Intermittent failure with t/003_logical_slots.pl test on windows" }, { "msg_contents": "At Tue, 31 Oct 2023 18:11:48 +0530, vignesh C <[email protected]> wrote in \n> Few others are also facing this problem with similar code like in:\n> https://stackoverflow.com/questions/15882799/fgets-returning-error-for-file-returned-by-popen\n\nI'm inclined to believe that the pipe won't enter the EOF state until\nthe target command terminates (then the top-level cmd.exe). The target\ncommand likely terminated prematurely due to an error before printing\nany output.\n\nIf we append \"2>&1\" to the command line, we can capture the error\nmessage through the returned pipe if any. Such error messages will\ncause the subsequent code to fail with an error such as \"unexpected\nstring: 'the output'\". I'm not sure, but if this is permissible, the\nreturned error messages could potentially provide insight into the\nunderlying issue, paving the way for a potential solution.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 02 Nov 2023 15:22:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intermittent failure with t/003_logical_slots.pl test on\n windows" }, { "msg_contents": "On Thu, Nov 2, 2023 at 11:52 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Tue, 31 Oct 2023 18:11:48 +0530, vignesh C <[email protected]> wrote in\n> > Few others are also facing this problem with similar code like in:\n> > https://stackoverflow.com/questions/15882799/fgets-returning-error-for-file-returned-by-popen\n>\n> I'm inclined to believe that the pipe won't enter the EOF state until\n> the target command terminates (then the top-level cmd.exe). The target\n> command likely terminated prematurely due to an error before printing\n> any output.\n>\n> If we append \"2>&1\" to the command line, we can capture the error\n> message through the returned pipe if any. Such error messages will\n> cause the subsequent code to fail with an error such as \"unexpected\n> string: 'the output'\". I'm not sure, but if this is permissible, the\n> returned error messages could potentially provide insight into the\n> underlying issue, paving the way for a potential solution.\n>\n\nAppending '2>&1' test:\nThe command still results in NULL and ends up failing as no data is\nreturned. Which means even no error message is returned. The error log\nwith appended '2>&1' is -\n\nno data was returned by command\n\"\"D:/Project/pg1/postgres/tmp_install/bin/pg_resetwal\" -V 2>&1\"\n\ncheck for \"D:/Project/pg1/postgres/tmp_install/bin/pg_resetwal\"\nfailed: cannot execute\nFailure, exiting\n\nFurther observations:\n1. 
To make sure the forked process completes before fgets(), I tested\nwith Sleep(100) before fgets() call.\n...\n...\nif ((pgver = popen(cmd, \"r\")) == NULL)\n{\nperror(\"popen failure\");\nreturn NULL;\n}\n\nerrno = 0;\nSleep(100);\nif (fgets(line, maxsize, pgver) == NULL)\n{\nif (feof(pgver))\nfprintf(stderr, \"no data was returned by command \\\"%s\\\"\\n\", cmd);\n...\n...\n\nThis also doesn't resolve the issue, the error is still seen intermittently.\n\n2. Even though fgets() fails, the output is still getting captured in\n'line' string.\nTested with printing the 'line' in case of failure:\n...\n...\nif ((pgver = popen(cmd, \"r\")) == NULL)\n{\nperror(\"popen failure\");\nreturn NULL;\n}\n\nerrno = 0;\nif (fgets(line, maxsize, pgver) == NULL)\n{\n if (line)\n fprintf(stderr, \"cmd output - %s\\n\", line);\n\n if (feof(pgver))\n fprintf(stderr, \"no data was returned by command \\\"%s\\\"\\n\", cmd);\n…\n…\nAnd the log looks like -\ncmd output - postgres (PostgreSQL) 17devel\nno data was returned by command\n\"\"D:/Project/pg1/postgres/tmp_install/bin/pg_controldata\" -V\"\n\ncheck for \"D:/Project/pg1/postgres/tmp_install/bin/pg_controldata\"\nfailed: cannot execute\nFailure, exiting\n\nAttached test result log for the same - \"regress_log_003_logical_slots\".\n\nThanks,\nNisha Moond", "msg_date": "Fri, 3 Nov 2023 17:02:44 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Intermittent failure with t/003_logical_slots.pl test on windows" }, { "msg_contents": "On Fri, Nov 3, 2023 at 5:02 PM Nisha Moond <[email protected]> wrote:\n>\n> On Thu, Nov 2, 2023 at 11:52 AM Kyotaro Horiguchi\n> <[email protected]> wrote:\n> >\n> > At Tue, 31 Oct 2023 18:11:48 +0530, vignesh C <[email protected]> wrote in\n> > > Few others are also facing this problem with similar code like in:\n> > > https://stackoverflow.com/questions/15882799/fgets-returning-error-for-file-returned-by-popen\n> >\n> > I'm inclined to believe that the pipe won't enter the EOF state until\n> > the target command terminates (then the top-level cmd.exe). The target\n> > command likely terminated prematurely due to an error before priting\n> > any output.\n> >\n> > If we append \"2>&1\" to the command line, we can capture the error\n> > message through the returned pipe if any. Such error messages will\n> > cause the subsequent code to fail with an error such as \"unexpected\n> > string: 'the output'\". I'm not sure, but if this is permissive, the\n> > returned error messages could potentially provide insight into the\n> > underlying issue, paving the way for a potential solution.\n> >\n>\n> Appending '2>&1 test:\n> The command still results in NULL and ends up failing as no data is\n> returned. Which means even no error message is returned. The error log\n> with appended '2>$1' is -\n>\n> no data was returned by command\n> \"\"D:/Project/pg1/postgres/tmp_install/bin/pg_resetwal\" -V 2>&1\"\n>\n> check for \"D:/Project/pg1/postgres/tmp_install/bin/pg_resetwal\"\n> failed: cannot execute\n> Failure, exiting\n>\n> Further observations:\n> 1. 
To make sure the forked process completes before fgets(), I tested\n> with Sleep(100) before fgets() call.\n> ...\n> ...\n> if ((pgver = popen(cmd, \"r\")) == NULL)\n> {\n> perror(\"popen failure\");\n> return NULL;\n> }\n>\n> errno = 0;\n> Sleep(100);\n> if (fgets(line, maxsize, pgver) == NULL)\n> {\n> if (feof(pgver))\n> fprintf(stderr, \"no data was returned by command \\\"%s\\\"\\n\", cmd);\n> ...\n> ...\n>\n> This also doesn't resolve the issue, the error is still seen intermittently.\n>\n> 2. Even though fgets() fails, the output is still getting captured in\n> 'line' string.\n> Tested with printing the 'line' in case of failure:\n> ...\n> ...\n> if ((pgver = popen(cmd, \"r\")) == NULL)\n> {\n> perror(\"popen failure\");\n> return NULL;\n> }\n>\n> errno = 0;\n> if (fgets(line, maxsize, pgver) == NULL)\n> {\n> if (line)\n> fprintf(stderr, \"cmd output - %s\\n\", line);\n>\n> if (feof(pgver))\n> fprintf(stderr, \"no data was returned by command \\\"%s\\\"\\n\", cmd);\n> …\n> …\n> And the log looks like -\n> cmd output - postgres (PostgreSQL) 17devel\n> no data was returned by command\n> \"\"D:/Project/pg1/postgres/tmp_install/bin/pg_controldata\" -V\"\n>\n> check for \"D:/Project/pg1/postgres/tmp_install/bin/pg_controldata\"\n> failed: cannot execute\n> Failure, exiting\n>\n> Attached test result log for the same - \"regress_log_003_logical_slots\".\n\nThe same failure is observed with test 't\\002_pg_upgrade.pl' too\n(intermittently). So, it is not limited to the \"t/003_logical_slots.pl\"\ntest alone. It is more closely associated with the pg_upgrade command\nrun.\n\n--\nThanks,\nNisha Moond\n\n\n", "msg_date": "Mon, 6 Nov 2023 19:42:21 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Intermittent failure with t/003_logical_slots.pl test on windows" }, { "msg_contents": "At Mon, 6 Nov 2023 19:42:21 +0530, Nisha Moond <[email protected]> wrote in \n> > Appending '2>&1' test:\n> > The command still results in NULL and ends up failing as no data is\n> > returned. Which means even no error message is returned. The error log\n\nThanks for confirmation. So, at least the child process was launched\nsuccessfully in the cmd.exe's view.\n\nUpon a quick check on my end with Windows' _popen, I have observed the\nfollowing:\n\n- Once a child process is started, it seems to go undetected as an\n error by _popen or subsequent fgets calls if the process ends\n abnormally, with a non-zero exit status or even with a SEGV.\n\n- After the child process has flushed data to stdout, it is possible\n to read from the pipe even if the child process crashes or ends\n thereafter.\n\n- Even if fgets is called before the program starts, it will correctly\n block until the program outputs something. Specifically, when I used\n popen(\"sleep 5 & target.exe\") and immediately performed fgets on the\n pipe, I was able to read the output of target.exe as the first line.\n\nTherefore, based on the information available, it is conceivable that\nthe child process was killed by something right after it started, or\nthe program terminated on its own without any error messages.\n\nBy the way, in the case of aforementioned SEGV, Application Errors\ncorresponding to it were identifiable in the Event\nViewer. Additionally, regarding the exit statuses, they can be\ncaptured by using a wrapper batch file (.bat) that records\n%ERRORLEVEL% after running the target program. 
This may yield\ninsights, although its effectiveness is not guaranteed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 07 Nov 2023 14:35:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intermittent failure with t/003_logical_slots.pl test on\n windows" }, { "msg_contents": "Hi,\nThe same intermittent failure is reproducible on my system.\nFor the intermittent issues, I found that many are due to errors\nwhere commands like 'psql -V' do not return any output.\nTo reproduce it in an easy way, I wrote a script (.bat file) with the\n'--version' option for different binaries. I found that it was\nnot giving any output for some commands (this varies for each run).\nThen I tried to run the same script after adding 'fflush(stdout)' in\nthe function called with the '--version' option and it started to give\noutput for each command.\nI noticed the same for the '--help' option and did the changes for the same.\n\nI have attached the test script (I changed the extension to .txt as gmail\nis blocking it) and the output of the test before the changes.\nI have also attached the patch with the changes which resolved the above issue.\n\nThis change has resolved most of the intermittent issues for me. I am\nfacing some more intermittent issues. Will analyse and share them as\nwell.\n\nThanks and regards\nShlok Kyal\n\nOn Tue, 7 Nov 2023 at 11:05, Kyotaro Horiguchi <[email protected]> wrote:\n>\n> At Mon, 6 Nov 2023 19:42:21 +0530, Nisha Moond <[email protected]> wrote in\n> > > Appending '2>&1' test:\n> > > The command still results in NULL and ends up failing as no data is\n> > > returned. Which means even no error message is returned. The error log\n>\n> Thanks for confirmation. So, at least the child process was launched\n> successfully in the cmd.exe's view.\n>\n> Upon a quick check on my end with Windows' _popen, I have observed the\n> following:\n>\n> - Once a child process is started, it seems to go undetected as an\n> error by _popen or subsequent fgets calls if the process ends\n> abnormally, with a non-zero exit status or even with a SEGV.\n>\n> - After the child process has flushed data to stdout, it is possible\n> to read from the pipe even if the child process crashes or ends\n> thereafter.\n>\n> - Even if fgets is called before the program starts, it will correctly\n> block until the program outputs something. Specifically, when I used\n> popen(\"sleep 5 & target.exe\") and immediately performed fgets on the\n> pipe, I was able to read the output of target.exe as the first line.\n>\n> Therefore, based on the information available, it is conceivable that\n> the child process was killed by something right after it started, or\n> the program terminated on its own without any error messages.\n>\n> By the way, in the case of aforementioned SEGV, Application Errors\n> corresponding to it were identifiable in the Event\n> Viewer. Additionally, regarding the exit statuses, they can be\n> captured by using a wrapper batch file (.bat) that records\n> %ERRORLEVEL% after running the target program. This may yield\n> insights, although its effectiveness is not guaranteed.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n>", "msg_date": "Tue, 26 Dec 2023 17:39:47 +0530", "msg_from": "Shlok Kyal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intermittent failure with t/003_logical_slots.pl test on windows" }, { "msg_contents": "Thanks for working on it. 
I tested the patch on my system and it\nresolved the issue with commands running -V (version check).\n\nAs you mentioned, I am also still seeing intermittent errors even with\nthe patch as below -\n in 'pg_upgrade/002_pg_upgrade' -\n\n# Running: pg_upgrade --no-sync -d\nD:\\Project\\pg2\\postgres\\build/testrun/pg_upgrade/002_pg_upgrade\\data/t_002_pg_upgrade_old_node_data/pgdata\n-D D:\\Project\\pg2\\postgres\\build/testrun/pg_upgrade/002_pg_upgrade\\data/t_002_pg_upgrade_new_node_data/pgdata\n-b D:/Project/pg2/postgres/build/tmp_install/Project/pg2/postgresql/bin\n-B D:/Project/pg2/postgres/build/tmp_install/Project/pg2/postgresql/bin\n-s 127.0.0.1 -p 56095 -P 56096 --copy --check\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions ok\n\nThe source cluster lacks cluster state information:\nFailure, exiting\n[12:37:38.666](3.317s) not ok 12 - run of pg_upgrade --check for new instance\n[12:37:38.666](0.000s) # Failed test 'run of pg_upgrade --check for\nnew instance'\n# at D:/Project/pg2/postgres/src/bin/pg_upgrade/t/002_pg_upgrade.pl line 375.\n\nand in 'pg_upgrade/003_logical_slots' -\n\n[12:35:33.773](0.001s) not ok 6 - run of pg_upgrade of old cluster\nwith slots having unconsumed WAL records stdout /(?^:Your installation\ncontains logical replication slots that can't be upgraded.)/\n[12:35:33.773](0.000s) # Failed test 'run of pg_upgrade of old\ncluster with slots having unconsumed WAL records stdout /(?^:Your\ninstallation contains logical replication slots that can't be\nupgraded.)/'\n# at D:/Project/pg2/postgres/src/bin/pg_upgrade/t/003_logical_slots.pl\nline 102.\n[12:35:33.773](0.000s) # 'Performing Consistency Checks\n# -----------------------------\n# Checking cluster versions ok\n#\n# The target cluster lacks cluster state information:\n# Failure, exiting\n\nIt seems 'Performing Consistency Checks' fail due to a lack of some\ninformation and possible that it can also be fixed on the same lines.\n\n--\nThanks,\nNisha\n\n\n", "msg_date": "Wed, 27 Dec 2023 13:05:23 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Intermittent failure with t/003_logical_slots.pl test on windows" }, { "msg_contents": "Hi,\nApart of these I am getting following some intermittent failure as below:\n\n131/272 postgresql:pg_basebackup / pg_basebackup/010_pg_basebackup\n ERROR 30.51s (exit status 255 or\nsignal 127 SIGinvalid)\n114/272 postgresql:libpq / libpq/001_uri\n ERROR 9.66s exit status 8\n 34/272 postgresql:pg_upgrade / pg_upgrade/003_logical_slots\n ERROR 99.14s exit status 1\n186/272 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade\n ERROR 306.22s exit status 1\n 29/272 postgresql:recovery / recovery/002_archiving\n ERROR 89.62s (exit status 255 or\nsignal 127 SIGinvalid)\n138/272 postgresql:pg_resetwal / pg_resetwal/001_basic\n ERROR 3.93s (exit status 255 or\nsignal 127 SIGinvalid)\n\nHave attached the regress logs for the same as well.\n\nThanks and Regards\nShlok Kyal\n\nOn Tue, 26 Dec 2023 at 17:39, Shlok Kyal <[email protected]> wrote:\n>\n> Hi,\n> The same intermittent failure is reproducible on my system.\n> For the intermittent issues I found that many issues are due to errors\n> where commands like 'psql -V' are not returning any output.\n> To reproduce it in an easy way, I wrote a script (.bat file) with\n> '--version' option for different binaries. 
And found out that it was\n> not giving any output for some command (varies for each run).\n> Then I tried to run the same script after adding 'fflush(stdout)' in\n> the function called with '--version' option and it started to give\n> output for each command.\n> I noticed the same for '--help' option and did the changes for the same.\n>\n> I have attached the test script(changes the extension to .txt as gmail\n> is blocking it), output of test before the changes.\n> I have also attached the patch with changes which resolved the above issue.\n>\n> This change has resolved most of the intermittent issues for me. I am\n> facing some more intermittent issues. Will analyse and share it as\n> well.\n>\n> Thanks and regards\n> Shlok Kyal\n>\n> On Tue, 7 Nov 2023 at 11:05, Kyotaro Horiguchi <[email protected]> wrote:\n> >\n> > At Mon, 6 Nov 2023 19:42:21 +0530, Nisha Moond <[email protected]> wrote in\n> > > > Appending '2>&1 test:\n> > > > The command still results in NULL and ends up failing as no data is\n> > > > returned. Which means even no error message is returned. The error log\n> >\n> > Thanks for confirmation. So, at least the child process was launced\n> > successfully in the cmd.exe's view.\n> >\n> > Upon a quick check on my end with Windows' _popen, I have obseved the\n> > following:\n> >\n> > - Once a child process is started, it seems to go undetected as an\n> > error by _popen or subsequent fgets calls if the process ends\n> > abnormally, with a non-zero exit status or even with a SEGV.\n> >\n> > - After the child process has flushed data to stdout, it is possible\n> > to read from the pipe even if the child process crashes or ends\n> > thereafter.\n> >\n> > - Even if fgets is called before the program starts, it will correctly\n> > block until the program outputs something. Specifically, when I used\n> > popen(\"sleep 5 & target.exe\") and immediately performed fgets on the\n> > pipe, I was able to read the output of target.exe as the first line.\n> >\n> > Therefore, based on the information available, it is conceivable that\n> > the child process was killed by something right after it started, or\n> > the program terminated on its own without any error messages.\n> >\n> > By the way, in the case of aforementioned SEGV, Application Errors\n> > corresponding to it were identifiable in the Event\n> > Viewer. Additionally, regarding the exit statuses, they can be\n> > captured by using a wrapper batch file (.bat) that records\n> > %ERRORLEVEL% after running the target program. This may yield\n> > insights, aothough its effectiveness is not guaranteed.\n> >\n> > regards.\n> >\n> > --\n> > Kyotaro Horiguchi\n> > NTT Open Source Software Center\n> >\n> >", "msg_date": "Thu, 28 Dec 2023 13:25:19 +0530", "msg_from": "Shlok Kyal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Intermittent failure with t/003_logical_slots.pl test on windows" } ]
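The fix Shlok describes boils down to one added line. Below is an illustrative, self-contained sketch of the pattern (the real patch touches the option-handling code of each affected binary; the program name and version string here are placeholders). One plausible reading of the thread is that when a child is read through a pipe created by _popen(), its stdout is fully buffered, so output still sitting in the C runtime's buffer when the process terminates abnormally never reaches the parent, whose fgets() then reports "no data was returned".

```c
#include <stdio.h>

/*
 * Sketch of the discussed change: flush stdout explicitly after printing
 * the version line, so a parent reading this process through a pipe sees
 * the output even if the process dies before the C runtime would have
 * flushed its buffers at normal exit.
 */
static void
handle_version_option(const char *progname)
{
	printf("%s (PostgreSQL) 17devel\n", progname);	/* placeholder version */
	fflush(stdout);		/* the added flush */
}

int
main(void)
{
	handle_version_option("pg_resetwal");
	return 0;
}
```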
[ { "msg_contents": "Here is an updated patch for tracking Postgres memory usage.\n\nIn this new patch, Postgres “reserves” memory, first by updating process-private counters, and then eventually by updating global counters. If the new GUC variable “max_total_memory” is set, reservations exceeding the limit are turned down and treated as though the kernel had reported an out of memory error.\n\nPostgres memory reservations come from multiple sources.\n\n * Malloc calls made by the Postgres memory allocators.\n * Static shared memory created by the postmaster at server startup,\n * Dynamic shared memory created by the backends.\n * A fixed amount (1Mb) of “initial” memory reserved whenever a process starts up.\n\nEach process also maintains an accurate count of its actual memory allocations. The process-private variable “my_memory” holds the total allocations for that process. Since there can be no contention, each process updates its own counters very efficiently.\n\nPgstat now includes global memory counters. These shared memory counters represent the sum of all reservations made by all Postgres processes. For efficiency, these global counters are only updated when new reservations exceed a threshold, currently 1 Mb for each process. Consequently, the global reservation counters are approximate totals which may differ from the actual allocation totals by up to 1 Mb per process.\n\nThe max_total_memory limit is checked whenever the global counters are updated. There is no special error handling if a memory allocation exceeds the global limit. That allocation returns a NULL for malloc style allocations or an ENOMEM for shared memory allocations. Postgres has existing mechanisms for dealing with out of memory conditions.\n\nFor sanity checking, pgstat now includes the pg_backend_memory_allocation view showing memory allocations made by the backend process. This view includes a scan of the top memory context, so it compares memory allocations reported through pgstat with actual allocations. The two should match.\n\n\nTwo other views were created as well. pg_stat_global_memory_tracking shows how much server memory has been reserved overall and how much memory remains to be reserved. pg_stat_memory_reservation iterates through the memory reserved by each server process. Both of these views use pgstat’s “snapshot” mechanism to ensure consistent values within a transaction.\n\nPerformance-wise, there was no measurable impact with either pgbench or a simple “SELECT * from series” query.", "msg_date": "Tue, 31 Oct 2023 17:11:26 +0000", "msg_from": "John Morris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends." }, { "msg_contents": "Hi,\n\nOn 2023-10-31 17:11:26 +0000, John Morris wrote:\n> Postgres memory reservations come from multiple sources.\n> \n> * Malloc calls made by the Postgres memory allocators.\n> * Static shared memory created by the postmaster at server startup,\n> * Dynamic shared memory created by the backends.\n> * A fixed amount (1Mb) of “initial” memory reserved whenever a process starts up.\n> \n> Each process also maintains an accurate count of its actual memory\n> allocations. The process-private variable “my_memory” holds the total\n> allocations for that process. 
Since there can be no contention, each process\n> updates its own counters very efficiently.\n\nI think this will introduce measurable overhead in low concurrency cases and\nvery substantial overhead / contention when there is a decent amount of\nconcurrency. This makes all memory allocations > 1MB contend on a single\natomic. Massive amounts of energy have been spent writing multi-threaded\nallocators that have far less contention than this - the current state is to\nnever contend on shared resources on any reasonably common path. This gives\naway one of the few major advantages our process model has.\n\nThe patch doesn't just introduce contention when limiting is enabled - it\nintroduces it even when memory usage is just tracked. It makes absolutely no\nsense to have a single contended atomic in that case - just have a per-backend\nvariable in shared memory that's updated. It's *WAY* cheaper to compute the\noverall memory usage during querying than to keep a running total in shared\nmemory.\n\n\n\n> Pgstat now includes global memory counters. These shared memory counters\n> represent the sum of all reservations made by all Postgres processes. For\n> efficiency, these global counters are only updated when new reservations\n> exceed a threshold, currently 1 Mb for each process. Consequently, the\n> global reservation counters are approximate totals which may differ from the\n> actual allocation totals by up to 1 Mb per process.\n\nI see that you added them to the \"cumulative\" stats system - that doesn't\nimmediately make sense to me - what you're tracking here isn't an\naccumulating counter, it's something showing the current state, right?\n\n\n> The max_total_memory limit is checked whenever the global counters are\n> updated. There is no special error handling if a memory allocation exceeds\n> the global limit. That allocation returns a NULL for malloc style\n> allocations or an ENOMEM for shared memory allocations. Postgres has\n> existing mechanisms for dealing with out of memory conditions.\n\nI still think it's extremely unwise to do tracking of memory and limiting of\nmemory in one patch. You should work towards an acceptable patch that just\ntracks memory usage in as simple and low-overhead a way as possible. Then we\nlater can build on that.\n\n\n\n> For sanity checking, pgstat now includes the pg_backend_memory_allocation\n> view showing memory allocations made by the backend process. This view\n> includes a scan of the top memory context, so it compares memory allocations\n> reported through pgstat with actual allocations. 
The two should match.\n\nCan't you just do this using the existing pg_backend_memory_contexts view?\n\n\n> Performance-wise, there was no measurable impact with either pgbench or a\n> simple “SELECT * from series” query.\n\nThat seems unsurprising - allocations aren't a major part of the work there,\nyou'd have to regress by a lot to see memory allocator changes to show a\nsignificant performance decrease.\n\n\n> diff --git a/src/test/regress/expected/opr_sanity.out b/src/test/regress/expected/opr_sanity.out\n> index 7a6f36a6a9..6c813ec465 100644\n> --- a/src/test/regress/expected/opr_sanity.out\n> +++ b/src/test/regress/expected/opr_sanity.out\n> @@ -468,9 +468,11 @@ WHERE proallargtypes IS NOT NULL AND\n> ARRAY(SELECT proallargtypes[i]\n> FROM generate_series(1, array_length(proallargtypes, 1)) g(i)\n> WHERE proargmodes IS NULL OR proargmodes[i] IN ('i', 'b', 'v'));\n> - oid | proname | proargtypes | proallargtypes | proargmodes \n> ------+---------+-------------+----------------+-------------\n> -(0 rows)\n> + oid | proname | proargtypes | proallargtypes | proargmodes \n> +------+----------------------------------+-------------+---------------------------+-------------------\n> + 9890 | pg_stat_get_memory_reservation | | {23,23,20,20,20,20,20,20} | {i,o,o,o,o,o,o,o}\n> + 9891 | pg_get_backend_memory_allocation | | {23,23,20,20,20,20,20} | {i,o,o,o,o,o,o}\n> +(2 rows)\n\nThis indicates that your pg_proc entries are broken, they need to fixed rather\nthan allowed here.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 3 Nov 2023 21:19:00 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends." }, { "msg_contents": "Greetings,\n\n* Andres Freund ([email protected]) wrote:\n> On 2023-10-31 17:11:26 +0000, John Morris wrote:\n> > Postgres memory reservations come from multiple sources.\n> > \n> > * Malloc calls made by the Postgres memory allocators.\n> > * Static shared memory created by the postmaster at server startup,\n> > * Dynamic shared memory created by the backends.\n> > * A fixed amount (1Mb) of “initial” memory reserved whenever a process starts up.\n> > \n> > Each process also maintains an accurate count of its actual memory\n> > allocations. The process-private variable “my_memory” holds the total\n> > allocations for that process. Since there can be no contention, each process\n> > updates its own counters very efficiently.\n> \n> I think this will introduce measurable overhead in low concurrency cases and\n> very substantial overhead / contention when there is a decent amount of\n> concurrency. This makes all memory allocations > 1MB contend on a single\n> atomic. Massive amount of energy have been spent writing multi-threaded\n> allocators that have far less contention than this - the current state is to\n> never contend on shared resources on any reasonably common path. This gives\n> away one of the few major advantages our process model has away.\n\nWe could certainly adjust the size of each reservation to reduce the\nfrequency of having to hit the atomic. Specific suggestions about how\nto benchmark and see the regression that's being worried about here\nwould be great. 
Certainly my hope has generally been that when we do a\nlarger allocation, it's because we're about to go do a bunch of work,\nmeaning that hopefully the time spent updating the atomic is minor\noverall.\n\n> The patch doesn't just introduce contention when limiting is enabled - it\n> introduces it even when memory usage is just tracked. It makes absolutely no\n> sense to have a single contended atomic in that case - just have a per-backend\n> variable in shared memory that's updated. It's *WAY* cheaper to compute the\n> overall memory usage during querying than to keep a running total in shared\n> memory.\n\nAgreed that we should avoid the contention when limiting isn't being\nused, certainly easy to do so, and had actually intended to but that\nseems to have gotten lost along the way. Will fix.\n\nOther than that change inside update_global_reservation though, the code\nfor reporting per-backend memory usage and querying it does work as\nyou're outlining above inside the stats system.\n\nThat said- I just want to confirm that you would agree that querying the\namount of memory used by every backend, to add it all up to enforce an\noverall limit, surely isn't something we're going to want to do during\nan allocation and that having a global atomic for that is better, right?\n\n> > Pgstat now includes global memory counters. These shared memory counters\n> > represent the sum of all reservations made by all Postgres processes. For\n> > efficiency, these global counters are only updated when new reservations\n> > exceed a threshold, currently 1 Mb for each process. Consequently, the\n> > global reservation counters are approximate totals which may differ from the\n> > actual allocation totals by up to 1 Mb per process.\n> \n> I see that you added them to the \"cumulative\" stats system - that doesn't\n> immediately makes sense to me - what you're tracking here isn't an\n> accumulating counter, it's something showing the current state, right?\n\nYes, this is current state, not an accumulation.\n\n> > The max_total_memory limit is checked whenever the global counters are\n> > updated. There is no special error handling if a memory allocation exceeds\n> > the global limit. That allocation returns a NULL for malloc style\n> > allocations or an ENOMEM for shared memory allocations. Postgres has\n> > existing mechanisms for dealing with out of memory conditions.\n> \n> I still think it's extremely unwise to do tracking of memory and limiting of\n> memory in one patch. You should work towards and acceptable patch that just\n> tracks memory usage in an as simple and low overhead way as possible. Then we\n> later can build on that.\n\nFrankly, while tracking is interesting, the limiting is the feature\nthat's needed more urgently imv. We could possibly split it up but\nthere's an awful lot of the same code that would need to be changed and\nthat seems less than ideal. Still, we'll look into this.\n\n> > For sanity checking, pgstat now includes the pg_backend_memory_allocation\n> > view showing memory allocations made by the backend process. This view\n> > includes a scan of the top memory context, so it compares memory allocations\n> > reported through pgstat with actual allocations. The two should match.\n> \n> Can't you just do this using the existing pg_backend_memory_contexts view?\n\nNot and get a number that you can compare to the local backend number\ndue to the query itself happening and performing allocations and\ncreating new contexts. 
We wanted to be able to show that we are\naccounting correctly and exactly matching to what the memory context\nsystem is tracking.\n\n> > - oid | proname | proargtypes | proallargtypes | proargmodes \n> > ------+---------+-------------+----------------+-------------\n> > -(0 rows)\n> > + oid | proname | proargtypes | proallargtypes | proargmodes \n> > +------+----------------------------------+-------------+---------------------------+-------------------\n> > + 9890 | pg_stat_get_memory_reservation | | {23,23,20,20,20,20,20,20} | {i,o,o,o,o,o,o,o}\n> > + 9891 | pg_get_backend_memory_allocation | | {23,23,20,20,20,20,20} | {i,o,o,o,o,o,o}\n> > +(2 rows)\n> \n> This indicates that your pg_proc entries are broken, they need to fixed rather\n> than allowed here.\n\nAgreed, will fix.\n\nThanks!\n\nStephen", "msg_date": "Mon, 6 Nov 2023 13:02:50 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends." }, { "msg_contents": "Hi,\n\nOn 2023-11-06 13:02:50 -0500, Stephen Frost wrote:\n> > > The max_total_memory limit is checked whenever the global counters are\n> > > updated. There is no special error handling if a memory allocation exceeds\n> > > the global limit. That allocation returns a NULL for malloc style\n> > > allocations or an ENOMEM for shared memory allocations. Postgres has\n> > > existing mechanisms for dealing with out of memory conditions.\n> > \n> > I still think it's extremely unwise to do tracking of memory and limiting of\n> > memory in one patch. You should work towards and acceptable patch that just\n> > tracks memory usage in an as simple and low overhead way as possible. Then we\n> > later can build on that.\n> \n> Frankly, while tracking is interesting, the limiting is the feature\n> that's needed more urgently imv.\n\nI agree that we need limiting, but that the tracking needs to be very robust\nfor that to be usable.\n\n\n> We could possibly split it up but there's an awful lot of the same code that\n> would need to be changed and that seems less than ideal. Still, we'll look\n> into this.\n\nShrug. IMO keeping them together just makes it very likely that neither goes\nin.\n\n\n> > > For sanity checking, pgstat now includes the pg_backend_memory_allocation\n> > > view showing memory allocations made by the backend process. This view\n> > > includes a scan of the top memory context, so it compares memory allocations\n> > > reported through pgstat with actual allocations. The two should match.\n> > \n> > Can't you just do this using the existing pg_backend_memory_contexts view?\n> \n> Not and get a number that you can compare to the local backend number\n> due to the query itself happening and performing allocations and\n> creating new contexts. We wanted to be able to show that we are\n> accounting correctly and exactly matching to what the memory context\n> system is tracking.\n\nI think creating a separate view for this will be confusing for users, without\nreally much to show for. Excluding the current query would be useful for other\ncases as well, why don't we provide a way to do that with\npg_backend_memory_contexts?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 7 Nov 2023 11:55:06 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends." 
}, { "msg_contents": "Greetings,\n\n* Andres Freund ([email protected]) wrote:\n> On 2023-11-06 13:02:50 -0500, Stephen Frost wrote:\n> > > > The max_total_memory limit is checked whenever the global counters are\n> > > > updated. There is no special error handling if a memory allocation exceeds\n> > > > the global limit. That allocation returns a NULL for malloc style\n> > > > allocations or an ENOMEM for shared memory allocations. Postgres has\n> > > > existing mechanisms for dealing with out of memory conditions.\n> > > \n> > > I still think it's extremely unwise to do tracking of memory and limiting of\n> > > memory in one patch. You should work towards and acceptable patch that just\n> > > tracks memory usage in an as simple and low overhead way as possible. Then we\n> > > later can build on that.\n> > \n> > Frankly, while tracking is interesting, the limiting is the feature\n> > that's needed more urgently imv.\n> \n> I agree that we need limiting, but that the tracking needs to be very robust\n> for that to be usable.\n\nIs there an issue with the tracking in the patch that you saw? That's\ncertainly an area that we've tried hard to get right and to match up to\nnumbers from the rest of the system, such as the memory context system.\n\n> > We could possibly split it up but there's an awful lot of the same code that\n> > would need to be changed and that seems less than ideal. Still, we'll look\n> > into this.\n> \n> Shrug. IMO keeping them together just makes it very likely that neither goes\n> in.\n\nI'm happy to hear your support for the limiting part of this- that's\nencouraging.\n\n> > > > For sanity checking, pgstat now includes the pg_backend_memory_allocation\n> > > > view showing memory allocations made by the backend process. This view\n> > > > includes a scan of the top memory context, so it compares memory allocations\n> > > > reported through pgstat with actual allocations. The two should match.\n> > > \n> > > Can't you just do this using the existing pg_backend_memory_contexts view?\n> > \n> > Not and get a number that you can compare to the local backend number\n> > due to the query itself happening and performing allocations and\n> > creating new contexts. We wanted to be able to show that we are\n> > accounting correctly and exactly matching to what the memory context\n> > system is tracking.\n> \n> I think creating a separate view for this will be confusing for users, without\n> really much to show for. Excluding the current query would be useful for other\n> cases as well, why don't we provide a way to do that with\n> pg_backend_memory_contexts?\n\nBoth of these feel very much like power-user views, so I'm not terribly\nconcerned about users getting confused. That said, we could possibly\ndrop this as a view and just have the functions which are then used in\nthe regression tests to catch things should the numbers start to\ndiverge.\n\nHaving a way to get the memory contexts which don't include the\ncurrently running query might be interesting too but is rather\nindependent of what this patch is trying to do. 
The only reason we\ncollected up the memory-context info is as a cross-check to the tracking\nthat we're doing and while the existing memory-context view is just fine\nfor a lot of other things, it doesn't work for that specific need.\n\nThanks,\n\nStephen", "msg_date": "Tue, 7 Nov 2023 15:55:48 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends." }, { "msg_contents": "Hi,\n\nOn 2023-11-07 15:55:48 -0500, Stephen Frost wrote:\n> * Andres Freund ([email protected]) wrote:\n> > On 2023-11-06 13:02:50 -0500, Stephen Frost wrote:\n> > > > > The max_total_memory limit is checked whenever the global counters are\n> > > > > updated. There is no special error handling if a memory allocation exceeds\n> > > > > the global limit. That allocation returns a NULL for malloc style\n> > > > > allocations or an ENOMEM for shared memory allocations. Postgres has\n> > > > > existing mechanisms for dealing with out of memory conditions.\n> > > > \n> > > > I still think it's extremely unwise to do tracking of memory and limiting of\n> > > > memory in one patch. You should work towards and acceptable patch that just\n> > > > tracks memory usage in an as simple and low overhead way as possible. Then we\n> > > > later can build on that.\n> > > \n> > > Frankly, while tracking is interesting, the limiting is the feature\n> > > that's needed more urgently imv.\n> > \n> > I agree that we need limiting, but that the tracking needs to be very robust\n> > for that to be usable.\n> \n> Is there an issue with the tracking in the patch that you saw? That's\n> certainly an area that we've tried hard to get right and to match up to\n> numbers from the rest of the system, such as the memory context system.\n\nThere's some details I am pretty sure aren't right - the DSM tracking piece\nseems bogus to me. But beyond that: I don't know. There's enough other stuff\nin the patch that it's hard to focus on that aspect. That's why I'd like to\nmerge a patch doing just that, so we actually can collect numbers. If any of\nthe developers of the patch had focused on polishing that part instead of\nfocusing on the limiting, it'd have been ready to be merged a while ago, maybe\neven in 16. I think the limiting piece is unlikely to be ready for 17.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 8 Nov 2023 09:20:44 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends." }, { "msg_contents": "hi.\n\n+static void checkAllocations();\nshould be \"static void checkAllocations(void);\" ?\n\nPgStatShared_Memtrack there is a lock, but seems not initialized, and\nnot used. Can you expand on it?\nSo in view pg_stat_global_memory_tracking, column\n\"total_memory_reserved\" is a point of time, total memory the whole\nserver reserved/malloced? 
will it change every time you call it?\nthe function pg_stat_get_global_memory_tracking provolatile => 's'.\nshould be a VOLATILE function?\n\n\npg_stat_get_memory_reservation, pg_stat_get_global_memory_tracking\nshould be proretset => 'f'.\n+{ oid => '9891',\n+ descr => 'statistics: memory utilized by current backend',\n+ proname => 'pg_get_backend_memory_allocation', prorows => '1',\nproisstrict => 'f',\n+ proretset => 't', provolatile => 's', proparallel => 'r',\n\n\nyou declared\n+void pgstat_backend_memory_reservation_cb(void);\nbut seems there is no definition.\n\n\nthis part is unnecessary since you already declared\nsrc/include/catalog/pg_proc.dat?\n+/* SQL Callable functions */\n+extern Datum pg_stat_get_memory_reservation(PG_FUNCTION_ARGS);\n+extern Datum pg_get_backend_memory_allocation(PG_FUNCTION_ARGS);\n+extern Datum pg_stat_get_global_memory_tracking(PG_FUNCTION_ARGS);\n\nThe last sentence is just a plain link, no explanation. something is missing?\n <para>\n+ Reports how much memory remains available to the server. If a\n+ backend process attempts to allocate more memory than remains,\n+ the process will fail with an out of memory error, resulting in\n+ cancellation of the process's active query/transaction.\n+ If memory is not being limited (ie. max_total_memory is zero or not set),\n+ this column returns NULL.\n+ <xref linkend=\"guc-max-total-memory\"/>.\n+ </para></entry>\n+ </row>\n+\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>static_shared_memory</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ Reports how much static shared memory (non-DSM shared memory)\nis being used by\n+ the server. Static shared memory is configured by the postmaster at\n+ at server startup.\n+ <xref linkend=\"guc-max-total-memory\"/>.\n+ </para></entry>\n+ </row>\n\n\n", "msg_date": "Fri, 10 Nov 2023 17:55:27 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add the ability to limit the amount of memory that can be\n allocated to backends." } ]
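To make the pg_proc.dat review comments above concrete: applying the suggested corrections to the quoted entry (a single-row record-returning function should have proretset => 'f' and no prorows, and a counter that changes between calls should be volatile rather than stable) would give roughly the following shape. This is an illustration of the review feedback only, not text from any posted patch; prorettype, prosrc and the argument lists are assumptions carried over from the catalog output quoted earlier in the thread.

    { oid => '9891',
      descr => 'statistics: memory utilized by current backend',
      proname => 'pg_get_backend_memory_allocation', proisstrict => 'f',
      proretset => 'f', provolatile => 'v', proparallel => 'r',
      prorettype => 'record', proargtypes => 'int4',
      proallargtypes => '{23,23,20,20,20,20,20}',
      proargmodes => '{i,o,o,o,o,o,o}',
      prosrc => 'pg_get_backend_memory_allocation' },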
[ { "msg_contents": "Postgres has been bitten by a few locale-related bugs, most recently via \nlibperl[0]. I had previously submitted this patchset in the bug thread \nfor the aforementioned bug, but here is a standalone submission for the \npurposes of an eventual commitfest submission, and to get discussion \ngoing. I was also flubbing up the commitfest bot with my patches. Sorry \nJoe! I feel fairly good about the patch, but I think I need some more \ntesting and feedback. Localization is such a fickle beast.\n\nI did leave one TODO because I need some input:\n\n\t/* TODO: This does not handle \"\" as the locale */\n\ncheck_locale() takes a canonname argument, which the caller expects to \nbe the \"canonical name\" of the locale the caller passed in. The \nsetlocale() man page is not very explicit about under what conditions \nthe return value is different from the input string, and I haven't found \nmuch on the internet. Best I can tell is that the empty string is the \nonly input value that differs from the output value of setlocale(). If \nthat's the case, on Postmaster startup, I can query setlocale() for what \nthe empty string canonicalizes to for all the locale categories we care \nabout, and save them off. The other solution to the problem would be to \nfind the equivalent API in the uselocale() family of functions, but I am \nunder the impression that such an API doesn't exist given I haven't \nfound it yet.\n\nAlso, should we just remove HAVE_USELOCALE? It seems like Windows is the \nonly platform that doesn't support it. Then we can just use _WIN32 \ninstead.\n\nI do not think this should be backpatched. Please see Joe's patch in the \nbug thread as a way to fix the libperl bug on pre-17 versions.\n\n[0]: https://www.postgresql.org/message-id/17946-3e84cb577e9551c3%40postgresql.org\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Tue, 31 Oct 2023 15:02:36 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Use thread-safe locale APIs" }, { "msg_contents": "Please discard this second thread. My mail client seems to have done \nsomething very wrong.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 31 Oct 2023 15:03:38 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use thread-safe locale APIs" } ]
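For readers unfamiliar with the POSIX API family discussed above, the thread-safe pattern is to build a locale object with newlocale() and install it per-thread with uselocale(), instead of mutating process-global state with setlocale(). A minimal sketch, with error handling simplified and the save/restore discipline shown being an assumption about how a caller might use it:

    #include <locale.h>

    /* Run locale-sensitive work under "locname" without global state. */
    static int
    with_locale(const char *locname)
    {
        locale_t loc;
        locale_t save;

        loc = newlocale(LC_ALL_MASK, locname, (locale_t) 0);
        if (loc == (locale_t) 0)
            return -1;          /* locale name was not accepted */

        save = uselocale(loc);  /* affects only the calling thread */

        /* ... strcoll(), strftime(), printf formatting, etc. ... */

        uselocale(save);        /* restore the previous thread locale */
        freelocale(loc);
        return 0;
    }

Note that this family offers no direct equivalent of setlocale()'s canonicalization of the empty string, which is exactly the open TODO described in the message above.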
[ { "msg_contents": "Hi,\n\nCurrently, nbtree code compares each and every column of an index\ntuple during the binary search on the index page. With large indexes\nthat have many duplicate prefix column values (e.g. an index on (bool,\nbool, uuid) ) that means a lot of wasted time getting to the right\ncolumn.\n\nThe attached patch improves on that by doing per-page dynamic prefix\ntruncation: If we know that on both the right and left side there are\nindex tuples where the first two attributes are equal to the scan key,\nwe skip comparing those attributes at the current index tuple and\nstart with comparing attribute 3, saving two attribute compares. We\ngain performance whenever comparing prefixing attributes is expensive\nand when there are many tuples with a shared prefix - in unique\nindexes this doesn't gain much, but we also don't lose much in this\ncase.\n\nThis patch was originally suggested at [0], but it was mentioned that\nthey could be pulled out into it's own thread. Earlier, the\nperformance gains were not clearly there for just this patch, but\nafter further benchmarking this patch stands on its own for\nperformance: it sees no obvious degradation of performance, while\ngaining 0-5% across various normal indexes on the cc-complete sample\ndataset, with the current worst-case index shape getting a 60%+\nimproved performance on INSERTs in the tests at [0].\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\nPS. Best served with the downlink right separator/HIKEY optimization\n(separate patch to be submitted soon), and specialization over at [0].\n\n[0] https://www.postgresql.org/message-id/CAEze2WiqOONRQTUT1p_ZV19nyMA69UU2s0e2dp+jSBM=j8snuw@mail.gmail.com", "msg_date": "Tue, 31 Oct 2023 22:12:26 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "btree: implement dynamic prefix truncation (was: Improving btree\n performance through specializing by key shape, take 2)" }, { "msg_contents": "Hi\n\nút 31. 10. 2023 v 22:12 odesílatel Matthias van de Meent <\[email protected]> napsal:\n\n> Hi,\n>\n> Currently, nbtree code compares each and every column of an index\n> tuple during the binary search on the index page. With large indexes\n> that have many duplicate prefix column values (e.g. an index on (bool,\n> bool, uuid) ) that means a lot of wasted time getting to the right\n> column.\n>\n> The attached patch improves on that by doing per-page dynamic prefix\n> truncation: If we know that on both the right and left side there are\n> index tuples where the first two attributes are equal to the scan key,\n> we skip comparing those attributes at the current index tuple and\n> start with comparing attribute 3, saving two attribute compares. We\n> gain performance whenever comparing prefixing attributes is expensive\n> and when there are many tuples with a shared prefix - in unique\n> indexes this doesn't gain much, but we also don't lose much in this\n> case.\n>\n> This patch was originally suggested at [0], but it was mentioned that\n> they could be pulled out into it's own thread. 
Earlier, the\n> performance gains were not clearly there for just this patch, but\n> after further benchmarking this patch stands on its own for\n> performance: it sees no obvious degradation of performance, while\n> gaining 0-5% across various normal indexes on the cc-complete sample\n> dataset, with the current worst-case index shape getting a 60%+\n> improved performance on INSERTs in the tests at [0].\n>\n\n+1\n\nThis can be nice functionality. I had a customer with a very slow index\nscan - the main problem was a long common prefix like prg010203xxxx.\n\nRegards\n\nPavel\n\n\n> Kind regards,\n>\n> Matthias van de Meent\n> Neon (https://neon.tech)\n>\n> PS. Best served with the downlink right separator/HIKEY optimization\n> (separate patch to be submitted soon), and specialization over at [0].\n>\n> [0]\n> https://www.postgresql.org/message-id/CAEze2WiqOONRQTUT1p_ZV19nyMA69UU2s0e2dp+jSBM=j8snuw@mail.gmail.com\n>", "msg_date": "Wed, 1 Nov 2023 07:47:18 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: btree: implement dynamic prefix truncation (was: Improving btree\n performance through specializing by key shape, take 2)" }, { "msg_contents": "On Wed, 1 Nov 2023 at 07:47, Pavel Stehule <[email protected]>\nwrote:\n>\n> Hi\n>\n> út 31. 10. 2023 v 22:12 odesílatel Matthias van de Meent <\[email protected]> napsal:\n>> This patch was originally suggested at [0], but it was mentioned that\n>> they could be pulled out into it's own thread. 
Earlier, the\n>> performance gains were not clearly there for just this patch, but\n>> after further benchmarking this patch stands on its own for\n>> performance: it sees no obvious degradation of performance, while\n>> gaining 0-5% across various normal indexes on the cc-complete sample\n>> dataset, with the current worst-case index shape getting a 60%+\n>> improved performance on INSERTs in the tests at [0].\n>\n>\n> +1\n\nThanks for showing interest.\n\n> This can be nice functionality. I had a customer with a very slow index scan - the main problem was a long common prefix like prg010203xxxx.\n\nI'll have to note that this patch doesn't cover cases where e.g. text\nattributes have large shared prefixes, but are still unique: the\ndynamic prefix compression in this patch is only implemented at the\ntuple attribute level; it doesn't implement type aware dynamic prefix\ncompression inside the attributes. So, a unique index on a column of\nint32 formatted like '%0100i' would not materially benefit from this\npatch.\n\nWhile would certainly be possible to add some type-level prefix\ntruncation in the framework of this patch, adding that would require\nsignificant code churn in btree compare operators, because we'd need\nan additional return argument to contain a numerical \"shared prefix\",\nand that is not something I was planning to implement in this patch.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 1 Nov 2023 11:32:46 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: btree: implement dynamic prefix truncation (was: Improving btree\n performance through specializing by key shape, take 2)" }, { "msg_contents": "st 1. 11. 2023 v 11:32 odesílatel Matthias van de Meent <\[email protected]> napsal:\n\n> On Wed, 1 Nov 2023 at 07:47, Pavel Stehule <[email protected]>\n> wrote:\n> >\n> > Hi\n> >\n> > út 31. 10. 2023 v 22:12 odesílatel Matthias van de Meent <\n> [email protected]> napsal:\n> >> This patch was originally suggested at [0], but it was mentioned that\n> >> they could be pulled out into it's own thread. Earlier, the\n> >> performance gains were not clearly there for just this patch, but\n> >> after further benchmarking this patch stands on its own for\n> >> performance: it sees no obvious degradation of performance, while\n> >> gaining 0-5% across various normal indexes on the cc-complete sample\n> >> dataset, with the current worst-case index shape getting a 60%+\n> >> improved performance on INSERTs in the tests at [0].\n> >\n> >\n> > +1\n>\n> Thanks for showing interest.\n>\n> > This can be nice functionality. I had a customer with a very slow index\n> scan - the main problem was a long common prefix like prg010203xxxx.\n>\n> I'll have to note that this patch doesn't cover cases where e.g. text\n> attributes have large shared prefixes, but are still unique: the\n> dynamic prefix compression in this patch is only implemented at the\n> tuple attribute level; it doesn't implement type aware dynamic prefix\n> compression inside the attributes. 
So, a unique index on a column of\n> int32 formatted like '%0100i' would not materially benefit from this\n> patch.\n>\n> While would certainly be possible to add some type-level prefix\n> truncation in the framework of this patch, adding that would require\n> significant code churn in btree compare operators, because we'd need\n> an additional return argument to contain a numerical \"shared prefix\",\n> and that is not something I was planning to implement in this patch.\n>\n\nThanks for the explanation.\n\nPavel\n\n\n> Kind regards,\n>\n> Matthias van de Meent\n> Neon (https://neon.tech)\n>", "msg_date": "Wed, 1 Nov 2023 13:03:45 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: btree: implement dynamic prefix truncation (was: Improving btree\n performance through specializing by key shape, take 2)" }, { "msg_contents": "On Wed, Nov 1, 2023 at 2:42 AM Matthias van de Meent\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Currently, nbtree code compares each and every column of an index\n> tuple during the binary search on the index page. With large indexes\n> that have many duplicate prefix column values (e.g. an index on (bool,\n> bool, uuid) ) that means a lot of wasted time getting to the right\n> column.\n>\n> The attached patch improves on that by doing per-page dynamic prefix\n> truncation: If we know that on both the right and left side there are\n> index tuples where the first two attributes are equal to the scan key,\n> we skip comparing those attributes at the current index tuple and\n> start with comparing attribute 3, saving two attribute compares. 
We\n> gain performance whenever comparing prefixing attributes is expensive\n> and when there are many tuples with a shared prefix - in unique\n> indexes this doesn't gain much, but we also don't lose much in this\n> case.\n>\n> This patch was originally suggested at [0], but it was mentioned that\n> they could be pulled out into it's own thread. Earlier, the\n> performance gains were not clearly there for just this patch, but\n> after further benchmarking this patch stands on its own for\n> performance: it sees no obvious degradation of performance, while\n> gaining 0-5% across various normal indexes on the cc-complete sample\n> dataset, with the current worst-case index shape getting a 60%+\n> improved performance on INSERTs in the tests at [0].\n\n+1 for the idea, I have some initial comments while reading through the patch.\n\n1.\nCommit message refers to a non-existing reference '(see [0])'.\n\n\n2.\n+When we do a binary search on a sorted set (such as a BTree), we know that a\n+tuple will be smaller than its left neighbour, and larger than its right\n+neighbour.\n\nI think this should be 'larger than left neighbour and smaller than\nright neighbour' instead of the other way around.\n\n3.\n+With the above optimizations, dynamic prefix truncation improves the worst\n+case complexity of indexing from O(tree_height * natts * log(tups_per_page))\n+to O(tree_height * (3*natts + log(tups_per_page)))\n\nWhere do the 3*natts come from? Is it related to setting up the\ndynamic prefix at each level?\n\n4.\n+ /*\n+ * All tuple attributes are equal to the scan key, only later attributes\n+ * could potentially not equal the scan key.\n+ */\n+ *compareattr = ntupatts + 1;\n\nCan you elaborate on this more? If all tuple attributes are equal to\nthe scan key then what do those 'later attributes' mean?\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 19 Jan 2024 10:24:54 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: btree: implement dynamic prefix truncation (was: Improving btree\n performance through specializing by key shape, take 2)" }, { "msg_contents": "On Fri, 19 Jan 2024 at 05:55, Dilip Kumar <[email protected]> wrote:\n>\n> On Wed, Nov 1, 2023 at 2:42 AM Matthias van de Meent\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > Currently, nbtree code compares each and every column of an index\n> > tuple during the binary search on the index page. With large indexes\n> > that have many duplicate prefix column values (e.g. an index on (bool,\n> > bool, uuid) ) that means a lot of wasted time getting to the right\n> > column.\n> >\n> > The attached patch improves on that by doing per-page dynamic prefix\n> > truncation: If we know that on both the right and left side there are\n> > index tuples where the first two attributes are equal to the scan key,\n> > we skip comparing those attributes at the current index tuple and\n> > start with comparing attribute 3, saving two attribute compares. We\n> > gain performance whenever comparing prefixing attributes is expensive\n> > and when there are many tuples with a shared prefix - in unique\n> > indexes this doesn't gain much, but we also don't lose much in this\n> > case.\n> >\n> > This patch was originally suggested at [0], but it was mentioned that\n> > they could be pulled out into it's own thread. 
Earlier, the\n> > performance gains were not clearly there for just this patch, but\n> > after further benchmarking this patch stands on its own for\n> > performance: it sees no obvious degradation of performance, while\n> > gaining 0-5% across various normal indexes on the cc-complete sample\n> > dataset, with the current worst-case index shape getting a 60%+\n> > improved performance on INSERTs in the tests at [0].\n>\n> +1 for the idea, I have some initial comments while reading through the patch.\n\nThank you for the review.\n\n> 1.\n> Commit message refers to a non-existing reference '(see [0])'.\n\nNoted, I'll update that.\n\n> 2.\n> +When we do a binary search on a sorted set (such as a BTree), we know that a\n> +tuple will be smaller than its left neighbour, and larger than its right\n> +neighbour.\n>\n> I think this should be 'larger than left neighbour and smaller than\n> right neighbour' instead of the other way around.\n\nNoted, will be fixed, too.\n\n> 3.\n> +With the above optimizations, dynamic prefix truncation improves the worst\n> +case complexity of indexing from O(tree_height * natts * log(tups_per_page))\n> +to O(tree_height * (3*natts + log(tups_per_page)))\n>\n> Where do the 3*natts come from? Is it related to setting up the\n> dynamic prefix at each level?\n\nYes: We need to establish prefixes for both a tuple that's ahead of\nthe to-be-compared tuple, and one that's after the to-be-compared\ntuple. Assuming homogenous random distribution of scan key accesses\nacross the page (not always the case, but IMO a reasonable starting\npoint) this would average to 3 unprefixed compares before you have\nestablished both a higher and a lower prefix.\n\n> 4.\n> + /*\n> + * All tuple attributes are equal to the scan key, only later attributes\n> + * could potentially not equal the scan key.\n> + */\n> + *compareattr = ntupatts + 1;\n>\n> Can you elaborate on this more? If all tuple attributes are equal to\n> the scan key then what do those 'later attributes' mean?\n\nIn inner pages, tuples may not have all key attributes, as some may\nhave been truncated away in page splits. 
So, tuples that have at least\nthe same prefix as this (potentially truncated) tuple will need to be\ncompared starting at the first missing attribute of this tuple, i.e.\nntupatts + 1.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Wed, 24 Jan 2024 13:02:00 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: btree: implement dynamic prefix truncation (was: Improving btree\n performance through specializing by key shape, take 2)" }, { "msg_contents": "On Wed, 24 Jan 2024 at 13:02, Matthias van de Meent\n<[email protected]> wrote:\n> > 1.\n> > Commit message refers to a non-existing reference '(see [0])'.\n>\n> Noted, I'll update that.\n>\n> > 2.\n> > +When we do a binary search on a sorted set (such as a BTree), we know that a\n> > +tuple will be smaller than its left neighbour, and larger than its right\n> > +neighbour.\n> >\n> > I think this should be 'larger than left neighbour and smaller than\n> > right neighbour' instead of the other way around.\n>\n> Noted, will be fixed, too.\n\nAttached is version 15 of this patch, with the above issues fixed.\nIt's also rebased on top of 655dc310 of this morning, so that should\nkeep good for some time again.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Fri, 1 Mar 2024 14:48:53 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: btree: implement dynamic prefix truncation (was: Improving btree\n performance through specializing by key shape, take 2)" }, { "msg_contents": "On Fri, 1 Mar 2024 at 14:48, Matthias van de Meent\n<[email protected]> wrote:\n> Attached is version 15 of this patch, with the above issues fixed.\n> It's also rebased on top of 655dc310 of this morning, so that should\n> keep good for some time again.\n\nAttached is version 16 now. Relevant changes from previous patch versions:\n\n- Move from 1-indexed AttributeNumber to 0-indexed ints for prefixes,\nand use \"prefix\" as naming scheme, rather than cmpcol. A lack of\nprefix, previously indicated with a cmpcol value of 1, is now a prefix\nvalue of 0.\n- Adjusted README\n- Improved the efficiency of the insertion path in some cases where\nwe've previously compared the page's highkey.\n\nAs always, why we need this:\n\nCurrently, btrees are quite inefficient when they have many key\nattributes but low attribute cardinality in the prefix, e.g. an index\non (\"\", \"\", \"\", uuid). This is not just inefficient use of disk space\nwith the high repetition of duplicate prefix values in pages, but it\nis also a computational overhead when we're descending the tree in\ne.g. _bt_first() or btinsert(): The code we use to search a page\ncurrently compares the full key to the full searchkey, for a\ncomplexity of O(n_equal_attributes + 1) for every tuple on the page,\nfor O(log(page_ntups) * (n_equal_attributes + 1)) attribute compares\nevery page during descent.\n\nThis patch fixes that part of the computational complexity by applying\ndynamic prefix compression, thus reducing the average computational\ncomplexity in random index lookups to O(3 * (n_equal_attributes) +\nlog(page_ntups)) per page (assuming at least 4 non-hikey tuples on\neach page). 
In practice, this makes indexes with 3+ attributes and\nprefixes with low selectivity (such as the example above) much more\nviable computationally, as we have to spend much less effort on\ncomparing known index attributes during descent.\n\nNote that this does _not_ reuse prefix bounds across pages - it\nre-establishes the left- and right prefixes every page during the\nbinary search. See the README modified in the patch for specific\nimplementation details and considerations.\n\nThis patch synergizes with the highkey optimization used in [0]: When\ncombined, the number of attribute compare function calls could be\nfurther reduced to O(2 * (n_equal_atts) + log(page_ntups)), a\nreduction by n_equal_atts every page, which in certain wide index\ntypes could be over 25% of all attribute compare function calls on the\npage after dynamic prefix truncation. However, both are separately\nuseful and reduce the amount of work done on most pages.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n[0] https://www.postgresql.org/message-id/flat/CAEze2WijWhCQy_nZVP4Ye5Hwj=YW=3rqv+hbMJGcOHtrYQmyKw@mail.gmail.com", "msg_date": "Tue, 6 Aug 2024 23:41:53 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: btree: implement dynamic prefix truncation (was: Improving btree\n performance through specializing by key shape, take 2)" }, { "msg_contents": "On Tue, Aug 6, 2024 at 5:42 PM Matthias van de Meent\n<[email protected]> wrote:\n> Attached is version 16 now.\n\nI ran this with my old land registry benchmark, used for validating\nthe space utilization impact of nbtree deduplication (among other\nthings). This isn't obviously the best benchmark for this sort of\nthing, but I seem to recall that you'd used it yourself at some point.\nTo validate work in this area, likely including this patch. So I\ndecided to start there.\n\nTo be clear, this test involves bulk loading of an unlogged table (the\nland registry table). The following composite index is created on the\ntable before we insert any rows, so most of the cycles here are in\nindex maintenance including _bt_search descents:\n\nCREATE INDEX composite ON land2 USING btree (county COLLATE \"C\", city\nCOLLATE \"C\", locality COLLATE \"C\");\n\nI wasn't able to see much of an improvement with this patch applied.\nIt went from ~00:51.598 to ~00:51.053. That's a little disappointing,\ngiven that this is supposed to be a sympathetic case for the patch.\nCan you suggest something else? (Granted, I understand that this patch\nhas some complicated relationship with other patches of yours, which I\ndon't understand currently.)\n\nI'm a bit worried about side-effects for this assertion:\n\n@@ -485,7 +489,7 @@ _bt_check_unique(Relation rel, BTInsertState\ninsertstate, Relation heapRel,\n Assert(insertstate->bounds_valid);\n Assert(insertstate->low >= P_FIRSTDATAKEY(opaque));\n Assert(insertstate->low <= insertstate->stricthigh);\n- Assert(_bt_compare(rel, itup_key, page, offset) < 0);\n+ Assert(_bt_compare(rel, itup_key, page, offset, &sprefix) < 0);\n break;\n }\n\nMore generally, it's not entirely clear how the code in\n_bt_check_unique is supposed to work with the patch. Why should it be\nsafe to do what you're doing with the prefix there? It's not like\nwe're doing a binary search here -- it's more like a linear search.\n\n> - Move from 1-indexed AttributeNumber to 0-indexed ints for prefixes,\n> and use \"prefix\" as naming scheme, rather than cmpcol. 
A lack of\n> prefix, previously indicated with a cmpcol value of 1, is now a prefix\n> value of 0.\n\nFound a likely-related bug in the changes you made to amcheck, which I\nwas able to fix myself like so:\n\ndiff --git a/contrib/amcheck/verify_nbtree.c b/contrib/amcheck/verify_nbtree.c\nindex c7dc6725a..15be61777 100644\n--- a/contrib/amcheck/verify_nbtree.c\n+++ b/contrib/amcheck/verify_nbtree.c\n@@ -3187,7 +3187,7 @@ bt_rootdescend(BtreeCheckState *state, IndexTuple itup)\n insertstate.buf = lbuf;\n\n /* Get matching tuple on leaf page */\n- offnum = _bt_binsrch_insert(state->rel, &insertstate, 1);\n+ offnum = _bt_binsrch_insert(state->rel, &insertstate, 0);\n /* Compare first >= matching item on leaf page, if any */\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 13 Aug 2024 14:39:10 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: btree: implement dynamic prefix truncation (was: Improving btree\n performance through specializing by key shape, take 2)" } ]
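Since the patch under discussion is an attachment, here is a self-contained sketch of the search loop the thread describes, using plain int arrays in place of index tuples: the binary search remembers how many leading attributes the key shares with its current low and high fences, and each new comparison may skip the smaller of those two prefixes. Names and representation are illustrative only; the real code lives in nbtsearch.c and works on IndexTuples.

    #define MAX_ATTS 8

    /*
     * Return the first position in the sorted array "tups" whose tuple
     * compares >= key, skipping attribute compares that the low/high
     * fences prove must be equal (dynamic prefix truncation).
     */
    static int
    binsrch_with_prefix(const int *key, const int (*tups)[MAX_ATTS],
                        int ntups, int natts)
    {
        int low = 0;
        int high = ntups;
        int low_prefix = 0;     /* attrs equal to key at the low fence */
        int high_prefix = 0;    /* attrs equal to key at the high fence */

        while (low < high)
        {
            int mid = low + (high - low) / 2;
            /* every tuple between the fences shares the smaller prefix */
            int att = (low_prefix < high_prefix) ? low_prefix : high_prefix;
            int cmp = 0;

            for (; att < natts; att++)
            {
                cmp = (key[att] > tups[mid][att]) - (key[att] < tups[mid][att]);
                if (cmp != 0)
                    break;      /* att now counts the equal leading attrs */
            }

            if (cmp > 0)
            {
                low = mid + 1;
                low_prefix = att;   /* establish prefix on the low side */
            }
            else
            {
                high = mid;
                high_prefix = att;  /* ... or on the high side */
            }
        }
        return low;
    }

The correctness argument is the one from the patch's README changes: if both fences agree with the key on the first k attributes, lexicographic ordering forces every tuple between them to agree on those k attributes as well.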
[ { "msg_contents": "(now really to -hackers)\nHi,\n\nOver at [0] I'd implemented an optimization that allows us to skip\ncalling _bt_compare in _bt_moveright in many common cases. This patch,\nwhen stacked on top of the prefix truncation patch, improves INSERT\nperformance by an additional 2-9%pt, with an extreme case of 45% in\nthe worst-case index tests at [0].\n\nThe optimization is that we now recognize that our page split algorithm\nall but guarantees that the HIKEY matches this page's downlink's right\nseparator key bytewise, excluding the data stored in the\nIndexTupleData struct.\n\nBy caching the right separator index tuple in _bt_search, we can\ncompare the downlink's right separator and the HIKEY, and when they\nare equal (memcmp() == 0) we don't have to call _bt_compare - the\nHIKEY is known to be larger than the scan key, because our key is\nsmaller than the right separator, and thus transitively also smaller\nthan the HIKEY because it contains the same data. As _bt_compare can\ncall expensive user-provided functions, this can be a large\nperformance boon, especially when there are only a small number of\ncolumns getting compared on each page (e.g. index tuples of many 100s\nof bytes, or dynamic prefix truncation is enabled).\n\nBy adding this, the number of _bt_compare calls per _bt_search is\noften reduced by one per btree level.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\nPS. Best served with dynamic prefix truncation [1] and btree specialization [0].\n\n[0] https://www.postgresql.org/message-id/CAEze2WiqOONRQTUT1p_ZV19nyMA69UU2s0e2dp+jSBM=j8snuw@mail.gmail.com\n[1] https://www.postgresql.org/message-id/flat/CAEze2Wh-h20DmPSMXp4qHR0-ykh9=Z3ejX8MSsbikbOqaYe_OQ@mail.gmail.com", "msg_date": "Tue, 31 Oct 2023 23:08:04 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "btree: downlink right separator/HIKEY optimization" }, { "msg_contents": "On 01/11/2023 00:08, Matthias van de Meent wrote:\n> calling _bt_compare in _bt_moveright in many common cases. This patch,\n> when stacked on top of the prefix truncation patch, improves INSERT\n> performance by an additional 2-9%pt, with an extreme case of 45% in\n> the worst-case index tests at [0].\n> \n> The optimization is that we now recognize that our page split algorithm\n> all but guarantees that the HIKEY matches this page's downlink's right\n> separator key bytewise, excluding the data stored in the\n> IndexTupleData struct.\n\nGood observation.\n\n> By caching the right separator index tuple in _bt_search, we can\n> compare the downlink's right separator and the HIKEY, and when they\n> are equal (memcmp() == 0) we don't have to call _bt_compare - the\n> HIKEY is known to be larger than the scan key, because our key is\n> smaller than the right separator, and thus transitively also smaller\n> than the HIKEY because it contains the same data. As _bt_compare can\n> call expensive user-provided functions, this can be a large\n> performance boon, especially when there are only a small number of\n> columns getting compared on each page (e.g. index tuples of many 100s\n> of bytes, or dynamic prefix truncation is enabled).\n\nWhat would be the worst case scenario for this? One situation where the \nmemcmp() would not match is when there is a concurrent page split. I \nthink it's OK to pessimize that case. Are there any other situations? 
\nWhen the memcmp() matches, I think this is almost certainly not slower \nthan calling the datatype's comparison function.\n\n> \t\tif (offnum < PageGetMaxOffsetNumber(page))\n> \t\t{\n> \t\t\tItemId\trightsepitem = PageGetItemId(page, offnum + 1);\n> \t\t\tIndexTuple pagerightsep = (IndexTuple) PageGetItem(page, rightsepitem);\n> \t\t\tmemcpy(rsepbuf.data, pagerightsep, ItemIdGetLength(rightsepitem));\n> \t\t\trightsep = &rsepbuf.tuple;\n> \t\t}\n> \t\telse if (!P_RIGHTMOST(opaque))\n> \t\t{\n> \t\t\t/*\n> \t\t\t * The rightmost data tuple on inner page has P_HIKEY as its right\n> \t\t\t * separator.\n> \t\t\t */\n> \t\t\tItemId\trightsepitem = PageGetItemId(page, P_HIKEY);\n> \t\t\tIndexTuple pagerightsep = (IndexTuple) PageGetItem(page, rightsepitem);\n> \t\t\tmemcpy(rsepbuf.data, pagerightsep, ItemIdGetLength(rightsepitem));\n> \t\t\trightsep = &rsepbuf.tuple;\n> \t\t}\n\nThis could use a one-line comment above this, something like \"Remember \nthe right separator of the downlink we follow, to speed up the next \n_bt_moveright call\".\n\nShould there be an \"else rightsep = NULL;\" here? Is it possible that we \nfollow the non-rightmost downlink on a higher level and rightmost \ndownlink on next level? Concurrent page deletion?\n\nPlease update the comment above _bt_moveright to describe the new \nargument. Perhaps the text from README should go there, this feels like \na detail specific to _bt_search and _bt_moveright.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 5 Dec 2023 09:43:20 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: btree: downlink right separator/HIKEY optimization" }, { "msg_contents": "On Wed, 1 Nov 2023 at 03:38, Matthias van de Meent\n<[email protected]> wrote:\n>\n> (now really to -hackers)\n> Hi,\n>\n> Over at [0] I'd implemented an optimization that allows us to skip\n> calling _bt_compare in _bt_moveright in many common cases. This patch,\n> when stacked on top of the prefix truncation patch, improves INSERT\n> performance by an additional 2-9%pt, with an extreme case of 45% in\n> the worst-case index tests at [0].\n>\n> The optimization is that we now recognize that our page split algorithm\n> all but guarantees that the HIKEY matches this page's downlink's right\n> separator key bytewise, excluding the data stored in the\n> IndexTupleData struct.\n>\n> By caching the right separator index tuple in _bt_search, we can\n> compare the downlink's right separator and the HIKEY, and when they\n> are equal (memcmp() == 0) we don't have to call _bt_compare - the\n> HIKEY is known to be larger than the scan key, because our key is\n> smaller than the right separator, and thus transitively also smaller\n> than the HIKEY because it contains the same data. As _bt_compare can\n> call expensive user-provided functions, this can be a large\n> performance boon, especially when there are only a small number of\n> columns getting compared on each page (e.g. 
index tuples of many 100s\n> of bytes, or dynamic prefix truncation is enabled).\n>\n> By adding this, the number of _bt_compare calls per _bt_search is\n> often reduced by one per btree level.\n\nCFBot shows the following compilation error at [1]:\n[16:56:22.153] FAILED:\nsrc/backend/postgres_lib.a.p/access_nbtree_nbtsearch.c.obj\n[16:56:22.153] \"cl\" \"-Isrc\\backend\\postgres_lib.a.p\" \"-Isrc\\include\"\n\"-I..\\src\\include\" \"-Ic:\\openssl\\1.1\\include\"\n\"-I..\\src\\include\\port\\win32\" \"-I..\\src\\include\\port\\win32_msvc\"\n\"/MDd\" \"/FIpostgres_pch.h\" \"/Yupostgres_pch.h\"\n\"/Fpsrc\\backend\\postgres_lib.a.p\\postgres_pch.pch\" \"/nologo\"\n\"/showIncludes\" \"/utf-8\" \"/W2\" \"/Od\" \"/Zi\" \"/DWIN32\" \"/DWINDOWS\"\n\"/D__WINDOWS__\" \"/D__WIN32__\" \"/D_CRT_SECURE_NO_DEPRECATE\"\n\"/D_CRT_NONSTDC_NO_DEPRECATE\" \"/wd4018\" \"/wd4244\" \"/wd4273\" \"/wd4101\"\n\"/wd4102\" \"/wd4090\" \"/wd4267\" \"-DBUILDING_DLL\" \"/FS\"\n\"/FdC:\\cirrus\\build\\src\\backend\\postgres_lib.pdb\"\n/Fosrc/backend/postgres_lib.a.p/access_nbtree_nbtsearch.c.obj \"/c\"\n../src/backend/access/nbtree/nbtsearch.c\n[16:56:22.153] ../src/backend/access/nbtree/nbtsearch.c(112): error\nC2143: syntax error: missing ';' before 'type'\n[16:56:22.280] ../src/backend/access/nbtree/nbtsearch.c(112): warning\nC4091: ' ': ignored on left of 'int' when no variable is declared\n\n[1] - https://cirrus-ci.com/task/4634619035779072\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 6 Jan 2024 21:09:56 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: btree: downlink right separator/HIKEY optimization" }, { "msg_contents": "On Tue, 5 Dec 2023 at 08:43, Heikki Linnakangas <[email protected]> wrote:\n>\n> On 01/11/2023 00:08, Matthias van de Meent wrote:\n> > By caching the right separator index tuple in _bt_search, we can\n> > compare the downlink's right separator and the HIKEY, and when they\n> > are equal (memcmp() == 0) we don't have to call _bt_compare - the\n> > HIKEY is known to be larger than the scan key, because our key is\n> > smaller than the right separator, and thus transitively also smaller\n> > than the HIKEY because it contains the same data. As _bt_compare can\n> > call expensive user-provided functions, this can be a large\n> > performance boon, especially when there are only a small number of\n> > column getting compared on each page (e.g. index tuples of many 100s\n> > of bytes, or dynamic prefix truncation is enabled).\n>\n> What would be the worst case scenario for this? One situation where the\n> memcmp() would not match is when there is a concurrent page split. I\n> think it's OK to pessimize that case. Are there any other situations?\n\nThere is also concurrent page deletion which can cause downlinked\npages to get removed from the set of accessible pages, but that's\nquite rare, too: arguably even more rare than page splits.\n\n> When the memcmp() matches, I think this is almost certainly not slower\n> than calling the datatype's comparison function.\n>\n> > if (offnum < PageGetMaxOffsetNumber(page))\n> > [...]\n> > else if (!P_RIGHTMOST(opaque))\n> > [...]\n> > }\n>\n> This could use a one-line comment above this, something like \"Remember\n> the right separator of the downlink we follow, to speed up the next\n> _bt_moveright call\".\n\nDone.\n\n> Should there be an \"else rightsep = NULL;\" here? Is it possible that we\n> follow the non-rightmost downlink on a higher level and rightmost\n> downlink on next level? 
Concurrent page deletion?\n\nWhile possible, the worst this could do is be less efficient in those\nfringe cases: The cached right separator is a key that is known to\ncompare larger than the search key and thus always correct to use as\nan optimization for \"is this HIKEY larger than my search key\", as long\nas we don't clobber the data in that cache (which we don't).\nNull-ing the argument, while not incorrect, could be argued to be\nworse than useless here, as the only case where NULL may match an\nactual highkey is on the rightmost page, which we already special-case\nin _bt_moveright before hitting the 'compare the highkey' code.\nRemoval of the value would thus remove any chance of using the\noptimization after hitting the rightmost page in a layer below.\n\nI've added a comment to explain this in an empty else block in the\nattached version 2 of the patch.\n\n> Please update the comment above _bt_moveright to describe the new\n> argument. Perhaps the text from README should go there, this feels like\n> a detail specific to _bt_search and _bt_moveright.\n\nDone.\n\nThank you for the review.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Thu, 22 Feb 2024 14:34:00 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: btree: downlink right separator/HIKEY optimization" }, { "msg_contents": "On Sat, 6 Jan 2024 at 16:40, vignesh C <[email protected]> wrote:\n>\n> CFBot shows the following compilation error at [1]:\n> [16:56:22.153] FAILED:\n> src/backend/postgres_lib.a.p/access_nbtree_nbtsearch.c.obj\n> [...]\n> ../src/backend/access/nbtree/nbtsearch.c\n> [16:56:22.153] ../src/backend/access/nbtree/nbtsearch.c(112): error\n> C2143: syntax error: missing ';' before 'type'\n> [16:56:22.280] ../src/backend/access/nbtree/nbtsearch.c(112): warning\n> C4091: ' ': ignored on left of 'int' when no variable is declared\n\nI forgot to address this in the previous patch, so here's v3 which\nfixes the issue warning.\n\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Thu, 22 Feb 2024 16:42:40 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: btree: downlink right separator/HIKEY optimization" }, { "msg_contents": "On Thu, Feb 22, 2024 at 10:42 AM Matthias van de Meent\n<[email protected]> wrote:\n> I forgot to address this in the previous patch, so here's v3 which\n> fixes the issue warning.\n\nWhat benchmarking have you done here?\n\nHave you tried just reordering things in _bt_search() instead? If we\ndelay the check until after the binary search, then the result of the\nbinary search is usually proof enough that we cannot possibly need to\nmove right. That has the advantage of not requiring that we copy\nanything to the stack.\n\nAdmittedly, it's harder to make the \"binary search first\" approach\nwork on the leaf level, under the current code structure. But maybe\nthat doesn't matter very much. 
And even if it does matter, maybe we\nshould just move the call to _bt_binsrch() that currently takes place\nin _bt_first into _bt_search() itself -- so that _bt_binsrch() is\nstrictly under the control of _bt_search() (obviously not doable for\nnon-_bt_first callers, which need to call _bt_binsrch_insert instead).\nThis whole approach will have been made easier by the refactoring I\ndid late last year, in commit c9c0589fda.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 8 Mar 2024 14:14:04 -0500", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: btree: downlink right separator/HIKEY optimization" }, { "msg_contents": "On Fri, Mar 8, 2024 at 2:14 PM Peter Geoghegan <[email protected]> wrote:\n> What benchmarking have you done here?\n\nI think that the memcmp() test is subtly wrong:\n\n> + if (PointerIsValid(rightsep))\n> + {\n> + /*\n> + * Note: we're not in the rightmost page (see branchpoint earlier in\n> + * the loop), so we always have a HIKEY on this page.\n> + */\n> + ItemId hikeyid = PageGetItemId(page, P_HIKEY);\n> + IndexTuple highkey = (IndexTuple) PageGetItem(page, hikeyid);\n> +\n> + if (ItemIdGetLength(hikeyid) == IndexTupleSize(rightsep) &&\n> + memcmp(&highkey[1], &rightsep[1],\n> + IndexTupleSize(rightsep) - sizeof(IndexTupleData)) == 0)\n> + {\n> + break;\n> + }\n> + }\n\nUnlike amcheck's bt_pivot_tuple_identical, your memcmp() does not\ncompare relevant metadata fields from struct IndexTupleData. It\nwouldn't make sense for it to compare the block number, of course (if\nit did then the optimization would simply never work), but ISTM that\nyou still need to compare ItemPointerData.ip_posid.\n\nSuppose, for example, that you had two very similar pivot tuples from\na multicolumn index on (a int4, b int2) columns. The first/earlier\ntuple is (a,b) = \"(42, -inf)\", due to the influence of suffix\ntruncation. The second/later tuple is (a,b) = \"(42, 0)\", since suffix\ntruncation couldn't take place when the second pivot tuple was\ncreated. (Actually, suffix truncation would have been possible, but it\nwould have only managed to truncate-away the tiebreak heap TID\nattribute value in the case of our second tuple).\n\nThere'll be more alignment padding (more zero padding bytes) in the\nsecond tuple, compared to the first. But the tuples are still the same\nsize. When you go to you memcmp() this pair of tuples using the\napproach in v3, the bytes that are actually compared will be\nidentical, despite the fact that these are really two distinct tuples,\nwith distinct values. 
(As I said, you'd have to actually compare the\nItemPointerData.ip_posid metadata to notice this small difference.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 8 Mar 2024 15:11:01 -0500", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: btree: downlink right separator/HIKEY optimization" }, { "msg_contents": "On Fri, 8 Mar 2024 at 20:14, Peter Geoghegan <[email protected]> wrote:\n>\n> On Thu, Feb 22, 2024 at 10:42 AM Matthias van de Meent\n> <[email protected]> wrote:\n> > I forgot to address this in the previous patch, so here's v3 which\n> > fixes the issue warning.\n>\n> What benchmarking have you done here?\n\nI have benchmarked this atop various versions of master when it was\npart of the btree specialization patchset, where it showed a 2-9%\nincrease in btree insert performance over the previous patch in the\npatchset on the various index types in that set.\nMore recently, on an unlogged pgbench with foreign keys enabled (-s400\n-j4 -c8) I can't find any obvious regressions (it gains 0-0.7% on\nmaster across 5-minute runs), while being 4.5% faster on inserting\ndata on a table with an excessively bad index shape (single index of\n10 columns of empty strings with the non-default \"nl-BE-x-icu\"\ncollation followed by 1 random uuid column, inserted from a 10M row\ndataset. Extrapolation indicates this could indeed get over 7%\nimprovement when the index shape is 31 nondefault -collated nonnull\ntext columns and a single random ID index column).\n\n> Have you tried just reordering things in _bt_search() instead? If we\n> delay the check until after the binary search, then the result of the\n> binary search is usually proof enough that we cannot possibly need to\n> move right. That has the advantage of not requiring that we copy\n> anything to the stack.\n\nI've not tried that, because it would makes page-level prefix\ntruncation more expensive by ~50%: With this patch, we need only 2\nfull tuple _bt_compares per page before we can establish a prefix,\nwhile without this patch (i.e. if we did a binsrch-first approach)\nwe'd need 3 on average (assuming linearly randomly distributed\naccesses). Because full-key compares can be arbitrarily more expensive\nthan normal attribute compares, I'd rather not have that 50% overhead.\n\n> > On Fri, Mar 8, 2024 at 2:14 PM Peter Geoghegan <[email protected]> wrote:\n> > What benchmarking have you done here?\n> I think that the memcmp() test is subtly wrong:\n\nGood catch, it's been fixed in the attached version, using a new function.\n\nKind regards,\n\nMatthias van de Meent.", "msg_date": "Mon, 11 Mar 2024 19:35:02 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": true, "msg_subject": "Re: btree: downlink right separator/HIKEY optimization" } ]
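To summarize the outcome of this exchange in code form: the fixed comparison has to treat the pivot tuple's line pointer metadata (t_tid's offset number, which encodes the number of untruncated key attributes) as part of the comparison, while still ignoring the downlink block number. A sketch of such a helper follows; the name and exact structure are illustrative, not the committed code.

    #include "access/itup.h"
    #include "storage/itemptr.h"

    /*
     * Are two pivot tuples equivalent for the right-separator/HIKEY check?
     * Compares the key payload bytes and the ip_posid metadata (number of
     * key attributes after suffix truncation), but not the block number.
     */
    static inline bool
    pivot_tuples_match(IndexTuple a, IndexTuple b)
    {
        if (IndexTupleSize(a) != IndexTupleSize(b))
            return false;
        if (ItemPointerGetOffsetNumberNoCheck(&a->t_tid) !=
            ItemPointerGetOffsetNumberNoCheck(&b->t_tid))
            return false;       /* e.g. "(42, -inf)" vs. "(42, 0)" */
        return memcmp((char *) a + sizeof(IndexTupleData),
                      (char *) b + sizeof(IndexTupleData),
                      IndexTupleSize(a) - sizeof(IndexTupleData)) == 0;
    }

This closes the hole Peter describes: two same-sized pivot tuples whose payload bytes are identical can still represent distinct keys when only their truncated-attribute counts differ.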
[ { "msg_contents": "Looking at the Assert inside tts_virtual_copyslot(), it does:\n\nAssert(srcdesc->natts <= dstslot->tts_tupleDescriptor->natts);\n\nSo, that seems to indicate that it's ok for the src slot to have fewer\nattributes than the destination. The code then calls\ntts_virtual_clear(dstslot), then slot_getallattrs(srcslot); then does\nthe following loop:\n\nfor (int natt = 0; natt < srcdesc->natts; natt++)\n{\n dstslot->tts_values[natt] = srcslot->tts_values[natt];\n dstslot->tts_isnull[natt] = srcslot->tts_isnull[natt];\n}\n\nSeems ok so far. If the srcslot has fewer attributes then that'll\nleave the extra dstslot array elements untouched.\n\nWhere it gets weird is inside tts_virtual_materialize(). In that\nfunction, we materialize *all* of the dstslot attributes, even the\nextra ones that were left alone in the for loop shown above. Why do\nwe need to materialize all of those attributes? We only need to\nmaterialize up to srcslot->natts.\n\nPer the following code, only up to the srcdesc->natts would be\naccessible anyway:\n\ndstslot->tts_nvalid = srcdesc->natts;\n\nVirtual slots don't need any further deforming and\ntts_virtual_getsomeattrs() is coded in a way that we'll find out if\nanything tries to deform a virtual slot.\n\nI changed the Assert in tts_virtual_copyslot() to check the natts\nmatch in each of the slots and all of the regression tests still pass,\nso it seems we have no tests where there's an attribute number\nmismatch...\n\nI wondered if there are any other cases that try to handle mismatching\nattribute numbers. On a quick scan of git grep -E\n\"^\\s*Assert\\(.*natts.*\\);\" I don't see any other Asserts that allow\nmismatching attribute numbers.\n\nI think if we are going to support copying slots where the source and\ndestination don't have the same number of attributes then the\nfollowing comment should explain what's allowed and what's not\nallowed:\n\n/*\n* Copy the contents of the source slot into the destination slot's own\n* context. Invoked using callback of the destination slot.\n*/\nvoid (*copyslot) (TupleTableSlot *dstslot, TupleTableSlot *srcslot);\n\nI also tried adding the following to ExecCopySlot() to see if there is\nany other slot copying going on with other slot types where the natts\ndon't match. All tests pass still.\n\nAssert(srcslot->tts_tupleDescriptor->natts ==\ndstslot->tts_tupleDescriptor->natts);\n\nIs the Assert() in tts_virtual_copyslot() wrong?\n\nDavid\n\n\n", "msg_date": "Wed, 1 Nov 2023 11:35:50 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Something seems weird inside tts_virtual_copyslot()" }, { "msg_contents": "Hi,\n\nOn 2023-11-01 11:35:50 +1300, David Rowley wrote:\n> Looking at the Assert inside tts_virtual_copyslot(), it does:\n> \n> Assert(srcdesc->natts <= dstslot->tts_tupleDescriptor->natts);\n\nI think that assert was intended to be the other way round.\n\n\n> So, that seems to indicate that it's ok for the src slot to have fewer\n> attributes than the destination. The code then calls\n> tts_virtual_clear(dstslot), then slot_getallattrs(srcslot); then does\n> the following loop:\n\n> for (int natt = 0; natt < srcdesc->natts; natt++)\n> {\n> dstslot->tts_values[natt] = srcslot->tts_values[natt];\n> dstslot->tts_isnull[natt] = srcslot->tts_isnull[natt];\n> }\n> \n> Seems ok so far.\n>\n> If the srcslot has fewer attributes then that'll leave the extra dstslot\n> array elements untouched.\n\nIt is not ok even up to just here! 
Any access to dstslot->tts_{values,isnull}\nfor an attribute bigger than srcdesc->natts would now be stale, potentially\npointing to another attribute.\n\n\n> Where it gets weird is inside tts_virtual_materialize(). In that\n> function, we materialize *all* of the dstslot attributes, even the\n> extra ones that were left alone in the for loop shown above. Why do\n> we need to materialize all of those attributes? We only need to\n> materialize up to srcslot->natts.\n> \n> Per the following code, only up to the srcdesc->natts would be\n> accessible anyway:\n> \n> dstslot->tts_nvalid = srcdesc->natts;\n> \n> Virtual slots don't need any further deforming and\n> tts_virtual_getsomeattrs() is coded in a way that we'll find out if\n> anything tries to deform a virtual slot.\n> \n> I changed the Assert in tts_virtual_copyslot() to check the natts\n> match in each of the slots and all of the regression tests still pass,\n> so it seems we have no tests where there's an attribute number\n> mismatch...\n\nIf we want to prohibit that, I think we ought to assert this in\nExecCopySlot(), rather than just tts_virtual_copyslot.\n\nEven that does survive the test - but I don't think it'd be really wrong to\ncopy from a slot with more columns into one with fewer. And it seems plausible\nthat that could happen somewhere, e.g. when copying from a slot in a child\npartition with additional columns into a slot from the parent, where the\ncolumn types/order otherwise matches, so we don't have to use the attribute\nmapping infrastructure.\n\n\n> I think if we are going to support copying slots where the source and\n> destination don't have the same number of attributes then the\n> following comment should explain what's allowed and what's not\n> allowed:\n> \n> /*\n> * Copy the contents of the source slot into the destination slot's own\n> * context. Invoked using callback of the destination slot.\n> */\n> void (*copyslot) (TupleTableSlot *dstslot, TupleTableSlot *srcslot);\n\nArguably the more relevant place would be document this in ExecCopySlot(), as\nthat's what \"users\" of ExecCopySlot() would presumably would look at. I dug a\nbit in the history, and we used to say\n\nThe caller must ensure the slots have compatible tupdescs.\n\nwhatever that precisely means.\n\n\n> Is the Assert() in tts_virtual_copyslot() wrong?\n\nYes, it's inverted.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 3 Nov 2023 19:15:50 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Something seems weird inside tts_virtual_copyslot()" }, { "msg_contents": "On Sat, 4 Nov 2023 at 15:15, Andres Freund <[email protected]> wrote:\n>\n> On 2023-11-01 11:35:50 +1300, David Rowley wrote:\n> > I changed the Assert in tts_virtual_copyslot() to check the natts\n> > match in each of the slots and all of the regression tests still pass,\n> > so it seems we have no tests where there's an attribute number\n> > mismatch...\n>\n> If we want to prohibit that, I think we ought to assert this in\n> ExecCopySlot(), rather than just tts_virtual_copyslot.\n>\n> Even that does survive the test - but I don't think it'd be really wrong to\n> copy from a slot with more columns into one with fewer. And it seems plausible\n> that that could happen somewhere, e.g. 
when copying from a slot in a child\n> partition with additional columns into a slot from the parent, where the\n> column types/order otherwise matches, so we don't have to use the attribute\n> mapping infrastructure.\n\nDo you have any examples of when this could happen?\n\nI played around with partitioned tables and partitions with various\ncombinations of dropped columns and can't see any cases of this. Given\nthe assert's condition has been backwards for 5 years now\n(4da597edf1b), it just seems a bit unlikely that we have cases where\nthe source slot can have more attributes than the destination.\n\nGiven the Assert has been that way around for this long without any\ncomplaints, I think we should just insist that the natts must match in\neach slot. If we later discover some reason that there's some corner\ncase where they don't match, we can adjust the code then.\n\nI played around with the attached patch which removes the Assert and\nputs some additional Assert checks inside ExecCopySlot() which\nadditionally checks the attribute types also match. There are valid\ncases where they don't match and that seems to be limited to cases\nwhere we're performing DML on a table with a dropped column.\nexpand_insert_targetlist() will add NULL::int4 constants to the\ntargetlist in place of dropped columns but the tupledesc of the table\nwill have the atttypid set to InvalidOid per what that gets set to\nwhen a column is dropped in RemoveAttributeById().\n\n> > I think if we are going to support copying slots where the source and\n> > destination don't have the same number of attributes then the\n> > following comment should explain what's allowed and what's not\n> > allowed:\n> >\n> > /*\n> > * Copy the contents of the source slot into the destination slot's own\n> > * context. Invoked using callback of the destination slot.\n> > */\n> > void (*copyslot) (TupleTableSlot *dstslot, TupleTableSlot *srcslot);\n>\n> Arguably the more relevant place would be document this in ExecCopySlot(), as\n> that's what \"users\" of ExecCopySlot() would presumably would look at. I dug a\n> bit in the history, and we used to say\n\nI think it depends on what you're documenting. Writing comments above\nthe copyslot API function declaration is useful to define the API\nstandard for what new implementations of that interface must abide by.\nComments in ExecCopySlot() are useful to users of that function. It\nseems to me that both locations are relevant. 
New implementations of\ncopyslot need to know what they must handle, else they're left just to\nlook at what other implementations do and guess the rest.\n\nDavid", "msg_date": "Mon, 6 Nov 2023 11:16:26 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Something seems weird inside tts_virtual_copyslot()" }, { "msg_contents": "Hi,\n\nOn 2023-11-06 11:16:26 +1300, David Rowley wrote:\n> On Sat, 4 Nov 2023 at 15:15, Andres Freund <[email protected]> wrote:\n> >\n> > On 2023-11-01 11:35:50 +1300, David Rowley wrote:\n> > > I changed the Assert in tts_virtual_copyslot() to check the natts\n> > > match in each of the slots and all of the regression tests still pass,\n> > > so it seems we have no tests where there's an attribute number\n> > > mismatch...\n> >\n> > If we want to prohibit that, I think we ought to assert this in\n> > ExecCopySlot(), rather than just tts_virtual_copyslot.\n> >\n> > Even that does survive the test - but I don't think it'd be really wrong to\n> > copy from a slot with more columns into one with fewer. And it seems plausible\n> > that that could happen somewhere, e.g. when copying from a slot in a child\n> > partition with additional columns into a slot from the parent, where the\n> > column types/order otherwise matches, so we don't have to use the attribute\n> > mapping infrastructure.\n> \n> Do you have any examples of when this could happen?\n\n> I played around with partitioned tables and partitions with various\n> combinations of dropped columns and can't see any cases of this. Given\n> the assert's condition has been backwards for 5 years now\n> (4da597edf1b), it just seems a bit unlikely that we have cases where\n> the source slot can have more attributes than the destination.\n\nI think my concerns might be unfounded - I was worried about stuff like\nattribute mapping deciding that it's safe to copy without an attribute\nmapping, because all the types match. But it looks like we do check that the\nattnums match as well. There's similar code in a bunch of other places,\ne.g. ExecEvalWholeRowVar(), but that also verifies ->natts matches.\n\nSo I think adding an assert to ExecCopySlot(), perhaps with a comment saying\nthat the restriction could be lifted with a bit of work, would be fine.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 30 Nov 2023 16:14:27 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Something seems weird inside tts_virtual_copyslot()" }, { "msg_contents": "On Fri, 1 Dec 2023 at 13:14, Andres Freund <[email protected]> wrote:\n> So I think adding an assert to ExecCopySlot(), perhaps with a comment saying\n> that the restriction could be lifted with a bit of work, would be fine.\n\nThanks for looking at this again.\n\nHow about the attached? 
I wrote the comment you mentioned and also\nremoved the Assert from tts_virtual_copyslot().\n\nI also noted in the copyslot callback declaration that implementers\ncan assume the number of attributes in the source and destination\nslots match.\n\nDavid", "msg_date": "Fri, 1 Dec 2023 14:30:41 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Something seems weird inside tts_virtual_copyslot()" }, { "msg_contents": "On Fri, 1 Dec 2023 at 14:30, David Rowley <[email protected]> wrote:\n>\n> On Fri, 1 Dec 2023 at 13:14, Andres Freund <[email protected]> wrote:\n> > So I think adding an assert to ExecCopySlot(), perhaps with a comment saying\n> > that the restriction could be lifted with a bit of work, would be fine.\n>\n> How about the attached? I wrote the comment you mentioned and also\n> removed the Assert from tts_virtual_copyslot().\n\nI looked over this again and didn't see any issues, so pushed the patch.\n\nThanks for helping with this.\n\nDavid\n\n\n", "msg_date": "Thu, 7 Dec 2023 21:29:41 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Something seems weird inside tts_virtual_copyslot()" } ]
[ { "msg_contents": "Hi hackers,\n\nI hope this email finds you well.\n\nI noticed that the CREATE/ALTER TABLE document does not mention that\nEXCLUDE can accept a collation. I created a documentation fix for this\nissue, and I have attached it to this email.\n\nPlease let me know if you have any questions or concerns.\n\nThanks,\nShihao", "msg_date": "Tue, 31 Oct 2023 20:13:39 -0400", "msg_from": "shihao zhong <[email protected]>", "msg_from_op": true, "msg_subject": "EXCLUDE COLLATE in CREATE/ALTER TABLE document" }, { "msg_contents": "shihao zhong <[email protected]> writes:\n> I noticed that the CREATE/ALTER TABLE document does not mention that\n> EXCLUDE can accept a collation. I created a documentation fix for this\n> issue, and I have attached it to this email.\n\nHmm ... is this actually correct? I think that the collate\noption has to come before the opclass name etc, so you'd need\nto shove it into exclude_element to provide an accurate\ndescription of the syntax.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 31 Oct 2023 21:07:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXCLUDE COLLATE in CREATE/ALTER TABLE document" }, { "msg_contents": "On Tue, Oct 31, 2023 at 9:07 PM Tom Lane <[email protected]> wrote:\n\n> shihao zhong <[email protected]> writes:\n> > I noticed that the CREATE/ALTER TABLE document does not mention that\n> > EXCLUDE can accept a collation. I created a documentation fix for this\n> > issue, and I have attached it to this email.\n>\n> > Hmm ... is this actually correct? I think that the collate\n> > option has to come before the opclass name etc, so you'd need\n> > to shove it into exclude_element to provide an accurate\n> > description of the syntax.\n> >\n> > regards, tom lane\n>\nHi Tom,\nThank you for your feedback on my previous patch. I have fixed the issue\nand attached a new patch for your review. Could you please take a look for\nit if you have a sec? Thanks\n\nAlso, if I understand correctly, the changes to sql_help.c will be made by\nthe committer, so I do not need to run create_help.pl in my patch. Can you\nplease confirm?\n\nI appreciate your help and time.\n\nThanks,\nShihao", "msg_date": "Tue, 31 Oct 2023 22:30:33 -0400", "msg_from": "shihao zhong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: EXCLUDE COLLATE in CREATE/ALTER TABLE document" }, { "msg_contents": "On Wed, Nov 1, 2023 at 10:30 AM shihao zhong <[email protected]> wrote:\n>\n> Thank you for your feedback on my previous patch. I have fixed the issue and attached a new patch for your review. Could you please take a look for it if you have a sec? Thanks\n>\n\nYour patch works fine. 
you can see it here:\nhttps://cirrus-ci.com/task/6481922939944960\nin an ideal world, since the doc is already built, we can probably\nview it as a plain html file just click the ci test result.\n\nin src/sgml/ref/create_table.sgml:\n\"Each exclude_element can optionally specify an operator class and/or\nordering options; these are described fully under CREATE INDEX.\"\n\nYou may need to update this sentence to reflect that exclude_element\ncan also optionally specify collation.\n\n\n", "msg_date": "Fri, 10 Nov 2023 22:59:27 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXCLUDE COLLATE in CREATE/ALTER TABLE document" }, { "msg_contents": "Hi Jian,\n\nThanks for your comments, a new version is attached.\n\nThanks,\nShihao\n\nOn Fri, Nov 10, 2023 at 9:59 AM jian he <[email protected]> wrote:\n\n> On Wed, Nov 1, 2023 at 10:30 AM shihao zhong <[email protected]>\n> wrote:\n> >\n> > Thank you for your feedback on my previous patch. I have fixed the issue\n> and attached a new patch for your review. Could you please take a look for\n> it if you have a sec? Thanks\n> >\n>\n> Your patch works fine. you can see it here:\n> https://cirrus-ci.com/task/6481922939944960\n> in an ideal world, since the doc is already built, we can probably\n> view it as a plain html file just click the ci test result.\n>\n> in src/sgml/ref/create_table.sgml:\n> \"Each exclude_element can optionally specify an operator class and/or\n> ordering options; these are described fully under CREATE INDEX.\"\n>\n> You may need to update this sentence to reflect that exclude_element\n> can also optionally specify collation.\n>", "msg_date": "Thu, 16 Nov 2023 18:25:33 -0500", "msg_from": "shihao zhong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: EXCLUDE COLLATE in CREATE/ALTER TABLE document" }, { "msg_contents": "On Fri, Nov 17, 2023 at 4:55 AM shihao zhong <[email protected]> wrote:\n>\n> Hi Jian,\n>\n> Thanks for your comments, a new version is attached.\n>\n> Thanks,\n> Shihao\n>\n> On Fri, Nov 10, 2023 at 9:59 AM jian he <[email protected]> wrote:\n>>\n>> On Wed, Nov 1, 2023 at 10:30 AM shihao zhong <[email protected]> wrote:\n>> >\n>> > Thank you for your feedback on my previous patch. I have fixed the issue and attached a new patch for your review. Could you please take a look for it if you have a sec? Thanks\n>> >\n>>\n>> Your patch works fine. 
you can see it here:\n>> https://cirrus-ci.com/task/6481922939944960\n>> in an ideal world, since the doc is already built, we can probably\n>> view it as a plain html file just click the ci test result.\n>>\n>> in src/sgml/ref/create_table.sgml:\n>> \"Each exclude_element can optionally specify an operator class and/or\n>> ordering options; these are described fully under CREATE INDEX.\"\n>>\n>> You may need to update this sentence to reflect that exclude_element\n>> can also optionally specify collation.\n\nI have reviewed the changes and they look fine.\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Wed, 29 Nov 2023 10:19:34 +0530", "msg_from": "Shubham Khanna <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXCLUDE COLLATE in CREATE/ALTER TABLE document" }, { "msg_contents": "shihao zhong wrote:\n\n> Thanks for your comments, a new version is attached.\n\nIn this hunk:\n\n@@ -1097,8 +1097,8 @@ WITH ( MODULUS <replaceable\nclass=\"parameter\">numeric_literal</replaceable>, REM\n method <replaceable>index_method</replaceable>.\n The operators are required to be commutative.\n Each <replaceable class=\"parameter\">exclude_element</replaceable>\n- can optionally specify an operator class and/or ordering options;\n- these are described fully under\n+ can optionally specify any of the following: a collation, a\n+ operator class, or ordering options; these are described fully under\n <xref linkend=\"sql-createindex\"/>.\n </para>\n \n\"a\" should be \"an\" as it's followed by \"operator class\".\n\nAlso the use of \"and/or\" in the previous version conveys the fact\nthat operator class and ordering options are not mutually\nexclusive. But when using \"any of the following\" in the new text,\ndoesn't it lose that meaning?\n\nIn case it does, I would suggest the attached diff.\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite", "msg_date": "Fri, 01 Dec 2023 15:59:22 +0100", "msg_from": "\"Daniel Verite\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXCLUDE COLLATE in CREATE/ALTER TABLE document" }, { "msg_contents": "\"Daniel Verite\" <[email protected]> writes:\n> Also the use of \"and/or\" in the previous version conveys the fact\n> that operator class and ordering options are not mutually\n> exclusive. But when using \"any of the following\" in the new text,\n> doesn't it lose that meaning?\n\nYeah; and/or is perfectly fine here and doesn't need to be improved\non.\n\nThere's a bigger problem though, which is that these bits\nare *also* missing any reference to opclass parameters.\nI fixed that and pushed it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Apr 2024 15:38:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXCLUDE COLLATE in CREATE/ALTER TABLE document" } ]
[ { "msg_contents": "I didn't see any recent mentions in the archives, so I'll volunteer to\nbe CF manager for 2023-11.\n\n\n", "msg_date": "Wed, 1 Nov 2023 13:33:26 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": true, "msg_subject": "Commitfest manager November 2023" }, { "msg_contents": "Hi,\n\n> I didn't see any recent mentions in the archives, so I'll volunteer to\n> be CF manager for 2023-11.\n\nMany thanks for volunteering! If you need any help please let me know.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 1 Nov 2023 14:58:56 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest manager November 2023" }, { "msg_contents": "On Wed, Nov 1, 2023 at 7:59 AM Aleksander Alekseev <[email protected]>\nwrote:\n\n> Hi,\n>\n> > I didn't see any recent mentions in the archives, so I'll volunteer to\n> > be CF manager for 2023-11.\n>\n> I would love to help with that if you need.\n>\n> --\n> Thanks,\n>\n Shihao\n\nOn Wed, Nov 1, 2023 at 7:59 AM Aleksander Alekseev <[email protected]> wrote:Hi,\n\n> I didn't see any recent mentions in the archives, so I'll volunteer to\n> be CF manager for 2023-11.\n\nI would love to help with that if you need.\n\n-- Thanks,   Shihao", "msg_date": "Wed, 1 Nov 2023 11:28:36 -0400", "msg_from": "shihao zhong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest manager November 2023" }, { "msg_contents": "On Wed, Nov 01, 2023 at 01:33:26PM +0700, John Naylor wrote:\n> I didn't see any recent mentions in the archives, so I'll volunteer to\n> be CF manager for 2023-11.\n\nThanks, John!\n--\nMichael", "msg_date": "Thu, 2 Nov 2023 07:54:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest manager November 2023" }, { "msg_contents": "> On 1 Nov 2023, at 07:33, John Naylor <[email protected]> wrote:\n> \n> I didn't see any recent mentions in the archives, so I'll volunteer to\n> be CF manager for 2023-11.\n\nYou probably need some extra admin privileges on your account for accessing the\nCFM functionality, in the meantime I've switched the 202311 CF to InProgress\nand marked 202401 as Open.\n\nThanks for volunteering!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 2 Nov 2023 10:35:08 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest manager November 2023" }, { "msg_contents": "On Thu, Nov 2, 2023 at 4:35 PM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 1 Nov 2023, at 07:33, John Naylor <[email protected]> wrote:\n> >\n> > I didn't see any recent mentions in the archives, so I'll volunteer to\n> > be CF manager for 2023-11.\n>\n> You probably need some extra admin privileges on your account for accessing the\n> CFM functionality, in the meantime I've switched the 202311 CF to InProgress\n> and marked 202401 as Open.\n\nThanks for taking care of that!\n\n(Per the wiki, I requested admin privs on pgsql-www, so still awaiting that)\n\n\n", "msg_date": "Thu, 2 Nov 2023 16:41:04 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Commitfest manager November 2023" } ]
[ { "msg_contents": "\nHi hackers,\n\nI try to run regression test on illumos, the 010_tab_completion will\nfailed because of timeout.\n\nHere is my build commands and logs:\n\n $ ../configure --enable-cassert --enable-debug --enable-nls --with-perl \\\n --with-python --with-tcl --with-openssl --with-libxml --with-libxslt \\\n --without-icu --enable-tap-tests --prefix=/home/japin/postgres/build/pg\n $ make -j $(nproc)\n ...\n $ cd src/bin/psql/ && make check\n make -C ../../../src/backend generated-headers\n make[1]: Entering directory '/home/japin/postgres/build/src/backend'\n make -C catalog distprep generated-header-symlinks\n make[2]: Entering directory '/home/japin/postgres/build/src/backend/catalog'\n make[2]: Nothing to be done for 'distprep'.\n make[2]: Nothing to be done for 'generated-header-symlinks'.\n make[2]: Leaving directory '/home/japin/postgres/build/src/backend/catalog'\n make -C nodes distprep generated-header-symlinks\n make[2]: Entering directory '/home/japin/postgres/build/src/backend/nodes'\n make[2]: Nothing to be done for 'distprep'.\n make[2]: Nothing to be done for 'generated-header-symlinks'.\n make[2]: Leaving directory '/home/japin/postgres/build/src/backend/nodes'\n make -C utils distprep generated-header-symlinks\n make[2]: Entering directory '/home/japin/postgres/build/src/backend/utils'\n make[2]: Nothing to be done for 'distprep'.\n make[2]: Nothing to be done for 'generated-header-symlinks'.\n make[2]: Leaving directory '/home/japin/postgres/build/src/backend/utils'\n make[1]: Leaving directory '/home/japin/postgres/build/src/backend'\n rm -rf '/home/japin/postgres/build'/tmp_install\n /opt/local/bin/mkdir -p '/home/japin/postgres/build'/tmp_install/log\n make -C '../../..' DESTDIR='/home/japin/postgres/build'/tmp_install install >'/home/japin/postgres/build'/tmp_install/log/install.log 2>&1\n make -j1 checkprep >>'/home/japin/postgres/build'/tmp_install/log/install.log 2>&1\n PATH=\"/home/japin/postgres/build/tmp_install/home/japin/postgres/build/pg/bin:/home/japin/postgres/build/src/bin/psql:$PATH\" LD_LIBRARY_PATH=\"/home/japin/postgres/build/tmp_install/home/japin/postgres/build/pg/lib\" INITDB_TEMPLATE='/home/japin/postgres/build'/tmp_install/initdb-template initdb -A trust -N --no-instructions --no-locale '/home/japin/postgres/build'/tmp_install/initdb-template >>'/home/japin/postgres/build'/tmp_install/log/initdb-template.log 2>&1\n echo \"# +++ tap check in src/bin/psql +++\" && rm -rf '/home/japin/postgres/build/src/bin/psql'/tmp_check && /opt/local/bin/mkdir -p '/home/japin/postgres/build/src/bin/psql'/tmp_check && cd /home/japin/postgres/build/../src/bin/psql && TESTLOGDIR='/home/japin/postgres/build/src/bin/psql/tmp_check/log' TESTDATADIR='/home/japin/postgres/build/src/bin/psql/tmp_check' PATH=\"/home/japin/postgres/build/tmp_install/home/japin/postgres/build/pg/bin:/home/japin/postgres/build/src/bin/psql:$PATH\" LD_LIBRARY_PATH=\"/home/japin/postgres/build/tmp_install/home/japin/postgres/build/pg/lib\" INITDB_TEMPLATE='/home/japin/postgres/build'/tmp_install/initdb-template PGPORT='65432' top_builddir='/home/japin/postgres/build/src/bin/psql/../../..' PG_REGRESS='/home/japin/postgres/build/src/bin/psql/../../../src/test/regress/pg_regress' /opt/local/bin/prove -I /home/japin/postgres/build/../src/test/perl/ -I /home/japin/postgres/build/../src/bin/psql t/*.pl\n # +++ tap check in src/bin/psql +++\n t/001_basic.pl ........... ok\n t/010_tab_completion.pl .. 
Dubious, test returned 25 (wstat 6400, 0x1900)\n No subtests run\n t/020_cancel.pl .......... ok\n \n Test Summary Report\n -------------------\n t/010_tab_completion.pl (Wstat: 6400 Tests: 0 Failed: 0)\n Non-zero exit status: 25\n Parse errors: No plan found in TAP output\n \n $ cat tmp_check/log/regress_log_010_tab_completion\n # Checking port 59378\n # Found port 59378\n Name: main\n Data directory: /home/japin/postgres/build/src/bin/psql/tmp_check/t_010_tab_completion_main_data/pgdata\n Backup directory: /home/japin/postgres/build/src/bin/psql/tmp_check/t_010_tab_completion_main_data/backup\n Archive directory: /home/japin/postgres/build/src/bin/psql/tmp_check/t_010_tab_completion_main_data/archives\n Connection string: port=59378 host=/tmp/2tdG0Ck7Zb\n Log file: /home/japin/postgres/build/src/bin/psql/tmp_check/log/010_tab_completion_main.log\n [07:06:06.492](0.050s) # initializing database system by copying initdb template\n # Running: cp -RPp /home/japin/postgres/build/tmp_install/initdb-template /home/japin/postgres/build/src/bin/psql/tmp_check/t_010_tab_completion_main_data/pgdata\n # Running: /home/japin/postgres/build/src/bin/psql/../../../src/test/regress/pg_regress --config-auth /home/japin/postgres/build/src/bin/psql/tmp_check/t_010_tab_completion_main_data/pgdata\n ### Starting node \"main\"\n # Running: pg_ctl -w -D /home/japin/postgres/build/src/bin/psql/tmp_check/t_010_tab_completion_main_data/pgdata -l /home/japin/postgres/build/src/bin/psql/tmp_check/log/010_tab_completion_main.log -o --cluster-name=main start\n waiting for server to start.... done\n server started\n # Postmaster PID for node \"main\" is 219980\n #### Begin standard error\n psql:<stdin>:6: WARNING: wal_level is insufficient to publish logical changes\n HINT: Set wal_level to \"logical\" before creating subscriptions.\n #### End standard error\n IPC::Run: timeout on timer #1 at /opt/local/lib/perl5/vendor_perl/5.34.0/IPC/Run.pm line 2951. <-- HERE\n # Postmaster PID for node \"main\" is 219980\n ### Stopping node \"main\" using mode immediate\n # Running: pg_ctl -D /home/japin/postgres/build/src/bin/psql/tmp_check/t_010_tab_completion_main_data/pgdata -m immediate stop\n waiting for server to shut down.... done\n server stopped\n # No postmaster PID for node \"main\"\n\n $ uname -a\n SunOS db_build 5.11 xxxxx i86pc i386 i86pc illumos\n $ perl --version\n This is perl 5, version 34, subversion 0 (v5.34.0) built for x86_64-solaris-thread-multi-64\n\nI try to change PG_TEST_TIMEOUT_DEFAULT to 600, it also failed with timeout.\n\nAny suggestions? Thanks in advance!\n\n-- \nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n", "msg_date": "Wed, 01 Nov 2023 15:19:39 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Tab completion regression test failed on illumos" }, { "msg_contents": "Hi,\n\n> I try to run regression test on illumos, the 010_tab_completion will\n> failed because of timeout.\n>\n> Here is my build commands and logs:\n>\n> [...]\n>\n> Any suggestions? Thanks in advance!\n\nIt's hard to say what went wrong with this output due to lack of\nbacktrace. I would suggest adding debug output to\n010_tab_completion.pl to figure out on which line the script fails.\nThen I would figure out the exact command that failed. 
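(Untested sketch of the kind of thing I mean, assuming the file's usual\ncheck_completion(input, pattern, annotation) helper and Test::More's note():\n\nnote(\"step: complete SEL<tab>\");\ncheck_completion(\"SEL\\t\", qr/SELECT /, \"complete SEL<tab> to SELECT\");\n\nso the regress log records the last step reached before the IPC::Run timeout\nfires.)\n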
Then I would\nexecute it manually and compare the result with my expectations.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 1 Nov 2023 15:10:48 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tab completion regression test failed on illumos" }, { "msg_contents": "Japin Li <[email protected]> writes:\n> I try to run regression test on illumos, the 010_tab_completion will\n> failed because of timeout.\n\nWhy are you getting this?\n\n> #### Begin standard error\n> psql:<stdin>:6: WARNING: wal_level is insufficient to publish logical changes\n> HINT: Set wal_level to \"logical\" before creating subscriptions.\n> #### End standard error\n\nNot sure, but perhaps that unexpected output is confusing the test.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Nov 2023 10:56:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tab completion regression test failed on illumos" }, { "msg_contents": "On Wed, Nov 01, 2023 at 03:19:39PM +0800, Japin Li wrote:\n> I try to run regression test on illumos, the 010_tab_completion will\n> failed because of timeout.\n\n> Any suggestions? Thanks in advance!\n\nThis test failed for me, in a different way, when I briefly installed IO::Pty\non a Solaris buildfarm member:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2023-01-03%2022%3A39%3A26\n\nThe IPC::Run pty tests also fail on Solaris. If I were debugging this, I'd\nstart by fixing IO::Pty and IPC::Run to pass their own test suites on Solaris\nor illumos. Then I'd see if problems continue for this postgresql test.\n\n\n", "msg_date": "Wed, 1 Nov 2023 22:01:22 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tab completion regression test failed on illumos" }, { "msg_contents": "\nOn Thu, 02 Nov 2023 at 13:01, Noah Misch <[email protected]> wrote:\n> On Wed, Nov 01, 2023 at 03:19:39PM +0800, Japin Li wrote:\n>> I try to run regression test on illumos, the 010_tab_completion will\n>> failed because of timeout.\n>\n>> Any suggestions? Thanks in advance!\n>\n> This test failed for me, in a different way, when I briefly installed IO::Pty\n> on a Solaris buildfarm member:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2023-01-03%2022%3A39%3A26\n>\nThanks for confirm this!\n\nI try to install IO::Pty using cpan, however, I cannot get the same error.\n\n> The IPC::Run pty tests also fail on Solaris. If I were debugging this, I'd\n> start by fixing IO::Pty and IPC::Run to pass their own test suites on Solaris\n> or illumos. Then I'd see if problems continue for this postgresql test.\n\nSo, it might be a bug comes from Perl.\n\n-- \nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n", "msg_date": "Thu, 02 Nov 2023 13:42:56 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tab completion regression test failed on illumos" }, { "msg_contents": "\nOn Thu, 02 Nov 2023 at 13:42, Japin Li <[email protected]> wrote:\n> On Thu, 02 Nov 2023 at 13:01, Noah Misch <[email protected]> wrote:\n>> On Wed, Nov 01, 2023 at 03:19:39PM +0800, Japin Li wrote:\n>>> I try to run regression test on illumos, the 010_tab_completion will\n>>> failed because of timeout.\n>>\n>>> Any suggestions? 
Thanks in advance!\n>>\n>> This test failed for me, in a different way, when I briefly installed IO::Pty\n>> on a Solaris buildfarm member:\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2023-01-03%2022%3A39%3A26\n>>\n> Thanks for confirm this!\n>\n> I try to install IO::Pty using cpan, however, I cannot get the same error.\n>\n>> The IPC::Run pty tests also fail on Solaris. If I were debugging this, I'd\n>> start by fixing IO::Pty and IPC::Run to pass their own test suites on Solaris\n>> or illumos. Then I'd see if problems continue for this postgresql test.\n>\n> So, it might be a bug comes from Perl.\n\nAfter enable debug for IPC::Run, I found the following logs:\n\nIPC::Run 0001 0123456789012-4 [#2(415745)]: writing to fd 11 (kid's stdin)\nIPC::Run 0001 0123456789012-4 [#2(415745)]: write( 11, 'SEL ' ) = 4 <- Here write 4 bytes.\nIPC::Run 0001 0123456789012-4 [#2(415745)]: fds for select: -----------r--r\nIPC::Run 0001 0123456789012-4 [#2(415745)]: timeout=0\nIPC::Run 0001 0123456789012-4 [#2(415745)]: selected -----------r\nIPC::Run 0001 0123456789012-4 [#2(415745)]: filtering data from fd 11 (kid's stdout)\nIPC::Run 0001 0123456789012-4 [#2(415745)]: reading from fd 11 (kid's stdout)\nIPC::Run 0001 0123456789012-4 [#2(415745)]: read( 11 ) = 8 chars 'SEL ' <- But read 8 bytes.\n\nIt seems the 'SEL\\t' is converted to 'SEL ' which is \"SEL\" with 5 spaces.\n\n-- \nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n", "msg_date": "Thu, 02 Nov 2023 15:10:31 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tab completion regression test failed on illumos" }, { "msg_contents": "Japin Li <[email protected]> writes:\n> It seems the 'SEL\\t' is converted to 'SEL ' which is \"SEL\" with 5 spaces.\n\nThat would be plausible if readline were disabled, or otherwise\nnot functioning.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Nov 2023 10:23:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tab completion regression test failed on illumos" }, { "msg_contents": "\nOn Thu, 02 Nov 2023 at 22:23, Tom Lane <[email protected]> wrote:\n> Japin Li <[email protected]> writes:\n>> It seems the 'SEL\\t' is converted to 'SEL ' which is \"SEL\" with 5 spaces.\n>\n> That would be plausible if readline were disabled, or otherwise\n> not functioning.\n>\n\nI think this might be a bug comes from Illumos pseudo-tty. 
I can reproduce\nthis by using pseudo-tty on Illumos.\n\nHere is a simple test case:\n\n$ cat pseudo-tty.c\n#define _XOPEN_SOURCE 600\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n#include <fcntl.h>\n#include <sys/types.h>\n#include <sys/wait.h>\n\n#define DEV_PTMX \"/dev/ptmx\"\n\nint\nmain(void)\n{\n int ptm_fd;\n pid_t pid;\n char *pts_name;\n\n ptm_fd = open(DEV_PTMX, O_RDWR);\n grantpt(ptm_fd);\n unlockpt(ptm_fd);\n pts_name = ptsname(ptm_fd);\n\n pid = fork();\n if (pid == -1) {\n fprintf(stderr, \"could not fork a new process: %m\\n\");\n close(ptm_fd);\n return -1;\n } else if (pid == 0) {\n int pts_fd;\n\n close(ptm_fd);\n pts_fd = open(pts_name, O_RDWR);\n write(pts_fd, \"SEL\\tH\", 5);\n close(pts_fd);\n } else {\n int status;\n char buffer[512] = { 0 };\n ssize_t bytes;\n\n bytes = read(ptm_fd, buffer, sizeof(buffer));\n printf(\"%ld: '%s'\\n\", bytes, buffer);\n waitpid(pid, &status, 0);\n close(ptm_fd);\n }\n\n return 0;\n}\n\nOn IllumsOS\n$ gcc -o pseudo-tty pseudo-tty.c\n$ ./pseudo-tty\n9: 'SEL H'\n\nOn Ubuntu\n$ gcc -o pseudo-tty pseudo-tty.c\n$ ./pseudo-tty\n5: 'SEL\tH'\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n", "msg_date": "Thu, 02 Nov 2023 22:42:13 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tab completion regression test failed on illumos" }, { "msg_contents": "On Fri, Nov 3, 2023 at 3:42 AM Japin Li <[email protected]> wrote:\n> On Thu, 02 Nov 2023 at 22:23, Tom Lane <[email protected]> wrote:\n> > Japin Li <[email protected]> writes:\n> >> It seems the 'SEL\\t' is converted to 'SEL ' which is \"SEL\" with 5 spaces.\n> >\n> > That would be plausible if readline were disabled, or otherwise\n> > not functioning.\n> >\n>\n> I think this might be a bug comes from Illumos pseudo-tty. I can reproduce\n> this by using pseudo-tty on Illumos.\n\nI don't know but my guess is that this has to do with termios defaults\nbeing different. From a quick look at 'man termios', perhaps TABDLY\nis set to expand tabs to spaces? Can you fix it by tweaking the flags\nin src/common/sprompt.c? Somewhere near the line that disables ECHO,\nperhaps you can figure out how to disable that in c_oflag? This is\nall the ancient forgotten magic that allows all Unixes to drive 70\nyear old electric typewriters, inserting suitable pauses and\nconverting various things as it goes.\n\n\n", "msg_date": "Fri, 3 Nov 2023 14:22:12 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tab completion regression test failed on illumos" }, { "msg_contents": "On Fri, Nov 3, 2023 at 2:22 PM Thomas Munro <[email protected]> wrote:\n> On Fri, Nov 3, 2023 at 3:42 AM Japin Li <[email protected]> wrote:\n> > I think this might be a bug comes from Illumos pseudo-tty. I can reproduce\n> > this by using pseudo-tty on Illumos.\n>\n> I don't know but my guess is that this has to do with termios defaults\n> being different. From a quick look at 'man termios', perhaps TABDLY\n> is set to expand tabs to spaces? Can you fix it by tweaking the flags\n> in src/common/sprompt.c?\n\nArgh, sorry that's completely the wrong end. I suppose that sort of\nthing would have to happen in IPC::Run. 
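(At the C level, a hedged way to test that theory -- assuming the classic\nSysV termios flags and adding #include <termios.h> to the test case above --\nwould be for the child to clear tab expansion before writing:\n\nstruct termios t;\n\ntcgetattr(pts_fd, &t);\nt.c_oflag &= ~TABDLY; /* select TAB0: no tab-to-space expansion */\ntcsetattr(pts_fd, TCSANOW, &t);\n\nIf the parent then reads the expected 5 bytes with the tab intact, the\ndefault c_oflag is to blame.)\n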
I wonder what would happen if\nIPC::Run called ->set_raw() on the IO::Pty object it constructs, or\nfailing that, if IO::Stty can be used to mess with the relevant\nsettings.\n\n\n", "msg_date": "Fri, 3 Nov 2023 15:03:19 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tab completion regression test failed on illumos" }, { "msg_contents": "\nOn Fri, 03 Nov 2023 at 10:03, Thomas Munro <[email protected]> wrote:\n> On Fri, Nov 3, 2023 at 2:22 PM Thomas Munro <[email protected]> wrote:\n>> On Fri, Nov 3, 2023 at 3:42 AM Japin Li <[email protected]> wrote:\n>> > I think this might be a bug comes from Illumos pseudo-tty. I can reproduce\n>> > this by using pseudo-tty on Illumos.\n>>\n>> I don't know but my guess is that this has to do with termios defaults\n>> being different. From a quick look at 'man termios', perhaps TABDLY\n>> is set to expand tabs to spaces? Can you fix it by tweaking the flags\n>> in src/common/sprompt.c?\n>\n> Argh, sorry that's completely the wrong end. I suppose that sort of\n> thing would have to happen in IPC::Run. I wonder what would happen if\n> IPC::Run called ->set_raw() on the IO::Pty object it constructs, or\n> failing that, if IO::Stty can be used to mess with the relevant\n> settings.\n\nThanks for your explanation, the termios.c_oflag on Illumos enables TABDLY\nby default.\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n", "msg_date": "Fri, 03 Nov 2023 10:14:35 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tab completion regression test failed on illumos" }, { "msg_contents": "On Fri, Nov 3, 2023 at 3:14 PM Japin Li <[email protected]> wrote:\n> Thanks for your explanation, the termios.c_oflag on Illumos enables TABDLY\n> by default.\n\nIt seems that various other open source Unixen dropped that between 29\nand 2 years ago, but not illumos. I guess no one ever had IO::Pty\ninstalled on an older OpenBSD or NetBSD machine or we'd have seen this\nproblem there too, but as of a few years ago they behave like Linux\nand FreeBSD: no tab expansion.\n\nhttps://github.com/apple-oss-distributions/xnu/blob/1031c584a5e37aff177559b9f69dbd3c8c3fd30a/bsd/sys/ttydefaults.h#L79\nhttps://github.com/freebsd/freebsd-src/commit/210df5b10c855161149dd7a1e88f610972f2afaa\nhttps://github.com/NetBSD/src/commit/44a07dbdbdcb2b9e14340856c8267dc659a0ebd8\nhttps://github.com/openbsd/src/commit/818e463522f2237e9da1be8aa7958dcc8af28fca\nhttps://github.com/illumos/illumos-gate/blob/0f9b8dcfdb872a210003f6b077d091b793c24a6e/usr/src/uts/common/io/tty_common.c#L35\n\n\n", "msg_date": "Fri, 3 Nov 2023 21:01:37 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tab completion regression test failed on illumos" } ]
[ { "msg_contents": "Since pgstatfuncs.c was refactored, the comments for synthesized\nfunction names are significant to find the function body.\n\nI happened to find a misspelling among the function name\ncomments. \"pg_stat_get_mods_since_analyze\" should be\n\"pg_stat_get_mod_since_analyze\".\n\nUpon checking the file using a rudimentary script, I found no other\nsimilar mistakes in the same file.\n\n(FWIW, I also feel that these macros might be going a bit too far by\nsynthesizing even the function names.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 01 Nov 2023 17:23:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Wrong function name in pgstatfuncs.c" }, { "msg_contents": "> On 1 Nov 2023, at 09:23, Kyotaro Horiguchi <[email protected]> wrote:\n> \n> Since pgstatfuncs.c was refactored, the comments for synthesized\n> function names are significant to find the function body.\n> \n> I happened to find a misspelling among the function name\n> comments. \"pg_stat_get_mods_since_analyze\" should be\n> \"pg_stat_get_mod_since_analyze\".\n> \n> Upon checking the file using a rudimentary script, I found no other\n> similar mistakes in the same file.\n\nNice catch, that's indeed a tiny typo, will fix.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 1 Nov 2023 09:37:13 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong function name in pgstatfuncs.c" } ]
[ { "msg_contents": "Hi hackers,\n\nWhile reviewing another patch I noticed how the GUCs are\ninconsistently named within the GUC_check_errdetail messages:\n\n======\n\nbelow, the GUC name is embedded but not quoted:\n\nsrc/backend/access/transam/xlogprefetcher.c:\nGUC_check_errdetail(\"recovery_prefetch is not supported on platforms\nthat lack posix_fadvise().\");\nsrc/backend/access/transam/xlogrecovery.c:\nGUC_check_errdetail(\"recovery_target_timeline is not a valid\nnumber.\");\nsrc/backend/commands/variable.c:\nGUC_check_errdetail(\"effective_io_concurrency must be set to 0 on\nplatforms that lack posix_fadvise().\");\nsrc/backend/commands/variable.c:\nGUC_check_errdetail(\"maintenance_io_concurrency must be set to 0 on\nplatforms that lack posix_fadvise().\");\nsrc/backend/port/sysv_shmem.c:\nGUC_check_errdetail(\"huge_page_size must be 0 on this platform.\");\nsrc/backend/port/win32_shmem.c:\nGUC_check_errdetail(\"huge_page_size must be 0 on this platform.\");\nsrc/backend/replication/syncrep.c:\nGUC_check_errdetail(\"synchronous_standby_names parser failed\");\nsrc/backend/storage/file/fd.c:\nGUC_check_errdetail(\"debug_io_direct is not supported on this\nplatform.\");\nsrc/backend/storage/file/fd.c:\nGUC_check_errdetail(\"debug_io_direct is not supported for WAL because\nXLOG_BLCKSZ is too small\");\nsrc/backend/storage/file/fd.c:\nGUC_check_errdetail(\"debug_io_direct is not supported for data because\nBLCKSZ is too small\");\nsrc/backend/tcop/postgres.c:\nGUC_check_errdetail(\"client_connection_check_interval must be set to 0\non this platform.\");\n\n~~~\n\nbelow, the GUC name is embedded and double-quoted:\n\nsrc/backend/commands/vacuum.c:\nGUC_check_errdetail(\"\\\"vacuum_buffer_usage_limit\\\" must be 0 or\nbetween %d kB and %d kB\",\nsrc/backend/commands/variable.c:\nGUC_check_errdetail(\"Conflicting \\\"datestyle\\\" specifications.\");\nsrc/backend/storage/buffer/localbuf.c:\nGUC_check_errdetail(\"\\\"temp_buffers\\\" cannot be changed after any\ntemporary tables have been accessed in the session.\");\nsrc/backend/tcop/postgres.c:\nGUC_check_errdetail(\"\\\"max_stack_depth\\\" must not exceed %ldkB.\",\nsrc/backend/tcop/postgres.c: GUC_check_errdetail(\"Cannot enable\nparameter when \\\"log_statement_stats\\\" is true.\");\nsrc/backend/tcop/postgres.c: GUC_check_errdetail(\"Cannot enable\n\\\"log_statement_stats\\\" when \"\n\n~~~\n\nbelow, the GUC name is substituted but not quoted:\n\nsrc/backend/access/table/tableamapi.c: GUC_check_errdetail(\"%s\ncannot be empty.\",\nsrc/backend/access/table/tableamapi.c: GUC_check_errdetail(\"%s is\ntoo long (maximum %d characters).\",\n\n~~~\n\nI had intended to make a patch to address the inconsistency, but\ncouldn't decide which of those styles was the preferred one.\n\nThen I worried this could be the tip of the iceberg -- GUC names occur\nin many other error messages where they are sometimes quoted and\nsometimes not quoted:\ne.g. Not quoted -- errhint(\"You might need to run fewer transactions\nat a time or increase max_connections.\")));\ne.g. 
Quoted -- errmsg(\"\\\"max_wal_size\\\" must be at least twice\n\\\"wal_segment_size\\\"\")));\n\nIdeally, they should all look the same everywhere, shouldn't they?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 1 Nov 2023 20:02:01 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "GUC names in messages" }, { "msg_contents": "On Wed, Nov 1, 2023 at 8:02 PM Peter Smith <[email protected]> wrote:\n...\n>\n> I had intended to make a patch to address the inconsistency, but\n> couldn't decide which of those styles was the preferred one.\n>\n> Then I worried this could be the tip of the iceberg -- GUC names occur\n> in many other error messages where they are sometimes quoted and\n> sometimes not quoted:\n> e.g. Not quoted -- errhint(\"You might need to run fewer transactions\n> at a time or increase max_connections.\")));\n> e.g. Quoted -- errmsg(\"\\\"max_wal_size\\\" must be at least twice\n> \\\"wal_segment_size\\\"\")));\n>\n> Ideally, they should all look the same everywhere, shouldn't they?\n>\n\nOne idea to achieve consistency might be to always substitute GUC\nnames using a macro.\n\n#define GUC_NAME(s) (\"\\\"\" s \"\\\"\")\n\nereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n errmsg(\"%s must be at least twice %s\",\n GUC_NAME(\"max_wal_size\"),\n GUC_NAME(\"wal_segment_size\"))));\n\nThoughts?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 1 Nov 2023 20:22:30 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "> On 1 Nov 2023, at 10:02, Peter Smith <[email protected]> wrote:\n\n> GUC_check_errdetail(\"effective_io_concurrency must be set to 0 on\n> platforms that lack posix_fadvise().\");\n> src/backend/commands/variable.c:\n> GUC_check_errdetail(\"maintenance_io_concurrency must be set to 0 on\n> platforms that lack posix_fadvise().\");\n\nThese should be substituted to reduce the number of distinct messages that need\nto be translated. I wouldn't be surprised if more like these have slipped\nthrough.\n\n> I had intended to make a patch to address the inconsistency, but\n> couldn't decide which of those styles was the preferred one.\n\nGiven the variety in the codebase I don't think there is a preferred one.\n\n> Then I worried this could be the tip of the iceberg\n\nAll good rabbit-holes uncovered during hacking are.. =)\n\n> Ideally, they should all look the same everywhere, shouldn't they?\n\nHaving a policy would be good, having one which is known and enforced is even\nbetter (like how we are consistent around error messages based on our Error\nMessage Style Guide).\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 1 Nov 2023 10:23:22 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "> On 1 Nov 2023, at 10:22, Peter Smith <[email protected]> wrote:\n> \n> On Wed, Nov 1, 2023 at 8:02 PM Peter Smith <[email protected]> wrote:\n> ...\n>> \n>> I had intended to make a patch to address the inconsistency, but\n>> couldn't decide which of those styles was the preferred one.\n>> \n>> Then I worried this could be the tip of the iceberg -- GUC names occur\n>> in many other error messages where they are sometimes quoted and\n>> sometimes not quoted:\n>> e.g. Not quoted -- errhint(\"You might need to run fewer transactions\n>> at a time or increase max_connections.\")));\n>> e.g. 
Quoted -- errmsg(\"\\\"max_wal_size\\\" must be at least twice\n>> \\\"wal_segment_size\\\"\")));\n>> \n>> Ideally, they should all look the same everywhere, shouldn't they?\n>> \n> \n> One idea to achieve consistency might be to always substitute GUC\n> names using a macro.\n> \n> #define GUC_NAME(s) (\"\\\"\" s \"\\\"\")\n> \n> ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> errmsg(\"%s must be at least twice %s\",\n> GUC_NAME(\"max_wal_size\"),\n> GUC_NAME(\"wal_segment_size\"))));\n\nSomething like this might make translations harder since the remaining string\nleaves little context about the message. We already have that today to some\nextent (so it might not be an issue), and it might be doable to automatically\nadd translator comments, but it's something to consider.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 1 Nov 2023 10:59:21 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 1 Nov 2023, at 10:22, Peter Smith <[email protected]> wrote:\n>> One idea to achieve consistency might be to always substitute GUC\n>> names using a macro.\n>> \n>> #define GUC_NAME(s) (\"\\\"\" s \"\\\"\")\n>> \n>> ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>> errmsg(\"%s must be at least twice %s\",\n>> GUC_NAME(\"max_wal_size\"),\n>> GUC_NAME(\"wal_segment_size\"))));\n\n> Something like this might make translations harder since the remaining string\n> leaves little context about the message. We already have that today to some\n> extent (so it might not be an issue), and it might be doable to automatically\n> add translator comments, but it's something to consider.\n\nOur error message style guidelines say not to assemble messages out\nof separate parts, because it makes translation difficult. Originally\nwe applied that rule to GUC names mentioned in messages as well.\nAwhile ago the translation team decided that that made for too many\nduplicative translations, so they'd be willing to compromise on\nsubstituting GUC names. That's only been changed in a haphazard\nfashion though, mostly in cases where there actually were duplicative\nmessages that could be merged this way. And there's never been any\nreal clarity about whether to quote GUC names, though certainly we're\nmore likely to quote anything injected with %s. So that's why we have\na mishmash right now.\n\nI'm not enamored of the GUC_NAME idea suggested above. I don't\nthink it buys anything, and what it does do is make *every single\none* of our GUC-mentioning messages wrong. I think if we want to\nstandardize here, we should standardize on something that's\nalready pretty common in the code base.\n\nAnother problem with the idea as depicted above is that it\nmistakenly assumes that \"...\" is the correct quoting method\nin all languages. You could make GUC_NAME be a pure no-op\nmacro and continue to put quoting in the translatable string\nwhere it belongs, but then the macro brings even less value.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Nov 2023 10:25:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On 01.11.23 10:25, Tom Lane wrote:\n> And there's never been any\n> real clarity about whether to quote GUC names, though certainly we're\n> more likely to quote anything injected with %s. 
So that's why we have\n> a mishmash right now.\n\nI'm leaning toward not quoting GUC names. The quoting is needed in \nplaces where the value can be arbitrary, to avoid potential confusion. \nBut the GUC names are well-known, and we wouldn't add confusing GUC \nnames like \"table\" or \"not found\" in the future.\n\n\n", "msg_date": "Wed, 1 Nov 2023 16:12:20 -0400", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Wed, 2023-11-01 at 16:12 -0400, Peter Eisentraut wrote:\n> On 01.11.23 10:25, Tom Lane wrote:\n> > And there's never been any\n> > real clarity about whether to quote GUC names, though certainly we're\n> > more likely to quote anything injected with %s. So that's why we have\n> > a mishmash right now.\n> \n> I'm leaning toward not quoting GUC names. The quoting is needed in \n> places where the value can be arbitrary, to avoid potential confusion. \n> But the GUC names are well-known, and we wouldn't add confusing GUC \n> names like \"table\" or \"not found\" in the future.\n\nI agree for names with underscores in them. But I think that quoting\nis necessary for names like \"timezone\" or \"datestyle\" that might be\nmistaken for normal words. My personal preference is to always quote\nGUC names, but I think it is OK not to quote GUCs whose names are\nclearly not natural language words.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 01 Nov 2023 21:46:52 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Thu, Nov 2, 2023 at 1:25 AM Tom Lane <[email protected]> wrote:\n>\n> Daniel Gustafsson <[email protected]> writes:\n> > On 1 Nov 2023, at 10:22, Peter Smith <[email protected]> wrote:\n> >> One idea to achieve consistency might be to always substitute GUC\n> >> names using a macro.\n> >>\n> >> #define GUC_NAME(s) (\"\\\"\" s \"\\\"\")\n> >>\n> >> ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> >> errmsg(\"%s must be at least twice %s\",\n> >> GUC_NAME(\"max_wal_size\"),\n> >> GUC_NAME(\"wal_segment_size\"))));\n>\n> > Something like this might make translations harder since the remaining string\n> > leaves little context about the message. We already have that today to some\n> > extent (so it might not be an issue), and it might be doable to automatically\n> > add translator comments, but it's something to consider.\n>\n> Our error message style guidelines say not to assemble messages out\n> of separate parts, because it makes translation difficult. Originally\n> we applied that rule to GUC names mentioned in messages as well.\n> Awhile ago the translation team decided that that made for too many\n> duplicative translations, so they'd be willing to compromise on\n> substituting GUC names. That's only been changed in a haphazard\n> fashion though, mostly in cases where there actually were duplicative\n> messages that could be merged this way. And there's never been any\n> real clarity about whether to quote GUC names, though certainly we're\n> more likely to quote anything injected with %s. So that's why we have\n> a mishmash right now.\n>\n> I'm not enamored of the GUC_NAME idea suggested above. I don't\n> think it buys anything, and what it does do is make *every single\n> one* of our GUC-mentioning messages wrong. I think if we want to
I think if we want to\n> standardize here, we should standardize on something that's\n> already pretty common in the code base.\n>\n\nThanks to everybody for the feedback received so far.\n\nPerhaps as a first step, I can try to quantify the GUC name styles\nalready in the source code. The numbers might help decide how to\nproceed\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 2 Nov 2023 08:37:14 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Wed, Nov 01, 2023 at 09:46:52PM +0100, Laurenz Albe wrote:\n> I agree for names with underscores in them. But I think that quoting\n> is necessary for names like \"timezone\" or \"datestyle\" that might be\n> mistaken for normal words. My personal preference is to always quote\n> GUC names, but I think it is OK not to quote GOCs whose name are\n> clearly not natural language words.\n\n+1, IMHO quoting GUC names makes it abundantly clear that they are special\nidentifiers. In de4d456, we quoted the role names in a bunch of messages.\nWe didn't quote the attribute/option names, but those are in all-caps, so\nthey already stand out nicely.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 1 Nov 2023 20:52:39 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On 2023-Nov-01, Nathan Bossart wrote:\n\n> On Wed, Nov 01, 2023 at 09:46:52PM +0100, Laurenz Albe wrote:\n> > I agree for names with underscores in them. But I think that quoting\n> > is necessary for names like \"timezone\" or \"datestyle\" that might be\n> > mistaken for normal words. My personal preference is to always quote\n> > GUC names, but I think it is OK not to quote GOCs whose name are\n> > clearly not natural language words.\n> \n> +1, IMHO quoting GUC names makes it abundantly clear that they are special\n> identifiers. In de4d456, we quoted the role names in a bunch of messages.\n> We didn't quote the attribute/option names, but those are in all-caps, so\n> they already stand out nicely.\n\nI like this, and I propose we codify it in the message style guide. How\nabout this? We can start looking at code changes to make once we decide\nwe agree with this.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La verdad no siempre es bonita, pero el hambre de ella sí\"", "msg_date": "Tue, 7 Nov 2023 10:33:03 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2023-Nov-01, Nathan Bossart wrote:\n>> +1, IMHO quoting GUC names makes it abundantly clear that they are special\n>> identifiers. In de4d456, we quoted the role names in a bunch of messages.\n>> We didn't quote the attribute/option names, but those are in all-caps, so\n>> they already stand out nicely.\n\n> I like this, and I propose we codify it in the message style guide. How\n> about this? 
We can start looking at code changes to make once we decide\n> we agree with this.\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Nov 2023 09:53:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Tue, Nov 07, 2023 at 10:33:03AM +0100, Alvaro Herrera wrote:\n> On 2023-Nov-01, Nathan Bossart wrote:\n>> +1, IMHO quoting GUC names makes it abundantly clear that they are special\n>> identifiers. In de4d456, we quoted the role names in a bunch of messages.\n>> We didn't quote the attribute/option names, but those are in all-caps, so\n>> they already stand out nicely.\n> \n> I like this, and I propose we codify it in the message style guide. How\n> about this? We can start looking at code changes to make once we decide\n> we agree with this.\n\n> + <para>\n> + In messages containing configuration variable names, quotes are\n> + not necessary when the names are visibly not English natural words, such\n> + as when they have underscores or are all-uppercase. Otherwise, quotes\n> + must be added. Do include double-quotes in a message where an arbitrary\n> + variable name is to be expanded.\n> + </para>\n\nІ'd vote for quoting all GUC names, if for no other reason than \"visibly\nnot English natural words\" feels a bit open to interpretation. But this\nseems like it's on the right track, so I won't argue too strongly if I'm\nthe only holdout.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 7 Nov 2023 08:58:21 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "FWIW, I am halfway through doing regex checking of the PG16 source for\nall GUC names in messages to see what current styles are in use today.\n\nNot sure if those numbers will influence the decision.\n\nI hope I can post my findings today or tomorrow.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 8 Nov 2023 07:40:48 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Wed, Nov 8, 2023 at 7:40 AM Peter Smith <[email protected]> wrote:\n>\n> FWIW, I am halfway through doing regex checking of the PG16 source for\n> all GUC names in messages to see what current styles are in use today.\n>\n> Not sure if those numbers will influence the decision.\n>\n> I hope I can post my findings today or tomorrow.\n>\n\nHere are my findings from the current PG16 source messages.\n\nI used a regex search:\n\".*GUCNAME\n\nto find how each GUCNAME is used in the messages in *.c files.\n\nThe GUC names are taken from the guc_tables.c code, so they are\ngrouped accordingly below.\n\n~TOTALS:\n\nmessages where GUC names are QUOTED:\n- bool = 11\n- int = 11\n- real = 0\n- string = 10\n- enum = 7\nTOTAL = 39\n\nmessages where GUC names are NOT QUOTED:\n- bool = 14\n- int = 60\n- real = 0\n- string = 59\n- enum = 31\nTOTAL = 164\n\n~~~\n\nDetails are in the attached file. PSA.\n\nI've categorised them as being currently QUOTED, NOT QUOTED, and NONE\n(most are not used in any messages).\n\nNotice that NOT QUOTED is the far more common pattern, so my vote\nwould be just to standardise on making everything this way. I know\nthere was some concern raised about ambiguous words like \"timezone\"\nand \"datestyle\" etc but in practice, those are rare. 
Also, those GUCs\nare different in that they are written as camel-case (e.g.\n\"DateStyle\") in the guc_tables.c, so if they were also written\ncamel-case in the messages that could remove ambiguities with normal\nwords. YMMV.\n\nAnyway, I will await a verdict about what to do.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 9 Nov 2023 09:53:55 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Thu, Nov 2, 2023 at 1:25 AM Tom Lane <[email protected]> wrote:\n>\n...\n> Our error message style guidelines say not to assemble messages out\n> of separate parts, because it makes translation difficult. Originally\n> we applied that rule to GUC names mentioned in messages as well.\n> Awhile ago the translation team decided that that made for too many\n> duplicative translations, so they'd be willing to compromise on\n> substituting GUC names. That's only been changed in a haphazard\n> fashion though, mostly in cases where there actually were duplicative\n> messages that could be merged this way. And there's never been any\n> real clarity about whether to quote GUC names, though certainly we're\n> more likely to quote anything injected with %s. So that's why we have\n> a mishmash right now.\n\nRight. While looking at all the messages I observed a number of them\nhaving almost the same (but not quite the same) wording:\n\nFor example,\n\nerrhint(\"Consider increasing the configuration parameter \\\"max_wal_size\\\".\")));\nerrhint(\"You might need to increase %s.\", \"max_locks_per_transaction\")));\nerrhint(\"You might need to increase %s.\", \"max_pred_locks_per_transaction\")));\nerrmsg(\"could not find free replication state, increase\nmax_replication_slots\")));\nhint ? errhint(\"You might need to increase %s.\", \"max_slot_wal_keep_size\") : 0);\nerrhint(\"You may need to increase max_worker_processes.\")));\nerrhint(\"Consider increasing configuration parameter\n\\\"max_worker_processes\\\".\")));\nerrhint(\"Consider increasing the configuration parameter\n\\\"max_worker_processes\\\".\")));\nerrhint(\"You might need to increase %s.\", \"max_worker_processes\")));\nerrhint(\"You may need to increase max_worker_processes.\")));\nerrhint(\"You might need to increase %s.\", \"max_logical_replication_workers\")));\n\n~\n\nThe most common pattern there is \"You might need to increase %s.\".\n\nHere is a patch to modify those other similar variations so they share\nthat common wording.\n\nPSA.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 9 Nov 2023 12:55:44 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "At Thu, 9 Nov 2023 12:55:44 +1100, Peter Smith <[email protected]> wrote in \n> The most common pattern there is \"You might need to increase %s.\".\n..\n> Here is a patch to modify those other similar variations so they share\n> that common wording.\n> \n> PSA.\n\nI'm uncertain whether the phrases \"Consider doing something\" and \"You\nmight need to do something\" are precisely interchangeable. However,\n(for me..) 
it appears that either phrase could be applied for all\nmessages that the patch touches.\n\nIn short, I'm fine with the patch.\n\n\nBy the way, I was left scratching my head after seeing the following\nmessage.\n\n> ereport(PANIC,\n> (errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),\n> - errmsg(\"could not find free replication state, increase max_replication_slots\")));\n\nBeing told to increase max_replication_slots in a PANIC\nmessage feels somewhat off to me. Just looking at the message, it\nseems unconvincing to increase \"slots\" because there is a lack of\n\"state\". So, I poked around in the code and found the following\ncomment:\n\n> ReplicationOriginShmemSize(void)\n> {\n> ...\n> /*\n> * XXX: max_replication_slots is arguably the wrong thing to use, as here\n> * we keep the replay state of *remote* transactions. But for now it seems\n> * sufficient to reuse it, rather than introduce a separate GUC.\n> */\n\nI haven't read the related code, but if my understanding based on this\ncomment is correct, wouldn't it mean that a lack of storage space for\nthe state at the location outputting the message indicates a bug in\nthe program, not a user configuration error? In other words, isn't\nthis message something that at least shouldn't be a user-facing\nmessage, and might it be more appropriate to use an Assert instead?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 09 Nov 2023 14:15:05 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On 2023-Nov-09, Peter Smith wrote:\n\n> Notice that NOT QUOTED is the far more common pattern, so my vote\n> would be just to standardise on making everything this way. I know\n> there was some concern raised about ambiguous words like \"timezone\"\n> and \"datestyle\" etc but in practice, those are rare. Also, those GUCs\n> are different in that they are written as camel-case (e.g.\n> \"DateStyle\") in the guc_tables.c, so if they were also written\n> camel-case in the messages that could remove ambiguities with normal\n> words. YMMV.\n\nWell, I think camel-casing is also a sufficient differentiator for these\nidentifiers not being English words. We'd need to ensure they're always\nwritten that way, when not quoted. However, in cases where arbitrary\nvalues are expanded, I don't know that they would be expanded that way,\nso I would still go for quoting in that case.\n\nThere's also a few that are not camel-cased nor have any underscores --\nlooking at postgresql.conf.sample, we have \"port\", \"bonjour\", \"ssl\",\n\"fsync\", \"geqo\", \"jit\", \"autovacuum\", \"xmlbinary\", \"xmloption\". (We also\nhave \"include\", but I doubt that's ever used in an error message). But\nactually, there's more: every reloption is a candidate, and there we\nhave \"fillfactor\", \"autosummarize\", \"fastupdate\", \"buffering\". So if we\nwant to make generic advice on how to deal with option names in error\nmessages, I think the wording on conditional quoting I proposed should\ngo in (adding CamelCase as a reason not to quote), and then we can fix\nthe code to match. 
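(As a quick invented illustration on the reloption side: a message like\nerrmsg(\"fillfactor is too low\") reads as plain English, while\nerrmsg(\"\\\"fillfactor\\\" is too low\") cannot be misread, so the same\nconditional-quoting rule would cover reloptions as well.)\n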
Looking at your list, I think the changes to make\nare not too numerous.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Nadie está tan esclavizado como el que se cree libre no siéndolo\" (Goethe)\n\n\n", "msg_date": "Thu, 9 Nov 2023 12:04:21 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Thu, Nov 9, 2023 at 10:04 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Nov-09, Peter Smith wrote:\n>\n> > Notice that NOT QUOTED is the far more common pattern, so my vote\n> > would be just to standardise on making everything this way. I know\n> > there was some concern raised about ambiguous words like \"timezone\"\n> > and \"datestyle\" etc but in practice, those are rare. Also, those GUCs\n> > are different in that they are written as camel-case (e.g.\n> > \"DateStyle\") in the guc_tables.c, so if they were also written\n> > camel-case in the messages that could remove ambiguities with normal\n> > words. YMMV.\n>\n> Well, I think camel-casing is also a sufficient differentiator for these\n> identifiers not being English words. We'd need to ensure they're always\n> written that way, when not quoted. However, in cases where arbitrary\n> values are expanded, I don't know that they would be expanded that way,\n> so I would still go for quoting in that case.\n>\n> There's also a few that are not camel-cased nor have any underscores --\n> looking at postgresql.conf.sample, we have \"port\", \"bonjour\", \"ssl\",\n> \"fsync\", \"geqo\", \"jit\", \"autovacuum\", \"xmlbinary\", \"xmloption\". (We also\n> have \"include\", but I doubt that's ever used in an error message). But\n> actually, there's more: every reloption is a candidate, and there we\n> have \"fillfactor\", \"autosummarize\", \"fastupdate\", \"buffering\". So if we\n> want to make generic advice on how to deal with option names in error\n> messages, I think the wording on conditional quoting I proposed should\n> go in (adding CamelCase as a reason not to quote), and then we can fix\n> the code to match. Looking at your list, I think the changes to make\n> are not too numerous.\n>\n\nSorry for my delay in getting back to this thread.\n\nPSA a patch for this work.\n\nThere may be some changes I've missed, but hopefully, this is a nudge\nin the right direction.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia.", "msg_date": "Thu, 23 Nov 2023 18:27:04 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Thu, Nov 23, 2023 at 06:27:04PM +1100, Peter Smith wrote:\n> There may be some changes I've missed, but hopefully, this is a nudge\n> in the right direction.\n\nThanks for spending some time on that.\n\n <para>\n+ In messages containing configuration variable names, do not include quotes\n+ when the names are visibly not English natural words, such as when they\n+ have underscores or are all-uppercase or have mixed case. Otherwise, quotes\n+ must be added. Do include quotes in a message where an arbitrary variable\n+ name is to be expanded.\n+ </para>\n\nThat seems to describe clearly the consensus reached on the thread\n(quotes for GUCs that are single terms, no quotes for names that are\nobviously parameters).\n\nIn terms of messages that have predictible names, 0002 moves in the\nneedle in the right direction. 
There seem to be more:\nsrc/backend/postmaster/bgworker.c: errhint(\"Consider increasing the\nconfiguration parameter \\\"max_worker_processes\\\".\")));\ncontrib/pg_prewarm/autoprewarm.c: errhint(\"Consider increasing\nconfiguration parameter \\\"max_worker_processes\\\".\")));\n\nThings like parse_and_validate_value() and set_config_option_ext()\ninclude log strings about GUC and these use quotes. Could these areas\nbe made smarter with a routine to check if quotes are applied\nautomatically when we have a \"simple\" GUC name, aka I guess made of\nonly lower-case characters? This could be done with a islower() on\nthe string name, for instance.\n--\nMichael", "msg_date": "Fri, 24 Nov 2023 12:11:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On 2023-Nov-24, Michael Paquier wrote:\n\n> On Thu, Nov 23, 2023 at 06:27:04PM +1100, Peter Smith wrote:\n> > There may be some changes I've missed, but hopefully, this is a nudge\n> > in the right direction.\n> \n> Thanks for spending some time on that.\n\n+1\n\n> <para>\n> + In messages containing configuration variable names, do not include quotes\n> + when the names are visibly not English natural words, such as when they\n> + have underscores or are all-uppercase or have mixed case. Otherwise, quotes\n> + must be added. Do include quotes in a message where an arbitrary variable\n> + name is to be expanded.\n> + </para>\n> \n> That seems to describe clearly the consensus reached on the thread\n> (quotes for GUCs that are single terms, no quotes for names that are\n> obviously parameters).\n\nYeah, this is pretty much the patch I proposed earlier.\n\n> In terms of messages that have predictible names, 0002 moves in the\n> needle in the right direction. There seem to be more:\n> src/backend/postmaster/bgworker.c: errhint(\"Consider increasing the\n> configuration parameter \\\"max_worker_processes\\\".\")));\n> contrib/pg_prewarm/autoprewarm.c: errhint(\"Consider increasing\n> configuration parameter \\\"max_worker_processes\\\".\")));\n\nYeah. Also, these could be changed to have the GUC name outside the\nmessage proper, which would reduce the total number of messages. (But\ncare must be given to the word \"the\" there.)\n\n> Things like parse_and_validate_value() and set_config_option_ext()\n> include log strings about GUC and these use quotes. Could these areas\n> be made smarter with a routine to check if quotes are applied\n> automatically when we have a \"simple\" GUC name, aka I guess made of\n> only lower-case characters? This could be done with a islower() on\n> the string name, for instance.\n\nI think we could leave these improvements for a second round. They\ndon't need to hold back the improvement we already have.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"You're _really_ hosed if the person doing the hiring doesn't understand\nrelational systems: you end up with a whole raft of programmers, none of\nwhom has had a Date with the clue stick.\" (Andrew Sullivan)\nhttps://postgr.es/m/[email protected]\n\n\n", "msg_date": "Fri, 24 Nov 2023 10:53:40 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Fri, Nov 24, 2023 at 10:53:40AM +0100, Alvaro Herrera wrote:\n> I think we could leave these improvements for a second round. 
They\n> don't need to hold back the improvement we already have.\n\nOf course, no problem here to do things one step at a time.\n--\nMichael", "msg_date": "Fri, 24 Nov 2023 23:01:54 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Fri, Nov 24, 2023 at 2:11 PM Michael Paquier <[email protected]> wrote:\n>\n> On Thu, Nov 23, 2023 at 06:27:04PM +1100, Peter Smith wrote:\n> > There may be some changes I've missed, but hopefully, this is a nudge\n> > in the right direction.\n>\n> Thanks for spending some time on that.\n>\n> <para>\n> + In messages containing configuration variable names, do not include quotes\n> + when the names are visibly not English natural words, such as when they\n> + have underscores or are all-uppercase or have mixed case. Otherwise, quotes\n> + must be added. Do include quotes in a message where an arbitrary variable\n> + name is to be expanded.\n> + </para>\n>\n> That seems to describe clearly the consensus reached on the thread\n> (quotes for GUCs that are single terms, no quotes for names that are\n> obviously parameters).\n>\n> In terms of messages that have predictible names, 0002 moves in the\n> needle in the right direction. There seem to be more:\n> src/backend/postmaster/bgworker.c: errhint(\"Consider increasing the\n> configuration parameter \\\"max_worker_processes\\\".\")));\n> contrib/pg_prewarm/autoprewarm.c: errhint(\"Consider increasing\n> configuration parameter \\\"max_worker_processes\\\".\")));\n\nDone in patch 0002\n\n>\n> Things like parse_and_validate_value() and set_config_option_ext()\n> include log strings about GUC and these use quotes. Could these areas\n> be made smarter with a routine to check if quotes are applied\n> automatically when we have a \"simple\" GUC name, aka I guess made of\n> only lower-case characters? This could be done with a islower() on\n> the string name, for instance.\n\nSee what you think of patch 0003\n\n~~\n\nPSA v2 patches.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 27 Nov 2023 09:41:51 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Fri, Nov 24, 2023 at 8:53 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2023-Nov-24, Michael Paquier wrote:\n>\n> > On Thu, Nov 23, 2023 at 06:27:04PM +1100, Peter Smith wrote:\n> > > There may be some changes I've missed, but hopefully, this is a nudge\n> > > in the right direction.\n> >\n> > Thanks for spending some time on that.\n>\n> +1\n>\n> > <para>\n> > + In messages containing configuration variable names, do not include quotes\n> > + when the names are visibly not English natural words, such as when they\n> > + have underscores or are all-uppercase or have mixed case. Otherwise, quotes\n> > + must be added. Do include quotes in a message where an arbitrary variable\n> > + name is to be expanded.\n> > + </para>\n> >\n> > That seems to describe clearly the consensus reached on the thread\n> > (quotes for GUCs that are single terms, no quotes for names that are\n> > obviously parameters).\n>\n> Yeah, this is pretty much the patch I proposed earlier.\n>\n> > In terms of messages that have predictible names, 0002 moves in the\n> > needle in the right direction. 
There seem to be more:\n> > src/backend/postmaster/bgworker.c: errhint(\"Consider increasing the\n> > configuration parameter \\\"max_worker_processes\\\".\")));\n> > contrib/pg_prewarm/autoprewarm.c: errhint(\"Consider increasing\n> > configuration parameter \\\"max_worker_processes\\\".\")));\n>\n> Yeah. Also, these could be changed to have the GUC name outside the\n> message proper, which would reduce the total number of messages. (But\n> care must be given to the word \"the\" there.)\n>\n\n I had posted something similar a few posts back [1], but it just\ncaused more questions unrelated to GUC name quotes so I abandoned that\ntemporarily.\n\nSo for now, I hope this thread can be only about quotes on GUC names,\notherwise, I thought it may become stuck debating dozens of individual\nmessages. Certainly later, or in another thread, we can revisit all\nmessages again to try to identify/extract any \"common\" ones.\n\n> > Things like parse_and_validate_value() and set_config_option_ext()\n> > include log strings about GUC and these use quotes. Could these areas\n> > be made smarter with a routine to check if quotes are applied\n> > automatically when we have a \"simple\" GUC name, aka I guess made of\n> > only lower-case characters? This could be done with a islower() on\n> > the string name, for instance.\n>\n> I think we could leave these improvements for a second round. They\n> don't need to hold back the improvement we already have.\n>\n\nI tried something for this already but kept it in a separate patch. See v2-0003\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPv8VG7fvXzg5PNeQuUhJG17xwCWNpZSUUkN11ArV%3D%3DCdg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 27 Nov 2023 10:04:35 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Mon, Nov 27, 2023 at 10:04:35AM +1100, Peter Smith wrote:\n> On Fri, Nov 24, 2023 at 8:53 PM Alvaro Herrera <[email protected]> wrote:\n>> Yeah. Also, these could be changed to have the GUC name outside the\n>> message proper, which would reduce the total number of messages. (But\n>> care must be given to the word \"the\" there.)\n> \n> I had posted something similar a few posts back [1], but it just\n> caused more questions unrelated to GUC name quotes so I abandoned that\n> temporarily.\n\nYes, I kind of agree to let that out of the picture for the moment.\nIt would be good to reduce the translation chunks.\n\n> So for now, I hope this thread can be only about quotes on GUC names,\n> otherwise, I thought it may become stuck debating dozens of individual\n> messages. Certainly later, or in another thread, we can revisit all\n> messages again to try to identify/extract any \"common\" ones.\n\n-HINT: Perhaps you need a different \"datestyle\" setting.\n+HINT: Perhaps you need a different DateStyle setting. \n\nIs the change for \"datestyle\" really required? It does not betray the\nGUC quoting policy added by 0001.\n\n>> I think we could leave these improvements for a second round. They\n>> don't need to hold back the improvement we already have.\n> \n> I tried something for this already but kept it in a separate patch. See v2-0003\n\n+ if (*p == '_')\n+ underscore = true;\n\nIs there a reason why we don't just use islower() or is that just to\nget something entirely local independent? I am not sure that it needs\nto be that complicated. 
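Something like this rough sketch (untested, helper name invented) is\nall I have in mind:\n\n    static bool\n    guc_name_is_plain_lowercase(const char *name)\n    {\n        for (const char *p = name; *p; p++)\n        {\n            /* plain ASCII range check on each byte */\n            if (*p < 'a' || *p > 'z')\n                return false;\n        }\n        return true;\n    }\n\n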
We should just check that all the characters\nare lower-case and apply quotes.\n--\nMichael", "msg_date": "Mon, 27 Nov 2023 10:43:50 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> Is there a reason why we don't just use islower() or is that just to\n> get something entirely local independent?\n\nislower() and related functions are not to be trusted for this\npurpose. They will certainly give locale-dependent results,\nand they might give entirely wrong ones if there's any inconsistency\nbetween the database encoding and what libc thinks the locale is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 26 Nov 2023 21:07:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Mon, Nov 27, 2023 at 12:44 PM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Nov 27, 2023 at 10:04:35AM +1100, Peter Smith wrote:\n> > On Fri, Nov 24, 2023 at 8:53 PM Alvaro Herrera <[email protected]> wrote:\n> >> Yeah. Also, these could be changed to have the GUC name outside the\n> >> message proper, which would reduce the total number of messages. (But\n> >> care must be given to the word \"the\" there.)\n> >\n> > I had posted something similar a few posts back [1], but it just\n> > caused more questions unrelated to GUC name quotes so I abandoned that\n> > temporarily.\n>\n> Yes, I kind of agree to let that out of the picture for the moment.\n> It would be good to reduce the translation chunks.\n>\n> > So for now, I hope this thread can be only about quotes on GUC names,\n> > otherwise, I thought it may become stuck debating dozens of individual\n> > messages. Certainly later, or in another thread, we can revisit all\n> > messages again to try to identify/extract any \"common\" ones.\n>\n> -HINT: Perhaps you need a different \"datestyle\" setting.\n> +HINT: Perhaps you need a different DateStyle setting.\n>\n> Is the change for \"datestyle\" really required? It does not betray the\n> GUC quoting policy added by 0001.\n>\n\nTBH, I suspect something fishy about these mixed-case GUCs.\n\nIn the documentation and in the guc_tables.c they are all described in\nMixedCase (e.g. \"DateStyle\" instead of \"datestyle\"), so I felt the\nmessages should use the same case the documentation, which is why I\nchanged all the ones you are referring to.\n\nI know the code is doing a case-insensitive hashtable lookup but I\nsuspect some of the string passing still in the code for those\nparticular GUCs ought to be using the same mixed case string literal\nas in the guc_tables.c. Currently, I have seen a few quirks where the\ncase is inconsistent with the MixedCase docs. It needs some more\ninvestigation to understand the reason. For example,\n\n2023-11-27 11:03:48.565 AEDT [15303] STATEMENT: set intervalstyle=123;\nERROR: invalid value for parameter \"intervalstyle\": \"123\"\n\nversus\n\n2023-11-27 11:13:56.018 AEDT [15303] STATEMENT: set datestyle=123;\nERROR: invalid value for parameter DateStyle: \"123\"\n\n> >> I think we could leave these improvements for a second round. They\n> >> don't need to hold back the improvement we already have.\n> >\n> > I tried something for this already but kept it in a separate patch. 
See v2-0003\n>\n> + if (*p == '_')\n> + underscore = true;\n>\n> Is there a reason why we don't just use islower() or is that just to\n> get something entirely local independent? I am not sure that it needs\n> to be that complicated. We should just check that all the characters\n> are lower-case and apply quotes.\n\nThanks for the feedback. Probably I have overcomplicated it. I'll revisit it.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 27 Nov 2023 13:41:18 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Mon, Nov 27, 2023 at 01:41:18PM +1100, Peter Smith wrote:\n> TBH, I suspect something fishy about these mixed-case GUCs.\n> \n> In the documentation and in the guc_tables.c they are all described in\n> MixedCase (e.g. \"DateStyle\" instead of \"datestyle\"), so I felt the\n> messages should use the same case the documentation, which is why I\n> changed all the ones you are referring to.\n\nFWIW, I've been tempted for a few years to propose that we should keep\nthe parsers as they behave now, but format the name of these\nparameters in the code and the docs to just be lower-case all the\ntime.\n\n>> Is there a reason why we don't just use islower() or is that just to\n>> get something entirely local independent? I am not sure that it needs\n>> to be that complicated. We should just check that all the characters\n>> are lower-case and apply quotes.\n> \n> Thanks for the feedback. Probably I have overcomplicated it. I'll revisit it.\n\nThe use of a static variable with a fixed size was itching me a bit as\nwell.. I was wondering if it would be cleaner to use %s%s%s in the\nstrings adding a note that these are GUC names that may be optionally\nquoted, then hide what gets assigned in a macro with a result rather\nsimilar to LSN_FORMAT_ARGS (GUC_FORMAT?). The routine checking if\nquotes should be applied would only need to return a boolean to tell\nwhat to do, and could be hidden in the macro.\n--\nMichael", "msg_date": "Mon, 27 Nov 2023 13:06:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages[" }, { "msg_contents": "On Mon, 2023-11-27 at 13:41 +1100, Peter Smith wrote:\n> TBH, I suspect something fishy about these mixed-case GUCs.\n> \n> In the documentation and in the guc_tables.c they are all described in\n> MixedCase (e.g. \"DateStyle\" instead of \"datestyle\"), so I felt the\n> messages should use the same case the documentation, which is why I\n> changed all the ones you are referring to.\n\nI agree with that decision; we should use mixed case for these parameters.\n\nOtherwise we might get complaints that the following query does not return\nany results:\n\n SELECT * FROM pg_settings WHERE name = 'timezone';\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 27 Nov 2023 07:31:50 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "Laurenz Albe <[email protected]> writes:\n> On Mon, 2023-11-27 at 13:41 +1100, Peter Smith wrote:\n>> In the documentation and in the guc_tables.c they are all described in\n>> MixedCase (e.g. 
\"DateStyle\" instead of \"datestyle\"), so I felt the\n>> messages should use the same case the documentation, which is why I\n>> changed all the ones you are referring to.\n\n> I agree with that decision; we should use mixed case for these parameters.\n> Otherwise we might get complaints that the following query does not return\n> any results:\n> SELECT * FROM pg_settings WHERE name = 'timezone';\n\nYeah. Like Michael upthread, I've wondered occasionally about changing\nthese names to all-lower-case. It'd surely be nicer if we'd done it\nlike that to begin with. But I can't convince myself that the ensuing\nuser pain would be justified.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Nov 2023 01:35:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Mon, Nov 27, 2023 at 01:35:44AM -0500, Tom Lane wrote:\n> Laurenz Albe <[email protected]> writes:\n> > On Mon, 2023-11-27 at 13:41 +1100, Peter Smith wrote:\n>>> In the documentation and in the guc_tables.c they are all described in\n>>> MixedCase (e.g. \"DateStyle\" instead of \"datestyle\"), so I felt the\n>>> messages should use the same case the documentation, which is why I\n>>> changed all the ones you are referring to.\n> \n>> I agree with that decision; we should use mixed case for these parameters.\n>> Otherwise we might get complaints that the following query does not return\n>> any results:\n>> SELECT * FROM pg_settings WHERE name = 'timezone';\n\n(I'm sure that you mean the opposite. This query does not return any\nresults on HEAD, but it would with \"TimeZone\".)\n\n> Yeah. Like Michael upthread, I've wondered occasionally about changing\n> these names to all-lower-case. It'd surely be nicer if we'd done it\n> like that to begin with. But I can't convince myself that the ensuing\n> user pain would be justified.\n\nPerhaps not. I'd like to think that a lot of queries on pg_settings\nhave the wisdom to apply a lower() or upper(), but that's very\nunlikely.\n\n- errhint(\"Perhaps you need a different \\\"datestyle\\\" setting.\")));\n+ errhint(\"Perhaps you need a different DateStyle setting.\")));\n\nSaying that, I'd let this one be in 0002. It causes a log of diff\nchurn in the tests and quoting it based on Alvaro's suggestion would\nstill be correct because it's fully lower-case. 
(Yeah, I'm perhaps\nnit-ing here, so feel free to counter-argue if you prefer what the\npatch does.)\n--\nMichael", "msg_date": "Tue, 28 Nov 2023 07:53:37 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "Here is patch set v3.\n\nPatches 0001 and 0002 are unchanged from v2.\n\nPatch 0003 now uses a \"%s%s%s\" format specifier with GUC_FORMAT macro\nin guc.c, as recently suggested by Michael [1].\n\n~\n\n(Meanwhile, the MixedCase stuff is still an open question, to be\naddressed in a later patch version)\n\n======\n[1] https://www.postgresql.org/message-id/ZWQVxu8zWIx64V7l%40paquier.xyz\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 28 Nov 2023 11:54:33 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Tue, 2023-11-28 at 07:53 +0900, Michael Paquier wrote:\n> On Mon, Nov 27, 2023 at 01:35:44AM -0500, Tom Lane wrote:\n> > Laurenz Albe <[email protected]> writes:\n> > > On Mon, 2023-11-27 at 13:41 +1100, Peter Smith wrote:\n> > > > In the documentation and in the guc_tables.c they are all described in\n> > > > MixedCase (e.g. \"DateStyle\" instead of \"datestyle\"), so I felt the\n> > > > messages should use the same case the documentation, which is why I\n> > > > changed all the ones you are referring to.\n> > \n> > > I agree with that decision; we should use mixed case for these parameters.\n> > > Otherwise we might get complaints that the following query does not return\n> > > any results:\n> > > SELECT * FROM pg_settings WHERE name = 'timezone';\n> \n> (I'm sure that you mean the opposite. This query does not return any\n> results on HEAD, but it would with \"TimeZone\".)\n\nNo, I meant it just like I said. If all messages suggest that the parameter\nis called \"timezone\", and not \"TimeZone\" (because we convert the name to lower\ncase), then it is surprising that the above query does not return results.\n\nIt would be better to call the parameter \"TimeZone\" everywhere.\n\n(It would be best to convert the parameter to lower case, but I am worried\nabout the compatibility-pain-to-benefit ratio.)\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 28 Nov 2023 08:05:02 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Tue, Nov 28, 2023 at 11:54:33AM +1100, Peter Smith wrote:\n> Here is patch set v3.\n> \n> Patches 0001 and 0002 are unchanged from v2.\n\nAfter some grepping, I've noticed that 0002 had a mistake with\ntrack_commit_timestamp: some alternate output of modules/commit_ts/\nwas not updated. meson was able to reproduce the failure as well.\n\nI am not sure regarding what we should do a mixed cases as well, so I\nhave discarded DateStyle for now, and applied the rest.\n\nAlso applied 0001 from Alvaro.\n\n> Patch 0003 now uses a \"%s%s%s\" format specifier with GUC_FORMAT macro\n> in guc.c, as recently suggested by Michael [1].\n\nI cannot think about a better idea as these strings need to be\ntranslated so they need three %s.\n\n+\t\tif (*p == '_')\n+\t\t\tunderscore = true;\n+\t\telse if ('a' <= *p && *p <= 'z')\n+\t\t\tlowercase = true;\n\nAn issue with this code is that it would forget to quote GUCs that use\ndots, like the ones from an extension. 
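(Take an invented extension parameter like \"myext.loglevel\": all of its\nletters are lower-case and it has no underscore, yet nothing in the\nflags above reflects that the dot already marks it as a parameter\nname.)\n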
I don't really see why we\ncannot just make the macro return true only if all the characters of a\nGUC name is made of lower-case alpha characters?\n\nWith an extra indentation applied, I finish with the attached for\n0003.\n--\nMichael", "msg_date": "Thu, 30 Nov 2023 14:59:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "At Thu, 30 Nov 2023 14:59:21 +0900, Michael Paquier <[email protected]> wrote in \r\n> > Patch 0003 now uses a \"%s%s%s\" format specifier with GUC_FORMAT macro\r\n> > in guc.c, as recently suggested by Michael [1].\r\n> \r\n> I cannot think about a better idea as these strings need to be\r\n> translated so they need three %s.\r\n\r\n\r\nIn this patch, the quotation marks cannot be changed from double\r\nquotes.\r\n\r\nAfter a brief review of the use of quotation marks in various\r\nlanguages, it's observed that French uses guillemets (« »), German\r\nuses lower qutation marks („ “), Spanish uses angular quotation marks\r\n(« ») or alternatively, lower quotetaion marks. Japanese commonly uses\r\ncorner brackets (「」), but can also adopt double or single quotation\r\nmarks in certain contexts. I took a look at the backend's fr.po file\r\nfor a trial, and it indeed seems that guillemets are being used.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Thu, 30 Nov 2023 15:29:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Thu, Nov 30, 2023 at 03:29:49PM +0900, Kyotaro Horiguchi wrote:\n> In this patch, the quotation marks cannot be changed from double\n> quotes.\n\nIndeed, that's a good point. I completely forgot about that.\n--\nMichael", "msg_date": "Thu, 30 Nov 2023 16:03:44 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Thu, Nov 30, 2023 at 4:59 PM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Nov 28, 2023 at 11:54:33AM +1100, Peter Smith wrote:\n> > Here is patch set v3.\n> >\n> > Patches 0001 and 0002 are unchanged from v2.\n>\n> After some grepping, I've noticed that 0002 had a mistake with\n> track_commit_timestamp: some alternate output of modules/commit_ts/\n> was not updated. meson was able to reproduce the failure as well.\n>\n> I am not sure regarding what we should do a mixed cases as well, so I\n> have discarded DateStyle for now, and applied the rest.\n>\n> Also applied 0001 from Alvaro.\n>\n\nThanks for pushing those parts.\n\n> > Patch 0003 now uses a \"%s%s%s\" format specifier with GUC_FORMAT macro\n> > in guc.c, as recently suggested by Michael [1].\n>\n> I cannot think about a better idea as these strings need to be\n> translated so they need three %s.\n>\n> + if (*p == '_')\n> + underscore = true;\n> + else if ('a' <= *p && *p <= 'z')\n> + lowercase = true;\n>\n> An issue with this code is that it would forget to quote GUCs that use\n> dots, like the ones from an extension. I don't really see why we\n> cannot just make the macro return true only if all the characters of a\n> GUC name is made of lower-case alpha characters?\n\nNot forgotten. I felt the dot separator in such names might be\nmistaken for a period in a sentence which is why I left quotes for\nthose ones. YMMV.\n\n======\nKind Regards,\nPeter. 
Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 30 Nov 2023 18:39:32 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "\n> +/*\n> + * Return whether the GUC name should be enclosed in double-quotes.\n> + *\n> + * Quoting is intended for names which could be mistaken for normal English\n> + * words. Quotes are only applied to GUC names that are written entirely with\n> + * lower-case alphabetical characters.\n> + */\n> +static bool\n> +quotes_needed_for_GUC_name(const char *name)\n> +{\n> +\tfor (const char *p = name; *p; p++)\n> +\t{\n> +\t\tif ('a' > *p || *p > 'z')\n> +\t\t\treturn false;\n> +\t}\n> +\n> +\treturn true;\n> +}\n\nI think you need a function that the name possibly quoted, in a way that\nlets the translator handle the quoting:\n\n static char buffer[SOMEMAXLEN];\n\n quotes_needed = ...;\n\n if (quotes_needed)\n /* translator: a quoted configuration parameter name */\n snprintf(buffer, _(\"\\\"%s\\\"\"), name);\n return buffer\n else\n /* no translation needed in this case */\n return name;\n\nthen the calling code just does a single %s that prints the string\nreturned by this function. (Do note that the function is not reentrant,\nlike pg_dump's fmtId. Shouldn't be a problem ...)\n\n> @@ -3621,8 +3673,8 @@ set_config_option_ext(const char *name, const char *value,\n> \t{\n> \t\tif (changeVal && !makeDefault)\n> \t\t{\n> -\t\t\telog(DEBUG3, \"\\\"%s\\\": setting ignored because previous source is higher priority\",\n> -\t\t\t\t name);\n> +\t\t\telog(DEBUG3, \"%s%s%s: setting ignored because previous source is higher priority\",\n> +\t\t\t\t GUC_FORMAT(name));\n\nNote that elog() doesn't do translation, and DEBUG doesn't really need\nto worry too much about style anyway. I'd leave these as-is.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La primera ley de las demostraciones en vivo es: no trate de usar el sistema.\nEscriba un guión que no toque nada para no causar daños.\" (Jakob Nielsen)\n\n\n", "msg_date": "Thu, 30 Nov 2023 11:57:05 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On 30.11.23 06:59, Michael Paquier wrote:\n> \t\t\tereport(elevel,\n> \t\t\t\t\t(errcode(ERRCODE_UNDEFINED_OBJECT),\n> -\t\t\t\t\t errmsg(\"unrecognized configuration parameter \\\"%s\\\" in file \\\"%s\\\" line %d\",\n> -\t\t\t\t\t\t\titem->name,\n> +\t\t\t/* translator: %s%s%s is for an optionally quoted GUC name */\n> +\t\t\t\t\t errmsg(\"unrecognized configuration parameter %s%s%s in file \\\"%s\\\" line %d\",\n> +\t\t\t\t\t\t\tGUC_FORMAT(item->name),\n> \t\t\t\t\t\t\titem->filename, item->sourceline)));\n\nI think this is completely over-engineered and wrong. If we start down \nthis road, then the next person is going to start engineering some rules \nby which we should quote file names and other things. Which will lead \nto more confusion, not less. 
The whole point of this quoting thing is \nthat you do it all the time or not, not dynamically based on what's \ninside of it.\n\nThe original version of this string (and similar ones) seems the most \ncorrect, simple, and useful one to me.\n\n\n\n", "msg_date": "Thu, 30 Nov 2023 21:38:03 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Fri, Dec 1, 2023 at 7:38 AM Peter Eisentraut <[email protected]> wrote:\n>\n> On 30.11.23 06:59, Michael Paquier wrote:\n> > ereport(elevel,\n> > (errcode(ERRCODE_UNDEFINED_OBJECT),\n> > - errmsg(\"unrecognized configuration parameter \\\"%s\\\" in file \\\"%s\\\" line %d\",\n> > - item->name,\n> > + /* translator: %s%s%s is for an optionally quoted GUC name */\n> > + errmsg(\"unrecognized configuration parameter %s%s%s in file \\\"%s\\\" line %d\",\n> > + GUC_FORMAT(item->name),\n> > item->filename, item->sourceline)));\n>\n> I think this is completely over-engineered and wrong. If we start down\n> this road, then the next person is going to start engineering some rules\n> by which we should quote file names and other things. Which will lead\n> to more confusion, not less. The whole point of this quoting thing is\n> that you do it all the time or not, not dynamically based on what's\n> inside of it.\n>\n> The original version of this string (and similar ones) seems the most\n> correct, simple, and useful one to me.\n>\n\nYeah, trying to manipulate the quoting dynamically seems like it was\nan overreach...\n\nRemoving that still leaves some other changes needed to \"fix\" the\nmessages using MixedCase GUCs.\n\n\nPSA v4\n\n======\nDetails:\n\nPatch 0001 -- \"datestyle\" becomes DateStyle in messages\nRebased this again, which was part of an earlier patch set\n- I think any GUC names documented as MixedCase should keep that same\ncase in messages; this also obeys the guidelines recently pushed [1].\n- Some others agreed, expecting the exact GUC name (in the message)\ncan be found in pg_settings [2].\n- OTOH, Michael didn't like the diff churn [3] caused by this patch.\n\n~~~\n\nPatch 0002 -- use mixed case for intervalstyle error message\nI found that the GUC name substituted to the error message was coming\nfrom the statement, not from the original name in the guc_tables, so\nthere was a case mismatch:\n\nBEFORE Patch 0002 (see the lowercase in the error message)\n2023-12-08 13:21:32.897 AEDT [32609] STATEMENT: set intervalstyle = 1234;\nERROR: invalid value for parameter \"intervalstyle\": \"1234\"\nHINT: Available values: postgres, postgres_verbose, sql_standard, iso_8601.\n\nAFTER Patch 0002\n2023-12-08 13:38:48.638 AEDT [29684] STATEMENT: set intervalstyle = 1234;\nERROR: invalid value for parameter \"IntervalStyle\": \"1234\"\nHINT: Available values: postgres, postgres_verbose, sql_standard, iso_8601.\n\n======\n[1] GUC quoting guidelines -\nhttps://github.com/postgres/postgres/commit/a243569bf65c5664436e8f63d870b7ee9c014dcb\n[2] The case should match pg_settings -\nhttps://www.postgresql.org/message-id/db3e4290ced77111c17e7a2adfb1d660734f5f78.camel%40cybertec.at\n[3] Dislike of diff churn -\nhttps://www.postgresql.org/message-id/ZWUd8dYYA9v83KvI%40paquier.xyz\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 8 Dec 2023 15:10:29 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On 2023-Dec-08, Peter Smith wrote:\n\n> Patch 0001 -- \"datestyle\" 
becomes DateStyle in messages\n> Rebased this again, which was part of an earlier patch set\n> - I think any GUC names documented as MixedCase should keep that same\n> case in messages; this also obeys the guidelines recently pushed [1].\n\nI agree.\n\n> Patch 0002 -- use mixed case for intervalstyle error message\n> I found that the GUC name substituted to the error message was coming\n> from the statement, not from the original name in the guc_tables, so\n> there was a case mismatch:\n\nI agree. Let's also add a test that shows this difference (my 0002\nhere).\n\nI'm annoyed that this saga has transiently created a few untranslated\nstrings by removing unnecessary quotes but failing to move the variable\nnames outside the translatable part of the string. I change a few of\nthose in 0003 -- mostly the ones in strings already touched by commit\n8d9978a7176a, but also a few others. Didn't go out of my way to grep\nfor other possible messages to fix, though. (I feel like this is\nmissing some \"translator:\" comments.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"We’ve narrowed the problem down to the customer’s pants being in a situation\n of vigorous combustion\" (Robert Haas, Postgres expert extraordinaire)", "msg_date": "Fri, 8 Dec 2023 13:55:28 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On 08.12.23 05:10, Peter Smith wrote:\n> Patch 0001 -- \"datestyle\" becomes DateStyle in messages\n> Rebased this again, which was part of an earlier patch set\n> - I think any GUC names documented as MixedCase should keep that same\n> case in messages; this also obeys the guidelines recently pushed [1].\n> - Some others agreed, expecting the exact GUC name (in the message)\n> can be found in pg_settings [2].\n> - OTOH, Michael didn't like the diff churn [3] caused by this patch.\n\nI'm fine with adjusting the mixed-case stuff, but intuitively, I don't \nthink removing the quotes in this is an improvement:\n\n- GUC_check_errdetail(\"Conflicting \\\"datestyle\\\" specifications.\");\n+ GUC_check_errdetail(\"Conflicting DateStyle specifications.\");\n\n\n\n", "msg_date": "Fri, 8 Dec 2023 15:48:43 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Sat, Dec 9, 2023 at 1:48 AM Peter Eisentraut <[email protected]> wrote:\n>\n> On 08.12.23 05:10, Peter Smith wrote:\n> > Patch 0001 -- \"datestyle\" becomes DateStyle in messages\n> > Rebased this again, which was part of an earlier patch set\n> > - I think any GUC names documented as MixedCase should keep that same\n> > case in messages; this also obeys the guidelines recently pushed [1].\n> > - Some others agreed, expecting the exact GUC name (in the message)\n> > can be found in pg_settings [2].\n> > - OTOH, Michael didn't like the diff churn [3] caused by this patch.\n>\n> I'm fine with adjusting the mixed-case stuff, but intuitively, I don't\n> think removing the quotes in this is an improvement:\n>\n> - GUC_check_errdetail(\"Conflicting \\\"datestyle\\\" specifications.\");\n> + GUC_check_errdetail(\"Conflicting DateStyle specifications.\");\n>\n\nMy original intention of this thread was only to document the GUC name\nquoting guidelines and then apply those consistently in the code.\n\nI'm happy either way for the MixedCase names to be quoted or not\nquoted, whatever is the consensus.\n\nIf the rule is changed to quote those 
MixedCase GUCs then the docs\nwill require minor tweaking\n\nCURRENT\n <para>\n In messages containing configuration variable names, do not include quotes\n when the names are visibly not natural English words, such as when they\n have underscores, are all-uppercase or have mixed case. Otherwise, quotes\n must be added. Do include quotes in a message where an arbitrary variable\n name is to be expanded.\n </para>\n\n\"are all-uppercase or have mixed case.\" --> \"or are all-uppercase.\"\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 11 Dec 2023 10:07:59 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "This v5* looks good to me, except it will need some further\nmodification if PeterE's suggestion [1] to keep quotes for the\nMixedCase GUCs is adopted.\n\n======\n[1] https://www.postgresql.org/message-id/9e7802b2-2cf2-4c2d-b680-b2ccb9db1d2f%40eisentraut.org\n\nKind Regards,\nPeter Smith.\nFutjisu Australia.\n\n\n", "msg_date": "Mon, 11 Dec 2023 10:14:11 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Mon, Dec 11, 2023 at 10:14:11AM +1100, Peter Smith wrote:\n> This v5* looks good to me, except it will need some further\n> modification if PeterE's suggestion [1] to keep quotes for the\n> MixedCase GUCs is adopted.\n\n- errdetail(\"The database cluster was initialized with CATALOG_VERSION_NO %d,\"\n- \" but the server was compiled with CATALOG_VERSION_NO %d.\",\n- ControlFile->catalog_version_no, CATALOG_VERSION_NO),\n+ /*- translator: %s is a variable name and %d is its value */\n+ errdetail(\"The database cluster was initialized with %s %d,\"\n+ \" but the server was compiled with %s %d.\",\n+ \"CATALOG_VERSION_NO\",\n\nGood point. There are a lot of strings that can be shaved from the\ntranslations here.\n\nsrc/backend/access/transam/xlog.c: errdetail(\"The database cluster was initialized with PG_CONTROL_VERSION %d (0x%08x),\"\nsrc/backend/access/transam/xlog.c: errdetail(\"The database cluster was initialized with PG_CONTROL_VERSION %d,\"\nsrc/backend/access/transam/xlog.c: errdetail(\"The database cluster was initialized without USE_FLOAT8_BYVAL\"\nsrc/backend/access/transam/xlog.c: errdetail(\"The database cluster was initialized with USE_FLOAT8_BYVAL\"\n\nI think that you should apply the same conversion for these ones.\nThere is no gain with the 1st and 3rd ones, but the 2nd and 4th one\ncan be grouped together.\n\nFWIW, if we don't convert MixedCase GUCs to become mixedcase, I don't\nthink that there is any need to apply quotes to them because they\ndon't really look like natural English words. That's as far as my\nopinion goes, so feel free to ignore me if the consensus is different.\n--\nMichael", "msg_date": "Mon, 11 Dec 2023 11:00:07 +0100", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On 11.12.23 00:07, Peter Smith wrote:\n> If the rule is changed to quote those MixedCase GUCs then the docs\n> will require minor tweaking\n> \n> CURRENT\n> <para>\n> In messages containing configuration variable names, do not include quotes\n> when the names are visibly not natural English words, such as when they\n> have underscores, are all-uppercase or have mixed case. Otherwise, quotes\n> must be added. 
Do include quotes in a message where an arbitrary variable\n> name is to be expanded.\n> </para>\n> \n> \"are all-uppercase or have mixed case.\" --> \"or are all-uppercase.\"\n\nAfter these discussions, I think this rule change was not a good idea. \nIt effectively enforces these kinds of inconsistencies. For example, if \nyou ever refactored\n\n \"DateStyle is wrong\"\n\nto\n\n \"%s is wrong\"\n\nyou'd need to adjust the quotes, and thus user-visible behavior, for \nentirely internal reasons. This is not good. And then came the idea to \ndetermine the quoting dynamically, which I think everyone agreed was too \nmuch. So I don't see a way to make this work well.\n\n\n\n", "msg_date": "Thu, 14 Dec 2023 09:38:40 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Thu, Dec 14, 2023 at 09:38:40AM +0100, Peter Eisentraut wrote:\n> After these discussions, I think this rule change was not a good idea. It\n> effectively enforces these kinds of inconsistencies. For example, if you\n> ever refactored\n> \n> \"DateStyle is wrong\"\n> \n> to\n> \n> \"%s is wrong\"\n> \n> you'd need to adjust the quotes, and thus user-visible behavior, for\n> entirely internal reasons. This is not good.\n\nSo, what are you suggesting? Should the encouraged rule be removed\nfrom the docs? Or do you object to some of the changes done in the\nlatest patch series v5?\n\nFWIW, I am a bit meh with v5-0001, because I don't see the benefits.\nOn the contrary v5-0003 is useful, because it reduces a bit the number\nof strings to translate. That's always good to take. I don't have a\nproblem with v5-0002, either, where we'd begin using the name of the \nGUC as stored in the static tables rather than the name provided in\nthe SET query, particularly for the reason that it makes the GUC name\na bit more consistent even when using double-quotes around the\nparameter name in the query, where the error messages would not force\na lower-case conversion. The patch would, AFAIU, change HEAD from\nthat:\n=# SET \"intervalstylE\" to popo;\nERROR: 22023: invalid value for parameter \"intervalstylE\": \"popo\"\nTo that:\n=# SET \"intervalstylE\" to popo;\nERROR: 22023: invalid value for parameter \"IntervalStyle\": \"popo\"\n\n> And then came the idea to\n> determine the quoting dynamically, which I think everyone agreed was too\n> much. So I don't see a way to make this work well.\n\nYeah, with the quotes being language-dependent, any idea I can think\nof is as good as unreliable and dead.\n--\nMichael", "msg_date": "Sat, 16 Dec 2023 10:33:45 +0100", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "Hi,\n\nThis thread seems to be a bit stuck, so I thought I would try to\nsummarize my understanding to hopefully get it moving again...\n\nThe original intent of the thread was just to introduce some\nguidelines for quoting or not quoting GUC names in messages because\npreviously it seemed quite ad-hoc. Along the way, there was some scope\ncreep. IIUC, now there are 3 main topics in this thread:\n\n1. GUC name quoting\n2. GUC name case\n3. Using common messages\n\n======\n\n#1. 
GUC name quoting.\n\nSome basic guidelines were decided and a patch is already pushed [1].\n\n <para>\n In messages containing configuration variable names, do not include quotes\n when the names are visibly not natural English words, such as when they\n have underscores, are all-uppercase or have mixed case. Otherwise, quotes\n must be added. Do include quotes in a message where an arbitrary variable\n name is to be expanded.\n </para>\n\nAFAIK there is nothing controversial there, although maybe the\nguideline for 'mixed case' needs revisiting depending on objections\nabout point #2.\n\n~~~\n\n#2. GUC name case.\n\nGUC names defined in guc_tables.c are either lowercase (xxxx),\nlowercase with underscores (xxxx_yyyy) or mixed case (XxxxYyyy).\n\nThere are only a few examples of mixed case. They are a bit\nproblematic, but IIUC they are not going to go away so we need to\ndeal with them:\n- TimeZone\n- DateStyle\n- IntervalStyle\n\nIt was proposed (e.g. [2]) that it would be better/intuitive if the\nGUC name of the error message would be the same case as in the\nguc_tables.c. In other words, you should be able to find\nthe same name from the message in pg_settings.\n\nSo messages with \"datestyle\" should become DateStyle because:\nSELECT * FROM pg_settings WHERE name = 'DateStyle'; ==> found\nSELECT * FROM pg_settings WHERE name = 'datestyle'; ==> not found\n\nThe latest v5 patches make those adjustments:\n- Patch v5-0001 fixes case for DateStyle. Yeah, there is some diff\nchurn because there are a lot of DateStyle tests, but IMO that's too\nbad.\n- Patch v5-0002 fixes case for IntervalStyle.\n\n~~~\n\n#3. Using common messages\n\nAny message with a non-translatable component (e.g. the GUC\nname) can be restructured in a way so there is a common translatable\nerrmsg part with the non-translatable parameters substituted.\n\ne.g.\n- GUC_check_errdetail(\"The only allowed value is \\\"immediate\\\".\");\n+ GUC_check_errdetail(\"The only allowed value is \\\"%s\\\".\", \"immediate\");\n\nAFAIK there is no disagreement that this is a good idea,\nalthough IMO it deserved to be in a separate thread.\n\nI think there will be many messages that qualify to be modified, and\nprobably there will be some discussion about whether certain common\nmessages can be merged -- (e.g. Is \"You might need to increase\n%s.\" same as \"Consider increasing %s.\" or not?).\n\nAnyway, this part is a WIP. Currently, patch v5-0003 makes a start for\nthis task.\n\n//////\n\nI think patches v5-0002, v5-0003 are uncontroversial.\n\nSo the sticking point seems to be the MixedCase GUC (e.g. patch\nv5-0001). I agree that the churn is not ideal (it's only because\nthere are lots of DateStyle messages in test output), but IMO that's\njust what happens if a rule is applied when previously there were no\nrules.\n\nAlso, PeterE wrote [4]\n\n> On Thu, Dec 14, 2023 at 09:38:40AM +0100, Peter Eisentraut wrote:\n> > After these discussions, I think this rule change was not a good idea. It\n> > effectively enforces these kinds of inconsistencies. For example, if you\n> > ever refactored\n> >\n> > \"DateStyle is wrong\"\n> >\n> > to\n> >\n> > \"%s is wrong\"\n> >\n> > you'd need to adjust the quotes, and thus user-visible behavior, for\n> > entirely internal reasons. This is not good.\n>\n\nI didn't understand the problem. 
By the current guidelines the mixed\ncase GUC won't be quoted in the message (see patch v5-0001).\n\nSo whether it is:\nerrmsg(\"DateStyle is wrong\"), OR\nerrmsg(\"%s is wrong\", \"DateStyle\")\n\nwhere is the \"you'd need to adjust the quotes\" problem there?\n\n======\n[1] GUC quoting guidelines --\nhttps://www.postgresql.org/docs/devel/error-style-guide.html\n[2] Case in messages should be same as pg_settings --\nhttps://www.postgresql.org/message-id/db3e4290ced77111c17e7a2adfb1d660734f5f78.camel%40cybertec.at\n[3] v5 patches --\nhttps://www.postgresql.org/message-id/202312081255.wlsfmhe2sri7%40alvherre.pgsql\n[4] PeterE's concern about DateStyle --\nhttps://www.postgresql.org/message-id/6d66eb1a-290d-4aaa-972a-0a06a1af02af%40eisentraut.org\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 21 Dec 2023 17:24:18 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On 21.12.23 07:24, Peter Smith wrote:\n> #1. GUC name quoting.\n> \n> Some basic guidelines were decided and a patch is already pushed [1].\n> \n> <para>\n> In messages containing configuration variable names, do not include quotes\n> when the names are visibly not natural English words, such as when they\n> have underscores, are all-uppercase or have mixed case. Otherwise, quotes\n> must be added. Do include quotes in a message where an arbitrary variable\n> name is to be expanded.\n> </para>\n> \n> AFAIK there is nothing controversial there, although maybe the\n> guideline for 'mixed case' needs revisiting depending on objections\n> about point #2.\n\nNow that I read this again, I think this is wrong.\n\nWe should decide the quoting for a category, not the actual content. \nLike, quote all file names; do not quote keywords.\n\nThis led to the attempted patch to decide the quoting of GUC parameter \nnames dynamically based on the actual content, which no one really \nliked. But then, to preserve consistency, we also need to be uniform in \nquoting GUC parameter names where the name is hardcoded.\n\n\n\n", "msg_date": "Thu, 21 Dec 2023 14:24:00 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Fri, Dec 22, 2023 at 12:24 AM Peter Eisentraut <[email protected]> wrote:\n>\n> On 21.12.23 07:24, Peter Smith wrote:\n> > #1. GUC name quoting.\n> >\n> > Some basic guidelines were decided and a patch is already pushed [1].\n> >\n> > <para>\n> > In messages containing configuration variable names, do not include quotes\n> > when the names are visibly not natural English words, such as when they\n> > have underscores, are all-uppercase or have mixed case. Otherwise, quotes\n> > must be added. Do include quotes in a message where an arbitrary variable\n> > name is to be expanded.\n> > </para>\n> >\n> > AFAIK there is nothing controversial there, although maybe the\n> > guideline for 'mixed case' needs revisiting depending on objections\n> > about point #2.\n>\n> Now that I read this again, I think this is wrong.\n>\n> We should decide the quoting for a category, not the actual content.\n> Like, quote all file names; do not quote keywords.\n>\n> This led to the attempted patch to decide the quoting of GUC parameter\n> names dynamically based on the actual content, which no one really\n> liked. But then, to preserve consistency, we also need to be uniform in\n> quoting GUC parameter names where the name is hardcoded.\n>\n\nI agree. 
By attempting to define when to and when not to use quotes it\nhas become overcomplicated.\n\nEarlier in the thread, I counted how quotes were used in the existing\nmessages [5]; there were ~39 quoted and 164 not quoted. Based on that\nwe chose to stay with the majority, and leave all the unquoted ones so\nonly adding quotes \"when necessary\". In hindsight, that was probably\nthe wrong choice because it opened a can of worms about what \"when\nnecessary\" even means (e.g. what about underscores, mixed case etc).\n\nCertainly one simple rule \"just quote everything\" is easiest to follow.\n\n~~~\n\nOPTION#1. DO quote hardcoded GUC names everywhere\n- pro: consistent with the dynamic names, which are always quoted\n- pro: no risk of mistaking GUC names for normal words in the message\n- con: more patch changes than not quoting\n\nLaurenz [2] \"My personal preference is to always quote GUC names\"\nNathan [3][4] \"I'd vote for quoting all GUC names, if for no other\nreason than \"visibly not English natural words\" feels a bit open to\ninterpretation.\"\nPeterE [6] \"... to preserve consistency, we also need to be uniform in\nquoting GUC parameter names where the name is hardcoded.\"\n\n~\n\nOPTION#2. DO NOT quote hardcoded GUC names anywhere\n- pro: fewer patch changes than quoting everything\n- con: not consistent with the dynamic names, which are always quoted\n- con: risk of mistaking GUC names for normal words in the message\n\nPeterE originally [1] said \"I'm leaning toward not quoting GUC\nnames\", but IIUC changed his opinion in [6].\n\n~~~\n\nGiven the above, I've updated the v6 patch set to just *always* quote\nGUC names. The docs are also rewritten.\n\n======\n[1] https://www.postgresql.org/message-id/22998fc0-93c2-48d2-b0f9-361cd5764695%40eisentraut.org\n[2] https://www.postgresql.org/message-id/4b83f9888428925e3049e24b60a73f4b94dc2368.camel%40cybertec.at\n[3] https://www.postgresql.org/message-id/20231102015239.GA82553%40nathanxps13\n[4] https://www.postgresql.org/message-id/20231107145821.GA779199%40nathanxps13\n[5] https://www.postgresql.org/message-id/CAHut%2BPtqTao%2BOKRxGcCzUxt9h9d0%3DTQZZoRjMYe3xe0-O7_hsQ%40mail.gmail.com\n[6] https://www.postgresql.org/message-id/1704b2cf-2444-484a-a7a4-2ba79f72951d%40eisentraut.org\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 4 Jan 2024 17:53:44 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On 04.01.24 07:53, Peter Smith wrote:\n>> Now that I read this again, I think this is wrong.\n>>\n>> We should decide the quoting for a category, not the actual content.\n>> Like, quote all file names; do not quote keywords.\n>>\n>> This led to the attempted patch to decide the quoting of GUC parameter\n>> names dynamically based on the actual content, which no one really\n>> liked. But then, to preserve consistency, we also need to be uniform in\n>> quoting GUC parameter names where the name is hardcoded.\n>>\n> \n> I agree. By attempting to define when to and when not to use quotes it\n> has become overcomplicated.\n> \n> Earlier in the thread, I counted how quotes were used in the existing\n> messages [5]; there were ~39 quoted and 164 not quoted. Based on that\n> we chose to stay with the majority, and leave all the unquoted ones so\n> only adding quotes \"when necessary\". In hindsight, that was probably\n> the wrong choice because it opened a can of worms about what \"when\n> necessary\" even means (e.g. 
what about underscores, mixed case etc).\n> \n> Certainly one simple rule \"just quote everything\" is easiest to follow.\n\nI've been going through the translation updates for PG17 these days and \nwas led back around to this issue. It seems we left it in an \nintermediate state that no one was really happy with and which is \narguably as inconsistent or more so than before.\n\nI think we should accept your two patches\n\nv6-0001-GUC-names-docs.patch\nv6-0002-GUC-names-add-quotes.patch\n\nwhich effectively everyone was in favor of and which seem to be the most \nrobust and sustainable solution.\n\n(The remaining three patches from the v6 set would be PG18 material at \nthis point.)\n\n\n\n", "msg_date": "Thu, 16 May 2024 13:35:29 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On 2024-May-16, Peter Eisentraut wrote:\n\n> I think we should accept your two patches\n> \n> v6-0001-GUC-names-docs.patch\n> v6-0002-GUC-names-add-quotes.patch\n> \n> which effectively everyone was in favor of and which seem to be the most\n> robust and sustainable solution.\n\nI think we should also take patch 0005 in pg17, which reduces the number\nof strings to translate.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"XML!\" Exclaimed C++. \"What are you doing here? You're not a programming\nlanguage.\"\n\"Tell that to the people who use me,\" said XML.\nhttps://burningbird.net/the-parable-of-the-languages/\n\n\n", "msg_date": "Thu, 16 May 2024 13:56:33 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "> On 16 May 2024, at 13:35, Peter Eisentraut <[email protected]> wrote:\n\n> I think we should accept your two patches\n\nI agree with this.\n\n> v6-0001-GUC-names-docs.patch\n\n+1\n\n> v6-0002-GUC-names-add-quotes.patch\n\n- errmsg(\"WAL generated with full_page_writes=off was replayed \"\n+ errmsg(\"WAL generated with \\\"full_page_writes=off\\\" was replayed \"\n \nI'm not a fan of this syntax, but I at the same time can't offer a better idea\nso this isn't an objection but a hope that it can be made even better during\nthe v18 cycle.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 16 May 2024 22:07:30 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "> On 16 May 2024, at 13:56, Alvaro Herrera <[email protected]> wrote:\n\n> I think we should also take patch 0005 in pg17, which reduces the number\n> of strings to translate.\n\nAgreed, lessening the burden on translators is always a good idea.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 16 May 2024 22:11:41 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 16 May 2024, at 13:35, Peter Eisentraut <[email protected]> wrote:\n>> - errmsg(\"WAL generated with full_page_writes=off was replayed \"\n>> + errmsg(\"WAL generated with \\\"full_page_writes=off\\\" was replayed \"\n \n> I'm not a fan of this syntax, but I at the same time can't offer a better idea\n> so this isn't an objection but a hope that it can be made even better during\n> the v18 cycle.\n\nYeah ... 
formally correct would be something like\n\n\terrmsg(\"WAL generated with \\\"full_page_writes\\\"=\\\"off\\\" was replayed \"\n\nbut that's a bit much for my taste.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 May 2024 16:20:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Thu, May 16, 2024 at 9:35 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 04.01.24 07:53, Peter Smith wrote:\n> >> Now that I read this again, I think this is wrong.\n> >>\n> >> We should decide the quoting for a category, not the actual content.\n> >> Like, quote all file names; do not quote keywords.\n> >>\n> >> This led to the attempted patch to decide the quoting of GUC parameter\n> >> names dynamically based on the actual content, which no one really\n> >> liked. But then, to preserve consistency, we also need to be uniform in\n> >> quoting GUC parameter names where the name is hardcoded.\n> >>\n> >\n> > I agree. By attempting to define when to and when not to use quotes it\n> > has become overcomplicated.\n> >\n> > Earlier in the thread, I counted how quotes were used in the existing\n> > messages [5]; there were ~39 quoted and 164 not quoted. Based on that\n> > we chose to stay with the majority, and leave all the unquoted ones so\n> > only adding quotes \"when necessary\". In hindsight, that was probably\n> > the wrong choice because it opened a can of worms about what \"when\n> > necessary\" even means (e.g. what about underscores, mixed case etc).\n> >\n> > Certainly one simple rule \"just quote everything\" is easiest to follow.\n>\n> I've been going through the translation updates for PG17 these days and\n> was led back around to this issue. It seems we left it in an\n> intermediate state that no one was really happy with and which is\n> arguably as inconsistent or more so than before.\n>\n> I think we should accept your two patches\n>\n> v6-0001-GUC-names-docs.patch\n> v6-0002-GUC-names-add-quotes.patch\n>\n> which effectively everyone was in favor of and which seem to be the most\n> robust and sustainable solution.\n>\n> (The remaining three patches from the v6 set would be PG18 material at\n> this point.)\n\nThanks very much for taking an interest in resurrecting this thread.\n\nIt was always my intention to come back to this when the dust had\nsettled on PG17. 
But it would be even better if the docs for the rule\n\"just quote everything\", and anything else you deem acceptable, can be\npushed sooner.\n\nOf course, there will still be plenty more to do for PG18, including\nlocating examples in newly pushed code for messages that have slipped\nthrough the cracks during the last few months using different formats,\nand other improvements, but those tasks should become easier if we can\nget some of these v6 patches out of the way first.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 17 May 2024 13:31:08 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On 17.05.24 05:31, Peter Smith wrote:\n>> I think we should accept your two patches\n>>\n>> v6-0001-GUC-names-docs.patch\n>> v6-0002-GUC-names-add-quotes.patch\n>>\n>> which effectively everyone was in favor of and which seem to be the most\n>> robust and sustainable solution.\n>>\n>> (The remaining three patches from the v6 set would be PG18 material at\n>> this point.)\n> Thanks very much for taking an interest in resurrecting this thread.\n> \n> It was always my intention to come back to this when the dust had\n> settled on PG17. But it would be even better if the docs for the rule\n> \"just quote everything\", and anything else you deem acceptable, can be\n> pushed sooner.\n> \n> Of course, there will still be plenty more to do for PG18, including\n> locating examples in newly pushed code for messages that have slipped\n> through the cracks during the last few months using different formats,\n> and other improvements, but those tasks should become easier if we can\n> get some of these v6 patches out of the way first.\n\nI committed your 0001 and 0002 now, with some small fixes.\n\nThere has also been quite a bit of new code, of course, since you posted \nyour patches, so we'll probably find a few more things that could use \nadjustment.\n\nI'd be happy to consider the rest of your patch set after beta1 and/or \nfor PG18.\n\n\n\n", "msg_date": "Fri, 17 May 2024 13:57:51 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Fri, May 17, 2024 at 9:57 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 17.05.24 05:31, Peter Smith wrote:\n> >> I think we should accept your two patches\n> >>\n> >> v6-0001-GUC-names-docs.patch\n> >> v6-0002-GUC-names-add-quotes.patch\n> >>\n> >> which effectively everyone was in favor of and which seem to be the most\n> >> robust and sustainable solution.\n> >>\n> >> (The remaining three patches from the v6 set would be PG18 material at\n> >> this point.)\n> > Thanks very much for taking an interest in resurrecting this thread.\n> >\n> > It was always my intention to come back to this when the dust had\n> > settled on PG17. 
But it would be even better if the docs for the rule\n> > \"just quote everything\", and anything else you deem acceptable, can be\n> > pushed sooner.\n> >\n> > Of course, there will still be plenty more to do for PG18, including\n> > locating examples in newly pushed code for messages that have slipped\n> > through the cracks during the last few months using different formats,\n> > and other improvements, but those tasks should become easier if we can\n> > get some of these v6 patches out of the way first.\n>\n> I committed your 0001 and 0002 now, with some small fixes.\n>\n> There has also been quite a bit of new code, of course, since you posted\n> your patches, so we'll probably find a few more things that could use\n> adjustment.\n>\n> I'd be happy to consider the rest of your patch set after beta1 and/or\n> for PG18.\n>\n\nThanks for pushing!\n\nI'll try to dedicate more time to this sometime soon to go through all\nthe code again to track down those loose ends.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 20 May 2024 08:25:26 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Fri, May 17, 2024 at 9:57 PM Peter Eisentraut <[email protected]> wrote:\n>\n> On 17.05.24 05:31, Peter Smith wrote:\n> >> I think we should accept your two patches\n> >>\n> >> v6-0001-GUC-names-docs.patch\n> >> v6-0002-GUC-names-add-quotes.patch\n> >>\n> >> which effectively everyone was in favor of and which seem to be the most\n> >> robust and sustainable solution.\n> >>\n> >> (The remaining three patches from the v6 set would be PG18 material at\n> >> this point.)\n> > Thanks very much for taking an interest in resurrecting this thread.\n> >\n> > It was always my intention to come back to this when the dust had\n> > settled on PG17. But it would be even better if the docs for the rule\n> > \"just quote everything\", and anything else you deem acceptable, can be\n> > pushed sooner.\n> >\n> > Of course, there will still be plenty more to do for PG18, including\n> > locating examples in newly pushed code for messages that have slipped\n> > through the cracks during the last few months using different formats,\n> > and other improvements, but those tasks should become easier if we can\n> > get some of these v6 patches out of the way first.\n>\n> I committed your 0001 and 0002 now, with some small fixes.\n>\n> There has also been quite a bit of new code, of course, since you posted\n> your patches, so we'll probably find a few more things that could use\n> adjustment.\n>\n> I'd be happy to consider the rest of your patch set after beta1 and/or\n> for PG18.\n\nThanks for pushing some of those v6 patches. Here is the new patch set v7*.\n\nI have used a homegrown script/regex to help identify all the GUC\nnames that still needed quoting. Many of these occurrences are from\nrecently pushed code -- i.e. 
they are more recent than the v6-0002\npatch previously pushed [1].\n\nThe new GUC quoting patches are separated by different GUC types only\nto simplify my processing of them.\n\nv7-0001 = Add quotes for GUCs - bool\nv7-0002 = Add quotes for GUCs - int\nv7-0003 = Add quotes for GUCs - real\nv7-0004 = Add quotes for GUCs - string\nv7-0005 = Add quotes for GUCs - enum\n\nThe other v7 patches are just carried forward unchanged from v6:\n\nv7-0006 = fix case for IntervalStyle\nv7-0007 = fix case for DateStyle\nv7-0008 = make common translatable message strings\n\n~~~~\n\nSTATUS\n\nHere is the status of these v7* patches, and the remaining work to do:\n\n* AFAIK those first 5 (\"Add quotes\") patches can be pushed ASAP in\nPG17. If anybody finds more GUCs still not quoted, they were\nprobably accidentally missed by me and should be fixed.\n\n* The remaining 3 patches may wait until PG18.\n\n* The patch 0008 (\"make common translatable message strings\") may be\nOK to be pushed as-is. OTOH, this is the tip of another iceberg so I\nexpect if we look harder there will be many, many more candidates to\nturn into common messages. There may also be examples where 'similar'\nmessages can use identical common text, but those will require more\ndiscussion/debate case by case.\n\n* Another remaining task is to check current usage and improve the\nconsistency of how some of the GUC values have been quoted. Refer to\nmail from Kyotaro-san [2] for examples of this.\n\n======\n[1] v6-0001,0002 were already pushed.\nhttps://www.postgresql.org/message-id/55ab714f-86e3-41a3-a1d2-a96a115db8bd%40eisentraut.org\n\n[2] https://www.postgresql.org/message-id/20240520.165613.189183526936651938.horikyota.ntt%40gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 28 May 2024 16:16:24 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Tue, May 28, 2024 at 4:16 PM Peter Smith <[email protected]> wrote:\n>\n...\n>\n> The new GUC quoting patches are separated by different GUC types only\n> to simplify my processing of them.\n>\n> v7-0001 = Add quotes for GUCs - bool\n> v7-0002 = Add quotes for GUCs - int\n> v7-0003 = Add quotes for GUCs - real\n> v7-0004 = Add quotes for GUCs - string\n> v7-0005 = Add quotes for GUCs - enum\n>\n> The other v7 patches are just carried forward unchanged from v6:\n>\n> v7-0006 = fix case for IntervalStyle\n> v7-0007 = fix case for DateStyle\n> v7-0008 = make common translatable message strings\n>\n> ~~~~\n\nHi,\n\nHere is a new patch set v8*, which is the same as v7* but fixes an\nerror in v7-0008 detected by cfbot.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 11 Jun 2024 12:11:08 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "CFBot reported some failures, so I have attached the rebased patch set v9*.\n\nI'm hopeful the majority of these might be pushed to avoid more rebasing...\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 20 Aug 2024 17:40:13 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "Hi.\n\nThe cfbot was reporting my patches needed to be rebased.\n\nHere is the rebased patch set v10*. Everything is the same as before\nexcept now there are only 7 patches instead of 8. 
The previous v9-0001\n(\"bool\") patch no longer exists because those changes are now already\npresent in HEAD.\n\nI hope these might be pushed soon to avoid further rebasing.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 3 Sep 2024 12:00:19 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Tue, Sep 03, 2024 at 12:00:19PM +1000, Peter Smith wrote:\n> Here is the rebased patch set v10*. Everything is the same as before\n> except now there are only 7 patches instead of 8. The previous v9-0001\n> (\"bool\") patch no longer exists because those changes are now already\n> present in HEAD.\n> \n> I hope these might be pushed soon to avoid further rebasing.\n\n0001~0004 could just be merged; they're the same thing, for different\nGUC types. The consensus mentioned in 17974ec25946 makes that clear.\n\n0007 is a good thing for translators, indeed. I'll see about doing\nsomething here, at least.\n--\nMichael", "msg_date": "Tue, 3 Sep 2024 15:35:13 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Tue, Sep 3, 2024 at 4:35 PM Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Sep 03, 2024 at 12:00:19PM +1000, Peter Smith wrote:\n> > Here is the rebased patch set v10*. Everything is the same as before\n> > except now there are only 7 patches instead of 8. The previous v9-0001\n> > (\"bool\") patch no longer exists because those changes are now already\n> > present in HEAD.\n> >\n> > I hope these might be pushed soon to avoid further rebasing.\n>\n> 0001~0004 could just be merged; they're the same thing, for different\n> GUC types. The consensus mentioned in 17974ec25946 makes that clear.\n>\n> 0007 is a good thing for translators, indeed. I'll see about doing\n> something here, at least.\n> --\n> Michael\n\nHi Michael, thanks for your interest.\n\nI have merged the patches 0001-0004 as suggested. Please see v11 attachments.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 4 Sep 2024 09:17:15 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Wed, Sep 04, 2024 at 09:17:15AM +1000, Peter Smith wrote:\n> I have merged the patches 0001-0004 as suggested. Please see v11 attachments.\n\nThanks.\n\nIt took me some time to go through the whole tree for more\ninconsistencies.\n\nIn 0001, search_path was missing quotes in vacuumlo.c and oid2name.c.\nNot the most critical tools ever, but I still fixed these.\n\nCheckRequiredParameterValues() has two \"wal_level=minimal\". Shouldn't\nseparate quotes be used for the GUC name and its value to be more\nconsistent with the rest? There are also two \"full_page_writes=off\"\nin xlog.c. Point mentioned at [1] by Daniel on v6, changed as they\nare on HEAD by 17974ec25946.\n\nIn 0004, there are a couple of changes where this does not represent a\ngain for translators, and it was even made worse. For example,\nhuge_page_size was updated for sysv but not WIN32, leading to two\nmessages. 
The changes in postgres.c, predicate.c, syncrep.c and\nvariable.c don't show a gain.\n\nThe changes in dfmgr.c should have a translator note, I think, because\nit becomes unclear what these messages are about.\n\nBy the way, I don't get why we use \"/*- translator:\" in some places\nwhile we document to use \"/* translator:\" in the NLS section of the\ndocs. One pattern is used much more than the other; guess which one.\n\n0001 and 0004 have been applied with these tweaks. I am still not\nsure about the changes for DateStyle and IntervalStyle in 0002 and\n0003. Perhaps others have an opinion that could drive this to a consensus.\n\n[1]: https://www.postgresql.org/message-id/[email protected]\n--\nMichael", "msg_date": "Wed, 4 Sep 2024 14:54:43 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "Hello Peter,\n\n[ sorry for being somewhat off-topic ]\n\n17.05.2024 14:57, Peter Eisentraut wrote:\n> I committed your 0001 and 0002 now, with some small fixes.\n>\n> There has also been quite a bit of new code, of course, since you posted your patches, so we'll probably find a few \n> more things that could use adjustment.\n>\n> I'd be happy to consider the rest of your patch set after beta1 and/or for PG18.\n\nWhile translating messages, I've encountered a weird message, updated by\n17974ec25:\n     printf(_(\"(in \\\"wal_sync_method\\\" preference order, except fdatasync is Linux's default)\\n\"));\n\nDoes \"except ...\" make sense here, or is it just a copy-and-paste from the docs?:\n         The default is the first method in the above list that is supported\n         by the platform, except that <literal>fdatasync</literal> is the default on\n         Linux and FreeBSD.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 4 Sep 2024 19:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC names in messages" }, { "msg_contents": "On Wed, Sep 4, 2024 at 3:54 PM Michael Paquier <[email protected]> wrote:\n>\n...\n> 0001 and 0004 have been applied with these tweaks. I am still not\n> sure about the changes for DateStyle and IntervalStyle in 0002 and\n> 0003. Perhaps others have an opinion that could drive this to a consensus.\n>\n\nThanks for pushing the patches 0001 and 0004.\n\nI have rebased the two remaining patches. See v12 attached.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 10 Sep 2024 17:11:13 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC names in messages" } ]
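
To make the conventions from the thread above concrete, here is a minimal sketch of a message that follows both settled rules: the GUC name is always wrapped in "%s"-style quotes (hardcoded or dynamic), and the name is kept out of the translatable format string so translators see a single message for every parameter. The function report_guc_too_low() and the work_mem caller are illustrative assumptions, not code from the committed patches.

#include "postgres.h"

/*
 * Hypothetical reporting helper, sketching the conventions only.
 * Because the name arrives as a parameter, one translatable string
 * ("\"%s\" is set too low") covers every GUC, and the \"%s\" wrapping
 * quotes hardcoded and dynamic names uniformly.
 */
static void
report_guc_too_low(const char *guc_name)
{
	ereport(ERROR,
			(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
			 /* GUC name always quoted, never embedded in the format */
			 errmsg("\"%s\" is set too low", guc_name),
			 /* common hint string shared by all callers */
			 errhint("You might need to increase \"%s\".", guc_name)));
}

/* e.g. report_guc_too_low("work_mem"); */

A usage note: the benefit shows up in the .po files, where the two format strings above appear exactly once no matter how many GUCs are reported this way.
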
[ { "msg_contents": "Hi hackers,\n\nI found that there's a nullable pointer being passed to strcmp() and\ncan make the server crash. It can be reproduced on the latest master\nbranch by crafting an extension[1]. Patch for fixing it is attatched.\n\n[1] https://github.com/higuoxing/guc_crash/tree/pg\n\n-- \nBest Regards,\nXing", "msg_date": "Wed, 1 Nov 2023 17:25:42 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Don't pass NULL pointer to strcmp()." }, { "msg_contents": "On Wed, Nov 1, 2023 at 5:25 PM Xing Guo <[email protected]> wrote:\n>\n> Hi hackers,\n>\n> I found that there's a nullable pointer being passed to strcmp() and\n> can make the server crash. It can be reproduced on the latest master\n> branch by crafting an extension[1]. Patch for fixing it is attatched.\n>\n> [1] https://github.com/higuoxing/guc_crash/tree/pg\n>\n\nCan we set a string guc to NULL? If not, `*lconf->variable == NULL` would\nbe unnecessary.\n\n> --\n> Best Regards,\n> Xing\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Wed, 1 Nov 2023 18:21:00 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't pass NULL pointer to strcmp()." }, { "msg_contents": "Hi,\n\n> > I found that there's a nullable pointer being passed to strcmp() and\n> > can make the server crash. It can be reproduced on the latest master\n> > branch by crafting an extension[1]. Patch for fixing it is attatched.\n> >\n> > [1] https://github.com/higuoxing/guc_crash/tree/pg\n\nThanks for reporting. I can confirm that the issue reproduces on the\n`master` branch and the proposed patch fixes it.\n\n> Can we set a string guc to NULL? If not, `*lconf->variable == NULL` would\n> be unnecessary.\n\nJudging by the rest of the code we better keep it, at least for consistenc.\n\nI see one more place with a similar code in guc.c around line 1472.\nAlthough I don't have exact steps to trigger a crash I suggest adding\na similar check there.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 1 Nov 2023 14:44:07 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't pass NULL pointer to strcmp()." }, { "msg_contents": "Hi Aleksander and Junwang,\n\nThanks for your comments. I have updated the patch accordingly.\n\nBest Regards,\nXing\n\n\n\n\n\n\n\n\nOn Wed, Nov 1, 2023 at 7:44 PM Aleksander Alekseev <[email protected]>\nwrote:\n\n> Hi,\n>\n> > > I found that there's a nullable pointer being passed to strcmp() and\n> > > can make the server crash. It can be reproduced on the latest master\n> > > branch by crafting an extension[1]. Patch for fixing it is attatched.\n> > >\n> > > [1] https://github.com/higuoxing/guc_crash/tree/pg\n>\n> Thanks for reporting. I can confirm that the issue reproduces on the\n> `master` branch and the proposed patch fixes it.\n>\n> > Can we set a string guc to NULL? If not, `*lconf->variable == NULL` would\n> > be unnecessary.\n>\n> Judging by the rest of the code we better keep it, at least for consistenc.\n>\n> I see one more place with a similar code in guc.c around line 1472.\n> Although I don't have exact steps to trigger a crash I suggest adding\n> a similar check there.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>", "msg_date": "Wed, 1 Nov 2023 21:03:10 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Don't pass NULL pointer to strcmp()." 
}, { "msg_contents": "Xing Guo <[email protected]> writes:\n> Thanks for your comments. I have updated the patch accordingly.\n\nI'm leery of accepting this patch, as I see no reason that we\nshould consider it valid for an extension to have a string GUC\nwith a boot_val of NULL.\n\nI realize that we have a few core GUCs that are like that, but\nI'm pretty sure that every one of them has special-case code\nthat initializes the GUC to something non-null a bit later on\nin startup. I don't think there are any cases where a string\nGUC's persistent value will be null, and I don't like the\nidea of considering that to be an allowed case. It would\nopen the door to more crash situations, and it brings up the\nold question of how could a user tell NULL from empty string\n(via SHOW or current_setting() or whatever). Besides, what's\nthe benefit really?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Nov 2023 11:30:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't pass NULL pointer to strcmp()." }, { "msg_contents": "Hi Tom,\n\nThere're extensions that set their boot_val to NULL. E.g., postgres_fdw (\nhttps://github.com/postgres/postgres/blob/4210b55f598534db9d52c4535b7dcc777dda75a6/contrib/postgres_fdw/option.c#L582),\nplperl (\nhttps://github.com/postgres/postgres/blob/4210b55f598534db9d52c4535b7dcc777dda75a6/src/pl/plperl/plperl.c#L422C13-L422C13,\nhttps://github.com/postgres/postgres/blob/4210b55f598534db9d52c4535b7dcc777dda75a6/src/pl/plperl/plperl.c#L444C12-L444C12,\nhttps://github.com/postgres/postgres/blob/4210b55f598534db9d52c4535b7dcc777dda75a6/src/pl/plperl/plperl.c#L452C6-L452C6)\n(Can we treat plperl as an extension?), pltcl (\nhttps://github.com/postgres/postgres/blob/4210b55f598534db9d52c4535b7dcc777dda75a6/src/pl/tcl/pltcl.c#L465C14-L465C14,\nhttps://github.com/postgres/postgres/blob/4210b55f598534db9d52c4535b7dcc777dda75a6/src/pl/tcl/pltcl.c#L472C12-L472C12\n).\n\nTBH, I don't know if NULL is a valid boot_val for string variables, I just\ncame across some extensions that use NULL as their boot_val. If the\nboot_val can't be NULL in extensions, we should probably add some\nassertions or comments about it?\n\nBest Regards,\nXing\n\n\n\n\n\n\n\n\nOn Wed, Nov 1, 2023 at 11:30 PM Tom Lane <[email protected]> wrote:\n\n> Xing Guo <[email protected]> writes:\n> > Thanks for your comments. I have updated the patch accordingly.\n>\n> I'm leery of accepting this patch, as I see no reason that we\n> should consider it valid for an extension to have a string GUC\n> with a boot_val of NULL.\n>\n> I realize that we have a few core GUCs that are like that, but\n> I'm pretty sure that every one of them has special-case code\n> that initializes the GUC to something non-null a bit later on\n> in startup. I don't think there are any cases where a string\n> GUC's persistent value will be null, and I don't like the\n> idea of considering that to be an allowed case. It would\n> open the door to more crash situations, and it brings up the\n> old question of how could a user tell NULL from empty string\n> (via SHOW or current_setting() or whatever). Besides, what's\n> the benefit really?\n>\n> regards, tom lane\n>\n\nHi Tom,There're extensions that set their boot_val to NULL. 
E.g., postgres_fdw (https://github.com/postgres/postgres/blob/4210b55f598534db9d52c4535b7dcc777dda75a6/contrib/postgres_fdw/option.c#L582), plperl (https://github.com/postgres/postgres/blob/4210b55f598534db9d52c4535b7dcc777dda75a6/src/pl/plperl/plperl.c#L422C13-L422C13, https://github.com/postgres/postgres/blob/4210b55f598534db9d52c4535b7dcc777dda75a6/src/pl/plperl/plperl.c#L444C12-L444C12, https://github.com/postgres/postgres/blob/4210b55f598534db9d52c4535b7dcc777dda75a6/src/pl/plperl/plperl.c#L452C6-L452C6) (Can we treat plperl as an extension?), pltcl (https://github.com/postgres/postgres/blob/4210b55f598534db9d52c4535b7dcc777dda75a6/src/pl/tcl/pltcl.c#L465C14-L465C14, https://github.com/postgres/postgres/blob/4210b55f598534db9d52c4535b7dcc777dda75a6/src/pl/tcl/pltcl.c#L472C12-L472C12).TBH, I don't know if NULL is a valid boot_val for string variables, I just came across some extensions that use NULL as their boot_val. If the boot_val can't be NULL in extensions, we should probably add some assertions or comments about it?Best Regards,XingOn Wed, Nov 1, 2023 at 11:30 PM Tom Lane <[email protected]> wrote:Xing Guo <[email protected]> writes:\n> Thanks for your comments. I have updated the patch accordingly.\n\nI'm leery of accepting this patch, as I see no reason that we\nshould consider it valid for an extension to have a string GUC\nwith a boot_val of NULL.\n\nI realize that we have a few core GUCs that are like that, but\nI'm pretty sure that every one of them has special-case code\nthat initializes the GUC to something non-null a bit later on\nin startup.  I don't think there are any cases where a string\nGUC's persistent value will be null, and I don't like the\nidea of considering that to be an allowed case.  It would\nopen the door to more crash situations, and it brings up the\nold question of how could a user tell NULL from empty string\n(via SHOW or current_setting() or whatever).  Besides, what's\nthe benefit really?\n\n                        regards, tom lane", "msg_date": "Thu, 2 Nov 2023 07:45:33 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Don't pass NULL pointer to strcmp()." }, { "msg_contents": "Xing Guo <[email protected]> writes:\n> There're extensions that set their boot_val to NULL. E.g., postgres_fdw\n\nHmm ... if we're doing it ourselves, I suppose we've got to consider\nit supported :-(. But I'm still wondering how many seldom-used\ncode paths didn't get the message. An example here is that this\ncould lead to GetConfigOptionResetString returning NULL, which\nI think is outside its admittedly-vague API spec.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Nov 2023 20:24:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't pass NULL pointer to strcmp()." }, { "msg_contents": "I wrote:\n> Hmm ... if we're doing it ourselves, I suppose we've got to consider\n> it supported :-(. But I'm still wondering how many seldom-used\n> code paths didn't get the message. An example here is that this\n> could lead to GetConfigOptionResetString returning NULL, which\n> I think is outside its admittedly-vague API spec.\n\nAfter digging around for a bit, I think part of the problem is a lack\nof a clearly defined spec for what should happen with NULL string GUCs.\nIn the attached v3, I attempted to remedy that by adding a comment in\nguc_tables.h (which is maybe not the best place but I didn't see a\nbetter one). 
That led me to a couple more changes beyond what you had.\n\nIt's possible that some of these are unreachable --- for example,\ngiven that a NULL could only be the default value, I'm not sure that\nthe fix in write_one_nondefault_variable is a live bug. But we ought\nto code all this stuff defensively, and most of it already was\nNULL-safe.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 01 Nov 2023 21:57:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't pass NULL pointer to strcmp()." }, { "msg_contents": "Thank you Tom!\n\nYour comment\n\"NULL doesn't have semantics that are visibly different from an empty\nstring\" is exactly what I want to confirm :-)\n\nOn 11/2/23, Tom Lane <[email protected]> wrote:\n> I wrote:\n>> Hmm ... if we're doing it ourselves, I suppose we've got to consider\n>> it supported :-(. But I'm still wondering how many seldom-used\n>> code paths didn't get the message. An example here is that this\n>> could lead to GetConfigOptionResetString returning NULL, which\n>> I think is outside its admittedly-vague API spec.\n>\n> After digging around for a bit, I think part of the problem is a lack\n> of a clearly defined spec for what should happen with NULL string GUCs.\n> In the attached v3, I attempted to remedy that by adding a comment in\n> guc_tables.h (which is maybe not the best place but I didn't see a\n> better one). That led me to a couple more changes beyond what you had.\n>\n> It's possible that some of these are unreachable --- for example,\n> given that a NULL could only be the default value, I'm not sure that\n> the fix in write_one_nondefault_variable is a live bug. But we ought\n> to code all this stuff defensively, and most of it already was\n> NULL-safe.\n>\n> \t\t\tregards, tom lane\n>\n>\n\n\n-- \nBest Regards,\nXing\n\n\n", "msg_date": "Thu, 2 Nov 2023 10:09:27 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Don't pass NULL pointer to strcmp()." }, { "msg_contents": "On Wed, Nov 01, 2023 at 09:57:18PM -0400, Tom Lane wrote:\n> I wrote:\n>> Hmm ... if we're doing it ourselves, I suppose we've got to consider\n>> it supported :-(. But I'm still wondering how many seldom-used\n>> code paths didn't get the message. An example here is that this\n>> could lead to GetConfigOptionResetString returning NULL, which\n>> I think is outside its admittedly-vague API spec.\n> \n> After digging around for a bit, I think part of the problem is a lack\n> of a clearly defined spec for what should happen with NULL string GUCs.\n> In the attached v3, I attempted to remedy that by adding a comment in\n> guc_tables.h (which is maybe not the best place but I didn't see a\n> better one). That led me to a couple more changes beyond what you had.\n\nWhat if we disallowed NULL string GUCs in v17? That'd simplify the spec\nand future-proof against similar bugs, but it might also break a fair\nnumber of extensions. If there aren't any other reasons to continue\nsupporting it, maybe it's the right long-term approach, though.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 1 Nov 2023 21:32:55 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't pass NULL pointer to strcmp()." 
}, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Wed, Nov 01, 2023 at 09:57:18PM -0400, Tom Lane wrote:\n>> After digging around for a bit, I think part of the problem is a lack\n>> of a clearly defined spec for what should happen with NULL string GUCs.\n\n> What if we disallowed NULL string GUCs in v17?\n\nWell, we'd need to devise some other solution for hacks like the\none used by timezone_abbreviations (see comment in\ncheck_timezone_abbreviations). I think it's not worth the trouble,\nespecially seeing that 95% of guc.c is already set up for this.\nThe bugs are mostly in newer code like get_explain_guc_options,\nand I think that's directly traceable to the lack of any comments\nor docs about this behavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Nov 2023 22:39:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't pass NULL pointer to strcmp()." }, { "msg_contents": "On Wed, Nov 01, 2023 at 10:39:04PM -0400, Tom Lane wrote:\n> Nathan Bossart <[email protected]> writes:\n>> What if we disallowed NULL string GUCs in v17?\n> \n> Well, we'd need to devise some other solution for hacks like the\n> one used by timezone_abbreviations (see comment in\n> check_timezone_abbreviations). I think it's not worth the trouble,\n> especially seeing that 95% of guc.c is already set up for this.\n> The bugs are mostly in newer code like get_explain_guc_options,\n> and I think that's directly traceable to the lack of any comments\n> or docs about this behavior.\n\nEh, yeah, it's probably not worth it if we find ourselves trading one set\nof hacks for another.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 1 Nov 2023 22:45:57 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't pass NULL pointer to strcmp()." }, { "msg_contents": "Hi,\n\nSeems that Tom's patch cannot be applied to the current master branch.\nI just re-generate the patch for others to play with.\n\nOn 11/2/23, Nathan Bossart <[email protected]> wrote:\n> On Wed, Nov 01, 2023 at 10:39:04PM -0400, Tom Lane wrote:\n>> Nathan Bossart <[email protected]> writes:\n>>> What if we disallowed NULL string GUCs in v17?\n>>\n>> Well, we'd need to devise some other solution for hacks like the\n>> one used by timezone_abbreviations (see comment in\n>> check_timezone_abbreviations). I think it's not worth the trouble,\n>> especially seeing that 95% of guc.c is already set up for this.\n>> The bugs are mostly in newer code like get_explain_guc_options,\n>> and I think that's directly traceable to the lack of any comments\n>> or docs about this behavior.\n>\n> Eh, yeah, it's probably not worth it if we find ourselves trading one set\n> of hacks for another.\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n>\n\n\n-- \nBest Regards,\nXing", "msg_date": "Thu, 2 Nov 2023 16:48:00 +0800", "msg_from": "Xing Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Don't pass NULL pointer to strcmp()." }, { "msg_contents": "Looking closer, I realized that my proposed change in RestoreGUCState\nis unnecessary, because guc_free() is already permissive about being\npassed a NULL. That leaves us with one live bug in\nget_explain_guc_options, two probably-unreachable hazards in\ncheck_GUC_init and write_one_nondefault_variable, and two API changes\nin GetConfigOption and GetConfigOptionResetString. 
I'm dubious that\nback-patching the API changes would be a good idea, so I applied\nthat to HEAD only. The rest I backpatched as far as relevant.\n\nThanks for the report!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Nov 2023 11:59:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Don't pass NULL pointer to strcmp()." } ]
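
To round out the thread above, the defensive pattern that the fix applies can be sketched as follows. This assumes the struct config_string layout from guc_tables.h (a char **variable and a const char *boot_val, where NULL is expected only as a boot_val, e.g. when an extension passes NULL to DefineCustomStringVariable()). The helper name string_guc_is_default() is hypothetical; this is an illustration of the NULL-handling spec, not the committed code.

#include "postgres.h"
#include "utils/guc_tables.h"

/*
 * Hypothetical helper: decide whether a string GUC still has its
 * default value.  Both sides must be checked for NULL before calling
 * strcmp(), since a NULL boot_val is legal and compares equal only to
 * another NULL.
 */
static bool
string_guc_is_default(struct config_string *lconf)
{
	/* both NULL: the variable is still at its (NULL) default */
	if (lconf->boot_val == NULL && *lconf->variable == NULL)
		return true;

	/* exactly one side NULL: the value cannot match the default */
	if (lconf->boot_val == NULL || *lconf->variable == NULL)
		return false;

	/* both non-NULL: now it is safe to hand the pointers to strcmp() */
	return strcmp(lconf->boot_val, *lconf->variable) == 0;
}

A usage note: the crash reported at the top of the thread reaches the unguarded strcmp() precisely through the "both NULL" branch, which is why a simple strcmp(lconf->boot_val, *lconf->variable) dereferences a NULL pointer there.
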