[ { "msg_contents": "Hi all,\n\nAs promised around thread [1] that has moved the docs related to\nWindows into a new sub-section for Visual, here is a follow-up to\nimprove portions of its documentation, for discussion in the next CF.\n\nSome of my notes:\n- How much does command line editing work on Windows? When it came to\nVS, I always got the impression that this never worked. Andres in [2]\nmentioned otherwise because meson makes that easier?\n- ActiveState perl should not be recommended IMO, as being able to use\na perl binary requires one to *register* into their service for tokens\nthat can be used to kick perl sessions, last time I checked. Two\nalternatives:\n-- MinGW perl binary.\n-- strawberry perl (?) and Chocolatey.\n-- MSYS\n- http://www.mingw.org/ is a dead end. This could be replaced by\nlinks to https://www.mingw-w64.org/ instead?\n\nAt the end, the main issue that I have with this part of the\ndocumentation is the lack of consistency leading to a confusing user\nexperience in the builds of Windows. My recent impressions were that\nAndrew has picked up Chocolatey in some (all?) of his buildfarm\nanimals with Strawberry Perl. I've had a good experience with it,\nFWIW, but Andres has also mentioned me a couple of weeks ago while in\nPrague that Strawberry could lead to unpredictible results (sorry I\ndon't remember all the exact details).\n\nBetween MSYS2, mingw-w64 and Chocolatey, there are a lot of options\navailable to users. So shouldn't we try to recommend only of them,\nthen align the buildfarm and the CI to use one of them? Supporting\nmore than one, at most two, would be OK for me, my end goal would be\nto get rid entirely of the list of build dependencies in this \"Visual\"\nsection, because that's just a duplicate of what meson lists, except\nthat meson should do a better job at detecting dependencies than what\nthe now-dead MSVC scripts did. If we support two, the CI and the\nbuildfarm should run them.\n\nI am attaching a patch that's an embryon of work (little time for\nhacking as of life these days, still I wanted to get the discussion\nstarted), but let's discuss which direction we should take moving\nforward for 17~.\n\nThanks,\n\n[1]: https://www.postgresql.org/message-id/[email protected]\n[2]: https://www.postgresql.org/message-id/[email protected]\n--\nMichael", "msg_date": "Sun, 31 Dec 2023 15:13:03 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Simplify documentation related to Windows builds" }, { "msg_contents": "Hi,\n\nOn Sun, 31 Dec 2023 at 09:13, Michael Paquier <[email protected]> wrote:\n>\n> Hi all,\n>\n> As promised around thread [1] that has moved the docs related to\n> Windows into a new sub-section for Visual, here is a follow-up to\n> improve portions of its documentation, for discussion in the next CF.\n>\n> Some of my notes:\n> - How much does command line editing work on Windows? When it came to\n> VS, I always got the impression that this never worked. Andres in [2]\n> mentioned otherwise because meson makes that easier?\n\nI do not know that either.\n\n> - ActiveState perl should not be recommended IMO, as being able to use\n> a perl binary requires one to *register* into their service for tokens\n> that can be used to kick perl sessions, last time I checked. Two\n> alternatives:\n> -- MinGW perl binary.\n> -- strawberry perl (?) and Chocolatey.\n> -- MSYS\n\nI agree. Also, its installation & use steps are complicated IMO. 
It is\nnot like install it, add it to PATH and forget.\n\n> - http://www.mingw.org/ is a dead end. This could be replaced by\n> links to https://www.mingw-w64.org/ instead?\n\nCorrect.\n\n> At the end, the main issue that I have with this part of the\n> documentation is the lack of consistency leading to a confusing user\n> experience in the builds of Windows. My recent impressions were that\n> Andrew has picked up Chocolatey in some (all?) of his buildfarm\n> animals with Strawberry Perl. I've had a good experience with it,\n> FWIW, but Andres has also mentioned me a couple of weeks ago while in\n> Prague that Strawberry could lead to unpredictible results (sorry I\n> don't remember all the exact details).\n\nPostgres CI uses Strawberry Perl [1] as well but it is directly\ninstalled from the strawberryperl.com and its version is locked to\n'5.26.3.1' for now.\n\n> Between MSYS2, mingw-w64 and Chocolatey, there are a lot of options\n> available to users. So shouldn't we try to recommend only of them,\n> then align the buildfarm and the CI to use one of them? Supporting\n> more than one, at most two, would be OK for me, my end goal would be\n> to get rid entirely of the list of build dependencies in this \"Visual\"\n> section, because that's just a duplicate of what meson lists, except\n> that meson should do a better job at detecting dependencies than what\n> the now-dead MSVC scripts did. If we support two, the CI and the\n> buildfarm should run them.\n\nI agree.\n\n> I am attaching a patch that's an embryon of work (little time for\n> hacking as of life these days, still I wanted to get the discussion\n> started), but let's discuss which direction we should take moving\n> forward for 17~.\n\nThe current changes look good.\n\n Both <productname>Bison</productname> and\n<productname>Flex</productname>\n are included in the <productname>msys</productname> tool\nsuite, available\n- from <ulink url=\"http://www.mingw.org/wiki/MSYS\"></ulink> as\npart of the\n- <productname>MinGW</productname> compiler suite.\n+ from <ulink url=\"https://www.msys2.org/\"></ulink>.\n\nSince we are changing that part, I think we need to change 'You will\nneed to add the directory containing flex.exe and bison.exe to the\nPATH environment variable. In the case of MinGW, the directory is the\n\\msys\\1.0\\bin subdirectory of your MinGW installation directory.'\nsentence to its msys2 version. My initial testing showed that the\ndirectory is the '\\usr\\bin' subdirectory of the msys2 installation\ndirectory in my environment.\n\n[1] https://github.com/anarazel/pg-vm-images/blob/main/scripts/windows_install_perl.ps1\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Fri, 19 Jan 2024 12:38:32 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simplify documentation related to Windows builds" }, { "msg_contents": "\nOn 2024-01-19 Fr 04:38, Nazir Bilal Yavuz wrote:\n>\n>> At the end, the main issue that I have with this part of the\n>> documentation is the lack of consistency leading to a confusing user\n>> experience in the builds of Windows. My recent impressions were that\n>> Andrew has picked up Chocolatey in some (all?) of his buildfarm\n>> animals with Strawberry Perl. 
I've had a good experience with it,\n>> FWIW, but Andres has also mentioned me a couple of weeks ago while in\n>> Prague that Strawberry could lead to unpredictible results (sorry I\n>> don't remember all the exact details).\n> Postgres CI uses Strawberry Perl [1] as well but it is directly\n> installed from the strawberryperl.com and its version is locked to\n> '5.26.3.1' for now.\n\n\nFYI Strawberry was a bit stuck for a while at 5.32, but they are now up \nto 5.38. See <https://strawberryperl.com/releases.html>\n\n\nI agree we shouldn't be recommending any particular perl distro, \nespecially not ASPerl which now has annoying license issues.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 19 Jan 2024 06:11:40 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simplify documentation related to Windows builds" }, { "msg_contents": "On Fri, Jan 19, 2024 at 06:11:40AM -0500, Andrew Dunstan wrote:\n> FYI Strawberry was a bit stuck for a while at 5.32, but they are now up to\n> 5.38. See <https://strawberryperl.com/releases.html>\n>\n> I agree we shouldn't be recommending any particular perl distro, especially\n> not ASPerl which now has annoying license issues.\n\nThe more I think about this thread, the more I'd tend to wipe out most\nof \"windows-requirements\" for the sole reason that it is the far-west\nregarding the various ways it is possible to get the dependencies we\nneed for the build and at runtime. We could keep it minimal with the\nset of requirements we are listing under meson in terms of versions:\nhttps://www.postgresql.org/docs/devel/install-requirements.html\n\nThen we could have one sentence recommending one, at most two\nfacilities used the buildfarm, like https://www.msys2.org/ or\nchocolatey as these group basically all the dependencies we need for a\nmeson build (right?) while linking back to the meson page about the\nversion requirements.\n\nOne issue I have with the meson page listing the requirements is that\nwe don't directly mention Diff, but that's critical for the tests.\n\nThoughts?\n--\nMichael", "msg_date": "Tue, 30 Jan 2024 17:01:53 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simplify documentation related to Windows builds" }, { "msg_contents": "Hi,\n\nOn Tue, 30 Jan 2024 at 11:02, Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Jan 19, 2024 at 06:11:40AM -0500, Andrew Dunstan wrote:\n> > FYI Strawberry was a bit stuck for a while at 5.32, but they are now up to\n> > 5.38. See <https://strawberryperl.com/releases.html>\n> >\n> > I agree we shouldn't be recommending any particular perl distro, especially\n> > not ASPerl which now has annoying license issues.\n>\n> The more I think about this thread, the more I'd tend to wipe out most\n> of \"windows-requirements\" for the sole reason that it is the far-west\n> regarding the various ways it is possible to get the dependencies we\n> need for the build and at runtime. We could keep it minimal with the\n> set of requirements we are listing under meson in terms of versions:\n> https://www.postgresql.org/docs/devel/install-requirements.html\n>\n> Then we could have one sentence recommending one, at most two\n> facilities used the buildfarm, like https://www.msys2.org/ or\n> chocolatey as these group basically all the dependencies we need for a\n> meson build (right?) 
while linking back to the meson page about the\n> version requirements.\n\nI tested both msys2 and chocolatey on the fresh Windows containers and\nI confirm that Postgres can be built using these. I tested the\ndependencies that are required to build and run Postgres. If more\ndependencies are required to be checked, I can test again.\n\nAs these will be continuously tested by the buildfarm, I agree that\nwhat you suggested looks better.\n\n> One issue I have with the meson page listing the requirements is that\n> we don't directly mention Diff, but that's critical for the tests.\n\nI think that is because most distros come with a preinstalled\ndiffutils package. It is mentioned under the Windows requirements page\n[1] since it does not come preinstalled. However, I agree that it\ncould be good to mention it under the meson page listing the\nrequirements.\n\n[1] https://www.postgresql.org/docs/devel/installation-platform-notes.html#WINDOWS-REQUIREMENTS\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Thu, 8 Feb 2024 16:07:36 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simplify documentation related to Windows builds" }, { "msg_contents": "On Thu, Feb 08, 2024 at 04:07:36PM +0300, Nazir Bilal Yavuz wrote:\n> I tested both msys2 and chocolatey on the fresh Windows containers and\n> I confirm that Postgres can be built using these. I tested the\n> dependencies that are required to build and run Postgres. If more\n> dependencies are required to be checked, I can test again.\n\nThanks.\n\n> As these will be continuously tested by the buildfarm, I agree that\n> what you suggested looks better.\n\nOkay, I have added a sentence at the end of the requirement section,\nin a way similar to what we do for GNU.\n\n>> One issue I have with the meson page listing the requirements is that\n>> we don't directly mention Diff, but that's critical for the tests.\n> \n> I think that is because most distros come with a preinstalled\n> diffutils package. It is mentioned under the Windows requirements page\n> [1] since it does not come preinstalled. However, I agree that it\n> could be good to mention it under the meson page listing the\n> requirements.\n\nI have looked at that again and finished with the attached to move on\nwith the cleanup.\n\nThoughts, comments and/or opinions?\n--\nMichael", "msg_date": "Fri, 9 Feb 2024 16:22:35 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simplify documentation related to Windows builds" }, { "msg_contents": "Hi,\n\nOn 2023-12-31 15:13:03 +0900, Michael Paquier wrote:\n> Some of my notes:\n> - How much does command line editing work on Windows? When it came to\n> VS, I always got the impression that this never worked. Andres in [2]\n> mentioned otherwise because meson makes that easier?\n\nYea, I made it work. I think the only issue was that the build process is a\nbit awkward.\n\n\n> - ActiveState perl should not be recommended IMO, as being able to use\n> a perl binary requires one to *register* into their service for tokens\n> that can be used to kick perl sessions, last time I checked.\n\nIndeed, it's far gone.\n\n\n> Two alternatives:\n> -- MinGW perl binary.\n> -- strawberry perl (?) and Chocolatey.\n\nMy experience with strawberry perl were, um, not encouraging.\n\n\n> Between MSYS2, mingw-w64 and Chocolatey, there are a lot of options\n> available to users. 
So shouldn't we try to recommend only of them,\n> then align the buildfarm and the CI to use one of them?\n\nI think regardless which of these we use, we should provide a commandline\ninvocation to actually install the necessary stuff, and occasionally test\nthat, perhaps automatedly.\n\nOne issue with most / all of the above is that that they tend to install only\noptimized libraries, which can cause issues when building a debug environment.\n\nI've in the past experimented with vcpkg and conan. Both unfortunately, at the\ntime at least, didn't install necessary binaries for the compression tools,\nwhich makes it harder to test. I had started to work on adding an option for\nthe relevant vcpkg packages to install the binaries, but got stuck on some CLA\nstuff (cleared up since), after finding that that required some bugfixes in\nzstd.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 9 Feb 2024 11:53:35 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simplify documentation related to Windows builds" }, { "msg_contents": "Hi,\n\nOn 2024-02-09 11:53:35 -0800, Andres Freund wrote:\n> > Between MSYS2, mingw-w64 and Chocolatey, there are a lot of options\n> > available to users. So shouldn't we try to recommend only of them,\n> > then align the buildfarm and the CI to use one of them?\n> \n> I think regardless which of these we use, we should provide a commandline\n> invocation to actually install the necessary stuff, and occasionally test\n> that, perhaps automatedly.\n> \n> One issue with most / all of the above is that that they tend to install only\n> optimized libraries, which can cause issues when building a debug environment.\n> \n> I've in the past experimented with vcpkg and conan. Both unfortunately, at the\n> time at least, didn't install necessary binaries for the compression tools,\n> which makes it harder to test. I had started to work on adding an option for\n> the relevant vcpkg packages to install the binaries, but got stuck on some CLA\n> stuff (cleared up since), after finding that that required some bugfixes in\n> zstd.\n\nOne thing I forgot: I found chocolatey to be painfully slow to install. And\neven at runtime, the wrappers it installs cause build time slowdowns too. And\nunnecessary rebuilds with visual studio, because default install paths trigger\nsome odd msbuild heuristics.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 9 Feb 2024 11:56:08 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simplify documentation related to Windows builds" }, { "msg_contents": "Hi,\n\nOn 2024-02-09 16:22:35 +0900, Michael Paquier wrote:\n> diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml\n> index ed5b285a5e..af77352e6e 100644\n> --- a/doc/src/sgml/installation.sgml\n> +++ b/doc/src/sgml/installation.sgml\n> @@ -193,6 +193,17 @@\n> example, <literal>ICU_CFLAGS=' '</literal>.)\n> </para>\n> </listitem>\n> +\n> + <listitem>\n> + <para>\n> + <indexterm>\n> + <primary>Diff</primary>\n> + </indexterm>\n> +\n> + <productname>Diff</productname> is required to run the regression\n> + tests. On Windows, it may not be available by default.\n> + </para>\n> + </listitem>\n> </itemizedlist>\n> </para>\n\nIt's not installed by default on a bunch of platforms, including in common\nlinux distros. 
The same is true for bison, flex etc, so I'm not sure it's\nreally worth calling this out here.\n\n\n> - <varlistentry>\n> - <term><productname>MIT Kerberos</productname></term>\n> - <listitem><para>\n> - Required for GSSAPI authentication support. MIT Kerberos can be\n> - downloaded from\n> - <ulink url=\"https://web.mit.edu/Kerberos/dist/index.html\"></ulink>.\n> - </para></listitem>\n> - </varlistentry>\n\nBtw, this is the only dependency that I hadn't been able to test on windows\nwhen I was hacking on the CI stuff last. I'm not sure it relly still works.\n\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 9 Feb 2024 11:59:21 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simplify documentation related to Windows builds" }, { "msg_contents": "On Fri, Feb 09, 2024 at 11:56:08AM -0800, Andres Freund wrote:\n> One thing I forgot: I found chocolatey to be painfully slow to install. And\n> even at runtime, the wrappers it installs cause build time slowdowns too. And\n> unnecessary rebuilds with visual studio, because default install paths trigger\n> some odd msbuild heuristics.\n\nFor a standalone development machine, I've found that OK and found\nthat rather straight-forward when testing specific patches. Now\nthat's just one experience.\n--\nMichael", "msg_date": "Sat, 10 Feb 2024 09:19:25 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simplify documentation related to Windows builds" }, { "msg_contents": "On Fri, Feb 09, 2024 at 11:59:21AM -0800, Andres Freund wrote:\n> On 2024-02-09 16:22:35 +0900, Michael Paquier wrote:\n>> + <productname>Diff</productname> is required to run the regression\n>> + tests. On Windows, it may not be available by default.\n>> + </para>\n>> + </listitem>\n>> </itemizedlist>\n>> </para>\n> \n> It's not installed by default on a bunch of platforms, including in common\n> linux distros. The same is true for bison, flex etc, so I'm not sure it's\n> really worth calling this out here.\n\nRemoving the second sentence would be OK, I assume.\n\n>> - <varlistentry>\n>> - <term><productname>MIT Kerberos</productname></term>\n>> - <listitem><para>\n>> - Required for GSSAPI authentication support. MIT Kerberos can be\n>> - downloaded from\n>> - <ulink url=\"https://web.mit.edu/Kerberos/dist/index.html\"></ulink>.\n>> - </para></listitem>\n>> - </varlistentry>\n> \n> Btw, this is the only dependency that I hadn't been able to test on windows\n> when I was hacking on the CI stuff last. I'm not sure it relly still works.\n\nDo you think that [1] would help in that?\n\n[1]: https://community.chocolatey.org/packages/mitkerberos\n--\nMichael", "msg_date": "Sat, 10 Feb 2024 09:20:37 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simplify documentation related to Windows builds" }, { "msg_contents": "On Tue, Jan 30, 2024 at 3:02 AM Michael Paquier <[email protected]> wrote:\n> The more I think about this thread, the more I'd tend to wipe out most\n> of \"windows-requirements\" for the sole reason that it is the far-west\n> regarding the various ways it is possible to get the dependencies we\n> need for the build and at runtime. 
We could keep it minimal with the\n> set of requirements we are listing under meson in terms of versions:\n> https://www.postgresql.org/docs/devel/install-requirements.html\n\nI'm not very knowledgeable about building software about Windows in\ngeneral, but on the rare occasions that I've done it, it was MUCH\nharder to figure out where to get things like Perl that it is on Linux\nor macOS machines. On Linux, your package manager probably knows about\neverything you need, and if it doesn't, you can probably fix that by\nadding an additional RPM repository to your configuration or using\nsomething like CPAN to find Perl modules that your OS package manager\ndoesn't have. On macOS, you can install homebrew or macports and then\nget most things from there. But on Windows you have to go download\ninstallers individually for everything you need, and there's lots of\ninstallers on the Internet, and not all of them are prepared by\nequally friendly people, and not all of them necessarily work for\nbuilding PostgreSQL.\n\nSo I think that it's pretty darn helpful to have some installation\ninstructions in the documentation for stuff like this, just like I\nthink it's useful that in the documentation index we tell people how\nto get the doc toolchain working on various platforms. I understand\nthe concern about seeming to endorse particular Perl distributions or\nother software bundles, but I also don't like the idea of telling\npeople something that boils down to \"hey, it's possible to get this to\ncompile on Windows, and we know some methods that do work, but we're\nnot going to tell you what they are because we don't want to endorse\nanything so ... good luck!\". If we know a set of things that work, I\nthink we should list them in the documentation and just say that we're\nnot endorsing the use of these particular distributions but just\ntelling you that we've tested with them. And then I think we should\nupdate that as things change.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 13:34:43 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simplify documentation related to Windows builds" }, { "msg_contents": "On Fri, Mar 22, 2024 at 01:34:43PM -0400, Robert Haas wrote:\n> I'm not very knowledgeable about building software about Windows in\n> general, but on the rare occasions that I've done it, it was MUCH\n> harder to figure out where to get things like Perl that it is on Linux\n> or macOS machines. On Linux, your package manager probably knows about\n> everything you need, and if it doesn't, you can probably fix that by\n> adding an additional RPM repository to your configuration or using\n> something like CPAN to find Perl modules that your OS package manager\n> doesn't have. On macOS, you can install homebrew or macports and then\n> get most things from there. But on Windows you have to go download\n> installers individually for everything you need, and there's lots of\n> installers on the Internet, and not all of them are prepared by\n> equally friendly people, and not all of them necessarily work for\n> building PostgreSQL.\n\nYeah. These days I personally just go through stuff like Chocolatey\nor msys2 to get all my dependencies, or even a minimal set of them. I\nsuspect that most folks hanging around on pgsql-hackers do that as\nwell. Relying on individual MSIs with many dependencies has the large\ndownside of causing these to become easily outdated. 
When using the\nbuild scripts with src/tools/msvc, now gone, I've had a bunch of these\nin my environment hosts.\n\nAs of the buildfarm, we have currently (All Hail Andrew for\nmaintaining most of these):\n- faiywren, with strawberry perl and msys. OpenSSL is from a MSI. It\nuses meson.\n- drongo, with a 64b version of OpenSSL installed with a MSI. It uses\nmeson and chocolatey.\n- lorikeet, cygwin, which is an ecosystem of its own. OpenSSL has\nbeen installed from a MSI, there's a System32 path.\n- hamerkop, with meson. OpenSSL is installed from strawberry, not a\nseparate MSI. Python37 points to a custom MSI.\n\nSo, yes, you're right that removing completely this list may be too\naggressive for the end-user. As far as I can see, there are a few\nthings that stand out:\n- Diff is not mentioned in the list of dependencies on the meson page,\nand it may not exist by default on Windows. I think that we should\nadd it.\n- We don't use activeperl anymore in the buildfarm, and recommending\nit is not a good idea based on the state of the project. If we don't\nremove the entry, I would switch it to point to msys perl or even\nstrawberry perl. Andres has expressed concerns about the strawberry\npart, so perhaps mentioning only msys perl would be enough?\n- The portion of the docs about command line editing with psql, cygwin\nbeing mentioned as an exception, does not apply AFAIK.\n- Mentioning more the packaging options that exist to not have to\ninstall individual MSIs would be a good addition.\n\n> So I think that it's pretty darn helpful to have some installation\n> instructions in the documentation for stuff like this, just like I\n> think it's useful that in the documentation index we tell people how\n> to get the doc toolchain working on various platforms. I understand\n> the concern about seeming to endorse particular Perl distributions or\n> other software bundles, but I also don't like the idea of telling\n> people something that boils down to \"hey, it's possible to get this to\n> compile on Windows, and we know some methods that do work, but we're\n> not going to tell you what they are because we don't want to endorse\n> anything so ... good luck!\". If we know a set of things that work, I\n> think we should list them in the documentation and just say that we're\n> not endorsing the use of these particular distributions but just\n> telling you that we've tested with them. And then I think we should\n> update that as things change.\n\n+ <para>\n+ On Windows, you can find packages for build dependencies using\n+ <ulink url=\"https://www.msys2.org/\">MSYS2</ulink>\n+ or <ulink url=\"https://chocolatey.org/\">Chocolatey</ulink>.\n+ </para>\n\nThe last patch I've sent has this diff. Were you thinking about\ncompleting this list with more options and add more command-level\ninstructions about how to set up these environments in our docs? We\ncould just point to anything provided by these projects. As far as I\ncan see, MSYS2 and chocolatey are the interesting ones to mention, and\nthese are used in the buildfarm at some extent.\n--\nMichael", "msg_date": "Fri, 12 Apr 2024 10:27:01 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simplify documentation related to Windows builds" }, { "msg_contents": "Hi,\n\nOn 2024-04-12 10:27:01 +0900, Michael Paquier wrote:\n> Yeah. These days I personally just go through stuff like Chocolatey\n> or msys2 to get all my dependencies, or even a minimal set of them. 
I\n> suspect that most folks hanging around on pgsql-hackers do that as\n> well.\n\nDid that work with openssl for you? Because it didn't for me, when I tried\nthat for CI.\n\nI didn't find it easy to find a working openssl for msvc, and when I did, it\nwas one a page that could easily just have been some phishing attempt. Because\nof that I don't think we should remove the link to\nhttps://slproweb.com/products/Win32OpenSSL.html\n\n\n> So, yes, you're right that removing completely this list may be too\n> aggressive for the end-user. As far as I can see, there are a few\n> things that stand out:\n\n> - Diff is not mentioned in the list of dependencies on the meson page,\n> and it may not exist by default on Windows. I think that we should\n> add it.\n\nThat seems quite basic compared to everything else. But also not opposed.\n\nI guess it might be worth checking if diff is present during meson configure,\nso it's not just some weird error. I didn't really think about that when\nwriting the meson stuff, because it's just a hardcoded command in\npg_regress.c, not something that visible to src/tools/msvc, configure or such.\n\n\n> - We don't use activeperl anymore in the buildfarm, and recommending\n> it is not a good idea based on the state of the project. If we don't\n> remove the entry, I would switch it to point to msys perl or even\n> strawberry perl. Andres has expressed concerns about the strawberry\n> part, so perhaps mentioning only msys perl would be enough?\n\nI think it's nonobvious enough to install that I think it's worth keeping\nsomething. I tried at some point, and unfortunately the perl from\ngit-for-windows install doesn't quite work. It needs to be a perl targeting\nucrt (or perhaps some other native target).\n\n\n\n\n> > So I think that it's pretty darn helpful to have some installation\n> > instructions in the documentation for stuff like this, just like I\n> > think it's useful that in the documentation index we tell people how\n> > to get the doc toolchain working on various platforms.\n\n\nFWIW, here's the mingw install commands to install a suitable environment for\nbuilding postgres on windows with mingw, from the automated image generation\nfor CI:\n\nhttps://github.com/anarazel/pg-vm-images/blob/main/scripts/windows_install_mingw64.ps1#L21-L22\n\nI wonder if we should maintain something like that list somewhere in the\npostgres repo instead...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 12 Apr 2024 14:53:58 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simplify documentation related to Windows builds" }, { "msg_contents": "On Fri, Apr 12, 2024 at 02:53:58PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2024-04-12 10:27:01 +0900, Michael Paquier wrote:\n> > Yeah. These days I personally just go through stuff like Chocolatey\n> > or msys2 to get all my dependencies, or even a minimal set of them. I\n> > suspect that most folks hanging around on pgsql-hackers do that as\n> > well.\n> \n> Did that work with openssl for you? Because it didn't for me, when I tried\n> that for CI.\n\nYes, I recall pulling in OpenSSL from Chocolatey the last time I've\ntested it. Perhaps my memories are fuzzy though, it was a couple of\nmonths ago and I don't have the host at hand anymore.\n\n> I didn't find it easy to find a working openssl for msvc, and when I did, it\n> was one a page that could easily just have been some phishing attempt. 
Because\n> of that I don't think we should remove the link to\n> https://slproweb.com/products/Win32OpenSSL.html\n\nOkay, noted.\n\n>> So, yes, you're right that removing completely this list may be too\n>> aggressive for the end-user. As far as I can see, there are a few\n>> things that stand out:\n> \n>> - Diff is not mentioned in the list of dependencies on the meson page,\n>> and it may not exist by default on Windows. I think that we should\n>> add it.\n> \n> That seems quite basic compared to everything else. But also not opposed.\n> \n> I guess it might be worth checking if diff is present during meson configure,\n> so it's not just some weird error. I didn't really think about that when\n> writing the meson stuff, because it's just a hardcoded command in\n> pg_regress.c, not something that visible to src/tools/msvc, configure or such.\n\nA meson check would make sense here to catch that earlier. We do that\nfor IPC::Run.\n\n>> - We don't use activeperl anymore in the buildfarm, and recommending\n>> it is not a good idea based on the state of the project. If we don't\n>> remove the entry, I would switch it to point to msys perl or even\n>> strawberry perl. Andres has expressed concerns about the strawberry\n>> part, so perhaps mentioning only msys perl would be enough?\n> \n> I think it's nonobvious enough to install that I think it's worth keeping\n> something. I tried at some point, and unfortunately the perl from\n> git-for-windows install doesn't quite work. It needs to be a perl targeting\n> ucrt (or perhaps some other native target).\n\nThe question would be which one. Msys perl is used across a few\nbuildfarm members, and Andrew has some success with it.\n\n>>> So I think that it's pretty darn helpful to have some installation\n>>> instructions in the documentation for stuff like this, just like I\n>>> think it's useful that in the documentation index we tell people how\n>>> to get the doc toolchain working on various platforms.\n> \n> FWIW, here's the mingw install commands to install a suitable environment for\n> building postgres on windows with mingw, from the automated image generation\n> for CI:\n> \n> https://github.com/anarazel/pg-vm-images/blob/main/scripts/windows_install_mingw64.ps1#L21-L22\n> \n> I wonder if we should maintain something like that list somewhere in the\n> postgres repo instead...\n\n+1. That sounds to me like the doc material we could add in the\nWindows build section for meson.\n--\nMichael", "msg_date": "Sat, 13 Apr 2024 08:30:35 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simplify documentation related to Windows builds" }, { "msg_contents": "On Thu, Apr 11, 2024 at 9:27 PM Michael Paquier <[email protected]> wrote:\n> So, yes, you're right that removing completely this list may be too\n> aggressive for the end-user. As far as I can see, there are a few\n> things that stand out:\n> - Diff is not mentioned in the list of dependencies on the meson page,\n> and it may not exist by default on Windows. I think that we should\n> add it.\n> - We don't use activeperl anymore in the buildfarm, and recommending\n> it is not a good idea based on the state of the project. If we don't\n> remove the entry, I would switch it to point to msys perl or even\n> strawberry perl. 
Andres has expressed concerns about the strawberry\n> part, so perhaps mentioning only msys perl would be enough?\n> - The portion of the docs about command line editing with psql, cygwin\n> being mentioned as an exception, does not apply AFAIK.\n> - Mentioning more the packaging options that exist to not have to\n> install individual MSIs would be a good addition.\n\nI think that we need to get a more definitive answer to the question\nof whether command-line editing works or not. I have the impression\nthat it never has. If it's started working, we should establish that\nfor certain and probably also establish what made it start working; if\nit works provided you do X, Y, or Z, we should establish what those\nthings are.\n\nI'm cool with adding diff to the list of dependencies.\n\nI'd prefer to see us update the other links rather than delete them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 15 May 2024 11:25:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simplify documentation related to Windows builds" }, { "msg_contents": "On Wed, May 15, 2024 at 11:25:34AM -0400, Robert Haas wrote:\n> I think that we need to get a more definitive answer to the question\n> of whether command-line editing works or not. I have the impression\n> that it never has. If it's started working, we should establish that\n> for certain and probably also establish what made it start working; if\n> it works provided you do X, Y, or Z, we should establish what those\n> things are.\n\nRight.\n\n> I'm cool with adding diff to the list of dependencies.\n\nThis makes sense also to me, still the patch is not completely right\nbecause it has been adding diff in the list for build dependencies.\nPerhaps this should just be a new list.\n\n> I'd prefer to see us update the other links rather than delete them.\n\nOkay. I'm not sure where this patch is going, so I am going to\nwithdraw it for now. The state of Windows is going to be a topic at\nthe next pgconf.dev, anyway.\n--\nMichael", "msg_date": "Fri, 17 May 2024 07:54:05 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simplify documentation related to Windows builds" } ]
[ { "msg_contents": "Hello.\n\nWe've noticed that when walreceiver is waiting for a connection to\ncomplete, standby does not immediately respond to promotion\nrequests. In PG14, upon receiving a promotion request, walreceiver\nterminates instantly, but in PG16, it waits for connection\ntimeout. This behavior is attributed to commit 728f86fec65, where a\npart of libpqrcv_connect was simply replaced with a call to\nlibpqsrc_connect_params. This behavior can be verified by simply\ndropping packets from the standby to the primary.\n\nBy a simple thought, in walreceiver, libpqsrv_connect_internal could\njust call ProcessWalRcvInterrupts() instead of CHECK_FOR_INTERRUPTS(),\nbut this approach is quite ugly. Since ProcessWalRcvInterrupts()\noriginally calls CHECK_FOR_INTERRUPTS() and there are no standalone\ncalls to CHECK_FOR_INTERRUPTS() within walreceiver, I think it might\nbe better to use ProcDiePending instead of ShutdownRequestPending. I\nadded a subset function of die() as the SIGTERM handler in walsender\nin a crude patch attached.\n\nWhat do you think about the issue, and the approach?\n\nIf there are no issues or objections with this method, I will continue\nto refine this patch. For now, I plan to register it for the upcoming\ncommitfest.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Sun, 31 Dec 2023 20:02:18 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "libpqsrv_connect_params should call ProcessWalRcvInterrupts" }, { "msg_contents": "(Apology for resubmitting due to poor subject of the previous mail)\n---\nHello.\n\nWe've noticed that when walreceiver is waiting for a connection to\ncomplete, standby does not immediately respond to promotion\nrequests. In PG14, upon receiving a promotion request, walreceiver\nterminates instantly, but in PG16, it waits for connection\ntimeout. This behavior is attributed to commit 728f86fec65, where a\npart of libpqrcv_connect was simply replaced with a call to\nlibpqsrc_connect_params. This behavior can be verified by simply\ndropping packets from the standby to the primary.\n\nBy a simple thought, in walreceiver, libpqsrv_connect_internal could\njust call ProcessWalRcvInterrupts() instead of CHECK_FOR_INTERRUPTS(),\nbut this approach is quite ugly. Since ProcessWalRcvInterrupts()\noriginally calls CHECK_FOR_INTERRUPTS() and there are no standalone\ncalls to CHECK_FOR_INTERRUPTS() within walreceiver, I think it might\nbe better to use ProcDiePending instead of ShutdownRequestPending. I\nadded a subset function of die() as the SIGTERM handler in walsender\nin a crude patch attached.\n\nWhat do you think about the issue, and the approach?\n\nIf there are no issues or objections with this method, I will continue\nto refine this patch. For now, I plan to register it for the upcoming\ncommitfest.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Sun, 31 Dec 2023 20:07:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Network failure may prevent promotion" }, { "msg_contents": "At Sun, 31 Dec 2023 20:07:41 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> We've noticed that when walreceiver is waiting for a connection to\n> complete, standby does not immediately respond to promotion\n> requests. In PG14, upon receiving a promotion request, walreceiver\n> terminates instantly, but in PG16, it waits for connection\n> timeout. 
This behavior is attributed to commit 728f86fec65, where a\n> part of libpqrcv_connect was simply replaced with a call to\n> libpqsrc_connect_params. This behavior can be verified by simply\n> dropping packets from the standby to the primary.\n\nApologize for the inconvenience on my part, but I need to fix this\nbehavior. To continue this discussion, I'm providing a repro script\nhere.\n\nWith the script, the standby is expected to promote immediately,\nemitting the following log lines:\n\nstandby.log:\n> 2024-01-18 16:25:22.245 JST [31849] LOG: received promote request\n> 2024-01-18 16:25:22.245 JST [31850] FATAL: terminating walreceiver process due to administrator command\n> 2024-01-18 16:25:22.246 JST [31849] LOG: redo is not required\n> 2024-01-18 16:25:22.246 JST [31849] LOG: selected new timeline ID: 2\n> 2024-01-18 16:25:22.274 JST [31849] LOG: archive recovery complete\n> 2024-01-18 16:25:22.275 JST [31847] LOG: checkpoint starting: force\n> 2024-01-18 16:25:22.277 JST [31846] LOG: database system is ready to accept connections\n> 2024-01-18 16:25:22.280 JST [31847] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.005 s; sync files=2, longest=0.001 s, average=0.001 s; distance=0 kB, estimate=0 kB; lsn=0/1548E98, redo lsn=0/1548E40\n> 2024-01-18 16:25:22.356 JST [31846] LOG: received immediate shutdown request\n> 2024-01-18 16:25:22.361 JST [31846] LOG: database system is shut down\n\nAfter 728f86fec65 was introduced, promotion does not complete with the\nsame operation, as follows. The patch attached to the previous mail\nfixes this behavior to the old behavior above.\n\n> 2024-01-18 16:47:53.314 JST [34515] LOG: received promote request\n> 2024-01-18 16:48:03.947 JST [34512] LOG: received immediate shutdown request\n> 2024-01-18 16:48:03.952 JST [34512] LOG: database system is shut down\n\nThe attached script requires that sudo is executable. And there's\nanother point to note. The script attempts to establish a replication\nconnection to $primary_address:$primary_port. To packet-filter can\nwork, it must be a remote address that is accessible when no\npacket-filter setting is set up. The firewall-cmd setting, need to be\nconfigured to block this connection. If simply an inaccessible IP\naddress is set, the process will fail immediately with a \"No route to\nhost\" error before the first packet is sent out, and it will not be\nblocked as intended.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 18 Jan 2024 17:26:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Network failure may prevent promotion" }, { "msg_contents": "On 18/01/2024 10:26, Kyotaro Horiguchi wrote:\n> At Sun, 31 Dec 2023 20:07:41 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in\n>> We've noticed that when walreceiver is waiting for a connection to\n>> complete, standby does not immediately respond to promotion\n>> requests. In PG14, upon receiving a promotion request, walreceiver\n>> terminates instantly, but in PG16, it waits for connection\n>> timeout. This behavior is attributed to commit 728f86fec65, where a\n>> part of libpqrcv_connect was simply replaced with a call to\n>> libpqsrc_connect_params. This behavior can be verified by simply\n>> dropping packets from the standby to the primary.\n> \n> Apologize for the inconvenience on my part, but I need to fix this\n> behavior. 
To continue this discussion, I'm providing a repro script\n> here.\n\nThanks for script, I can repro this with it.\n\nGiven that commit 728f86fec6 that introduced this issue was not strictly \nrequired, perhaps we should just revert it for v16.\n\nIn your patch, there's one more stray reference to \nProcessWalRcvInterrupts() in the comment above libpqrcv_PQexec. That \nmakes me wonder, why didn't commit 728f86fec6 go all the way and also \nreplace libpqrcv_PQexec and libpqrcv_PQgetResult with libpqsrv_exec and \nlibpqsrv_get_result?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 18 Jan 2024 15:42:28 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Network failure may prevent promotion" }, { "msg_contents": "On Thu, Jan 18, 2024 at 03:42:28PM +0200, Heikki Linnakangas wrote:\n> Given that commit 728f86fec6 that introduced this issue was not strictly\n> required, perhaps we should just revert it for v16.\n\nIs there a point in keeping 728f86fec6 as well on HEAD? That does not\nstrike me as wise to keep that in the tree for now. If it needs to be\nreworked, looking at this problem from scratch would be a safer\napproach.\n--\nMichael", "msg_date": "Fri, 19 Jan 2024 12:28:05 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Network failure may prevent promotion" }, { "msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere were CFbot test failures last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4748/\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4748\n\nKind Regards,\nPeter Smith.\n\n\n", "msg_date": "Mon, 22 Jan 2024 16:19:05 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Network failure may prevent promotion" }, { "msg_contents": "On Thu, Jan 18, 2024 at 10:42 PM Heikki Linnakangas <[email protected]> wrote:\n> Given that commit 728f86fec6 that introduced this issue was not strictly\n> required, perhaps we should just revert it for v16.\n\n+1 for the revert.\n\nThis issue should be fixed in the upcoming minor release\nsince it might cause unexpected delays in failover times.\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Tue, 23 Jan 2024 02:52:25 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Network failure may prevent promotion" }, { "msg_contents": "Hi,\n\nOn 2024-01-19 12:28:05 +0900, Michael Paquier wrote:\n> On Thu, Jan 18, 2024 at 03:42:28PM +0200, Heikki Linnakangas wrote:\n> > Given that commit 728f86fec6 that introduced this issue was not strictly\n> > required, perhaps we should just revert it for v16.\n> \n> Is there a point in keeping 728f86fec6 as well on HEAD? That does not\n> strike me as wise to keep that in the tree for now. 
If it needs to be\n> reworked, looking at this problem from scratch would be a safer\n> approach.\n\nIDK, I think we'll introduce this type of bug over and over if we don't fix it\nproperly.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Jan 2024 13:29:10 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Network failure may prevent promotion" }, { "msg_contents": "At Mon, 22 Jan 2024 13:29:10 -0800, Andres Freund <[email protected]> wrote in \n> Hi,\n> \n> On 2024-01-19 12:28:05 +0900, Michael Paquier wrote:\n> > On Thu, Jan 18, 2024 at 03:42:28PM +0200, Heikki Linnakangas wrote:\n> > > Given that commit 728f86fec6 that introduced this issue was not strictly\n> > > required, perhaps we should just revert it for v16.\n> > \n> > Is there a point in keeping 728f86fec6 as well on HEAD? That does not\n> > strike me as wise to keep that in the tree for now. If it needs to be\n> > reworked, looking at this problem from scratch would be a safer\n> > approach.\n> \n> IDK, I think we'll introduce this type of bug over and over if we don't fix it\n> properly.\n\nJust to clarify my position, I thought that 728f86fec6 was heading the\nright direction. Considering the current approach to signal handling\nin walreceiver, I believed that it would be better to further\ngeneralize in this direction rather than reverting. That's why I\nproposed that patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 23 Jan 2024 13:23:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Network failure may prevent promotion" }, { "msg_contents": "On Tue, Jan 23, 2024 at 1:23 PM Kyotaro Horiguchi\n<[email protected]> wrote:\n>\n> At Mon, 22 Jan 2024 13:29:10 -0800, Andres Freund <[email protected]> wrote in\n> > Hi,\n> >\n> > On 2024-01-19 12:28:05 +0900, Michael Paquier wrote:\n> > > On Thu, Jan 18, 2024 at 03:42:28PM +0200, Heikki Linnakangas wrote:\n> > > > Given that commit 728f86fec6 that introduced this issue was not strictly\n> > > > required, perhaps we should just revert it for v16.\n> > >\n> > > Is there a point in keeping 728f86fec6 as well on HEAD? That does not\n> > > strike me as wise to keep that in the tree for now. If it needs to be\n> > > reworked, looking at this problem from scratch would be a safer\n> > > approach.\n> >\n> > IDK, I think we'll introduce this type of bug over and over if we don't fix it\n> > properly.\n>\n> Just to clarify my position, I thought that 728f86fec6 was heading the\n> right direction. Considering the current approach to signal handling\n> in walreceiver, I believed that it would be better to further\n> generalize in this direction rather than reverting. 
That's why I\n> proposed that patch.\n\nRegarding the patch, here are the review comments.\n\n+/*\n+ * Is current process a wal receiver?\n+ */\n+bool\n+IsWalReceiver(void)\n+{\n+ return WalRcv != NULL;\n+}\n\nThis looks wrong because WalRcv can be non-NULL in processes other\nthan walreceiver.\n\n- pqsignal(SIGTERM, SignalHandlerForShutdownRequest); /* request shutdown */\n+ pqsignal(SIGTERM, WalRcvShutdownSignalHandler); /* request shutdown */\n\nCan't we just use die(), instead?\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Tue, 23 Jan 2024 15:07:10 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Network failure may prevent promotion" }, { "msg_contents": "Thank you for looking this!\n\nAt Tue, 23 Jan 2024 15:07:10 +0900, Fujii Masao <[email protected]> wrote in \n> Regarding the patch, here are the review comments.\n> \n> +/*\n> + * Is current process a wal receiver?\n> + */\n> +bool\n> +IsWalReceiver(void)\n> +{\n> + return WalRcv != NULL;\n> +}\n> \n> This looks wrong because WalRcv can be non-NULL in processes other\n> than walreceiver.\n\nMmm. Sorry for the silly mistake. We can use B_WAL_RECEIVER\ninstead. I'm not sure if the new function IsWalReceiver() is\nrequired. The expression \"MyBackendType == B_WAL_RECEIVER\" is quite\ndescriptive. However, the function does make ProcessInterrupts() more\naligned regarding process types.\n\n> - pqsignal(SIGTERM, SignalHandlerForShutdownRequest); /* request shutdown */\n> + pqsignal(SIGTERM, WalRcvShutdownSignalHandler); /* request shutdown */\n> \n> Can't we just use die(), instead?\n\nThere was a comment explaining the problems associated with exiting\nwithin a signal handler;\n\n- * Currently, only SIGTERM is of interest. We can't just exit(1) within the\n- * SIGTERM signal handler, because the signal might arrive in the middle of\n- * some critical operation, like while we're holding a spinlock. Instead, the\n\nAnd I think we should keep the considerations it suggests. The patch\nremoves the comment itself, but it does so because it implements our\nstandard process exit procedure, which incorporates points suggested\nby the now-removed comment.\n\n--\nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 23 Jan 2024 17:24:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Network failure may prevent promotion" }, { "msg_contents": "On 23/01/2024 06:23, Kyotaro Horiguchi wrote:\n> At Mon, 22 Jan 2024 13:29:10 -0800, Andres Freund <[email protected]> wrote in\n>> Hi,\n>>\n>> On 2024-01-19 12:28:05 +0900, Michael Paquier wrote:\n>>> On Thu, Jan 18, 2024 at 03:42:28PM +0200, Heikki Linnakangas wrote:\n>>>> Given that commit 728f86fec6 that introduced this issue was not strictly\n>>>> required, perhaps we should just revert it for v16.\n>>>\n>>> Is there a point in keeping 728f86fec6 as well on HEAD? That does not\n>>> strike me as wise to keep that in the tree for now. If it needs to be\n>>> reworked, looking at this problem from scratch would be a safer\n>>> approach.\n>>\n>> IDK, I think we'll introduce this type of bug over and over if we don't fix it\n>> properly.\n> \n> Just to clarify my position, I thought that 728f86fec6 was heading the\n> right direction. Considering the current approach to signal handling\n> in walreceiver, I believed that it would be better to further\n> generalize in this direction rather than reverting. 
That's why I\n> proposed that patch.\n\nI reverted commit 728f86fec6 from REL_16_STABLE and master.\n\nI agree it was the right direction, so let's develop a complete patch, \nand re-apply it to master when we have the patch ready.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 23 Jan 2024 10:57:16 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Network failure may prevent promotion" }, { "msg_contents": "On 23/01/2024 10:24, Kyotaro Horiguchi wrote:\n> Thank you for looking this!\n> \n> At Tue, 23 Jan 2024 15:07:10 +0900, Fujii Masao <[email protected]> wrote in\n>> Regarding the patch, here are the review comments.\n>>\n>> +/*\n>> + * Is current process a wal receiver?\n>> + */\n>> +bool\n>> +IsWalReceiver(void)\n>> +{\n>> + return WalRcv != NULL;\n>> +}\n>>\n>> This looks wrong because WalRcv can be non-NULL in processes other\n>> than walreceiver.\n> \n> Mmm. Sorry for the silly mistake. We can use B_WAL_RECEIVER\n> instead. I'm not sure if the new function IsWalReceiver() is\n> required. The expression \"MyBackendType == B_WAL_RECEIVER\" is quite\n> descriptive. However, the function does make ProcessInterrupts() more\n> aligned regarding process types.\n\nThere's an existing AmWalReceiverProcess() macro too. Let's use that.\n\n(See also \nhttps://www.postgresql.org/message-id/f3ecd4cb-85ee-4e54-8278-5fabfb3a4ed0%40iki.fi \nfor refactoring in this area)\n\nHere's a patch set summarizing the changes so far. They should be \nsquashed, but I kept them separate for now to help with review:\n\n1. revert the revert of 728f86fec6.\n2. your walrcv_shutdown_deblocking_v2-2.patch\n3. Also replace libpqrcv_PQexec() and libpqrcv_PQgetResult() with the \nwrappers from libpq-be-fe-helpers.h\n4. Replace IsWalReceiver() with AmWalReceiverProcess()\n\n>> - pqsignal(SIGTERM, SignalHandlerForShutdownRequest); /* request shutdown */\n>> + pqsignal(SIGTERM, WalRcvShutdownSignalHandler); /* request shutdown */\n>>\n>> Can't we just use die(), instead?\n> \n> There was a comment explaining the problems associated with exiting\n> within a signal handler;\n> \n> - * Currently, only SIGTERM is of interest. We can't just exit(1) within the\n> - * SIGTERM signal handler, because the signal might arrive in the middle of\n> - * some critical operation, like while we're holding a spinlock. Instead, the\n> \n> And I think we should keep the considerations it suggests. The patch\n> removes the comment itself, but it does so because it implements our\n> standard process exit procedure, which incorporates points suggested\n> by the now-removed comment.\n\ndie() doesn't call exit(1). Unless DoingCommandRead is set, but it never \nis in the walreceiver. It looks just like the new \nWalRcvShutdownSignalHandler() function. Am I missing something?\n\nHmm, but doesn't bgworker_die() have that problem with exit(1)ing in the \nsignal handler?\n\nI also wonder if we should replace SignalHandlerForShutdownRequest() \ncompletely with die(), in all processes? The difference is that \nSignalHandlerForShutdownRequest() uses ShutdownRequestPending, while \ndie() uses ProcDiePending && InterruptPending to indicate that the \nsignal was received. 
Or do some of the processes want to check for \nShutdownRequestPending only at specific places, and don't want to get \nterminated at the any random CHECK_FOR_INTERRUPTS()?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Tue, 23 Jan 2024 11:43:43 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Network failure may prevent promotion" }, { "msg_contents": "On Tue, Jan 23, 2024 at 6:43 PM Heikki Linnakangas <[email protected]> wrote:\n> There's an existing AmWalReceiverProcess() macro too. Let's use that.\n\n+1\n\n> Hmm, but doesn't bgworker_die() have that problem with exit(1)ing in the\n> signal handler?\n\nYes, that's a problem. This issue was raised sometimes so far,\nbut has not been resolved yet.\n\n> I also wonder if we should replace SignalHandlerForShutdownRequest()\n> completely with die(), in all processes? The difference is that\n> SignalHandlerForShutdownRequest() uses ShutdownRequestPending, while\n> die() uses ProcDiePending && InterruptPending to indicate that the\n> signal was received. Or do some of the processes want to check for\n> ShutdownRequestPending only at specific places, and don't want to get\n> terminated at the any random CHECK_FOR_INTERRUPTS()?\n\nFor example, checkpointer seems to want to handle a shutdown request\nonly when no other checkpoint is in progress because initiating a shutdown\ncheckpoint while another checkpoint is running could lead to issues.\n\nAlso I just wonder if even walreceiver can exit safely at any random\nCHECK_FOR_INTERRUPTS()...\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Wed, 24 Jan 2024 20:29:07 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Network failure may prevent promotion" }, { "msg_contents": "On Wed, Jan 24, 2024 at 8:29 PM Fujii Masao <[email protected]> wrote:\n>\n> On Tue, Jan 23, 2024 at 6:43 PM Heikki Linnakangas <[email protected]> wrote:\n> > There's an existing AmWalReceiverProcess() macro too. Let's use that.\n>\n> +1\n>\n> > Hmm, but doesn't bgworker_die() have that problem with exit(1)ing in the\n> > signal handler?\n>\n> Yes, that's a problem. This issue was raised sometimes so far,\n> but has not been resolved yet.\n>\n> > I also wonder if we should replace SignalHandlerForShutdownRequest()\n> > completely with die(), in all processes? The difference is that\n> > SignalHandlerForShutdownRequest() uses ShutdownRequestPending, while\n> > die() uses ProcDiePending && InterruptPending to indicate that the\n> > signal was received. Or do some of the processes want to check for\n> > ShutdownRequestPending only at specific places, and don't want to get\n> > terminated at the any random CHECK_FOR_INTERRUPTS()?\n>\n> For example, checkpointer seems to want to handle a shutdown request\n> only when no other checkpoint is in progress because initiating a shutdown\n> checkpoint while another checkpoint is running could lead to issues.\n\nThis my comment is not right... Sorry for noise.\n\nRegards,\n\n-- \nFujii Masao\n\n\n", "msg_date": "Wed, 24 Jan 2024 22:05:44 +0900", "msg_from": "Fujii Masao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Network failure may prevent promotion" }, { "msg_contents": "Thank you fixing the issue.\n\nAt Tue, 23 Jan 2024 11:43:43 +0200, Heikki Linnakangas <[email protected]> wrote i\nn \n> There's an existing AmWalReceiverProcess() macro too. Let's use that.\n\nMmm. 
I sought an Is* function becuase \"IsLogicalWorker()\" is placed on\nthe previous line. Our convention regarding those functions (macros)\nand variables seems inconsistent. However, I can't say for sure that\nwe should unify all of them.\n\n> (See also\n> https://www.postgresql.org/message-id/f3ecd4cb-85ee-4e54-8278-5fabfb3a4ed0%40iki.fi\n> for refactoring in this area)\n> \n> Here's a patch set summarizing the changes so far. They should be\n> squashed, but I kept them separate for now to help with review:\n> \n> 1. revert the revert of 728f86fec6.\n> 2. your walrcv_shutdown_deblocking_v2-2.patch\n> 3. Also replace libpqrcv_PQexec() and libpqrcv_PQgetResult() with the\n> wrappers from libpq-be-fe-helpers.h\n\nBoth replacements look fine. I didn't find another instance of similar\ncode.\n\n> 4. Replace IsWalReceiver() with AmWalReceiverProcess()\n\nJust look fine.\n\n> >> - pqsignal(SIGTERM, SignalHandlerForShutdownRequest); /* request\n> >> - shutdown */\n> >> + pqsignal(SIGTERM, WalRcvShutdownSignalHandler); /* request shutdown\n> >> */\n> >>\n> >> Can't we just use die(), instead?\n> > There was a comment explaining the problems associated with exiting\n> > within a signal handler;\n> > - * Currently, only SIGTERM is of interest. We can't just exit(1) within\n> > - * the\n> > - * SIGTERM signal handler, because the signal might arrive in the middle\n> > - * of\n> > - * some critical operation, like while we're holding a spinlock.\n> > - * Instead, the\n> > And I think we should keep the considerations it suggests. The patch\n> > removes the comment itself, but it does so because it implements our\n> > standard process exit procedure, which incorporates points suggested\n> > by the now-removed comment.\n> \n> die() doesn't call exit(1). Unless DoingCommandRead is set, but it\n> never is in the walreceiver. It looks just like the new\n> WalRcvShutdownSignalHandler() function. Am I missing something?\n\nUgh.. Doesn't the name 'die()' suggest exit()?\nI agree that die() can be used instad.\n\n> Hmm, but doesn't bgworker_die() have that problem with exit(1)ing in\n> the signal handler?\n\nI noticed that but ignored for this time.\n\n> I also wonder if we should replace SignalHandlerForShutdownRequest()\n> completely with die(), in all processes? The difference is that\n> SignalHandlerForShutdownRequest() uses ShutdownRequestPending, while\n> die() uses ProcDiePending && InterruptPending to indicate that the\n> signal was received. Or do some of the processes want to check for\n> ShutdownRequestPending only at specific places, and don't want to get\n> terminated at the any random CHECK_FOR_INTERRUPTS()?\n\nAt least, pg_log_backend_memory_context(<chkpt_pid>) causes a call to\nProcessInterrupts via \"ereport(LOG_SERVER_ONLY\" which can lead to an\nexit due to ProcDiePending. In this regard, checkpointer clearly\nrequires the distinction.\n\nRather than merely consolidating the notification variables and\nstriving to annihilate CFI calls in the execution path, I\nbelieve we need a shutdown mechanism that CFI doesn't react\nto. 
However, as for the method to achieve this, whether we should keep\nthe notification variables separate as they are now, or whether it\nwould be better to introduce a variable that causes CFI to ignore\nProcDiePending, is a matter I think is open to discussion.\n\nAttached patches are the rebased version of v3 (0003 is updated) and\nadditional 0005 that makes use of die() instead of walreceiver's\ncustom function.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 29 Jan 2024 16:32:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Network failure may prevent promotion" }, { "msg_contents": "On Mon, Jan 29, 2024 at 2:32 AM Kyotaro Horiguchi\n<[email protected]> wrote:\n> [ new patch set ]\n\nHi,\n\nI think it would be helpful to make it more clear exactly what's going\non here. It looks 0001 is intended to revert\n21ef4d4d897563adb2f7920ad53b734950f1e0a4, which was itself a revert of\n728f86fec65537eade8d9e751961782ddb527934, and then I guess the\nremaining patches are to fix up issues created by that commit, but the\ncommit messages aren't meaningful so it's hard to understand what is\nbeing fixed.\n\nI think it would also be useful to clarify whether this is imagined to\nbe for master only, or something to be back-patched. In addition to\nmentioning that here, it would be good to add that information to the\ntarget version field of https://commitfest.postgresql.org/48/4748/\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 May 2024 10:16:00 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Network failure may prevent promotion" } ]
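For readers following the handler discussion in the thread above, here is a small self-contained sketch of the pattern both sides of the argument rely on: the signal handler only records the request, and termination happens later, at a point the main code chooses, never inside the handler itself. It is plain POSIX C rather than backend code; shutdown_requested stands in for ShutdownRequestPending (or InterruptPending plus ProcDiePending in the die() case), and the backend handler would additionally call SetLatch(MyLatch) to wake a process blocked in a latch wait, which the plain sleep() below glosses over.

/*
 * Minimal illustration, not backend source: defer the exit out of the
 * signal handler, since the signal may arrive while holding a lock or
 * in the middle of some critical operation.
 */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static volatile sig_atomic_t shutdown_requested = 0;

static void
handle_sigterm(int signo)
{
	(void) signo;
	/* async-signal-safe: set a flag only; no exit(), no allocation, no locks */
	shutdown_requested = 1;
	/* a backend handler would also SetLatch(MyLatch) here to wake the process */
}

int
main(void)
{
	struct sigaction sa;

	sa.sa_handler = handle_sigterm;
	sigemptyset(&sa.sa_mask);
	sa.sa_flags = 0;
	sigaction(SIGTERM, &sa, NULL);

	for (;;)
	{
		/*
		 * The analogue of CHECK_FOR_INTERRUPTS(), or of an explicit
		 * ShutdownRequestPending test in the process's main loop.
		 */
		if (shutdown_requested)
		{
			fprintf(stderr, "terminating at a safe point\n");
			exit(0);
		}

		sleep(1);				/* one unit of work / latch wait goes here */
	}
}

The difference debated in the thread is only where the flag is consulted: a die()-style process reacts at every CHECK_FOR_INTERRUPTS(), while a process using ShutdownRequestPending picks the specific places where terminating is known to be safe.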
[ { "msg_contents": "Hi pgsql-bugs,\n\nSeveral customers reported deadlock error raised after accidental \nrestart of server.\n\nProblem description\n--------------------\n\nClient backend tries to create first temporary table after connection \nand it gets deadlock error.\nHere is example of log file records:\n\n2023-12-29 13:48:21.977 MSK [29546] ERROR:  deadlock detected at \ncharacter 24\n2023-12-29 13:48:21.977 MSK [29546] DETAIL:  Process 29546 waits for \nAccessExclusiveLock on relation 416811 of database 5; blocked by process \n29513.\n         Process 29513 waits for AccessExclusiveLock on relation 416795 \nof database 5; blocked by process 29546.\n         Process 29546: create temporary table tt1 (c1 bytea, c2 bytea, \nc3 bytea, c4 bytea, c5 numeric(7, 0), c6 serial4 ) without oids;\n         Process 29513: <command string not enabled>\n\n29546 is PID of client backend, 29513 is PID of autovacuum worker.\n\nClient backend tries to initialize namespace and cleans up all orphan \nobjects into its temporary namespace. Whole cleanup is performed in \nscope of client transaction.\nAutovacuum worker tries to remove tables one-by-one, using one \ntransaction for each table.\n\nHow to reproduce\n-----------------\n\nTo reproduce it, use pgbench and attached SQL file. Start pgbench with \nfollowing arguments:\n\n  $ pgbench -n -j 1 -c 50 -T 10 -C -f create_temp_tables.sql \n--verbose-errors\n\nKill postgres processes to make table orphan. For local connections the \nfollowing command can be used:\n\n  $ pkill -9  -f \"postgres.*local\"\n\nWait a bit for recovery completion and start pgbench again:\n\n  $ pgbench -n -j 1 -c 50 -T 10 -C -f create_temp_tables.sql \n--verbose-errors\n\nAnd check log files for deadlock. As autovacuum start is hardly \npredictable, the launcher nap time is better to set to 1:\n\n  # alter system set autovacuum_naptime=1;\n\nI've noticed that preliminary \"vacuum database\" helps to reproduce it.\n\nHere is all my custom parameters:\nalter system set autovacuum_naptime=1;\nalter system set max_connections to 1000;\nalter system set backtrace_functions to 'DeadLockReport';\nalter system set logging_collector to on;\nalter system set log_filename to 'postgresql-%Y-%m-%d.log';\nalter system set autovacuum_max_workers to 1;\n\nStack traces\n--------------\n\nClient backend has following stack trace (thanks to backtrace_functions:\n         0xafbe7e <DeadLockReport+0x27e> at /opt/pgpro/mybuild/bin/postgres\n         0xb00dd0 <GrantAwaitedLock> at /opt/pgpro/mybuild/bin/postgres\n         0xaff983 <LockAcquireExtended+0x603> at \n/opt/pgpro/mybuild/bin/postgres\n         0xafd2f9 <LockRelationOid+0x69> at /opt/pgpro/mybuild/bin/postgres\n         0x742a07 <findDependentObjects+0x627> at \n/opt/pgpro/mybuild/bin/postgres\n         0x7422a4 <performDeletion+0xf4> at /opt/pgpro/mybuild/bin/postgres\n         0x754624 <AccessTempTableNamespace+0x134> at \n/opt/pgpro/mybuild/bin/postgres\n         0x754424 <RangeVarGetCreationNamespace+0xa4> at \n/opt/pgpro/mybuild/bin/postgres\n         0x754f6c <RangeVarGetAndCheckCreationNamespace+0x5c> at \n/opt/pgpro/mybuild/bin/postgres\n         0x7d36b0 <transformCreateStmt+0x50> at \n/opt/pgpro/mybuild/bin/postgres\n         0xb2a05f <ProcessUtilitySlow+0xdf> at \n/opt/pgpro/mybuild/bin/postgres\n         0xb28450 <standard_ProcessUtility+0x530> at \n/opt/pgpro/mybuild/bin/postgres\n         0xb27ea8 <ProcessUtility+0x68> at /opt/pgpro/mybuild/bin/postgres\n         0xb2745b <PortalRunUtility+0xab> at 
/opt/pgpro/mybuild/bin/postgres\n         0xb2681c <PortalRunMulti+0x1fc> at /opt/pgpro/mybuild/bin/postgres\n         0xb25eb8 <PortalRun+0x1e8> at /opt/pgpro/mybuild/bin/postgres\n         0xb24127 <exec_simple_query+0x5c7> at \n/opt/pgpro/mybuild/bin/postgres\n         0xb2170e <PostgresMain+0x119e> at /opt/pgpro/mybuild/bin/postgres\n         0xa4e4f8 <BackendRun+0x38> at /opt/pgpro/mybuild/bin/postgres\n         0xa4dabf <ServerLoop+0xdef> at /opt/pgpro/mybuild/bin/postgres\n         0xa4a824 <PostmasterMain+0x1594> at /opt/pgpro/mybuild/bin/postgres\n         0x93a58a <main+0x33a> at /opt/pgpro/mybuild/bin/postgres\n         0x82b77d36a <__libc_start1+0x12a> at /lib/libc.so.7\n\nAutovacuum worker (got by increase deadlock timeout and debugger):\n\n         WaitEventSetWaitBlock() at latch.c:1,649 0xae6ffb\n         WaitEventSetWait() at latch.c:1,435 0xae6ffb\n         WaitLatch() at latch.c:497 0xae6cc4\n         ProcSleep() at proc.c:1,341 0xb135de\n         WaitOnLock() at lock.c:1,859 0xb00d7d\n         LockAcquireExtended() at lock.c:1,101 0xaffa03\n         LockRelationOid() at lmgr.c:117 0xafd379\n         AcquireDeletionLock() at dependency.c:1,552 0x742a37\n         findDependentObjects() at dependency.c:894 0x742a37\n         performDeletion() at dependency.c:346 0x7422d4\n         do_autovacuum() at autovacuum.c:2,274 0xa41f5d\n         AutoVacWorkerMain() at autovacuum.c:1,716 0xa40200\n         StartAutoVacWorker() at autovacuum.c:1,494 0xa3fe26\n         StartAutovacuumWorker() at postmaster.c:5,463 0xa4b5e1\n         sigusr1_handler() at postmaster.c:5,172 0xa4b5e1\n         handle_signal() at thr_sig.c:301 0x82980755f\n         thr_sighandler() at thr_sig.c:244 0x829806b1b\n\nReason\n-------\n\nTable contains columns with serial type, i.e. with autogenerated \nsequences. When backend tries to clean namespace,\nit removes tables and sequences in random way. When autovacuum worker \ntries to remove table, at first it locks table,\nthen tries to remove sequence by dependency.\n\nSo it may happen that:\n  - Backend locks and removes sequence\n  - Autovacuum worker locks table to remove, finds sequence and tries to \nremove sequence and get locked.\n  - Backend tries to lock table locked by autovacuum worker and get locked.\n\nIdea how to fix it\n------------------\n\nAs of now, autovacuum worker tries to avoid concurrency by preliminary \nconditional lock of table. It provides guarantee to\navoid concurrency with other workers. 
To avoid concurrency with client \nbackend during orphan table removal, it may be worth\nto conditionally lock namespace before removal attempt.\n\nIf backend starts namespace removal, it locks namespace exclusively and \nautovacuum worker can identify it by conditional locking and can skip \nremoval.\nOn another hand, if worker gets namespace lock, then there is no backend \nremoving orphan tables.\n\nAlso it may worth to mention that worker can try to get AccessShare lock \nto allow concurrency between workers.\n\nPlease find attached patch for this idea.\n\nPlease feel free to ask any questions!\n\nThank you!\n\n-- \nMichael Zhilin\nPostgres Professional\nhttps://www.postgrespro.ru", "msg_date": "Sun, 31 Dec 2023 18:02:58 +0300", "msg_from": "Michael Zhilin <[email protected]>", "msg_from_op": true, "msg_subject": "BUG: deadlock between autovacuum worker and client backend during\n removal of orphan temp tables with sequences" }, { "msg_contents": "Hii,\r\nI am currently trying to review the submitted patch but I am not able to apply it to the master branch. \r\n\r\nRegards,\r\nAkshat Jaimini", "msg_date": "Thu, 28 Mar 2024 05:51:07 +0000", "msg_from": "Akshat Jaimini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG: deadlock between autovacuum worker and client backend during\n removal of orphan temp tables with sequences" }, { "msg_contents": "> On 28 Mar 2024, at 10:51, Akshat Jaimini <[email protected]> wrote:\n> \n> I am currently trying to review the submitted patch\n\nGreat, thank you!\n\n> but I am not able to apply it to the master branch. \n\nPlease find attached rebased version on current HEAD. For some reason CFbot did not notify about that rebases is needed.\n\n\nBest regards, Andrey Borodin.", "msg_date": "Thu, 28 Mar 2024 11:32:58 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG: deadlock between autovacuum worker and client backend during\n removal of orphan temp tables with sequences" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHii,\r\n\r\nThanks for the updated patch. I ran make installcheck-world after applying the patch and recompiling it. 
It did fail for a particular test but from the logs it seems to be unrelated to this particular patch since it fails for the following:\r\n\r\n==========================\r\nselect error_trap_test();\r\n- error_trap_test \r\n----------------------------\r\n- division_by_zero detected\r\n-(1 row)\r\n-\r\n+ERROR: cannot start subtransactions during a parallel operation\r\n+CONTEXT: PL/pgSQL function error_trap_test() line 2 during statement block entry\r\n+parallel worker\r\n reset debug_parallel_query;\r\n drop function error_trap_test();\r\n drop function zero_divide();\r\n==========================\r\n\r\nThe code seems to implement the feature and has good and explanatory comments associated with it.\r\nI believe we can go ahead with committing patch although I would request some senior contributors to also take a look at this patch since I am relatively new to patch reviews.\r\nChanging the status to 'Ready for Committer'.\r\n\r\nRegards,\r\nAkshat Jaimini\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Fri, 29 Mar 2024 10:31:01 +0000", "msg_from": "Akshat Jaimini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG: deadlock between autovacuum worker and client backend during\n removal of orphan temp tables with sequences" }, { "msg_contents": "Akshat Jaimini <[email protected]> writes:\n> The code seems to implement the feature and has good and explanatory comments associated with it.\n> I believe we can go ahead with committing patch although I would request some senior contributors to also take a look at this patch since I am relatively new to patch reviews.\n\nLooks like a good catch and a reasonable fix. Pushed after rewriting\nthe comments a bit.\n\nAs far as this goes:\n\n> I ran make installcheck-world after applying the patch and recompiling it. It did fail for a particular test but from the logs it seems to be unrelated to this particular patch since it fails for the following:\n\n> ==========================\n> select error_trap_test();\n> - error_trap_test \n> ----------------------------\n> - division_by_zero detected\n> -(1 row)\n> -\n> +ERROR: cannot start subtransactions during a parallel operation\n\n... that's the test case from 0075d7894, and the failure is what\nI'd expect from a backend older than that. Maybe you forgot to\nrecompile/reinstall after updating past that commit?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Apr 2024 15:04:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG: deadlock between autovacuum worker and client backend during\n removal of orphan temp tables with sequences" }, { "msg_contents": "Thanks to all for review, testing and commit!!! \n\n\nOn 2 April 2024 22:04:54 GMT+03:00, Tom Lane <[email protected]> wrote:\n>Akshat Jaimini <[email protected]> writes:\n>> The code seems to implement the feature and has good and explanatory comments associated with it.\n>> I believe we can go ahead with committing patch although I would request some senior contributors to also take a look at this patch since I am relatively new to patch reviews.\n>\n>Looks like a good catch and a reasonable fix. Pushed after rewriting\n>the comments a bit.\n>\n>As far as this goes:\n>\n>> I ran make installcheck-world after applying the patch and recompiling it. 
It did fail for a particular test but from the logs it seems to be unrelated to this particular patch since it fails for the following:\n>\n>> ==========================\n>> select error_trap_test();\n>> - error_trap_test \n>> ----------------------------\n>> - division_by_zero detected\n>> -(1 row)\n>> -\n>> +ERROR: cannot start subtransactions during a parallel operation\n>\n>... that's the test case from 0075d7894, and the failure is what\n>I'd expect from a backend older than that. Maybe you forgot to\n>recompile/reinstall after updating past that commit?\n>\n>\t\t\tregards, tom lane\n\nThanks to all for review, testing and commit!!! On 2 April 2024 22:04:54 GMT+03:00, Tom Lane <[email protected]> wrote:\nAkshat Jaimini <[email protected]> writes:The code seems to implement the feature and has good and explanatory comments associated with it.I believe we can go ahead with committing patch although I would request some senior contributors to also take a look at this patch since I am relatively new to patch reviews.Looks like a good catch and a reasonable fix. Pushed after rewritingthe comments a bit.As far as this goes:I ran make installcheck-world after applying the patch and recompiling it. It did fail for a particular test but from the logs it seems to be unrelated to this particular patch since it fails for the following:select error_trap_test();- error_trap_test - division_by_zero detected-(1 row)-+ERROR: cannot start subtransactions during a parallel operation... that's the test case from 0075d7894, and the failure is whatI'd expect from a backend older than that. Maybe you forgot torecompile/reinstall after updating past that commit?\t\t\tregards, tom lane", "msg_date": "Wed, 03 Apr 2024 08:08:26 +0300", "msg_from": "Michael Zhilin <[email protected]>", "msg_from_op": true, "msg_subject": "=?US-ASCII?Q?Re=3A_BUG=3A_deadlock_between_au?=\n =?US-ASCII?Q?tovacuum_worker_and_client_ba?=\n =?US-ASCII?Q?ckend_during_removal_of_orphan_temp_tables_with_sequences?=" }, { "msg_contents": "Hi apologies for the late reply.\n> Maybe you forgot to recompile/reinstall after updating past that commit?\nI did recompile it earlier but just to be sure I followed the steps again\nand now its working!\n\nRegards,\nAkshat Jaimini\n\nOn Wed, Apr 3, 2024 at 12:34 AM Tom Lane <[email protected]> wrote:\n\n> Akshat Jaimini <[email protected]> writes:\n> > The code seems to implement the feature and has good and explanatory\n> comments associated with it.\n> > I believe we can go ahead with committing patch although I would request\n> some senior contributors to also take a look at this patch since I am\n> relatively new to patch reviews.\n>\n> Looks like a good catch and a reasonable fix. Pushed after rewriting\n> the comments a bit.\n>\n> As far as this goes:\n>\n> > I ran make installcheck-world after applying the patch and recompiling\n> it. It did fail for a particular test but from the logs it seems to be\n> unrelated to this particular patch since it fails for the following:\n>\n> > ==========================\n> > select error_trap_test();\n> > - error_trap_test\n> > ----------------------------\n> > - division_by_zero detected\n> > -(1 row)\n> > -\n> > +ERROR: cannot start subtransactions during a parallel operation\n>\n> ... that's the test case from 0075d7894, and the failure is what\n> I'd expect from a backend older than that. 
Maybe you forgot to\n> recompile/reinstall after updating past that commit?\n>\n> regards, tom lane\n>\n\nHi apologies for the late reply.> Maybe you forgot to recompile/reinstall after updating past that commit?I did recompile it earlier but just to be sure I followed the steps again and now its working!Regards,Akshat JaiminiOn Wed, Apr 3, 2024 at 12:34 AM Tom Lane <[email protected]> wrote:Akshat Jaimini <[email protected]> writes:\n> The code seems to implement the feature and has good and explanatory comments associated with it.\n> I believe we can go ahead with committing patch although I would request some senior contributors to also take a look at this patch since I am relatively new to patch reviews.\n\nLooks like a good catch and a reasonable fix.  Pushed after rewriting\nthe comments a bit.\n\nAs far as this goes:\n\n> I ran make installcheck-world after applying the patch and recompiling it. It did fail for a particular test but from the logs it seems to be unrelated to this particular patch since it fails for the following:\n\n> ==========================\n> select error_trap_test();\n> -      error_trap_test      \n> ----------------------------\n> - division_by_zero detected\n> -(1 row)\n> -\n> +ERROR:  cannot start subtransactions during a parallel operation\n\n... that's the test case from 0075d7894, and the failure is what\nI'd expect from a backend older than that.  Maybe you forgot to\nrecompile/reinstall after updating past that commit?\n\n                        regards, tom lane", "msg_date": "Thu, 4 Apr 2024 15:21:48 +0530", "msg_from": "Akshat Jaimini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG: deadlock between autovacuum worker and client backend during\n removal of orphan temp tables with sequences" } ]
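The locking idea discussed in the thread above can also be modelled outside the backend. The sketch below uses plain pthreads (an rwlock standing in for the heavyweight lock on the temp namespace, mutexes for the orphan table and its owned sequence) to show why a conditional, non-waiting lock on the namespace lets the autovacuum worker step aside instead of queueing behind the backend in the opposite table/sequence order. The names are illustrative only; the real code goes through the lock manager and the dependency machinery, not pthread primitives, and this is not the committed fix verbatim.

/*
 * Standalone model of the idea: the backend resetting its temp namespace
 * takes the namespace lock exclusively before dropping objects; the
 * autovacuum worker only *tries* a shared lock and skips the orphan table
 * if it cannot get it, so the two never wait on table/sequence locks in
 * opposite orders.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t namespace_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t sequence_lock = PTHREAD_MUTEX_INITIALIZER;

/* client backend: cleans up the whole orphaned temp namespace */
static void *
backend_reset_temp_namespace(void *arg)
{
	(void) arg;
	pthread_rwlock_wrlock(&namespace_lock);	/* whole namespace is going away */
	pthread_mutex_lock(&sequence_lock);		/* dependency walk may reach the sequence first */
	pthread_mutex_lock(&table_lock);
	puts("backend: dropped orphan table and its sequence");
	pthread_mutex_unlock(&table_lock);
	pthread_mutex_unlock(&sequence_lock);
	pthread_rwlock_unlock(&namespace_lock);
	return NULL;
}

/* autovacuum worker: wants to drop the same orphan table */
static void *
autovacuum_drop_orphan(void *arg)
{
	(void) arg;
	if (pthread_rwlock_tryrdlock(&namespace_lock) != 0)
	{
		puts("autovacuum: namespace busy, skipping orphan table");
		return NULL;			/* the backend is removing it anyway */
	}
	pthread_mutex_lock(&table_lock);		/* opposite order is now harmless */
	pthread_mutex_lock(&sequence_lock);
	puts("autovacuum: dropped orphan table and its sequence");
	pthread_mutex_unlock(&sequence_lock);
	pthread_mutex_unlock(&table_lock);
	pthread_rwlock_unlock(&namespace_lock);
	return NULL;
}

int
main(void)
{
	pthread_t	b,
				a;

	pthread_create(&b, NULL, backend_reset_temp_namespace, NULL);
	pthread_create(&a, NULL, autovacuum_drop_orphan, NULL);
	pthread_join(b, NULL);
	pthread_join(a, NULL);
	return 0;
}

If the worker obtains the shared namespace lock first, the backend simply waits for the worker's pass over that table to finish; if the backend already holds the namespace lock exclusively, the worker skips the orphan table, which the backend is about to drop anyway.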
[ { "msg_contents": "Hi!\n\nMikhail Gribkov <youzhick(at)gmail(dot)com> writes:\n\n > > Honestly I'm not entirely sure fixing only two switched words is \nworth the\n > > effort, but the declared goal is clearly achieved.\n >\n >\n > > I think the patch is good to go, although you need to fix code \nformatting.\n >\n >\n > I took a brief look at this.  I concur that we shouldn't need to be\n > hugely concerned about the speed of this code path.  However, we *do*\n > need to be concerned about its maintainability, and I think the patch\n > falls down badly there: it adds a chunk of very opaque and essentially\n > undocumented code, that people will need to reverse-engineer anytime\n > they are studying this function.  That could be alleviated perhaps\n > with more work on comments, but I have to wonder whether it's worth\n > carrying this logic at all.  It's a rather strange behavior to add,\n > and I wonder if many users will want it.\n\nI encounter this problem all the time. I don't know, whether my clients \nare representative. But I see the problem, when the developers show me \ntheir code base all the time.\nIt's an issue for column names and table names alike. I personally spent \nhours watching developers trying various permutations.\nThey rarely request this feature. Usually they are to embarrassed for \nnot knowing their object names to request anything in that state.\nBut I want the database, which I support, to be gentle and helpful to \nthe user under these circumstances.\n\nRegarding complexity: I think the permutation matrix is the thing to \neasily get wrong. I had a one off bug writing it down initially.\nI tried to explain the conceptual approach better with a longer comment \nthan before.\n\n                 /*\n                  * Only consider mirroring permutations, since the \nthree simple rotations are already\n                  * (or will be for a later underscore_current) covered \nabove.\n                  *\n                  * The entries of the permutation matrix tell us, where \nwe should copy the tree segments to.\n                  * The zeroth dimension iterates over the permutations, \nwhile the first dimension iterates\n                  * over the three segments are permuted to.\n                  * Considering the string A_B_C the three segments are:\n                  * - before the initial underscore sections (A)\n                  * - between the underscore sections (B)\n                  * - after the later underscore sections (C)\n                  */\n\nIf anything is still unclear, I'd appreciate feedback about what might \nbe still unclear/confusing about this.\nI can't promise to be helpful, if something breaks. But I have \npractically forgotten how I did it, and I found it easy to extend it \nlike described below. It would have been embarrassing otherwise. Yet \nthis gives me hope, it should be possible to enable others the same way.\nI certainly want the code simple without need to reverse-engineer \nanything. Please let me know, if there are difficult to understand bits \nleft around.\n\n > One thing that struck me is that no care is being taken for adjacent\n > underscores (that is, \"foo__bar\" and similar cases).  It seems\n > unlikely that treating the zero-length substring between the\n > underscores as a word to permute is helpful; moreover, it adds\n > an edge case that the string-moving logic could easily get wrong.\n > I wonder if the code should treat any number of consecutive\n > underscores as a single separator.  
(Somewhat related: I think it\n > will behave oddly when the first or last character is '_', since the\n > outer loop ignores those positions.)\n\nI wasn't sure how there could be any potential future bug with copying \nzero-length strings, i.e. doing nothing. And I still don't see that.\n\nThere is one point I agree with: Doing this seems rarely helpful. I \nchanged the code, so it treats sections delimited by an arbitrary amount \nof underscores.\nSo it never permutes with zero length strings within. I also added \nfunctionality to skip the zero length cases if we should encounter them \nat the end of the string.\nSo afaict there should be no zero length swaps left. Please let me know \nwhether this is more to your liking.\n\nI also replaced the hard limit of underscores with more nuanced limits \nof permutations to try before giving up.\n\n > > And it would be much more convenient to work with your patch if \nevery next\n > > version file will have a unique name (maybe something like \"_v2\", \"_v3\"\n > > etc. suffixes)\n >\n >\n > Please.  It's very confusing when there are multiple identically-named\n > patches in a thread.\n\nSorry, I started with this, because I confused cf bot in the past about \nwhether the patches should be applied on top of each other or not.\n\nFor me the cf-bot logic is a bit opaque there. But you are right, \nconfusing patch readers is definitely worse. I'll try to do that. I hope \nthe attached format is better.\n\n\nOne question about pgindent: I struggled a bit with getting the right \nversion of bsd_indent. I found versions labeled 2.2.1 and 2.1.1, but \napparently we work with 2.1.2. Where can I get that?\n\nRegards\nArne", "msg_date": "Sun, 31 Dec 2023 17:29:36 +0100", "msg_from": "Arne Roland <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Permute underscore separated components of columns before fuzzy\n matching" }, { "msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nlike there were CFbot test failures last time it was run [2]. Please\nhave a look and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4282/\n[2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4282\n\nKind Regards,\nPeter Smith.\n\n\n", "msg_date": "Mon, 22 Jan 2024 16:38:42 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Permute underscore separated components of columns before fuzzy\n matching" }, { "msg_contents": "Thank you for bringing that to my attention. Is there a way to subscribe \nto cf-bot failures?\n\nApparently I confused myself with my naming. I attached a patch that \nfixes the bug (at least at my cassert test-world run).\n\nRegards\nArne\n\nOn 2024-01-22 06:38, Peter Smith wrote:\n> 2024-01 Commitfest.\n>\n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> like there were CFbot test failures last time it was run [2]. 
Please\n> have a look and post an updated version if necessary.\n>\n> ======\n> [1] https://commitfest.postgresql.org/46/4282/\n> [2] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4282\n>\n> Kind Regards,\n> Peter Smith.", "msg_date": "Mon, 22 Jan 2024 18:14:14 +0100", "msg_from": "Arne Roland <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Permute underscore separated components of columns before fuzzy\n matching" }, { "msg_contents": "Arne Roland <[email protected]> writes:\n> Thank you for bringing that to my attention. Is there a way to subscribe \n> to cf-bot failures?\n\nI don't know of any push notification support in cfbot, but you\ncan bookmark the page with your own active patches, and check it\nperiodically:\n\nhttp://commitfest.cputube.org/arne-roland.html\n\n(For others, click on your own name in the main cfbot page's entry for\none of your patches to find out how it spelled your name for this\npurpose.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jan 2024 13:22:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Permute underscore separated components of columns before fuzzy\n matching" }, { "msg_contents": "Thank you! I wasn't aware of the filter per person. It was quite simple \nintegrate a web scraper into my custom push system.\n\nRegarding the patch: I ran the 2.1.1 version of pg_bsd_indent now. I \nhope that suffices. I removed the matrix declaration to make it C90 \ncomplaint. I attached the result.\n\nRegards\nArne\n\nOn 2024-01-22 19:22, Tom Lane wrote:\n> Arne Roland <[email protected]> writes:\n>> Thank you for bringing that to my attention. Is there a way to subscribe\n>> to cf-bot failures?\n> I don't know of any push notification support in cfbot, but you\n> can bookmark the page with your own active patches, and check it\n> periodically:\n>\n> http://commitfest.cputube.org/arne-roland.html\n>\n> (For others, click on your own name in the main cfbot page's entry for\n> one of your patches to find out how it spelled your name for this\n> purpose.)\n>\n> \t\t\tregards, tom lane", "msg_date": "Tue, 23 Jan 2024 05:42:42 +0100", "msg_from": "Arne Roland <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Permute underscore separated components of columns before fuzzy\n matching" }, { "msg_contents": "\n\n> On 23 Jan 2024, at 09:42, Arne Roland <[email protected]> wrote:\n> \n> <0001-fuzzy_underscore_permutation_v5.patch>\n\nMikhail, there’s a new patch version. May I ask you to review it?\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 4 Mar 2024 10:49:48 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Permute underscore separated components of columns before fuzzy\n matching" } ]
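As a companion to the thread above, here is a small self-contained sketch of the underlying idea: split the typed identifier on runs of underscores (so "a__b" yields the same segments as "a_b", as discussed in the thread), build candidates with two segments swapped, and match each candidate against the known column name. It deliberately uses strcmp() where the server-side hint code would compute a string distance, and the fixed-size buffers exist only to keep the example short.

#include <stdio.h>
#include <string.h>

#define MAX_SEGS 8
#define SEG_LEN  64

/* Split on runs of '_'; consecutive underscores count as one separator. */
static int
split_segments(const char *name, char segs[MAX_SEGS][SEG_LEN])
{
	int			nsegs = 0;

	while (*name)
	{
		int			len = 0;

		while (*name == '_')
			name++;				/* skip the whole separator run */
		if (!*name || nsegs >= MAX_SEGS)
			break;
		while (*name && *name != '_' && len < SEG_LEN - 1)
			segs[nsegs][len++] = *name++;
		segs[nsegs][len] = '\0';
		nsegs++;
	}
	return nsegs;
}

/* Join segments with single underscores, with segments i and j swapped. */
static void
join_swapped(char *out, size_t outsz, char segs[MAX_SEGS][SEG_LEN],
			 int nsegs, int i, int j)
{
	out[0] = '\0';
	for (int k = 0; k < nsegs; k++)
	{
		int			src = (k == i) ? j : (k == j) ? i : k;

		if (k > 0)
			strncat(out, "_", outsz - strlen(out) - 1);
		strncat(out, segs[src], outsz - strlen(out) - 1);
	}
}

int
main(void)
{
	const char *typed = "group_resource__id";	/* what the user wrote */
	const char *column = "resource_group_id";	/* the actual column */
	char		segs[MAX_SEGS][SEG_LEN];
	char		candidate[256];
	int			nsegs = split_segments(typed, segs);

	for (int i = 0; i < nsegs; i++)
	{
		for (int j = i + 1; j < nsegs; j++)
		{
			join_swapped(candidate, sizeof(candidate), segs, nsegs, i, j);
			/* the backend would run a string-distance comparison here */
			if (strcmp(candidate, column) == 0)
				printf("swap of segments %d and %d matches \"%s\"\n",
					   i, j, column);
		}
	}
	return 0;
}

Only the candidate generation is new in this sketch; in the server the candidates would be fed to whatever distance computation the column-hint code already uses for unpermuted names.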
[ { "msg_contents": "Hi,\n\nI've written a patch set for vacuum to use the streaming read interface\nproposed in [1]. Making lazy_scan_heap() async-friendly required a bit\nof refactoring of lazy_scan_heap() and lazy_scan_skip(). I needed to\nconfine all of the skipping logic -- previously spread across\nlazy_scan_heap() and lazy_scan_skip() -- to lazy_scan_skip(). All of the\npatches doing this and other preparation for vacuum to use the streaming\nread API can be applied on top of master. The attached patch set does\nthis.\n\nThere are a few comments that still need to be updated. I also noticed I\nneeded to reorder and combine a couple of the commits. I wanted to\nregister this for the january commitfest, so I didn't quite have time\nfor the finishing touches.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJkOiOCa%2Bmag4BF%2BzHo7qo%3Do9CFheB8%3Dg6uT5TUm2gkvA%40mail.gmail.com", "msg_date": "Sun, 31 Dec 2023 13:28:16 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Sun, Dec 31, 2023 at 1:28 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> There are a few comments that still need to be updated. I also noticed I\n> needed to reorder and combine a couple of the commits. I wanted to\n> register this for the january commitfest, so I didn't quite have time\n> for the finishing touches.\n\nI've updated this patch set to remove a commit that didn't make sense\non its own and do various other cleanup.\n\n- Melanie", "msg_date": "Tue, 2 Jan 2024 12:36:18 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "Hi,\n\nOn 2024-01-02 12:36:18 -0500, Melanie Plageman wrote:\n> Subject: [PATCH v2 1/6] lazy_scan_skip remove unnecessary local var rel_pages\n> Subject: [PATCH v2 2/6] lazy_scan_skip remove unneeded local var\n> nskippable_blocks\n\nI think these may lead to worse code - the compiler has to reload\nvacrel->rel_pages/next_unskippable_block for every loop iteration, because it\ncan't guarantee that they're not changed within one of the external functions\ncalled in the loop body.\n\n> Subject: [PATCH v2 3/6] Add lazy_scan_skip unskippable state\n> \n> Future commits will remove all skipping logic from lazy_scan_heap() and\n> confine it to lazy_scan_skip(). To make those commits more clear, first\n> introduce the struct, VacSkipState, which will maintain the variables\n> needed to skip ranges less than SKIP_PAGES_THRESHOLD.\n\nWhy not add this to LVRelState, possibly as a struct embedded within it?\n\n\n> From 335faad5948b2bec3b83c2db809bb9161d373dcb Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Sat, 30 Dec 2023 16:59:27 -0500\n> Subject: [PATCH v2 4/6] Confine vacuum skip logic to lazy_scan_skip\n> \n> In preparation for vacuum to use the streaming read interface (and eventually\n> AIO), refactor vacuum's logic for skipping blocks such that it is entirely\n> confined to lazy_scan_skip(). This turns lazy_scan_skip() and the VacSkipState\n> it uses into an iterator which yields blocks to lazy_scan_heap(). 
Such a\n> structure is conducive to an async interface.\n\nAnd it's cleaner - I find the current code extremely hard to reason about.\n\n\n> By always calling lazy_scan_skip() -- instead of only when we have reached the\n> next unskippable block, we no longer need the skipping_current_range variable.\n> lazy_scan_heap() no longer needs to manage the skipped range -- checking if we\n> reached the end in order to then call lazy_scan_skip(). And lazy_scan_skip()\n> can derive the visibility status of a block from whether or not we are in a\n> skippable range -- that is, whether or not the next_block is equal to the next\n> unskippable block.\n\nI wonder if it should be renamed as part of this - the name is somewhat\nconfusing now (and perhaps before)? lazy_scan_get_next_block() or such?\n\n\n> +\twhile (true)\n> \t{\n> \t\tBuffer\t\tbuf;\n> \t\tPage\t\tpage;\n> -\t\tbool\t\tall_visible_according_to_vm;\n> \t\tLVPagePruneState prunestate;\n> \n> -\t\tif (blkno == vacskip.next_unskippable_block)\n> -\t\t{\n> -\t\t\t/*\n> -\t\t\t * Can't skip this page safely. Must scan the page. But\n> -\t\t\t * determine the next skippable range after the page first.\n> -\t\t\t */\n> -\t\t\tall_visible_according_to_vm = vacskip.next_unskippable_allvis;\n> -\t\t\tlazy_scan_skip(vacrel, &vacskip, blkno + 1);\n> -\n> -\t\t\tAssert(vacskip.next_unskippable_block >= blkno + 1);\n> -\t\t}\n> -\t\telse\n> -\t\t{\n> -\t\t\t/* Last page always scanned (may need to set nonempty_pages) */\n> -\t\t\tAssert(blkno < rel_pages - 1);\n> -\n> -\t\t\tif (vacskip.skipping_current_range)\n> -\t\t\t\tcontinue;\n> +\t\tblkno = lazy_scan_skip(vacrel, &vacskip, blkno + 1,\n> +\t\t\t\t\t\t\t &all_visible_according_to_vm);\n> \n> -\t\t\t/* Current range is too small to skip -- just scan the page */\n> -\t\t\tall_visible_according_to_vm = true;\n> -\t\t}\n> +\t\tif (blkno == InvalidBlockNumber)\n> +\t\t\tbreak;\n> \n> \t\tvacrel->scanned_pages++;\n>\n\nI don't like that we still do determination about the next block outside of\nlazy_scan_skip() and have duplicated exit conditions between lazy_scan_skip()\nand lazy_scan_heap().\n\nI'd probably change the interface to something like\n\nwhile (lazy_scan_get_next_block(vacrel, &blkno))\n{\n...\n}\n\n\n> From b6603e35147c4bbe3337280222e6243524b0110e Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Sun, 31 Dec 2023 09:47:18 -0500\n> Subject: [PATCH v2 5/6] VacSkipState saves reference to LVRelState\n> \n> The streaming read interface can only give pgsr_next callbacks access to\n> two pieces of private data. As such, move a reference to the LVRelState\n> into the VacSkipState.\n> \n> This is a separate commit (as opposed to as part of the commit\n> introducing VacSkipState) because it is required for using the streaming\n> read interface but not a natural change on its own. VacSkipState is per\n> block and the LVRelState is referenced for the whole relation vacuum.\n\nI'd do it the other way round, i.e. 
either embed VacSkipState ino LVRelState\nor point to it from VacSkipState.\n\nLVRelState is already tied to the iteration state, so I don't think there's a\nreason not to do so.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Jan 2024 12:23:09 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "\n\n\n\n\nOn 1/4/24 2:23 PM, Andres Freund wrote:\n\n\nOn 2024-01-02 12:36:18 -0500, Melanie Plageman wrote:\nSubject: [PATCH v2 1/6] lazy_scan_skip remove unnecessary local var rel_pages\nSubject: [PATCH v2 2/6] lazy_scan_skip remove unneeded local var\n nskippable_blocks\nI think these may lead to worse code - the compiler has to reload\nvacrel->rel_pages/next_unskippable_block for every loop iteration, because it\ncan't guarantee that they're not changed within one of the external functions\ncalled in the loop body.\n\nAdmittedly I'm not up to speed on recent vacuum changes, but I\n have to wonder if the concept of skipping should go away in the\n context of vector IO? Instead of thinking about \"we can skip this\n range of blocks\", why not maintain a list of \"here's the next X\n number of blocks that we need to vacuum\"?\n\n-- \nJim Nasby, Data Architect, Austin TX\n\n\n", "msg_date": "Thu, 4 Jan 2024 17:25:22 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "Hi,\n\nOn Fri, 5 Jan 2024 at 02:25, Jim Nasby <[email protected]> wrote:\n>\n> On 1/4/24 2:23 PM, Andres Freund wrote:\n>\n> On 2024-01-02 12:36:18 -0500, Melanie Plageman wrote:\n>\n> Subject: [PATCH v2 1/6] lazy_scan_skip remove unnecessary local var rel_pages\n> Subject: [PATCH v2 2/6] lazy_scan_skip remove unneeded local var\n> nskippable_blocks\n>\n> I think these may lead to worse code - the compiler has to reload\n> vacrel->rel_pages/next_unskippable_block for every loop iteration, because it\n> can't guarantee that they're not changed within one of the external functions\n> called in the loop body.\n>\n> Admittedly I'm not up to speed on recent vacuum changes, but I have to wonder if the concept of skipping should go away in the context of vector IO? Instead of thinking about \"we can skip this range of blocks\", why not maintain a list of \"here's the next X number of blocks that we need to vacuum\"?\n\nSorry if I misunderstood. AFAIU, with the help of the vectored IO;\n\"the next X number of blocks that need to be vacuumed\" will be\nprefetched by calculating the unskippable blocks ( using the\nlazy_scan_skip() function ) and the X will be determined by Postgres\nitself. 
Do you have something different in your mind?\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Fri, 5 Jan 2024 13:51:44 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "v3 attached\n\nOn Thu, Jan 4, 2024 at 3:23 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2024-01-02 12:36:18 -0500, Melanie Plageman wrote:\n> > Subject: [PATCH v2 1/6] lazy_scan_skip remove unnecessary local var rel_pages\n> > Subject: [PATCH v2 2/6] lazy_scan_skip remove unneeded local var\n> > nskippable_blocks\n>\n> I think these may lead to worse code - the compiler has to reload\n> vacrel->rel_pages/next_unskippable_block for every loop iteration, because it\n> can't guarantee that they're not changed within one of the external functions\n> called in the loop body.\n\nI buy that for 0001 but 0002 is still using local variables.\nnskippable_blocks was just another variable to keep track of even\nthough we could already get that info from local variables\nnext_unskippable_block and next_block.\n\nIn light of this comment, I've refactored 0003/0004 (0002 and 0003 in\nthis version [v3]) to use local variables in the loop as well. I had\nstarted using the members of the VacSkipState which I introduced.\n\n> > Subject: [PATCH v2 3/6] Add lazy_scan_skip unskippable state\n> >\n> > Future commits will remove all skipping logic from lazy_scan_heap() and\n> > confine it to lazy_scan_skip(). To make those commits more clear, first\n> > introduce the struct, VacSkipState, which will maintain the variables\n> > needed to skip ranges less than SKIP_PAGES_THRESHOLD.\n>\n> Why not add this to LVRelState, possibly as a struct embedded within it?\n\nDone in attached.\n\n> > From 335faad5948b2bec3b83c2db809bb9161d373dcb Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <[email protected]>\n> > Date: Sat, 30 Dec 2023 16:59:27 -0500\n> > Subject: [PATCH v2 4/6] Confine vacuum skip logic to lazy_scan_skip\n>\n> > By always calling lazy_scan_skip() -- instead of only when we have reached the\n> > next unskippable block, we no longer need the skipping_current_range variable.\n> > lazy_scan_heap() no longer needs to manage the skipped range -- checking if we\n> > reached the end in order to then call lazy_scan_skip(). And lazy_scan_skip()\n> > can derive the visibility status of a block from whether or not we are in a\n> > skippable range -- that is, whether or not the next_block is equal to the next\n> > unskippable block.\n>\n> I wonder if it should be renamed as part of this - the name is somewhat\n> confusing now (and perhaps before)? lazy_scan_get_next_block() or such?\n\nWhy stop there! I've removed lazy and called it\nheap_vac_scan_get_next_block() -- a little long, but...\n\n> > + while (true)\n> > {\n> > Buffer buf;\n> > Page page;\n> > - bool all_visible_according_to_vm;\n> > LVPagePruneState prunestate;\n> >\n> > - if (blkno == vacskip.next_unskippable_block)\n> > - {\n> > - /*\n> > - * Can't skip this page safely. Must scan the page. 
But\n> > - * determine the next skippable range after the page first.\n> > - */\n> > - all_visible_according_to_vm = vacskip.next_unskippable_allvis;\n> > - lazy_scan_skip(vacrel, &vacskip, blkno + 1);\n> > -\n> > - Assert(vacskip.next_unskippable_block >= blkno + 1);\n> > - }\n> > - else\n> > - {\n> > - /* Last page always scanned (may need to set nonempty_pages) */\n> > - Assert(blkno < rel_pages - 1);\n> > -\n> > - if (vacskip.skipping_current_range)\n> > - continue;\n> > + blkno = lazy_scan_skip(vacrel, &vacskip, blkno + 1,\n> > + &all_visible_according_to_vm);\n> >\n> > - /* Current range is too small to skip -- just scan the page */\n> > - all_visible_according_to_vm = true;\n> > - }\n> > + if (blkno == InvalidBlockNumber)\n> > + break;\n> >\n> > vacrel->scanned_pages++;\n> >\n>\n> I don't like that we still do determination about the next block outside of\n> lazy_scan_skip() and have duplicated exit conditions between lazy_scan_skip()\n> and lazy_scan_heap().\n>\n> I'd probably change the interface to something like\n>\n> while (lazy_scan_get_next_block(vacrel, &blkno))\n> {\n> ...\n> }\n\nI've done this. I do now find the parameter names a bit confusing.\nThere is next_block (which is the \"next block in line\" and is an input\nparameter) and blkno, which is an output parameter with the next block\nthat should actually be processed. Maybe it's okay?\n\n> > From b6603e35147c4bbe3337280222e6243524b0110e Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <[email protected]>\n> > Date: Sun, 31 Dec 2023 09:47:18 -0500\n> > Subject: [PATCH v2 5/6] VacSkipState saves reference to LVRelState\n> >\n> > The streaming read interface can only give pgsr_next callbacks access to\n> > two pieces of private data. As such, move a reference to the LVRelState\n> > into the VacSkipState.\n> >\n> > This is a separate commit (as opposed to as part of the commit\n> > introducing VacSkipState) because it is required for using the streaming\n> > read interface but not a natural change on its own. VacSkipState is per\n> > block and the LVRelState is referenced for the whole relation vacuum.\n>\n> I'd do it the other way round, i.e. either embed VacSkipState ino LVRelState\n> or point to it from VacSkipState.\n>\n> LVRelState is already tied to the iteration state, so I don't think there's a\n> reason not to do so.\n\nDone, and, as such, this patch is dropped from the set.\n\n- Melane", "msg_date": "Thu, 11 Jan 2024 18:41:52 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Fri, Jan 5, 2024 at 5:51 AM Nazir Bilal Yavuz <[email protected]> wrote:\n>\n> On Fri, 5 Jan 2024 at 02:25, Jim Nasby <[email protected]> wrote:\n> >\n> > On 1/4/24 2:23 PM, Andres Freund wrote:\n> >\n> > On 2024-01-02 12:36:18 -0500, Melanie Plageman wrote:\n> >\n> > Subject: [PATCH v2 1/6] lazy_scan_skip remove unnecessary local var rel_pages\n> > Subject: [PATCH v2 2/6] lazy_scan_skip remove unneeded local var\n> > nskippable_blocks\n> >\n> > I think these may lead to worse code - the compiler has to reload\n> > vacrel->rel_pages/next_unskippable_block for every loop iteration, because it\n> > can't guarantee that they're not changed within one of the external functions\n> > called in the loop body.\n> >\n> > Admittedly I'm not up to speed on recent vacuum changes, but I have to wonder if the concept of skipping should go away in the context of vector IO? 
Instead of thinking about \"we can skip this range of blocks\", why not maintain a list of \"here's the next X number of blocks that we need to vacuum\"?\n>\n> Sorry if I misunderstood. AFAIU, with the help of the vectored IO;\n> \"the next X number of blocks that need to be vacuumed\" will be\n> prefetched by calculating the unskippable blocks ( using the\n> lazy_scan_skip() function ) and the X will be determined by Postgres\n> itself. Do you have something different in your mind?\n\nI think you are both right. As we gain more control of readahead from\nwithin Postgres, we will likely want to revisit this heuristic as it\nmay not serve us anymore. But the streaming read interface/vectored\nI/O is also not a drop-in replacement for it. To change anything and\nensure there is no regression, we will probably have to do\ncross-platform benchmarking, though.\n\nThat being said, I would absolutely love to get rid of the skippable\nranges because I find them very error-prone and confusing. Hopefully\nnow that the skipping logic is isolated to a single function, it will\nbe easier not to trip over it when working on lazy_scan_heap().\n\n- Melanie\n\n\n", "msg_date": "Thu, 11 Jan 2024 18:50:50 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On 1/11/24 5:50 PM, Melanie Plageman wrote:\n> On Fri, Jan 5, 2024 at 5:51 AM Nazir Bilal Yavuz <[email protected]> wrote:\n>>\n>> On Fri, 5 Jan 2024 at 02:25, Jim Nasby <[email protected]> wrote:\n>>>\n>>> On 1/4/24 2:23 PM, Andres Freund wrote:\n>>>\n>>> On 2024-01-02 12:36:18 -0500, Melanie Plageman wrote:\n>>>\n>>> Subject: [PATCH v2 1/6] lazy_scan_skip remove unnecessary local var rel_pages\n>>> Subject: [PATCH v2 2/6] lazy_scan_skip remove unneeded local var\n>>> nskippable_blocks\n>>>\n>>> I think these may lead to worse code - the compiler has to reload\n>>> vacrel->rel_pages/next_unskippable_block for every loop iteration, because it\n>>> can't guarantee that they're not changed within one of the external functions\n>>> called in the loop body.\n>>>\n>>> Admittedly I'm not up to speed on recent vacuum changes, but I have to wonder if the concept of skipping should go away in the context of vector IO? Instead of thinking about \"we can skip this range of blocks\", why not maintain a list of \"here's the next X number of blocks that we need to vacuum\"?\n>>\n>> Sorry if I misunderstood. AFAIU, with the help of the vectored IO;\n>> \"the next X number of blocks that need to be vacuumed\" will be\n>> prefetched by calculating the unskippable blocks ( using the\n>> lazy_scan_skip() function ) and the X will be determined by Postgres\n>> itself. Do you have something different in your mind?\n> \n> I think you are both right. As we gain more control of readahead from\n> within Postgres, we will likely want to revisit this heuristic as it\n> may not serve us anymore. But the streaming read interface/vectored\n> I/O is also not a drop-in replacement for it. To change anything and\n> ensure there is no regression, we will probably have to do\n> cross-platform benchmarking, though.\n> \n> That being said, I would absolutely love to get rid of the skippable\n> ranges because I find them very error-prone and confusing. 
Hopefully\n> now that the skipping logic is isolated to a single function, it will\n> be easier not to trip over it when working on lazy_scan_heap().\n\nYeah, arguably it's just a matter of semantics, but IMO it's a lot \nclearer to simply think in terms of \"here's the next blocks we know we \nwant to vacuum\" instead of \"we vacuum everything, but sometimes we skip \nsome blocks\".\n-- \nJim Nasby, Data Architect, Austin TX\n\n\n\n", "msg_date": "Fri, 12 Jan 2024 13:02:33 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Fri, Jan 12, 2024 at 2:02 PM Jim Nasby <[email protected]> wrote:\n>\n> On 1/11/24 5:50 PM, Melanie Plageman wrote:\n> > On Fri, Jan 5, 2024 at 5:51 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> >>\n> >> On Fri, 5 Jan 2024 at 02:25, Jim Nasby <[email protected]> wrote:\n> >>>\n> >>> On 1/4/24 2:23 PM, Andres Freund wrote:\n> >>>\n> >>> On 2024-01-02 12:36:18 -0500, Melanie Plageman wrote:\n> >>>\n> >>> Subject: [PATCH v2 1/6] lazy_scan_skip remove unnecessary local var rel_pages\n> >>> Subject: [PATCH v2 2/6] lazy_scan_skip remove unneeded local var\n> >>> nskippable_blocks\n> >>>\n> >>> I think these may lead to worse code - the compiler has to reload\n> >>> vacrel->rel_pages/next_unskippable_block for every loop iteration, because it\n> >>> can't guarantee that they're not changed within one of the external functions\n> >>> called in the loop body.\n> >>>\n> >>> Admittedly I'm not up to speed on recent vacuum changes, but I have to wonder if the concept of skipping should go away in the context of vector IO? Instead of thinking about \"we can skip this range of blocks\", why not maintain a list of \"here's the next X number of blocks that we need to vacuum\"?\n> >>\n> >> Sorry if I misunderstood. AFAIU, with the help of the vectored IO;\n> >> \"the next X number of blocks that need to be vacuumed\" will be\n> >> prefetched by calculating the unskippable blocks ( using the\n> >> lazy_scan_skip() function ) and the X will be determined by Postgres\n> >> itself. Do you have something different in your mind?\n> >\n> > I think you are both right. As we gain more control of readahead from\n> > within Postgres, we will likely want to revisit this heuristic as it\n> > may not serve us anymore. But the streaming read interface/vectored\n> > I/O is also not a drop-in replacement for it. To change anything and\n> > ensure there is no regression, we will probably have to do\n> > cross-platform benchmarking, though.\n> >\n> > That being said, I would absolutely love to get rid of the skippable\n> > ranges because I find them very error-prone and confusing. Hopefully\n> > now that the skipping logic is isolated to a single function, it will\n> > be easier not to trip over it when working on lazy_scan_heap().\n>\n> Yeah, arguably it's just a matter of semantics, but IMO it's a lot\n> clearer to simply think in terms of \"here's the next blocks we know we\n> want to vacuum\" instead of \"we vacuum everything, but sometimes we skip\n> some blocks\".\n\nEven \"we vacuum some stuff, but sometimes we skip some blocks\" would\nbe okay. 
What we have now is \"we vacuum some stuff, but sometimes we\nskip some blocks, but only if we would skip enough blocks, and, when\nwe decide to do that we can't go back and actually get visibility\ninformation for those blocks we skipped because we are too cheap\"\n\n- Melanie\n\n\n", "msg_date": "Fri, 12 Jan 2024 14:09:23 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Fri, 12 Jan 2024 at 05:12, Melanie Plageman\n<[email protected]> wrote:\n>\n> v3 attached\n>\n> On Thu, Jan 4, 2024 at 3:23 PM Andres Freund <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On 2024-01-02 12:36:18 -0500, Melanie Plageman wrote:\n> > > Subject: [PATCH v2 1/6] lazy_scan_skip remove unnecessary local var rel_pages\n> > > Subject: [PATCH v2 2/6] lazy_scan_skip remove unneeded local var\n> > > nskippable_blocks\n> >\n> > I think these may lead to worse code - the compiler has to reload\n> > vacrel->rel_pages/next_unskippable_block for every loop iteration, because it\n> > can't guarantee that they're not changed within one of the external functions\n> > called in the loop body.\n>\n> I buy that for 0001 but 0002 is still using local variables.\n> nskippable_blocks was just another variable to keep track of even\n> though we could already get that info from local variables\n> next_unskippable_block and next_block.\n>\n> In light of this comment, I've refactored 0003/0004 (0002 and 0003 in\n> this version [v3]) to use local variables in the loop as well. I had\n> started using the members of the VacSkipState which I introduced.\n>\n> > > Subject: [PATCH v2 3/6] Add lazy_scan_skip unskippable state\n> > >\n> > > Future commits will remove all skipping logic from lazy_scan_heap() and\n> > > confine it to lazy_scan_skip(). To make those commits more clear, first\n> > > introduce the struct, VacSkipState, which will maintain the variables\n> > > needed to skip ranges less than SKIP_PAGES_THRESHOLD.\n> >\n> > Why not add this to LVRelState, possibly as a struct embedded within it?\n>\n> Done in attached.\n>\n> > > From 335faad5948b2bec3b83c2db809bb9161d373dcb Mon Sep 17 00:00:00 2001\n> > > From: Melanie Plageman <[email protected]>\n> > > Date: Sat, 30 Dec 2023 16:59:27 -0500\n> > > Subject: [PATCH v2 4/6] Confine vacuum skip logic to lazy_scan_skip\n> >\n> > > By always calling lazy_scan_skip() -- instead of only when we have reached the\n> > > next unskippable block, we no longer need the skipping_current_range variable.\n> > > lazy_scan_heap() no longer needs to manage the skipped range -- checking if we\n> > > reached the end in order to then call lazy_scan_skip(). And lazy_scan_skip()\n> > > can derive the visibility status of a block from whether or not we are in a\n> > > skippable range -- that is, whether or not the next_block is equal to the next\n> > > unskippable block.\n> >\n> > I wonder if it should be renamed as part of this - the name is somewhat\n> > confusing now (and perhaps before)? lazy_scan_get_next_block() or such?\n>\n> Why stop there! I've removed lazy and called it\n> heap_vac_scan_get_next_block() -- a little long, but...\n>\n> > > + while (true)\n> > > {\n> > > Buffer buf;\n> > > Page page;\n> > > - bool all_visible_according_to_vm;\n> > > LVPagePruneState prunestate;\n> > >\n> > > - if (blkno == vacskip.next_unskippable_block)\n> > > - {\n> > > - /*\n> > > - * Can't skip this page safely. Must scan the page. 
But\n> > > - * determine the next skippable range after the page first.\n> > > - */\n> > > - all_visible_according_to_vm = vacskip.next_unskippable_allvis;\n> > > - lazy_scan_skip(vacrel, &vacskip, blkno + 1);\n> > > -\n> > > - Assert(vacskip.next_unskippable_block >= blkno + 1);\n> > > - }\n> > > - else\n> > > - {\n> > > - /* Last page always scanned (may need to set nonempty_pages) */\n> > > - Assert(blkno < rel_pages - 1);\n> > > -\n> > > - if (vacskip.skipping_current_range)\n> > > - continue;\n> > > + blkno = lazy_scan_skip(vacrel, &vacskip, blkno + 1,\n> > > + &all_visible_according_to_vm);\n> > >\n> > > - /* Current range is too small to skip -- just scan the page */\n> > > - all_visible_according_to_vm = true;\n> > > - }\n> > > + if (blkno == InvalidBlockNumber)\n> > > + break;\n> > >\n> > > vacrel->scanned_pages++;\n> > >\n> >\n> > I don't like that we still do determination about the next block outside of\n> > lazy_scan_skip() and have duplicated exit conditions between lazy_scan_skip()\n> > and lazy_scan_heap().\n> >\n> > I'd probably change the interface to something like\n> >\n> > while (lazy_scan_get_next_block(vacrel, &blkno))\n> > {\n> > ...\n> > }\n>\n> I've done this. I do now find the parameter names a bit confusing.\n> There is next_block (which is the \"next block in line\" and is an input\n> parameter) and blkno, which is an output parameter with the next block\n> that should actually be processed. Maybe it's okay?\n>\n> > > From b6603e35147c4bbe3337280222e6243524b0110e Mon Sep 17 00:00:00 2001\n> > > From: Melanie Plageman <[email protected]>\n> > > Date: Sun, 31 Dec 2023 09:47:18 -0500\n> > > Subject: [PATCH v2 5/6] VacSkipState saves reference to LVRelState\n> > >\n> > > The streaming read interface can only give pgsr_next callbacks access to\n> > > two pieces of private data. As such, move a reference to the LVRelState\n> > > into the VacSkipState.\n> > >\n> > > This is a separate commit (as opposed to as part of the commit\n> > > introducing VacSkipState) because it is required for using the streaming\n> > > read interface but not a natural change on its own. VacSkipState is per\n> > > block and the LVRelState is referenced for the whole relation vacuum.\n> >\n> > I'd do it the other way round, i.e. 
either embed VacSkipState ino LVRelState\n> > or point to it from VacSkipState.\n> >\n> > LVRelState is already tied to the iteration state, so I don't think there's a\n> > reason not to do so.\n>\n> Done, and, as such, this patch is dropped from the set.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== applying patch\n./v3-0002-Add-lazy_scan_skip-unskippable-state-to-LVRelStat.patch\npatching file src/backend/access/heap/vacuumlazy.c\n...\nHunk #10 FAILED at 1042.\nHunk #11 FAILED at 1121.\nHunk #12 FAILED at 1132.\nHunk #13 FAILED at 1161.\nHunk #14 FAILED at 1172.\nHunk #15 FAILED at 1194.\n...\n6 out of 21 hunks FAILED -- saving rejects to file\nsrc/backend/access/heap/vacuumlazy.c.rej\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_4755.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 26 Jan 2024 18:58:05 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Fri, Jan 26, 2024 at 8:28 AM vignesh C <[email protected]> wrote:\n>\n> CFBot shows that the patch does not apply anymore as in [1]:\n> === applying patch\n> ./v3-0002-Add-lazy_scan_skip-unskippable-state-to-LVRelStat.patch\n> patching file src/backend/access/heap/vacuumlazy.c\n> ...\n> Hunk #10 FAILED at 1042.\n> Hunk #11 FAILED at 1121.\n> Hunk #12 FAILED at 1132.\n> Hunk #13 FAILED at 1161.\n> Hunk #14 FAILED at 1172.\n> Hunk #15 FAILED at 1194.\n> ...\n> 6 out of 21 hunks FAILED -- saving rejects to file\n> src/backend/access/heap/vacuumlazy.c.rej\n>\n> Please post an updated version for the same.\n>\n> [1] - http://cfbot.cputube.org/patch_46_4755.log\n\nFixed in attached rebased v4\n\n- Melanie", "msg_date": "Mon, 29 Jan 2024 20:18:45 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Mon, Jan 29, 2024 at 8:18 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Fri, Jan 26, 2024 at 8:28 AM vignesh C <[email protected]> wrote:\n> >\n> > CFBot shows that the patch does not apply anymore as in [1]:\n> > === applying patch\n> > ./v3-0002-Add-lazy_scan_skip-unskippable-state-to-LVRelStat.patch\n> > patching file src/backend/access/heap/vacuumlazy.c\n> > ...\n> > Hunk #10 FAILED at 1042.\n> > Hunk #11 FAILED at 1121.\n> > Hunk #12 FAILED at 1132.\n> > Hunk #13 FAILED at 1161.\n> > Hunk #14 FAILED at 1172.\n> > Hunk #15 FAILED at 1194.\n> > ...\n> > 6 out of 21 hunks FAILED -- saving rejects to file\n> > src/backend/access/heap/vacuumlazy.c.rej\n> >\n> > Please post an updated version for the same.\n> >\n> > [1] - http://cfbot.cputube.org/patch_46_4755.log\n>\n> Fixed in attached rebased v4\n\nIn light of Thomas' update to the streaming read API [1], I have\nrebased and updated this patch set.\n\nThe attached v5 has some simplifications when compared to v4 but takes\nlargely the same approach.\n\n0001-0004 are refactoring\n0005 is the streaming read code not yet in master\n0006 is the vacuum streaming read user for vacuum's first pass\n0007 is the vacuum streaming read user for vacuum's second pass\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGJtLyxcAEvLhVUhgD4fMQkOu3PDaj8Qb9SR_UsmzgsBpQ%40mail.gmail.com", "msg_date": "Tue, 27 Feb 2024 14:47:03 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On 
27/02/2024 21:47, Melanie Plageman wrote:\n> The attached v5 has some simplifications when compared to v4 but takes\n> largely the same approach.\n> \n> 0001-0004 are refactoring\n\nI'm looking at just these 0001-0004 patches for now. I like those \nchanges a lot for the sake of readablity even without any of the later \npatches.\n\nI made some further changes. I kept them as separate commits for easier \nreview, see the commit messages for details. Any thoughts on those changes?\n\nI feel heap_vac_scan_get_next_block() function could use some love. \nMaybe just some rewording of the comments, or maybe some other \nrefactoring; not sure. But I'm pretty happy with the function signature \nand how it's called.\n\nBTW, do we have tests that would fail if we botched up \nheap_vac_scan_get_next_block() so that it would skip pages incorrectly, \nfor example? Not asking you to write them for this patch, but I'm just \nwondering.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Wed, 6 Mar 2024 21:55:21 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Tue, Feb 27, 2024 at 02:47:03PM -0500, Melanie Plageman wrote:\n> On Mon, Jan 29, 2024 at 8:18 PM Melanie Plageman\n> <[email protected]> wrote:\n> >\n> > On Fri, Jan 26, 2024 at 8:28 AM vignesh C <[email protected]> wrote:\n> > >\n> > > CFBot shows that the patch does not apply anymore as in [1]:\n> > > === applying patch\n> > > ./v3-0002-Add-lazy_scan_skip-unskippable-state-to-LVRelStat.patch\n> > > patching file src/backend/access/heap/vacuumlazy.c\n> > > ...\n> > > Hunk #10 FAILED at 1042.\n> > > Hunk #11 FAILED at 1121.\n> > > Hunk #12 FAILED at 1132.\n> > > Hunk #13 FAILED at 1161.\n> > > Hunk #14 FAILED at 1172.\n> > > Hunk #15 FAILED at 1194.\n> > > ...\n> > > 6 out of 21 hunks FAILED -- saving rejects to file\n> > > src/backend/access/heap/vacuumlazy.c.rej\n> > >\n> > > Please post an updated version for the same.\n> > >\n> > > [1] - http://cfbot.cputube.org/patch_46_4755.log\n> >\n> > Fixed in attached rebased v4\n> \n> In light of Thomas' update to the streaming read API [1], I have\n> rebased and updated this patch set.\n> \n> The attached v5 has some simplifications when compared to v4 but takes\n> largely the same approach.\n\nAttached is a patch set (v5a) which updates the streaming read user for\nvacuum to fix an issue Andrey Borodin pointed out to me off-list.\n\nNote that I started writing this email before seeing Heikki's upthread\nreview [1], so I will respond to that in a bit. There are no changes in\nv5a to any of the prelim refactoring patches which Heikki reviewed in\nthat email. I only changed the vacuum streaming read users (last two\npatches in the set).\n\nBack to this patch set:\nAndrey pointed out that it was failing to compile on windows and the\nreason is that I had accidentally left an undefined variable \"index\" in\nthese places\n\n\tAssert(index > 0);\n...\n\tereport(DEBUG2,\n\t\t\t(errmsg(\"table \\\"%s\\\": removed %lld dead item identifiers in %u pages\",\n\t\t\t\t\tvacrel->relname, (long long) index, vacuumed_pages)));\n\nSee https://cirrus-ci.com/task/6312305361682432\n\nI don't understand how this didn't warn me (or fail to compile) for an\nassert build on my own workstation. 
It seems to think \"index\" is a\nfunction?\n\nAnyway, thinking about what the correct assertion would be here:\n\n\tAssert(index > 0);\n\tAssert(vacrel->num_index_scans > 1 ||\n\t\t (rbstate->end_idx == vacrel->lpdead_items &&\n\t\t\tvacuumed_pages == vacrel->lpdead_item_pages));\n\nI think I can just replace \"index\" with \"rbstate->end_index\". At the end\nof reaping, this should have the same value that index would have had.\nThe issue with this is if pg_streaming_read_buffer_get_next() somehow\nnever returned a valid buffer (there were no dead items), then rbstate\nwould potentially be uninitialized. The old assertion (index > 0) would\nonly have been true if there were some dead items, but there isn't an\nexplicit assertion in this function that there were some dead items.\nPerhaps it is worth adding this? Even if we add this, perhaps it is\nunacceptable from a programming standpoint to use rbstate in that scope?\n\nIn addition to fixing this slip-up, I have done some performance testing\nfor streaming read vacuum. Note that these tests are for both vacuum\npasses (1 and 2) using streaming read.\n\nPerformance results:\n\nThe TL;DR of my performance results is that streaming read vacuum is\nfaster. However there is an issue with the interaction of the streaming\nread code and the vacuum buffer access strategy which must be addressed.\n\nNote that \"master\" in the results below is actually just a commit on my\nbranch [2] before the one adding the vacuum streaming read users. So it\nincludes all of my refactoring of the vacuum code from the preliminary\npatches.\n\nI tested two vacuum \"data states\". Both are relatively small tables\nbecause the impact of streaming read can easily be seen even at small\ntable sizes. DDL for both data states is at the end of the email.\n\nThe first data state is a 28 MB table which has never been vacuumed and\nhas one or two dead tuples on every block. All of the blocks have dead\ntuples, so all of the blocks must be vacuumed. We'll call this the\n\"sequential\" data state.\n\nThe second data state is a 67 MB table which has been vacuumed and then\na small percentage of the blocks (non-consecutive blocks at irregular\nintervals) are updated afterward. Because the visibility map has been\nupdated and only a few blocks have dead tuples, large ranges of blocks\ndo not need to be vacuumed. There is at least one run of blocks with\ndead tuples larger than 1 block but most of the blocks with dead tuples\nare a single block followed by many blocks with no dead tuples. We'll\ncall this the \"few\" data state.\n\nI tested these data states with \"master\" and with streaming read vacuum\nwith three caching options:\n\n- table data fully in shared buffers (via pg_prewarm)\n- table data in the kernel buffer cache but not in shared buffers\n- table data completely uncached\n\nI tested the OS cached and uncached caching options with both the\ndefault vacuum buffer access strategy and with BUFFER_USAGE_LIMIT 0\n(which uses as many shared buffers as needed).\n\nFor the streaming read vacuum, I tested with maintenance_io_concurrency\n10, 100, and 1000. 
10 is the current default on master.\nmaintenance_io_concurrency is not used by vacuum on master AFAICT.\n\nmaintenance_io_concurrency is used by streaming read to determine how\nmany buffers it can pin at the same time (with the hope of combining\nconsecutive blocks into larger IOs) and, in the case of vacuum, it is\nused to determine prefetch distance.\n\nIn the following results, I ran vacuum at least five times and averaged\nthe timing results.\n\nTable data cached in shared buffers\n===================================\n\nSequential data state\n---------------------\n\nThe only noticeable difference in performance was that streaming read\nvacuum took 2% longer than master (19 ms vs 18.6 ms). It was a bit more\nnoticeable at maintenance_io_concurrency 1000 than 10.\n\nThis may be resolved by a patch Thomas is working on to avoid pinning\ntoo many buffers if larger IOs cannot be created (like in a fully SB\nresident workload). We should revisit this when that patch is available.\n\nFew data state\n--------------\n\nThere was no difference in timing for any of the scenarios.\n\nTable data cached in OS buffer cache\n====================================\n\nSequential data state\n---------------------\n\nWith the buffer access strategy disabled, streaming read vacuum took 11%\nless time regardless of maintenance_io_concurrency (26 ms vs 23 ms).\n\nWith the default vacuum buffer access strategy,\nmaintenance_io_concurrency had a large impact:\n\n Note that \"mic\" is maintenace_io_concurrency\n\n| data state | code | mic | time (ms) |\n+------------+-----------+------+-----------+\n| sequential | master | NA | 99 |\n| sequential | streaming | 10 | 122 |\n| sequential | streaming | 100 | 28 |\n\nThe streaming read API calculates the maximum number of pinned buffers\nas 4 * maintenance_io_concurrency. The default vacuum buffer access\nstrategy ring buffer is 256 kB -- which is 32 buffers.\n\nWith maintenance_io_concurrency 10, streaming read code wants to pin 40\nbuffers. There is likely an interaction between this and the buffer\naccess strategy which leads to the slowdown at\nmaintenance_io_concurrency 10.\n\nWe could change the default maintenance_io_concurrency, but a better\noption is to take the buffer access strategy into account in the\nstreaming read code.\n\nFew data state\n--------------\n\nThere was no difference in timing for any of the scenarios.\n\nTable data uncached\n===================\n\nSequential data state\n---------------------\n\nWhen the buffer access strategy is disabled, streaming read vacuum takes\n12% less time regardless of maintenance_io_concurrency (36 ms vs 41 ms).\n\nWith the default buffer access strategy (ring buffer 256 kB) and\nmaintenance_io_concurrency 10 (the default), the streaming read vacuum\ntakes 19% more time. But if we bump maintenance_io_concurrency up to\n100+, streaming read vacuum takes 64% less time:\n\n| data state | code | mic | time (ms) |\n+------------+-----------+------+-----------+\n| sequential | master | NA | 113 |\n| sequential | streaming | 10 | 140 |\n| sequential | streaming | 100 | 41 |\n\nThis is likely due to the same adverse interaction between streaming\nreads' max pinned buffers and the buffer access strategy ring buffer\nsize.\n\nFew data state\n--------------\n\nThe buffer access strategy had no impact here, so all of these results\nare with the default buffer access strategy. 
The streaming read vacuum\ntakes 20-25% less time than master vacuum.\n\n| data state | code | mic | time (ms) |\n+------------+-----------+------+-----------+\n| few | master | NA | 4.5 |\n| few | streaming | 10 | 3.4 |\n| few | streaming | 100 | 3.5 |\n\nThe improvement is likely due to prefetching and the one range of\nconsecutive blocks containing dead tuples which could be merged into a\nlarger IO.\n\nHigher maintenance_io_concurrency only helps a little probably because:\n\n1) most the blocks to vacuum are not consecutive so we can't make bigger\nIOs in most cases\n2) we are not vacuuming enough blocks such that we want to prefetch more\nthan 10 blocks.\n\nThis experiment should probably be redone with larger tables containing\nmore blocks needing vacuum. At 3-4 ms, a 20% performance difference\nisn't really that interesting.\n\nThe next step (other than the preliminary refactoring patches) is to\ndecide how the streaming read API should use the buffer access strategy.\n\nSequential Data State DDL:\n drop table if exists foo;\n create table foo (a int) with (autovacuum_enabled=false, fillfactor=25);\n insert into foo select i % 3 from generate_series(1,200000)i;\n update foo set a = 5 where a = 1;\n\nFew Data State DDL:\n drop table if exists foo;\n create table foo (a int) with (autovacuum_enabled=false, fillfactor=25);\n insert into foo select i from generate_series(2,20000)i;\n insert into foo select 1 from generate_series(1,200)i;\n insert into foo select i from generate_series(2,20000)i;\n insert into foo select 1 from generate_series(1,200)i;\n insert into foo select i from generate_series(2,200000)i;\n insert into foo select 1 from generate_series(1,200)i;\n insert into foo select i from generate_series(2,20000)i;\n insert into foo select 1 from generate_series(1,2000)i;\n insert into foo select i from generate_series(2,20000)i;\n insert into foo select 1 from generate_series(1,200)i;\n insert into foo select i from generate_series(2,200000)i;\n insert into foo select 1 from generate_series(1,200)i;\n vacuum (freeze) foo;\n update foo set a = 5 where a = 1;\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/1eeccf12-d5d1-4b7e-b88b-7342410129d7%40iki.fi\n[2] https://github.com/melanieplageman/postgres/tree/vac_pgsr", "msg_date": "Wed, 6 Mar 2024 18:47:33 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Wed, Mar 06, 2024 at 09:55:21PM +0200, Heikki Linnakangas wrote:\n> On 27/02/2024 21:47, Melanie Plageman wrote:\n> > The attached v5 has some simplifications when compared to v4 but takes\n> > largely the same approach.\n> > \n> > 0001-0004 are refactoring\n> \n> I'm looking at just these 0001-0004 patches for now. I like those changes a\n> lot for the sake of readablity even without any of the later patches.\n\nThanks! And thanks so much for the review!\n\nI've done a small performance experiment comparing a branch with all of\nthe patches applied (your v6 0001-0009) with master. I made an 11 GB\ntable that has 1,394,328 blocks. For setup, I vacuumed it to update the\nVM and made sure it was entirely in shared buffers. All of this was to\nmake sure all of the blocks would be skipped and we spend the majority\nof the time spinning through the lazy_scan_heap() code. Then I ran\nvacuum again (the actual test). 
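Roughly, that setup amounts to something like the following (the table\nname and row count are placeholders, sized to land a bit over 10 GB):\n\n    CREATE EXTENSION IF NOT EXISTS pg_prewarm;\n    CREATE TABLE skiptest (a int, filler text)\n        WITH (autovacuum_enabled = false);\n    INSERT INTO skiptest\n        SELECT i, repeat('x', 80) FROM generate_series(1, 100000000) i;\n    VACUUM skiptest;                -- marks every heap page all-visible in the VM\n    SELECT pg_prewarm('skiptest');  -- needs shared_buffers large enough to hold it\n    VACUUM skiptest;                -- the timed run: every block is skippable\n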
I saw vacuum go from 13 ms to 10 ms\nwith the patches applied.\n\nI think I need to do some profiling to see if the difference is actually\ndue to our code changes, but I thought I would share preliminary\nresults.\n\n> I made some further changes. I kept them as separate commits for easier\n> review, see the commit messages for details. Any thoughts on those changes?\n\nI've given some inline feedback on most of the extra patches you added.\nShort answer is they all seem fine to me except I have a reservations\nabout 0008 because of the number of blkno variables flying around. I\ndidn't have a chance to rebase these into my existing changes today, so\neither I will do it tomorrow or, if you are feeling like you're on a\nroll and want to do it, that also works!\n\n> I feel heap_vac_scan_get_next_block() function could use some love. Maybe\n> just some rewording of the comments, or maybe some other refactoring; not\n> sure. But I'm pretty happy with the function signature and how it's called.\n\nI was wondering if we should remove the \"get\" and just go with\nheap_vac_scan_next_block(). I didn't do that originally because I didn't\nwant to imply that the next block was literally the sequentially next\nblock, but I think maybe I was overthinking it.\n\nAnother idea is to call it heap_scan_vac_next_block() and then the order\nof the words is more like the table AM functions that get the next block\n(e.g. heapam_scan_bitmap_next_block()). Though maybe we don't want it to\nbe too similar to those since this isn't a table AM callback.\n\nAs for other refactoring and other rewording of comments and such, I\nwill take a pass at this tomorrow.\n\n> BTW, do we have tests that would fail if we botched up\n> heap_vac_scan_get_next_block() so that it would skip pages incorrectly, for\n> example? Not asking you to write them for this patch, but I'm just\n> wondering.\n\nSo, while developing this, when I messed up and skipped blocks I\nshouldn't, vacuum would error out with the \"found xmin from before\nrelfrozenxid\" error -- which would cause random tests to fail. I know\nthat's not a correctly failing test of this code. 
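If we did want something more targeted, one possible building block is\nthe pg_visibility extension, which can both dump the VM bits that drive\nthe skipping decisions and cross-check them against the heap ('foo'\nstands in for whatever table the test uses):\n\n    CREATE EXTENSION IF NOT EXISTS pg_visibility;\n    -- the per-block VM input that heap_vac_scan_next_block() works from\n    SELECT blkno, all_visible, all_frozen FROM pg_visibility_map('foo');\n    -- TIDs that are not actually all-visible/all-frozen despite the VM bit\n    SELECT * FROM pg_check_visible('foo');\n    SELECT * FROM pg_check_frozen('foo');\n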
I think there might be\nsome tests in the verify_heapam tests that could/do test this kind of\nthing but I don't remember them failing for me during development -- so\nI didn't spend much time looking at them.\n\nI would also sometimes get freespace or VM tests that would fail because\nthose blocks that are incorrectly skipped were meant to be reflected in\nthe FSM or VM in those tests.\n\nAll of that is to say, perhaps we should write a more targeted test?\n\nWhen I was writing the code, I added logging of skipped blocks and then\ncame up with different scenarios and ran them on master and with the\npatch and diffed the logs.\n\n> From b4047b941182af0643838fde056c298d5cc3ae32 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Wed, 6 Mar 2024 20:13:42 +0200\n> Subject: [PATCH v6 5/9] Remove unused 'skipping_current_range' field\n> \n> ---\n> src/backend/access/heap/vacuumlazy.c | 2 --\n> 1 file changed, 2 deletions(-)\n> \n> diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n> index 65d257aab83..51391870bf3 100644\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> @@ -217,8 +217,6 @@ typedef struct LVRelState\n> \t\tBuffer\t\tvmbuffer;\n> \t\t/* Next unskippable block's visibility status */\n> \t\tbool\t\tnext_unskippable_allvis;\n> -\t\t/* Whether or not skippable blocks should be skipped */\n> -\t\tbool\t\tskipping_current_range;\n> \t}\t\t\tskip;\n> } LVRelState;\n> \n> -- \n> 2.39.2\n> \n\nOops! I thought I removed this. I must have forgotten\n\n> From 27e431e8dc69bbf09d831cb1cf2903d16f177d74 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Wed, 6 Mar 2024 20:58:57 +0200\n> Subject: [PATCH v6 6/9] Move vmbuffer back to a local varible in\n> lazy_scan_heap()\n> \n> It felt confusing that we passed around the current block, 'blkno', as\n> an argument to lazy_scan_new_or_empty() and lazy_scan_prune(), but\n> 'vmbuffer' was accessed directly in the 'scan_state'.\n> \n> It was also a bit vague, when exactly 'vmbuffer' was valid. Calling\n> heap_vac_scan_get_next_block() set it, sometimes, to a buffer that\n> might or might not contain the VM bit for 'blkno'. But other\n> functions, like lazy_scan_prune(), assumed it to contain the correct\n> buffer. That was fixed up visibilitymap_pin(). But clearly it was not\n> \"owned\" by heap_vac_scan_get_next_block(), like the other 'scan_state'\n> fields.\n> \n> I moved it back to a local variable, like it was. Maybe there would be\n> even better ways to handle it, but at least this is not worse than\n> what we have in master currently.\n\nI'm fine with this. I did it the way I did (grouping it with the\n\"next_unskippable_block\" in the skip struct), because I think that this\nvmbuffer is always the buffer containing the VM bit for the next\nunskippable block -- which sometimes is the block returned by\nheap_vac_scan_get_next_block() and sometimes isn't.\n\nI agree it might be best as a local variable but perhaps we could retain\nthe comment about it being the block of the VM containing the bit for the\nnext unskippable block. (Honestly, the whole thing is very confusing).\n\n> From 519e26a01b6e6974f9e0edb94b00756af053f7ee Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Wed, 6 Mar 2024 20:27:57 +0200\n> Subject: [PATCH v6 7/9] Rename skip_state\n> \n> I don't want to emphasize the \"skipping\" part. 
Rather, it's the state\n> onwed by the heap_vac_scan_get_next_block() function\n\nThis makes sense to me. Skipping should be private details of vacuum's\nget_next_block functionality. Though the name is a bit long. Maybe we\ndon't need the \"get\" and \"state\" parts (it is already in a struct with\nstate in the name)?\n\n> From 6dfae936a29e2d3479273f8ab47778a596258b16 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Wed, 6 Mar 2024 21:03:19 +0200\n> Subject: [PATCH v6 8/9] Track 'current_block' in the skip state\n> \n> The caller was expected to always pass last blk + 1. It's not clear if\n> the next_unskippable block accounting would work correctly if you\n> passed something else. So rather than expecting the caller to do that,\n> have heap_vac_scan_get_next_block() keep track of the last returned\n> block itself, in the 'skip' state.\n> \n> This is largely redundant with the LVRelState->blkno field. But that\n> one is currently only used for error reporting, so it feels best to\n> give heap_vac_scan_get_next_block() its own field that it owns.\n\nI understand and agree with you that relying on blkno + 1 is bad and we\nshould make the \"next_block\" state keep track of the current block.\n\nThough, I now find it easy to confuse\nlvrelstate->get_next_block_state->current_block, lvrelstate->blkno and\nthe local variable blkno in lazy_scan_heap(). I think it is a naming\nthing and not that we shouldn't have all three. I'll think more about it\nin the morning.\n\n> From 619556cad4aad68d1711c12b962e9002e56d8db2 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Wed, 6 Mar 2024 21:35:11 +0200\n> Subject: [PATCH v6 9/9] Comment & whitespace cleanup\n> \n> I moved some of the paragraphs to inside the\n> heap_vac_scan_get_next_block() function. I found the explanation in\n> the function comment at the old place like too much detail. Someone\n> looking at the function signature and how to call it would not care\n> about all the details of what can or cannot be skipped.\n\nLGTM.\n\nThanks again.\n\n- Melanie\n\n\n", "msg_date": "Wed, 6 Mar 2024 22:00:23 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Wed, Mar 06, 2024 at 10:00:23PM -0500, Melanie Plageman wrote:\n> On Wed, Mar 06, 2024 at 09:55:21PM +0200, Heikki Linnakangas wrote:\n> > I made some further changes. I kept them as separate commits for easier\n> > review, see the commit messages for details. Any thoughts on those changes?\n> \n> I've given some inline feedback on most of the extra patches you added.\n> Short answer is they all seem fine to me except I have a reservations\n> about 0008 because of the number of blkno variables flying around. I\n> didn't have a chance to rebase these into my existing changes today, so\n> either I will do it tomorrow or, if you are feeling like you're on a\n> roll and want to do it, that also works!\n\nAttached v7 contains all of the changes that you suggested plus some\nadditional cleanups here and there.\n\n> > I feel heap_vac_scan_get_next_block() function could use some love. Maybe\n> > just some rewording of the comments, or maybe some other refactoring; not\n> > sure. But I'm pretty happy with the function signature and how it's called.\n\nI've cleaned up the comments on heap_vac_scan_next_block() in the first\ncouple patches (not so much in the streaming read user). 
Let me know if\nit addresses your feelings or if I should look for other things I could\nchange.\n\nI will say that now all of the variable names are *very* long. I didn't\nwant to remove the \"state\" from LVRelState->next_block_state. (In fact, I\nkind of miss the \"get\". But I had to draw the line somewhere.) I think\nwithout \"state\" in the name, next_block sounds too much like a function.\n\nAny ideas for shortening the names of next_block_state and its members\nor are you fine with them?\n\n> I was wondering if we should remove the \"get\" and just go with\n> heap_vac_scan_next_block(). I didn't do that originally because I didn't\n> want to imply that the next block was literally the sequentially next\n> block, but I think maybe I was overthinking it.\n> \n> Another idea is to call it heap_scan_vac_next_block() and then the order\n> of the words is more like the table AM functions that get the next block\n> (e.g. heapam_scan_bitmap_next_block()). Though maybe we don't want it to\n> be too similar to those since this isn't a table AM callback.\n\nI've done a version of this.\n\n> > From 27e431e8dc69bbf09d831cb1cf2903d16f177d74 Mon Sep 17 00:00:00 2001\n> > From: Heikki Linnakangas <[email protected]>\n> > Date: Wed, 6 Mar 2024 20:58:57 +0200\n> > Subject: [PATCH v6 6/9] Move vmbuffer back to a local varible in\n> > lazy_scan_heap()\n> > \n> > It felt confusing that we passed around the current block, 'blkno', as\n> > an argument to lazy_scan_new_or_empty() and lazy_scan_prune(), but\n> > 'vmbuffer' was accessed directly in the 'scan_state'.\n> > \n> > It was also a bit vague, when exactly 'vmbuffer' was valid. Calling\n> > heap_vac_scan_get_next_block() set it, sometimes, to a buffer that\n> > might or might not contain the VM bit for 'blkno'. But other\n> > functions, like lazy_scan_prune(), assumed it to contain the correct\n> > buffer. That was fixed up visibilitymap_pin(). But clearly it was not\n> > \"owned\" by heap_vac_scan_get_next_block(), like the other 'scan_state'\n> > fields.\n> > \n> > I moved it back to a local variable, like it was. Maybe there would be\n> > even better ways to handle it, but at least this is not worse than\n> > what we have in master currently.\n> \n> I'm fine with this. I did it the way I did (grouping it with the\n> \"next_unskippable_block\" in the skip struct), because I think that this\n> vmbuffer is always the buffer containing the VM bit for the next\n> unskippable block -- which sometimes is the block returned by\n> heap_vac_scan_get_next_block() and sometimes isn't.\n> \n> I agree it might be best as a local variable but perhaps we could retain\n> the comment about it being the block of the VM containing the bit for the\n> next unskippable block. (Honestly, the whole thing is very confusing).\n\nIn 0001-0004 I've stuck with only having the local variable vmbuffer in\nlazy_scan_heap().\n\nIn 0006 (introducing pass 1 vacuum streaming read user) I added a\nvmbuffer back to the next_block_state (while also keeping the local\nvariable vmbuffer in lazy_scan_heap()). The vmbuffer in lazy_scan_heap()\ncontains the block of the VM containing visi information for the next\nunskippable block or for the current block if its visi information\nhappens to be in the same block of the VM as either 1) the next\nunskippable block or 2) the most recently processed heap block.\n\nStreaming read vacuum separates this visibility check in\nheap_vac_scan_next_block() from the main loop of lazy_scan_heap(), so we\ncan't just use a local variable anymore. 
Now the local variable vmbuffer\nin lazy_scan_heap() will only already contain the block with the visi\ninformation for the to-be-processed block if it happens to be in the\nsame VM block as the most recently processed heap block. That means\npotentially more VM fetches.\n\nHowever, by adding a vmbuffer to next_block_state, the callback may be\nable to avoid extra VM fetches from one invocation to the next.\n\nNote that next_block->current_block in the streaming read vacuum context\nis actually the prefetch block.\n\n\n- Melanie", "msg_date": "Thu, 7 Mar 2024 19:46:14 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On 08/03/2024 02:46, Melanie Plageman wrote:\n> On Wed, Mar 06, 2024 at 10:00:23PM -0500, Melanie Plageman wrote:\n>>> I feel heap_vac_scan_get_next_block() function could use some love. Maybe\n>>> just some rewording of the comments, or maybe some other refactoring; not\n>>> sure. But I'm pretty happy with the function signature and how it's called.\n> \n> I've cleaned up the comments on heap_vac_scan_next_block() in the first\n> couple patches (not so much in the streaming read user). Let me know if\n> it addresses your feelings or if I should look for other things I could\n> change.\n\nThanks, that is better. I think I now finally understand how the \nfunction works, and now I can see some more issues and refactoring \nopportunities :-).\n\nLooking at current lazy_scan_skip() code in 'master', one thing now \ncaught my eye (and it's the same with your patches):\n\n> \t*next_unskippable_allvis = true;\n> \twhile (next_unskippable_block < rel_pages)\n> \t{\n> \t\tuint8\t\tmapbits = visibilitymap_get_status(vacrel->rel,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t next_unskippable_block,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t vmbuffer);\n> \n> \t\tif ((mapbits & VISIBILITYMAP_ALL_VISIBLE) == 0)\n> \t\t{\n> \t\t\tAssert((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0);\n> \t\t\t*next_unskippable_allvis = false;\n> \t\t\tbreak;\n> \t\t}\n> \n> \t\t/*\n> \t\t * Caller must scan the last page to determine whether it has tuples\n> \t\t * (caller must have the opportunity to set vacrel->nonempty_pages).\n> \t\t * This rule avoids having lazy_truncate_heap() take access-exclusive\n> \t\t * lock on rel to attempt a truncation that fails anyway, just because\n> \t\t * there are tuples on the last page (it is likely that there will be\n> \t\t * tuples on other nearby pages as well, but those can be skipped).\n> \t\t *\n> \t\t * Implement this by always treating the last block as unsafe to skip.\n> \t\t */\n> \t\tif (next_unskippable_block == rel_pages - 1)\n> \t\t\tbreak;\n> \n> \t\t/* DISABLE_PAGE_SKIPPING makes all skipping unsafe */\n> \t\tif (!vacrel->skipwithvm)\n> \t\t{\n> \t\t\t/* Caller shouldn't rely on all_visible_according_to_vm */\n> \t\t\t*next_unskippable_allvis = false;\n> \t\t\tbreak;\n> \t\t}\n> \n> \t\t/*\n> \t\t * Aggressive VACUUM caller can't skip pages just because they are\n> \t\t * all-visible. They may still skip all-frozen pages, which can't\n> \t\t * contain XIDs < OldestXmin (XIDs that aren't already frozen by now).\n> \t\t */\n> \t\tif ((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0)\n> \t\t{\n> \t\t\tif (vacrel->aggressive)\n> \t\t\t\tbreak;\n> \n> \t\t\t/*\n> \t\t\t * All-visible block is safe to skip in non-aggressive case. 
But\n> \t\t\t * remember that the final range contains such a block for later.\n> \t\t\t */\n> \t\t\tskipsallvis = true;\n> \t\t}\n> \n> \t\t/* XXX: is it OK to remove this? */\n> \t\tvacuum_delay_point();\n> \t\tnext_unskippable_block++;\n> \t\tnskippable_blocks++;\n> \t}\n\nFirstly, it seems silly to check DISABLE_PAGE_SKIPPING within the loop. \nWhen DISABLE_PAGE_SKIPPING is set, we always return the next block and \nset *next_unskippable_allvis = false regardless of the visibility map, \nso why bother checking the visibility map at all?\n\nExcept at the very last block of the relation! If you look carefully, \nat the last block we do return *next_unskippable_allvis = true, if the \nVM says so, even if DISABLE_PAGE_SKIPPING is set. I think that's wrong. \nSurely the intention was to pretend that none of the VM bits were set if \nDISABLE_PAGE_SKIPPING is used, also for the last block.\n\nThis was changed in commit 980ae17310:\n\n> @@ -1311,7 +1327,11 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,\n> \n> /* DISABLE_PAGE_SKIPPING makes all skipping unsafe */\n> if (!vacrel->skipwithvm)\n> + {\n> + /* Caller shouldn't rely on all_visible_according_to_vm */\n> + *next_unskippable_allvis = false;\n> break;\n> + }\n\nBefore that, *next_unskippable_allvis was set correctly according to the \nVM, even when DISABLE_PAGE_SKIPPING was used. It's not clear to me why \nthat was changed. And I think setting it to 'true' would be a more \nfailsafe value than 'false'. When *next_unskippable_allvis is set to \ntrue, the caller cannot rely on it because a concurrent modification \ncould immediately clear the VM bit. But because VACUUM is the only \nprocess that sets VM bits, if it's set to false, the caller can assume \nthat it's still not set later on.\n\nOne consequence of that is that with DISABLE_PAGE_SKIPPING, \nlazy_scan_heap() dirties all pages, even if there are no changes. The \nattached test script demonstrates that.\n\nISTM we should revert the above hunk, and backpatch it to v16. I'm a \nlittle wary because I don't understand why that change was made in the \nfirst place, though. I think it was just an ill-advised attempt at \ntidying up the code as part of the larger commit, but I'm not sure. \nPeter, do you remember?\n\nI wonder if we should give up trying to set all_visible_according_to_vm \ncorrectly when we decide what to skip, and always do \n\"all_visible_according_to_vm = visibilitymap_get_status(...)\" in \nlazy_scan_prune(). It would be more expensive, but maybe it doesn't \nmatter in practice. It would get rid of this tricky bookkeeping in \nheap_vac_scan_next_block().\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Fri, 8 Mar 2024 15:49:47 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Fri, Mar 8, 2024 at 8:49 AM Heikki Linnakangas <[email protected]> wrote:\n> ISTM we should revert the above hunk, and backpatch it to v16. I'm a\n> little wary because I don't understand why that change was made in the\n> first place, though. I think it was just an ill-advised attempt at\n> tidying up the code as part of the larger commit, but I'm not sure.\n> Peter, do you remember?\n\nI think that it makes sense to set the VM when indicated by\nlazy_scan_prune, independent of what either the visibility map or the\npage's PD_ALL_VISIBLE marking say. 
The whole point of\nDISABLE_PAGE_SKIPPING is to deal with VM corruption, after all.\n\nIn retrospect I didn't handle this particular aspect very well in\ncommit 980ae17310. The approach I took is a bit crude (and in any case\nslightly wrong in that it is inconsistent in how it handles the last\npage). But it has the merit of fixing the case where we just have the\nVM's all-frozen bit set for a given block (not the all-visible bit\nset) -- which is always wrong. There was good reason to be concerned\nabout that possibility when 980ae17310 went in.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 8 Mar 2024 10:40:42 -0500", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Fri, Mar 8, 2024 at 8:49 AM Heikki Linnakangas <[email protected]> wrote:\n>\n> On 08/03/2024 02:46, Melanie Plageman wrote:\n> > On Wed, Mar 06, 2024 at 10:00:23PM -0500, Melanie Plageman wrote:\n> >>> I feel heap_vac_scan_get_next_block() function could use some love. Maybe\n> >>> just some rewording of the comments, or maybe some other refactoring; not\n> >>> sure. But I'm pretty happy with the function signature and how it's called.\n> >\n> > I've cleaned up the comments on heap_vac_scan_next_block() in the first\n> > couple patches (not so much in the streaming read user). Let me know if\n> > it addresses your feelings or if I should look for other things I could\n> > change.\n>\n> Thanks, that is better. I think I now finally understand how the\n> function works, and now I can see some more issues and refactoring\n> opportunities :-).\n>\n> Looking at current lazy_scan_skip() code in 'master', one thing now\n> caught my eye (and it's the same with your patches):\n>\n> > *next_unskippable_allvis = true;\n> > while (next_unskippable_block < rel_pages)\n> > {\n> > uint8 mapbits = visibilitymap_get_status(vacrel->rel,\n> > next_unskippable_block,\n> > vmbuffer);\n> >\n> > if ((mapbits & VISIBILITYMAP_ALL_VISIBLE) == 0)\n> > {\n> > Assert((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0);\n> > *next_unskippable_allvis = false;\n> > break;\n> > }\n> >\n> > /*\n> > * Caller must scan the last page to determine whether it has tuples\n> > * (caller must have the opportunity to set vacrel->nonempty_pages).\n> > * This rule avoids having lazy_truncate_heap() take access-exclusive\n> > * lock on rel to attempt a truncation that fails anyway, just because\n> > * there are tuples on the last page (it is likely that there will be\n> > * tuples on other nearby pages as well, but those can be skipped).\n> > *\n> > * Implement this by always treating the last block as unsafe to skip.\n> > */\n> > if (next_unskippable_block == rel_pages - 1)\n> > break;\n> >\n> > /* DISABLE_PAGE_SKIPPING makes all skipping unsafe */\n> > if (!vacrel->skipwithvm)\n> > {\n> > /* Caller shouldn't rely on all_visible_according_to_vm */\n> > *next_unskippable_allvis = false;\n> > break;\n> > }\n> >\n> > /*\n> > * Aggressive VACUUM caller can't skip pages just because they are\n> > * all-visible. They may still skip all-frozen pages, which can't\n> > * contain XIDs < OldestXmin (XIDs that aren't already frozen by now).\n> > */\n> > if ((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0)\n> > {\n> > if (vacrel->aggressive)\n> > break;\n> >\n> > /*\n> > * All-visible block is safe to skip in non-aggressive case. 
But\n> > * remember that the final range contains such a block for later.\n> > */\n> > skipsallvis = true;\n> > }\n> >\n> > /* XXX: is it OK to remove this? */\n> > vacuum_delay_point();\n> > next_unskippable_block++;\n> > nskippable_blocks++;\n> > }\n>\n> Firstly, it seems silly to check DISABLE_PAGE_SKIPPING within the loop.\n> When DISABLE_PAGE_SKIPPING is set, we always return the next block and\n> set *next_unskippable_allvis = false regardless of the visibility map,\n> so why bother checking the visibility map at all?\n>\n> Except at the very last block of the relation! If you look carefully,\n> at the last block we do return *next_unskippable_allvis = true, if the\n> VM says so, even if DISABLE_PAGE_SKIPPING is set. I think that's wrong.\n> Surely the intention was to pretend that none of the VM bits were set if\n> DISABLE_PAGE_SKIPPING is used, also for the last block.\n\nI agree that having next_unskippable_allvis and, as a consequence,\nall_visible_according_to_vm set to true for the last block seems\nwrong. And It makes sense from a loop efficiency standpoint also to\nmove it up to the top. However, making that change would have us end\nup dirtying all pages in your example.\n\n> This was changed in commit 980ae17310:\n>\n> > @@ -1311,7 +1327,11 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,\n> >\n> > /* DISABLE_PAGE_SKIPPING makes all skipping unsafe */\n> > if (!vacrel->skipwithvm)\n> > + {\n> > + /* Caller shouldn't rely on all_visible_according_to_vm */\n> > + *next_unskippable_allvis = false;\n> > break;\n> > + }\n>\n> Before that, *next_unskippable_allvis was set correctly according to the\n> VM, even when DISABLE_PAGE_SKIPPING was used. It's not clear to me why\n> that was changed. And I think setting it to 'true' would be a more\n> failsafe value than 'false'. When *next_unskippable_allvis is set to\n> true, the caller cannot rely on it because a concurrent modification\n> could immediately clear the VM bit. But because VACUUM is the only\n> process that sets VM bits, if it's set to false, the caller can assume\n> that it's still not set later on.\n>\n> One consequence of that is that with DISABLE_PAGE_SKIPPING,\n> lazy_scan_heap() dirties all pages, even if there are no changes. The\n> attached test script demonstrates that.\n\nThis does seem undesirable.\n\nHowever, if we do as you suggest above and don't check\nDISABLE_PAGE_SKIPPING in the loop and instead return without checking\nthe VM when DISABLE_PAGE_SKIPPING is passed, setting\nnext_unskippable_allvis = false, we will end up dirtying all pages as\nin your example. It would fix the last block issue but it would result\nin dirtying all pages in your example.\n\n> ISTM we should revert the above hunk, and backpatch it to v16. I'm a\n> little wary because I don't understand why that change was made in the\n> first place, though. 
I think it was just an ill-advised attempt at\n> tidying up the code as part of the larger commit, but I'm not sure.\n> Peter, do you remember?\n\nIf we revert this, then the when all_visible_according_to_vm and\nall_visible are true in lazy_scan_prune(), the VM will only get\nupdated when all_frozen is true and the VM doesn't have all frozen set\nyet, so maybe that is inconsistent with the goal of\nDISABLE_PAGE_SKIPPING to update the VM when its contents are \"suspect\"\n(according to docs).\n\n> I wonder if we should give up trying to set all_visible_according_to_vm\n> correctly when we decide what to skip, and always do\n> \"all_visible_according_to_vm = visibilitymap_get_status(...)\" in\n> lazy_scan_prune(). It would be more expensive, but maybe it doesn't\n> matter in practice. It would get rid of this tricky bookkeeping in\n> heap_vac_scan_next_block().\n\nI did some experiments on this in the past and thought that it did\nhave a perf impact to call visibilitymap_get_status() every time. But\nlet me try and dig those up. (doesn't speak to whether or not in\nmatters in practice)\n\n- Melanie\n\n\n", "msg_date": "Fri, 8 Mar 2024 10:44:26 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Fri, Mar 8, 2024 at 10:41 AM Peter Geoghegan <[email protected]> wrote:\n>\n> On Fri, Mar 8, 2024 at 8:49 AM Heikki Linnakangas <[email protected]> wrote:\n> > ISTM we should revert the above hunk, and backpatch it to v16. I'm a\n> > little wary because I don't understand why that change was made in the\n> > first place, though. I think it was just an ill-advised attempt at\n> > tidying up the code as part of the larger commit, but I'm not sure.\n> > Peter, do you remember?\n>\n> I think that it makes sense to set the VM when indicated by\n> lazy_scan_prune, independent of what either the visibility map or the\n> page's PD_ALL_VISIBLE marking say. The whole point of\n> DISABLE_PAGE_SKIPPING is to deal with VM corruption, after all.\n\nNot that it will be fun to maintain another special case in the VM\nupdate code in lazy_scan_prune(), but we could have a special case\nthat checks if DISABLE_PAGE_SKIPPING was passed to vacuum and if\nall_visible_according_to_vm is true and all_visible is true, we update\nthe VM but don't dirty the page. 
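For what it's worth, the dirtying behavior Heikki described should be\nvisible from SQL alone, along these lines (table name made up):\n\n    CREATE TABLE t (i int);\n    INSERT INTO t SELECT generate_series(1, 100000);\n    VACUUM t;   -- sets the VM bits; nothing else touches the table\n    VACUUM (VERBOSE, DISABLE_PAGE_SKIPPING) t;\n\nThe \"buffer usage: ... dirtied\" line of the second VACUUM's VERBOSE\noutput then shows every heap page being dirtied again even though\nnothing changed.\n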
The docs on DISABLE_PAGE_SKIPPING say\nit is meant to deal with VM corruption -- it doesn't say anything\nabout dealing with incorrectly set PD_ALL_VISIBLE markings.\n\n- Melanie\n\n\n", "msg_date": "Fri, 8 Mar 2024 10:48:44 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Fri, Mar 8, 2024 at 10:48 AM Melanie Plageman\n<[email protected]> wrote:\n> Not that it will be fun to maintain another special case in the VM\n> update code in lazy_scan_prune(), but we could have a special case\n> that checks if DISABLE_PAGE_SKIPPING was passed to vacuum and if\n> all_visible_according_to_vm is true and all_visible is true, we update\n> the VM but don't dirty the page.\n\nIt wouldn't necessarily have to be a special case, I think.\n\nWe already conditionally set PD_ALL_VISIBLE/call PageIsAllVisible() in\nthe block where lazy_scan_prune marks a previously all-visible page\nall-frozen -- we don't want to dirty the page unnecessarily there.\nMaking it conditional is defensive in that particular block (this was\nalso added by this same commit of mine), and avoids dirtying the page.\n\nSeems like it might be possible to simplify/consolidate the VM-setting\ncode that's now located at the end of lazy_scan_prune. Perhaps the two\ndistinct blocks that call visibilitymap_set() could be combined into\none.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 8 Mar 2024 11:00:02 -0500", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Fri, Mar 8, 2024 at 11:00 AM Peter Geoghegan <[email protected]> wrote:\n> Seems like it might be possible to simplify/consolidate the VM-setting\n> code that's now located at the end of lazy_scan_prune. Perhaps the two\n> distinct blocks that call visibilitymap_set() could be combined into\n> one.\n\nFWIW I think that my error here might have had something to do with\nhallucinating that the code already did things that way.\n\nAt the time this went in, I was working on a patchset that did things\nthis way (more or less). It broke the dependency on\nall_visible_according_to_vm entirely, which simplified the\nset-and-check-VM code that's now at the end of lazy_scan_prune.\n\nNot sure how practical it'd be to do something like that now (not\noffhand), but something to consider.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 8 Mar 2024 11:06:56 -0500", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On 08/03/2024 02:46, Melanie Plageman wrote:\n> On Wed, Mar 06, 2024 at 10:00:23PM -0500, Melanie Plageman wrote:\n>> On Wed, Mar 06, 2024 at 09:55:21PM +0200, Heikki Linnakangas wrote:\n> I will say that now all of the variable names are *very* long. I didn't\n> want to remove the \"state\" from LVRelState->next_block_state. (In fact, I\n> kind of miss the \"get\". But I had to draw the line somewhere.) I think\n> without \"state\" in the name, next_block sounds too much like a function.\n> \n> Any ideas for shortening the names of next_block_state and its members\n> or are you fine with them?\n\nHmm, we can remove the inner struct and add the fields directly into \nLVRelState. LVRelState already contains many groups of variables, like \n\"Error reporting state\", with no inner structs. I did it that way in the \nattached patch. 
I also used local variables more.\n\n>> I was wondering if we should remove the \"get\" and just go with\n>> heap_vac_scan_next_block(). I didn't do that originally because I didn't\n>> want to imply that the next block was literally the sequentially next\n>> block, but I think maybe I was overthinking it.\n>>\n>> Another idea is to call it heap_scan_vac_next_block() and then the order\n>> of the words is more like the table AM functions that get the next block\n>> (e.g. heapam_scan_bitmap_next_block()). Though maybe we don't want it to\n>> be too similar to those since this isn't a table AM callback.\n> \n> I've done a version of this.\n\n+1\n\n> However, by adding a vmbuffer to next_block_state, the callback may be\n> able to avoid extra VM fetches from one invocation to the next.\n\nThat's a good idea, holding separate VM buffer pins for the \nnext-unskippable block and the block we're processing. I adopted that \napproach.\n\nMy compiler caught one small bug when I was playing with various \nrefactorings of this: heap_vac_scan_next_block() must set *blkno to \nrel_pages, not InvalidBlockNumber, after the last block. The caller uses \nthe 'blkno' variable also after the loop, and assumes that it's set to \nrel_pages.\n\nI'm pretty happy with the attached patches now. The first one fixes the \nexisting bug I mentioned in the other email (based on the on-going \ndiscussion that might not how we want to fix it though). Second commit \nis a squash of most of the patches. Third patch is the removal of the \ndelay point, that seems worthwhile to keep separate.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Fri, 8 Mar 2024 18:07:33 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Fri, Mar 8, 2024 at 11:00 AM Peter Geoghegan <[email protected]> wrote:\n>\n> On Fri, Mar 8, 2024 at 10:48 AM Melanie Plageman\n> <[email protected]> wrote:\n> > Not that it will be fun to maintain another special case in the VM\n> > update code in lazy_scan_prune(), but we could have a special case\n> > that checks if DISABLE_PAGE_SKIPPING was passed to vacuum and if\n> > all_visible_according_to_vm is true and all_visible is true, we update\n> > the VM but don't dirty the page.\n>\n> It wouldn't necessarily have to be a special case, I think.\n>\n> We already conditionally set PD_ALL_VISIBLE/call PageIsAllVisible() in\n> the block where lazy_scan_prune marks a previously all-visible page\n> all-frozen -- we don't want to dirty the page unnecessarily there.\n> Making it conditional is defensive in that particular block (this was\n> also added by this same commit of mine), and avoids dirtying the page.\n\nAh, I see. I got confused. Even if the VM is suspect, if the page is\nall visible and the heap block is already set all-visible in the VM,\nthere is no need to update it.\n\nThis did make me realize that it seems like there is a case we don't\nhandle in master with the current code that would be fixed by changing\nthat code Heikki mentioned:\n\nRight now, even if the heap block is incorrectly marked all-visible in\nthe VM, if DISABLE_PAGE_SKIPPING is passed to vacuum,\nall_visible_according_to_vm will be passed to lazy_scan_prune() as\nfalse. 
Then even if lazy_scan_prune() finds that the page is not\nall-visible, we won't call visibilitymap_clear().\n\nIf we revert the code setting next_unskippable_allvis to false in\nlazy_scan_skip() when vacrel->skipwithvm is false and allow\nall_visible_according_to_vm to be true when the VM has it incorrectly\nset to true, then once lazy_scan_prune() discovers the page is not\nall-visible and assuming PD_ALL_VISIBLE is not marked so\nPageIsAllVisible() returns false, we will call visibilitymap_clear()\nto clear the incorrectly set VM bit (without dirtying the page).\n\nHere is a table of the variable states at the end of lazy_scan_prune()\nfor clarity:\n\nmaster:\nall_visible_according_to_vm: false\nall_visible: false\nVM says all vis: true\nPageIsAllVisible: false\n\nif fixed:\nall_visible_according_to_vm: true\nall_visible: false\nVM says all vis: true\nPageIsAllVisible: false\n\n> Seems like it might be possible to simplify/consolidate the VM-setting\n> code that's now located at the end of lazy_scan_prune. Perhaps the two\n> distinct blocks that call visibilitymap_set() could be combined into\n> one.\n\nI agree. I have some code to do that in an unproposed patch which\ncombines the VM updates into the prune record. We will definitely want\nto reorganize the code when we do that record combining.\n\n- Melanie\n\n\n", "msg_date": "Fri, 8 Mar 2024 11:31:11 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Fri, Mar 8, 2024 at 11:31 AM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Fri, Mar 8, 2024 at 11:00 AM Peter Geoghegan <[email protected]> wrote:\n> >\n> > On Fri, Mar 8, 2024 at 10:48 AM Melanie Plageman\n> > <[email protected]> wrote:\n> > > Not that it will be fun to maintain another special case in the VM\n> > > update code in lazy_scan_prune(), but we could have a special case\n> > > that checks if DISABLE_PAGE_SKIPPING was passed to vacuum and if\n> > > all_visible_according_to_vm is true and all_visible is true, we update\n> > > the VM but don't dirty the page.\n> >\n> > It wouldn't necessarily have to be a special case, I think.\n> >\n> > We already conditionally set PD_ALL_VISIBLE/call PageIsAllVisible() in\n> > the block where lazy_scan_prune marks a previously all-visible page\n> > all-frozen -- we don't want to dirty the page unnecessarily there.\n> > Making it conditional is defensive in that particular block (this was\n> > also added by this same commit of mine), and avoids dirtying the page.\n>\n> Ah, I see. I got confused. Even if the VM is suspect, if the page is\n> all visible and the heap block is already set all-visible in the VM,\n> there is no need to update it.\n>\n> This did make me realize that it seems like there is a case we don't\n> handle in master with the current code that would be fixed by changing\n> that code Heikki mentioned:\n>\n> Right now, even if the heap block is incorrectly marked all-visible in\n> the VM, if DISABLE_PAGE_SKIPPING is passed to vacuum,\n> all_visible_according_to_vm will be passed to lazy_scan_prune() as\n> false. 
Then even if lazy_scan_prune() finds that the page is not\n> all-visible, we won't call visibilitymap_clear().\n>\n> If we revert the code setting next_unskippable_allvis to false in\n> lazy_scan_skip() when vacrel->skipwithvm is false and allow\n> all_visible_according_to_vm to be true when the VM has it incorrectly\n> set to true, then once lazy_scan_prune() discovers the page is not\n> all-visible and assuming PD_ALL_VISIBLE is not marked so\n> PageIsAllVisible() returns false, we will call visibilitymap_clear()\n> to clear the incorrectly set VM bit (without dirtying the page).\n>\n> Here is a table of the variable states at the end of lazy_scan_prune()\n> for clarity:\n>\n> master:\n> all_visible_according_to_vm: false\n> all_visible: false\n> VM says all vis: true\n> PageIsAllVisible: false\n>\n> if fixed:\n> all_visible_according_to_vm: true\n> all_visible: false\n> VM says all vis: true\n> PageIsAllVisible: false\n\nOkay, I now see from Heikki's v8-0001 that he was already aware of this.\n\n- Melanie\n\n\n", "msg_date": "Fri, 8 Mar 2024 11:41:59 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Fri, Mar 08, 2024 at 06:07:33PM +0200, Heikki Linnakangas wrote:\n> On 08/03/2024 02:46, Melanie Plageman wrote:\n> > On Wed, Mar 06, 2024 at 10:00:23PM -0500, Melanie Plageman wrote:\n> > > On Wed, Mar 06, 2024 at 09:55:21PM +0200, Heikki Linnakangas wrote:\n> > I will say that now all of the variable names are *very* long. I didn't\n> > want to remove the \"state\" from LVRelState->next_block_state. (In fact, I\n> > kind of miss the \"get\". But I had to draw the line somewhere.) I think\n> > without \"state\" in the name, next_block sounds too much like a function.\n> > \n> > Any ideas for shortening the names of next_block_state and its members\n> > or are you fine with them?\n> \n> Hmm, we can remove the inner struct and add the fields directly into\n> LVRelState. LVRelState already contains many groups of variables, like\n> \"Error reporting state\", with no inner structs. I did it that way in the\n> attached patch. I also used local variables more.\n\n+1; I like the result of this.\n\n> > However, by adding a vmbuffer to next_block_state, the callback may be\n> > able to avoid extra VM fetches from one invocation to the next.\n> \n> That's a good idea, holding separate VM buffer pins for the next-unskippable\n> block and the block we're processing. I adopted that approach.\n\nCool. It can't be avoided with streaming read vacuum, but I wonder if\nthere would ever be adverse effects to doing it on master? Maybe if we\nare doing a lot of skipping and the block of the VM for the heap blocks\nwe are processing ends up changing each time but we would have had the\nright block of the VM if we used the one from\nheap_vac_scan_next_block()?\n\nFrankly, I'm in favor of just doing it now because it makes\nlazy_scan_heap() less confusing.\n\n> My compiler caught one small bug when I was playing with various\n> refactorings of this: heap_vac_scan_next_block() must set *blkno to\n> rel_pages, not InvalidBlockNumber, after the last block. The caller uses the\n> 'blkno' variable also after the loop, and assumes that it's set to\n> rel_pages.\n\nOops! Thanks for catching that.\n\n> I'm pretty happy with the attached patches now. 
The first one fixes the\n> existing bug I mentioned in the other email (based on the on-going\n> discussion that might not how we want to fix it though).\n\nISTM we should still do the fix you mentioned -- seems like it has more\nupsides than downsides?\n\n> From b68cb29c547de3c4acd10f31aad47b453d154666 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Fri, 8 Mar 2024 16:00:22 +0200\n> Subject: [PATCH v8 1/3] Set all_visible_according_to_vm correctly with\n> DISABLE_PAGE_SKIPPING\n> \n> It's important for 'all_visible_according_to_vm' to correctly reflect\n> whether the VM bit is set or not, even when we are not trusting the VM\n> to skip pages, because contrary to what the comment said,\n> lazy_scan_prune() relies on it.\n> \n> If it's incorrectly set to 'false', when the VM bit is in fact set,\n> lazy_scan_prune() will try to set the VM bit again and dirty the page\n> unnecessarily. As a result, if you used DISABLE_PAGE_SKIPPING, all\n> heap pages were dirtied, even if there were no changes. We would also\n> fail to clear any VM bits that were set incorrectly.\n> \n> This was broken in commit 980ae17310, so backpatch to v16.\n\nLGTM.\n\n> From 47af1ca65cf55ca876869b43bff47f9d43f0750e Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Fri, 8 Mar 2024 17:32:19 +0200\n> Subject: [PATCH v8 2/3] Confine vacuum skip logic to lazy_scan_skip()\n> ---\n> src/backend/access/heap/vacuumlazy.c | 256 +++++++++++++++------------\n> 1 file changed, 141 insertions(+), 115 deletions(-)\n> \n> diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n> index ac55ebd2ae5..0aa08762015 100644\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> @@ -204,6 +204,12 @@ typedef struct LVRelState\n> \tint64\t\tlive_tuples;\t/* # live tuples remaining */\n> \tint64\t\trecently_dead_tuples;\t/* # dead, but not yet removable */\n> \tint64\t\tmissed_dead_tuples; /* # removable, but not removed */\n\nPerhaps we should add a comment to the blkno member of LVRelState\nindicating that it is used for error reporting and logging?\n\n> +\t/* State maintained by heap_vac_scan_next_block() */\n> +\tBlockNumber current_block;\t/* last block returned */\n> +\tBlockNumber next_unskippable_block; /* next unskippable block */\n> +\tbool\t\tnext_unskippable_allvis;\t/* its visibility status */\n> +\tBuffer\t\tnext_unskippable_vmbuffer;\t/* buffer containing its VM bit */\n> } LVRelState;\n\n> /*\n> -static BlockNumber\n> -lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,\n> -\t\t\t bool *next_unskippable_allvis, bool *skipping_current_range)\n> +static bool\n> +heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,\n> +\t\t\t\t\t\t bool *all_visible_according_to_vm)\n> {\n> -\tBlockNumber rel_pages = vacrel->rel_pages,\n> -\t\t\t\tnext_unskippable_block = next_block,\n> -\t\t\t\tnskippable_blocks = 0;\n> +\tBlockNumber next_block;\n> \tbool\t\tskipsallvis = false;\n> +\tBlockNumber rel_pages = vacrel->rel_pages;\n> +\tBlockNumber next_unskippable_block;\n> +\tbool\t\tnext_unskippable_allvis;\n> +\tBuffer\t\tnext_unskippable_vmbuffer;\n> \n> -\t*next_unskippable_allvis = true;\n> -\twhile (next_unskippable_block < rel_pages)\n> -\t{\n> -\t\tuint8\t\tmapbits = visibilitymap_get_status(vacrel->rel,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t next_unskippable_block,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t vmbuffer);\n> +\t/* relies on InvalidBlockNumber + 1 overflowing to 0 on first call 
*/\n> +\tnext_block = vacrel->current_block + 1;\n> \n> -\t\tif ((mapbits & VISIBILITYMAP_ALL_VISIBLE) == 0)\n> +\t/* Have we reached the end of the relation? */\n> +\tif (next_block >= rel_pages)\n> +\t{\n> +\t\tif (BufferIsValid(vacrel->next_unskippable_vmbuffer))\n> \t\t{\n> -\t\t\tAssert((mapbits & VISIBILITYMAP_ALL_FROZEN) == 0);\n> -\t\t\t*next_unskippable_allvis = false;\n> -\t\t\tbreak;\n> +\t\t\tReleaseBuffer(vacrel->next_unskippable_vmbuffer);\n> +\t\t\tvacrel->next_unskippable_vmbuffer = InvalidBuffer;\n> \t\t}\n\nGood catch here. Also, I noticed that I set current_block to\nInvalidBlockNumber too which seems strictly worse than leaving it as\nrel_pages + 1 -- just in case a future dev makes a change that\naccidentally causes heap_vac_scan_next_block() to be called again and\nadding InvalidBlockNumber + 1 would end up going back to 0. So this all\nlooks correct to me.\n\n> +\t\t*blkno = rel_pages;\n> +\t\treturn false;\n> +\t}\n\n> +\tnext_unskippable_block = vacrel->next_unskippable_block;\n> +\tnext_unskippable_allvis = vacrel->next_unskippable_allvis;\n\nWishe there was a newline here.\n\nI see why you removed my treatise-level comment that was here about\nunskipped skippable blocks. However, when I was trying to understand\nthis code, I did wish there was some comment that explained to me why we\nneeded all of the variables next_unskippable_block,\nnext_unskippable_allvis, all_visible_according_to_vm, and current_block.\n\nThe idea that we would choose not to skip a skippable block because of\nkernel readahead makes sense. The part that I had trouble wrapping my\nhead around was that we want to also keep the visibility status of both\nthe beginning and ending blocks of the skippable range and then use\nthose to infer the visibility status of the intervening blocks without\nanother VM lookup if we decide not to skip them.\n\n> +\tif (next_unskippable_block == InvalidBlockNumber ||\n> +\t\tnext_block > next_unskippable_block)\n> +\t{\n> \t\t/*\n> -\t\t * Caller must scan the last page to determine whether it has tuples\n> -\t\t * (caller must have the opportunity to set vacrel->nonempty_pages).\n> -\t\t * This rule avoids having lazy_truncate_heap() take access-exclusive\n> -\t\t * lock on rel to attempt a truncation that fails anyway, just because\n> -\t\t * there are tuples on the last page (it is likely that there will be\n> -\t\t * tuples on other nearby pages as well, but those can be skipped).\n> -\t\t *\n> -\t\t * Implement this by always treating the last block as unsafe to skip.\n> +\t\t * Find the next unskippable block using the visibility map.\n> \t\t */\n> -\t\tif (next_unskippable_block == rel_pages - 1)\n> -\t\t\tbreak;\n> +\t\tnext_unskippable_block = next_block;\n> +\t\tnext_unskippable_vmbuffer = vacrel->next_unskippable_vmbuffer;\n> +\t\tfor (;;)\n\nAh yes, my old loop condition was redundant with the break if\nnext_unskippable_block == rel_pages - 1. 
This is better\n\n> +\t\t{\n> +\t\t\tuint8\t\tmapbits = visibilitymap_get_status(vacrel->rel,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t next_unskippable_block,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t &next_unskippable_vmbuffer);\n> \n> -\t\t/* DISABLE_PAGE_SKIPPING makes all skipping unsafe */\n> -\t\tif (!vacrel->skipwithvm)\n\n...\n> +\t\t\t}\n> +\n> +\t\t\tvacuum_delay_point();\n> +\t\t\tnext_unskippable_block++;\n> \t\t}\n\nWould love a newline here\n\n> +\t\t/* write the local variables back to vacrel */\n> +\t\tvacrel->next_unskippable_block = next_unskippable_block;\n> +\t\tvacrel->next_unskippable_allvis = next_unskippable_allvis;\n> +\t\tvacrel->next_unskippable_vmbuffer = next_unskippable_vmbuffer;\n> \n...\n\n> -\tif (nskippable_blocks < SKIP_PAGES_THRESHOLD)\n> -\t\t*skipping_current_range = false;\n> +\tif (next_block == next_unskippable_block)\n> +\t\t*all_visible_according_to_vm = next_unskippable_allvis;\n> \telse\n> -\t{\n> -\t\t*skipping_current_range = true;\n> -\t\tif (skipsallvis)\n> -\t\t\tvacrel->skippedallvis = true;\n> -\t}\n> -\n> -\treturn next_unskippable_block;\n> +\t\t*all_visible_according_to_vm = true;\n\nAlso a newline here\n\n> +\t*blkno = vacrel->current_block = next_block;\n> +\treturn true;\n> }\n> \n\n> From 941ae7522ab6ac24ca5981303e4e7f6e2cba7458 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <[email protected]>\n> Date: Sun, 31 Dec 2023 12:49:56 -0500\n> Subject: [PATCH v8 3/3] Remove unneeded vacuum_delay_point from\n> heap_vac_scan_get_next_block\n> \n> heap_vac_scan_get_next_block() does relatively little work, so there is\n> no need to call vacuum_delay_point(). A future commit will call\n> heap_vac_scan_get_next_block() from a callback, and we would like to\n> avoid calling vacuum_delay_point() in that callback.\n> ---\n> src/backend/access/heap/vacuumlazy.c | 1 -\n> 1 file changed, 1 deletion(-)\n> \n> diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n> index 0aa08762015..e1657ef4f9b 100644\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> @@ -1172,7 +1172,6 @@ heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,\n> \t\t\t\tskipsallvis = true;\n> \t\t\t}\n> \n> -\t\t\tvacuum_delay_point();\n> \t\t\tnext_unskippable_block++;\n> \t\t}\n> \t\t/* write the local variables back to vacrel */\n> -- \n> 2.39.2\n> \n\nLGTM\n\n- Melanie\n\n\n", "msg_date": "Fri, 8 Mar 2024 12:34:10 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Fri, Mar 8, 2024 at 12:34 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> On Fri, Mar 08, 2024 at 06:07:33PM +0200, Heikki Linnakangas wrote:\n> > On 08/03/2024 02:46, Melanie Plageman wrote:\n> > > On Wed, Mar 06, 2024 at 10:00:23PM -0500, Melanie Plageman wrote:\n> > > > On Wed, Mar 06, 2024 at 09:55:21PM +0200, Heikki Linnakangas wrote:\n> > > I will say that now all of the variable names are *very* long. I didn't\n> > > want to remove the \"state\" from LVRelState->next_block_state. (In fact, I\n> > > kind of miss the \"get\". But I had to draw the line somewhere.) I think\n> > > without \"state\" in the name, next_block sounds too much like a function.\n> > >\n> > > Any ideas for shortening the names of next_block_state and its members\n> > > or are you fine with them?\n> >\n> > Hmm, we can remove the inner struct and add the fields directly into\n> > LVRelState. 
LVRelState already contains many groups of variables, like\n> > \"Error reporting state\", with no inner structs. I did it that way in the\n> > attached patch. I also used local variables more.\n>\n> +1; I like the result of this.\n\nI did some perf testing of 0002 and 0003 using that fully-in-SB vacuum\ntest I mentioned in an earlier email. 0002 is a vacuum time reduction\nfrom an average of 11.5 ms on master to 9.6 ms with 0002 applied. And\n0003 reduces the time vacuum takes from 11.5 ms on master to 7.4 ms\nwith 0003 applied.\n\nI profiled them and 0002 seems to simply spend less time in\nheap_vac_scan_next_block() than master did in lazy_scan_skip().\n\nAnd 0003 reduces the time vacuum takes because vacuum_delay_point()\nshows up pretty high in the profile.\n\nHere are the profiles for my test.\n\nprofile of master:\n\n+ 29.79% postgres postgres [.] visibilitymap_get_status\n+ 27.35% postgres postgres [.] vacuum_delay_point\n+ 17.00% postgres postgres [.] lazy_scan_skip\n+ 6.59% postgres postgres [.] heap_vacuum_rel\n+ 6.43% postgres postgres [.] BufferGetBlockNumber\n\nprofile with 0001-0002:\n\n+ 40.30% postgres postgres [.] visibilitymap_get_status\n+ 20.32% postgres postgres [.] vacuum_delay_point\n+ 20.26% postgres postgres [.] heap_vacuum_rel\n+ 5.17% postgres postgres [.] BufferGetBlockNumber\n\nprofile with 0001-0003\n\n+ 59.77% postgres postgres [.] visibilitymap_get_status\n+ 23.86% postgres postgres [.] heap_vacuum_rel\n+ 6.59% postgres postgres [.] StrategyGetBuffer\n\nTest DDL and setup:\n\npsql -c \"ALTER SYSTEM SET shared_buffers = '16 GB';\"\npsql -c \"CREATE TABLE foo(id INT, a INT, b INT, c INT, d INT, e INT, f\nINT, g INT) with (autovacuum_enabled=false, fillfactor=25);\"\npsql -c \"INSERT INTO foo SELECT i, i, i, i, i, i, i, i FROM\ngenerate_series(1, 46000000)i;\"\npsql -c \"VACUUM (FREEZE) foo;\"\npg_ctl restart\npsql -c \"SELECT pg_prewarm('foo');\"\n# make sure there isn't an ill-timed checkpoint\npsql -c \"\\timing on\" -c \"vacuum (verbose) foo;\"\n\n- Melanie\n\n\n", "msg_date": "Fri, 8 Mar 2024 13:21:36 -0500", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Wed, Mar 6, 2024 at 6:47 PM Melanie Plageman\n<[email protected]> wrote:\n>\n> Performance results:\n>\n> The TL;DR of my performance results is that streaming read vacuum is\n> faster. However there is an issue with the interaction of the streaming\n> read code and the vacuum buffer access strategy which must be addressed.\n\nI have investigated the interaction between\nmaintenance_io_concurrency, streaming reads, and the vacuum buffer\naccess strategy (BAS_VACUUM).\n\nThe streaming read API limits max_pinned_buffers to a pinned buffer\nmultiplier (currently 4) * maintenance_io_concurrency buffers with the\ngoal of constructing reads of at least MAX_BUFFERS_PER_TRANSFER size.\n\nSince the BAS_VACUUM ring buffer is size 256 kB or 32 buffers with\ndefault block size, that means that for a fully uncached vacuum in\nwhich all blocks must be vacuumed and will be dirtied, you'd have to\nset maintenance_io_concurrency at 8 or lower to see the same number of\nreuses (and shared buffer consumption) as master.\n\nGiven that we allow users to specify BUFFER_USAGE_LIMIT to vacuum, it\nseems like we should force max_pinned_buffers to a value that\nguarantees the expected shared buffer usage by vacuum. 
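To make that concrete, one shape this could take (a rough, untested
sketch only; the function and variable names here are illustrative,
not something in any posted patch) would be for the streaming read
setup to clamp its pin budget to the ring size whenever a strategy is
in use:

    /* sketch only: never plan to pin more buffers than the ring holds */
    static int
    clamp_pinned_buffers(int max_pinned_buffers,
                         BufferAccessStrategy strategy)
    {
        if (strategy != NULL)
            max_pinned_buffers = Min(max_pinned_buffers,
                                     GetAccessStrategyBufferCount(strategy));
        return max_pinned_buffers;
    }

assuming GetAccessStrategyBufferCount() (the bufmgr.h accessor that
returns the strategy's nbuffers) is the right way to ask the strategy
how big its ring is.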
But that means\nthat maintenance_io_concurrency does not have a predictable impact on\nstreaming read vacuum.\n\nWhat is the right thing to do here?\n\nAt the least, the default size of the BAS_VACUUM ring buffer should be\nBLCKSZ * pinned_buffer_multiplier * default maintenance_io_concurrency\n(probably rounded up to the next power of two) bytes.\n\n- Melanie\n\n\n", "msg_date": "Sun, 10 Mar 2024 12:31:12 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Mon, Mar 11, 2024 at 5:31 AM Melanie Plageman\n<[email protected]> wrote:\n> On Wed, Mar 6, 2024 at 6:47 PM Melanie Plageman\n> <[email protected]> wrote:\n> > Performance results:\n> >\n> > The TL;DR of my performance results is that streaming read vacuum is\n> > faster. However there is an issue with the interaction of the streaming\n> > read code and the vacuum buffer access strategy which must be addressed.\n\nWoo.\n\n> I have investigated the interaction between\n> maintenance_io_concurrency, streaming reads, and the vacuum buffer\n> access strategy (BAS_VACUUM).\n>\n> The streaming read API limits max_pinned_buffers to a pinned buffer\n> multiplier (currently 4) * maintenance_io_concurrency buffers with the\n> goal of constructing reads of at least MAX_BUFFERS_PER_TRANSFER size.\n>\n> Since the BAS_VACUUM ring buffer is size 256 kB or 32 buffers with\n> default block size, that means that for a fully uncached vacuum in\n> which all blocks must be vacuumed and will be dirtied, you'd have to\n> set maintenance_io_concurrency at 8 or lower to see the same number of\n> reuses (and shared buffer consumption) as master.\n>\n> Given that we allow users to specify BUFFER_USAGE_LIMIT to vacuum, it\n> seems like we should force max_pinned_buffers to a value that\n> guarantees the expected shared buffer usage by vacuum. But that means\n> that maintenance_io_concurrency does not have a predictable impact on\n> streaming read vacuum.\n>\n> What is the right thing to do here?\n>\n> At the least, the default size of the BAS_VACUUM ring buffer should be\n> BLCKSZ * pinned_buffer_multiplier * default maintenance_io_concurrency\n> (probably rounded up to the next power of two) bytes.\n\nHmm, does the v6 look-ahead distance control algorithm mitigate that\nproblem? Using the ABC classifications from the streaming read\nthread, I think for A it should now pin only 1, for B 16 and for C, it\ndepends on the size of the random 'chunks': if you have a lot of size\n1 random reads then it shouldn't go above 10 because of (default)\nmaintenance_io_concurrency. The only way to get up to very high\nnumbers would be to have a lot of random chunks triggering behaviour\nC, but each made up of long runs of misses. For example one can\ncontrive a BHS query that happens to read pages 0-15 then 20-35 then\n40-55 etc etc so that we want to get lots of wide I/Os running\nconcurrently. 
Unless vacuum manages to do something like that, it\nshouldn't be able to exceed 32 buffers very easily.\n\nI suspect that if we taught streaming_read.c to ask the\nBufferAccessStrategy (if one is passed in) what its recommended pin\nlimit is (strategy->nbuffers?), we could just clamp\nmax_pinned_buffers, and it would be hard to find a workload where that\nmakes a difference, and we could think about more complicated logic\nlater.\n\nIn other words, I think/hope your complaints about excessive pinning\nfrom v5 WRT all-cached heap scans might have also already improved\nthis case by happy coincidence? I haven't tried it out though, I just\nread your description of the problem...\n\n\n", "msg_date": "Mon, 11 Mar 2024 16:01:16 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On 08/03/2024 19:34, Melanie Plageman wrote:\n> On Fri, Mar 08, 2024 at 06:07:33PM +0200, Heikki Linnakangas wrote:\n>> On 08/03/2024 02:46, Melanie Plageman wrote:\n>>> On Wed, Mar 06, 2024 at 10:00:23PM -0500, Melanie Plageman wrote:\n>>>> On Wed, Mar 06, 2024 at 09:55:21PM +0200, Heikki Linnakangas wrote:\n>>> However, by adding a vmbuffer to next_block_state, the callback may be\n>>> able to avoid extra VM fetches from one invocation to the next.\n>>\n>> That's a good idea, holding separate VM buffer pins for the next-unskippable\n>> block and the block we're processing. I adopted that approach.\n> \n> Cool. It can't be avoided with streaming read vacuum, but I wonder if\n> there would ever be adverse effects to doing it on master? Maybe if we\n> are doing a lot of skipping and the block of the VM for the heap blocks\n> we are processing ends up changing each time but we would have had the\n> right block of the VM if we used the one from\n> heap_vac_scan_next_block()?\n> \n> Frankly, I'm in favor of just doing it now because it makes\n> lazy_scan_heap() less confusing.\n\n+1\n\n>> I'm pretty happy with the attached patches now. The first one fixes the\n>> existing bug I mentioned in the other email (based on the on-going\n>> discussion that might not how we want to fix it though).\n> \n> ISTM we should still do the fix you mentioned -- seems like it has more\n> upsides than downsides?\n> \n>> From b68cb29c547de3c4acd10f31aad47b453d154666 Mon Sep 17 00:00:00 2001\n>> From: Heikki Linnakangas <[email protected]>\n>> Date: Fri, 8 Mar 2024 16:00:22 +0200\n>> Subject: [PATCH v8 1/3] Set all_visible_according_to_vm correctly with\n>> DISABLE_PAGE_SKIPPING\n>>\n>> It's important for 'all_visible_according_to_vm' to correctly reflect\n>> whether the VM bit is set or not, even when we are not trusting the VM\n>> to skip pages, because contrary to what the comment said,\n>> lazy_scan_prune() relies on it.\n>>\n>> If it's incorrectly set to 'false', when the VM bit is in fact set,\n>> lazy_scan_prune() will try to set the VM bit again and dirty the page\n>> unnecessarily. As a result, if you used DISABLE_PAGE_SKIPPING, all\n>> heap pages were dirtied, even if there were no changes. 
We would also\n>> fail to clear any VM bits that were set incorrectly.\n>>\n>> This was broken in commit 980ae17310, so backpatch to v16.\n> \n> LGTM.\n\nCommitted and backpatched this.\n\n>> From 47af1ca65cf55ca876869b43bff47f9d43f0750e Mon Sep 17 00:00:00 2001\n>> From: Heikki Linnakangas <[email protected]>\n>> Date: Fri, 8 Mar 2024 17:32:19 +0200\n>> Subject: [PATCH v8 2/3] Confine vacuum skip logic to lazy_scan_skip()\n>> ---\n>> src/backend/access/heap/vacuumlazy.c | 256 +++++++++++++++------------\n>> 1 file changed, 141 insertions(+), 115 deletions(-)\n>>\n>> diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n>> index ac55ebd2ae5..0aa08762015 100644\n>> --- a/src/backend/access/heap/vacuumlazy.c\n>> +++ b/src/backend/access/heap/vacuumlazy.c\n>> @@ -204,6 +204,12 @@ typedef struct LVRelState\n>> \tint64\t\tlive_tuples;\t/* # live tuples remaining */\n>> \tint64\t\trecently_dead_tuples;\t/* # dead, but not yet removable */\n>> \tint64\t\tmissed_dead_tuples; /* # removable, but not removed */\n> \n> Perhaps we should add a comment to the blkno member of LVRelState\n> indicating that it is used for error reporting and logging?\n\nWell, it's already under the \"/* Error reporting state */\" section. I \nagree this is a little confusing, the name 'blkno' doesn't convey that \nit's supposed to be used just for error reporting. But it's a \npre-existing issue so I left it alone. It can be changed with a separate \npatch if we come up with a good idea.\n\n> I see why you removed my treatise-level comment that was here about\n> unskipped skippable blocks. However, when I was trying to understand\n> this code, I did wish there was some comment that explained to me why we\n> needed all of the variables next_unskippable_block,\n> next_unskippable_allvis, all_visible_according_to_vm, and current_block.\n> \n> The idea that we would choose not to skip a skippable block because of\n> kernel readahead makes sense. The part that I had trouble wrapping my\n> head around was that we want to also keep the visibility status of both\n> the beginning and ending blocks of the skippable range and then use\n> those to infer the visibility status of the intervening blocks without\n> another VM lookup if we decide not to skip them.\n\nRight, I removed the comment because looked a little out of place and it \nduplicated the other comments sprinkled in the function. I agree this \ncould still use some more comments though.\n\nHere's yet another attempt at making this more readable. I moved the \nlogic to find the next unskippable block to a separate function, and \nadded comments to make the states more explicit. What do you think?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Mon, 11 Mar 2024 11:29:44 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Mon, Mar 11, 2024 at 11:29:44AM +0200, Heikki Linnakangas wrote:\n>\n> > I see why you removed my treatise-level comment that was here about\n> > unskipped skippable blocks. However, when I was trying to understand\n> > this code, I did wish there was some comment that explained to me why we\n> > needed all of the variables next_unskippable_block,\n> > next_unskippable_allvis, all_visible_according_to_vm, and current_block.\n> > \n> > The idea that we would choose not to skip a skippable block because of\n> > kernel readahead makes sense. 
The part that I had trouble wrapping my\n> > head around was that we want to also keep the visibility status of both\n> > the beginning and ending blocks of the skippable range and then use\n> > those to infer the visibility status of the intervening blocks without\n> > another VM lookup if we decide not to skip them.\n> \n> Right, I removed the comment because looked a little out of place and it\n> duplicated the other comments sprinkled in the function. I agree this could\n> still use some more comments though.\n> \n> Here's yet another attempt at making this more readable. I moved the logic\n> to find the next unskippable block to a separate function, and added\n> comments to make the states more explicit. What do you think?\n\nOh, I like the new structure. Very cool! Just a few remarks:\n\n> From c21480e9da61e145573de3b502551dde1b8fa3f6 Mon Sep 17 00:00:00 2001\n> From: Heikki Linnakangas <[email protected]>\n> Date: Fri, 8 Mar 2024 17:32:19 +0200\n> Subject: [PATCH v9 1/2] Confine vacuum skip logic to lazy_scan_skip()\n> \n> Rename lazy_scan_skip() to heap_vac_scan_next_block() and move more\n> code into the function, so that the caller doesn't need to know about\n> ranges or skipping anymore. heap_vac_scan_next_block() returns the\n> next block to process, and the logic for determining that block is all\n> within the function. This makes the skipping logic easier to\n> understand, as it's all in the same function, and makes the calling\n> code easier to understand as it's less cluttered. The state variables\n> needed to manage the skipping logic are moved to LVRelState.\n> \n> heap_vac_scan_next_block() now manages its own VM buffer separately\n> from the caller's vmbuffer variable. The caller's vmbuffer holds the\n> VM page for the current block its processing, while\n> heap_vac_scan_next_block() keeps a pin on the VM page for the next\n> unskippable block. Most of the time they are the same, so we hold two\n> pins on the same buffer, but it's more convenient to manage them\n> separately.\n> \n> For readability inside heap_vac_scan_next_block(), move the logic of\n> finding the next unskippable block to separate function, and add some\n> comments.\n> \n> This refactoring will also help future patches to switch to using a\n> streaming read interface, and eventually AIO\n> (https://postgr.es/m/CA%2BhUKGJkOiOCa%2Bmag4BF%2BzHo7qo%3Do9CFheB8%3Dg6uT5TUm2gkvA%40mail.gmail.com)\n> \n> Author: Melanie Plageman, with some changes by me\n\nI'd argue you earned co-authorship by now :)\n\n> Discussion: https://postgr.es/m/CAAKRu_Yf3gvXGcCnqqfoq0Q8LX8UM-e-qbm_B1LeZh60f8WhWA%40mail.gmail.com\n> ---\n> src/backend/access/heap/vacuumlazy.c | 233 +++++++++++++++++----------\n> 1 file changed, 146 insertions(+), 87 deletions(-)\n> \n> diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n> index ac55ebd2ae..1757eb49b7 100644\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> +\n\n> /*\n> - *\tlazy_scan_skip() -- set up range of skippable blocks using visibility map.\n> + *\theap_vac_scan_next_block() -- get next block for vacuum to process\n> *\n> - * lazy_scan_heap() calls here every time it needs to set up a new range of\n> - * blocks to skip via the visibility map. Caller passes the next block in\n> - * line. We return a next_unskippable_block for this range. When there are\n> - * no skippable blocks we just return caller's next_block. 
The all-visible\n> - * status of the returned block is set in *next_unskippable_allvis for caller,\n> - * too. Block usually won't be all-visible (since it's unskippable), but it\n> - * can be during aggressive VACUUMs (as well as in certain edge cases).\n> + * lazy_scan_heap() calls here every time it needs to get the next block to\n> + * prune and vacuum. The function uses the visibility map, vacuum options,\n> + * and various thresholds to skip blocks which do not need to be processed and\n\nI wonder if \"need\" is too strong a word since this function\n(heap_vac_scan_next_block()) specifically can set blkno to a block which\ndoesn't *need* to be processed but which it chooses to process because\nof SKIP_PAGES_THRESHOLD.\n\n> + * sets blkno to the next block that actually needs to be processed.\n> *\n> - * Sets *skipping_current_range to indicate if caller should skip this range.\n> - * Costs and benefits drive our decision. Very small ranges won't be skipped.\n> + * The block number and visibility status of the next block to process are set\n> + * in *blkno and *all_visible_according_to_vm. The return value is false if\n> + * there are no further blocks to process.\n> + *\n> + * vacrel is an in/out parameter here; vacuum options and information about\n> + * the relation are read, and vacrel->skippedallvis is set to ensure we don't\n> + * advance relfrozenxid when we have skipped vacuuming all-visible blocks. It\n\nMaybe this should say when we have skipped vacuuming all-visible blocks\nwhich are not all-frozen or just blocks which are not all-frozen.\n\n> + * also holds information about the next unskippable block, as bookkeeping for\n> + * this function.\n> *\n> * Note: our opinion of which blocks can be skipped can go stale immediately.\n> * It's okay if caller \"misses\" a page whose all-visible or all-frozen marking\n\nWonder if it makes sense to move this note to\nfind_next_nunskippable_block().\n\n> @@ -1098,26 +1081,119 @@ lazy_scan_heap(LVRelState *vacrel)\n> * older XIDs/MXIDs. The vacrel->skippedallvis flag will be set here when the\n> * choice to skip such a range is actually made, making everything safe.)\n> */\n> -static BlockNumber\n> -lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,\n> -\t\t\t bool *next_unskippable_allvis, bool *skipping_current_range)\n> +static bool\n> +heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,\n> +\t\t\t\t\t\t bool *all_visible_according_to_vm)\n> {\n> -\tBlockNumber rel_pages = vacrel->rel_pages,\n> -\t\t\t\tnext_unskippable_block = next_block,\n> -\t\t\t\tnskippable_blocks = 0;\n> -\tbool\t\tskipsallvis = false;\n> +\tBlockNumber next_block;\n> \n> -\t*next_unskippable_allvis = true;\n> -\twhile (next_unskippable_block < rel_pages)\n> +\t/* relies on InvalidBlockNumber + 1 overflowing to 0 on first call */\n> +\tnext_block = vacrel->current_block + 1;\n> +\n> +\t/* Have we reached the end of the relation? 
*/\n\nNo strong opinion on this, but I wonder if being at the end of the\nrelation counts as a fourth state?\n\n> +\tif (next_block >= vacrel->rel_pages)\n> +\t{\n> +\t\tif (BufferIsValid(vacrel->next_unskippable_vmbuffer))\n> +\t\t{\n> +\t\t\tReleaseBuffer(vacrel->next_unskippable_vmbuffer);\n> +\t\t\tvacrel->next_unskippable_vmbuffer = InvalidBuffer;\n> +\t\t}\n> +\t\t*blkno = vacrel->rel_pages;\n> +\t\treturn false;\n> +\t}\n> +\n> +\t/*\n> +\t * We must be in one of the three following states:\n> +\t */\n> +\tif (vacrel->next_unskippable_block == InvalidBlockNumber ||\n> +\t\tnext_block > vacrel->next_unskippable_block)\n> +\t{\n> +\t\t/*\n> +\t\t * 1. We have just processed an unskippable block (or we're at the\n> +\t\t * beginning of the scan). Find the next unskippable block using the\n> +\t\t * visibility map.\n> +\t\t */\n\nI would reorder the options in the comment or in the if statement since\nthey seem to be in the reverse order.\n\n> +\t\tbool\t\tskipsallvis;\n> +\n> +\t\tfind_next_unskippable_block(vacrel, &skipsallvis);\n> +\n> +\t\t/*\n> +\t\t * We now know the next block that we must process. It can be the\n> +\t\t * next block after the one we just processed, or something further\n> +\t\t * ahead. If it's further ahead, we can jump to it, but we choose to\n> +\t\t * do so only if we can skip at least SKIP_PAGES_THRESHOLD consecutive\n> +\t\t * pages. Since we're reading sequentially, the OS should be doing\n> +\t\t * readahead for us, so there's no gain in skipping a page now and\n> +\t\t * then. Skipping such a range might even discourage sequential\n> +\t\t * detection.\n> +\t\t *\n> +\t\t * This test also enables more frequent relfrozenxid advancement\n> +\t\t * during non-aggressive VACUUMs. If the range has any all-visible\n> +\t\t * pages then skipping makes updating relfrozenxid unsafe, which is a\n> +\t\t * real downside.\n> +\t\t */\n> +\t\tif (vacrel->next_unskippable_block - next_block >= SKIP_PAGES_THRESHOLD)\n> +\t\t{\n> +\t\t\tnext_block = vacrel->next_unskippable_block;\n> +\t\t\tif (skipsallvis)\n> +\t\t\t\tvacrel->skippedallvis = true;\n> +\t\t}\n\n> +\n> +/*\n> + * Find the next unskippable block in a vacuum scan using the visibility map.\n\nTo expand this comment, I might mention it is a helper function for\nheap_vac_scan_next_block(). I would also say that the next unskippable\nblock and its visibility information are recorded in vacrel. 
And that\nskipsallvis is set to true if any of the intervening skipped blocks are\nnot all-frozen.\n\n> + */\n> +static void\n> +find_next_unskippable_block(LVRelState *vacrel, bool *skipsallvis)\n> +{\n> +\tBlockNumber rel_pages = vacrel->rel_pages;\n> +\tBlockNumber next_unskippable_block = vacrel->next_unskippable_block + 1;\n> +\tBuffer\t\tnext_unskippable_vmbuffer = vacrel->next_unskippable_vmbuffer;\n> +\tbool\t\tnext_unskippable_allvis;\n> +\n> +\t*skipsallvis = false;\n> +\n> +\tfor (;;)\n> \t{\n> \t\tuint8\t\tmapbits = visibilitymap_get_status(vacrel->rel,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t next_unskippable_block,\n> -\t\t\t\t\t\t\t\t\t\t\t\t\t vmbuffer);\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t &next_unskippable_vmbuffer);\n> \n> -\t\tif ((mapbits & VISIBILITYMAP_ALL_VISIBLE) == 0)\n> +\t\tnext_unskippable_allvis = (mapbits & VISIBILITYMAP_ALL_VISIBLE) != 0;\n\nOtherwise LGTM\n\n- Melanie\n\n\n", "msg_date": "Mon, 11 Mar 2024 12:15:44 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On 11/03/2024 18:15, Melanie Plageman wrote:\n> On Mon, Mar 11, 2024 at 11:29:44AM +0200, Heikki Linnakangas wrote:\n>> diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n>> index ac55ebd2ae..1757eb49b7 100644\n>> --- a/src/backend/access/heap/vacuumlazy.c\n>> +++ b/src/backend/access/heap/vacuumlazy.c\n>> +\n> \n>> /*\n>> - *\tlazy_scan_skip() -- set up range of skippable blocks using visibility map.\n>> + *\theap_vac_scan_next_block() -- get next block for vacuum to process\n>> *\n>> - * lazy_scan_heap() calls here every time it needs to set up a new range of\n>> - * blocks to skip via the visibility map. Caller passes the next block in\n>> - * line. We return a next_unskippable_block for this range. When there are\n>> - * no skippable blocks we just return caller's next_block. The all-visible\n>> - * status of the returned block is set in *next_unskippable_allvis for caller,\n>> - * too. Block usually won't be all-visible (since it's unskippable), but it\n>> - * can be during aggressive VACUUMs (as well as in certain edge cases).\n>> + * lazy_scan_heap() calls here every time it needs to get the next block to\n>> + * prune and vacuum. The function uses the visibility map, vacuum options,\n>> + * and various thresholds to skip blocks which do not need to be processed and\n>> + * sets blkno to the next block that actually needs to be processed.\n> \n> I wonder if \"need\" is too strong a word since this function\n> (heap_vac_scan_next_block()) specifically can set blkno to a block which\n> doesn't *need* to be processed but which it chooses to process because\n> of SKIP_PAGES_THRESHOLD.\n\nOk yeah, there's a lot of \"needs\" here :-). Fixed.\n\n>> *\n>> - * Sets *skipping_current_range to indicate if caller should skip this range.\n>> - * Costs and benefits drive our decision. Very small ranges won't be skipped.\n>> + * The block number and visibility status of the next block to process are set\n>> + * in *blkno and *all_visible_according_to_vm. The return value is false if\n>> + * there are no further blocks to process.\n>> + *\n>> + * vacrel is an in/out parameter here; vacuum options and information about\n>> + * the relation are read, and vacrel->skippedallvis is set to ensure we don't\n>> + * advance relfrozenxid when we have skipped vacuuming all-visible blocks. 
It\n> \n> Maybe this should say when we have skipped vacuuming all-visible blocks\n> which are not all-frozen or just blocks which are not all-frozen.\n\nOk, rephrased.\n\n>> + * also holds information about the next unskippable block, as bookkeeping for\n>> + * this function.\n>> *\n>> * Note: our opinion of which blocks can be skipped can go stale immediately.\n>> * It's okay if caller \"misses\" a page whose all-visible or all-frozen marking\n> \n> Wonder if it makes sense to move this note to\n> find_next_nunskippable_block().\n\nMoved.\n\n>> @@ -1098,26 +1081,119 @@ lazy_scan_heap(LVRelState *vacrel)\n>> * older XIDs/MXIDs. The vacrel->skippedallvis flag will be set here when the\n>> * choice to skip such a range is actually made, making everything safe.)\n>> */\n>> -static BlockNumber\n>> -lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,\n>> -\t\t\t bool *next_unskippable_allvis, bool *skipping_current_range)\n>> +static bool\n>> +heap_vac_scan_next_block(LVRelState *vacrel, BlockNumber *blkno,\n>> +\t\t\t\t\t\t bool *all_visible_according_to_vm)\n>> {\n>> -\tBlockNumber rel_pages = vacrel->rel_pages,\n>> -\t\t\t\tnext_unskippable_block = next_block,\n>> -\t\t\t\tnskippable_blocks = 0;\n>> -\tbool\t\tskipsallvis = false;\n>> +\tBlockNumber next_block;\n>> \n>> -\t*next_unskippable_allvis = true;\n>> -\twhile (next_unskippable_block < rel_pages)\n>> +\t/* relies on InvalidBlockNumber + 1 overflowing to 0 on first call */\n>> +\tnext_block = vacrel->current_block + 1;\n>> +\n>> +\t/* Have we reached the end of the relation? */\n> \n> No strong opinion on this, but I wonder if being at the end of the\n> relation counts as a fourth state?\n\nYeah, perhaps. But I think it makes sense to treat it as a special case.\n\n>> +\tif (next_block >= vacrel->rel_pages)\n>> +\t{\n>> +\t\tif (BufferIsValid(vacrel->next_unskippable_vmbuffer))\n>> +\t\t{\n>> +\t\t\tReleaseBuffer(vacrel->next_unskippable_vmbuffer);\n>> +\t\t\tvacrel->next_unskippable_vmbuffer = InvalidBuffer;\n>> +\t\t}\n>> +\t\t*blkno = vacrel->rel_pages;\n>> +\t\treturn false;\n>> +\t}\n>> +\n>> +\t/*\n>> +\t * We must be in one of the three following states:\n>> +\t */\n>> +\tif (vacrel->next_unskippable_block == InvalidBlockNumber ||\n>> +\t\tnext_block > vacrel->next_unskippable_block)\n>> +\t{\n>> +\t\t/*\n>> +\t\t * 1. We have just processed an unskippable block (or we're at the\n>> +\t\t * beginning of the scan). Find the next unskippable block using the\n>> +\t\t * visibility map.\n>> +\t\t */\n> \n> I would reorder the options in the comment or in the if statement since\n> they seem to be in the reverse order.\n\nReordered them in the statement.\n\nIt feels a bit wrong to test next_block > vacrel->next_unskippable_block \nbefore vacrel->next_unskippable_block == InvalidBlockNumber. But it \nworks, and that order makes more sense in the comment IMHO.\n\n>> +\t\tbool\t\tskipsallvis;\n>> +\n>> +\t\tfind_next_unskippable_block(vacrel, &skipsallvis);\n>> +\n>> +\t\t/*\n>> +\t\t * We now know the next block that we must process. It can be the\n>> +\t\t * next block after the one we just processed, or something further\n>> +\t\t * ahead. If it's further ahead, we can jump to it, but we choose to\n>> +\t\t * do so only if we can skip at least SKIP_PAGES_THRESHOLD consecutive\n>> +\t\t * pages. Since we're reading sequentially, the OS should be doing\n>> +\t\t * readahead for us, so there's no gain in skipping a page now and\n>> +\t\t * then. 
Skipping such a range might even discourage sequential\n>> +\t\t * detection.\n>> +\t\t *\n>> +\t\t * This test also enables more frequent relfrozenxid advancement\n>> +\t\t * during non-aggressive VACUUMs. If the range has any all-visible\n>> +\t\t * pages then skipping makes updating relfrozenxid unsafe, which is a\n>> +\t\t * real downside.\n>> +\t\t */\n>> +\t\tif (vacrel->next_unskippable_block - next_block >= SKIP_PAGES_THRESHOLD)\n>> +\t\t{\n>> +\t\t\tnext_block = vacrel->next_unskippable_block;\n>> +\t\t\tif (skipsallvis)\n>> +\t\t\t\tvacrel->skippedallvis = true;\n>> +\t\t}\n> \n>> +\n>> +/*\n>> + * Find the next unskippable block in a vacuum scan using the visibility map.\n> \n> To expand this comment, I might mention it is a helper function for\n> heap_vac_scan_next_block(). I would also say that the next unskippable\n> block and its visibility information are recorded in vacrel. And that\n> skipsallvis is set to true if any of the intervening skipped blocks are\n> not all-frozen.\n\nAdded comments.\n\n> Otherwise LGTM\n\nOk, pushed! Thank you, this is much more understandable now!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 11 Mar 2024 20:47:19 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Mon, Mar 11, 2024 at 2:47 PM Heikki Linnakangas <[email protected]> wrote:\n>\n>\n> > Otherwise LGTM\n>\n> Ok, pushed! Thank you, this is much more understandable now!\n\nCool, thanks!\n\n\n", "msg_date": "Mon, 11 Mar 2024 16:41:29 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Sun, Mar 10, 2024 at 11:01 PM Thomas Munro <[email protected]> wrote:\n>\n> On Mon, Mar 11, 2024 at 5:31 AM Melanie Plageman\n> <[email protected]> wrote:\n> > I have investigated the interaction between\n> > maintenance_io_concurrency, streaming reads, and the vacuum buffer\n> > access strategy (BAS_VACUUM).\n> >\n> > The streaming read API limits max_pinned_buffers to a pinned buffer\n> > multiplier (currently 4) * maintenance_io_concurrency buffers with the\n> > goal of constructing reads of at least MAX_BUFFERS_PER_TRANSFER size.\n> >\n> > Since the BAS_VACUUM ring buffer is size 256 kB or 32 buffers with\n> > default block size, that means that for a fully uncached vacuum in\n> > which all blocks must be vacuumed and will be dirtied, you'd have to\n> > set maintenance_io_concurrency at 8 or lower to see the same number of\n> > reuses (and shared buffer consumption) as master.\n> >\n> > Given that we allow users to specify BUFFER_USAGE_LIMIT to vacuum, it\n> > seems like we should force max_pinned_buffers to a value that\n> > guarantees the expected shared buffer usage by vacuum. But that means\n> > that maintenance_io_concurrency does not have a predictable impact on\n> > streaming read vacuum.\n> >\n> > What is the right thing to do here?\n> >\n> > At the least, the default size of the BAS_VACUUM ring buffer should be\n> > BLCKSZ * pinned_buffer_multiplier * default maintenance_io_concurrency\n> > (probably rounded up to the next power of two) bytes.\n>\n> Hmm, does the v6 look-ahead distance control algorithm mitigate that\n> problem? 
Using the ABC classifications from the streaming read\n> thread, I think for A it should now pin only 1, for B 16 and for C, it\n> depends on the size of the random 'chunks': if you have a lot of size\n> 1 random reads then it shouldn't go above 10 because of (default)\n> maintenance_io_concurrency. The only way to get up to very high\n> numbers would be to have a lot of random chunks triggering behaviour\n> C, but each made up of long runs of misses. For example one can\n> contrive a BHS query that happens to read pages 0-15 then 20-35 then\n> 40-55 etc etc so that we want to get lots of wide I/Os running\n> concurrently. Unless vacuum manages to do something like that, it\n> shouldn't be able to exceed 32 buffers very easily.\n>\n> I suspect that if we taught streaming_read.c to ask the\n> BufferAccessStrategy (if one is passed in) what its recommended pin\n> limit is (strategy->nbuffers?), we could just clamp\n> max_pinned_buffers, and it would be hard to find a workload where that\n> makes a difference, and we could think about more complicated logic\n> later.\n>\n> In other words, I think/hope your complaints about excessive pinning\n> from v5 WRT all-cached heap scans might have also already improved\n> this case by happy coincidence? I haven't tried it out though, I just\n> read your description of the problem...\n\nI've rebased the attached v10 over top of the changes to\nlazy_scan_heap() Heikki just committed and over the v6 streaming read\npatch set. I started testing them and see that you are right, we no\nlonger pin too many buffers. However, the uncached example below is\nnow slower with streaming read than on master -- it looks to be\nbecause it is doing twice as many WAL writes and syncs. I'm still\ninvestigating why that is.\n\npsql \\\n-c \"create table small (a int) with (autovacuum_enabled=false,\nfillfactor=25);\" \\\n-c \"insert into small select generate_series(1,200000) % 3;\" \\\n-c \"update small set a = 6 where a = 1;\"\n\npg_ctl stop\n# drop caches\npg_ctl start\n\npsql -c \"\\timing on\" -c \"vacuum (verbose) small\"\n\n- Melanie", "msg_date": "Mon, 11 Mar 2024 17:02:57 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Tue, Mar 12, 2024 at 10:03 AM Melanie Plageman\n<[email protected]> wrote:\n> I've rebased the attached v10 over top of the changes to\n> lazy_scan_heap() Heikki just committed and over the v6 streaming read\n> patch set. I started testing them and see that you are right, we no\n> longer pin too many buffers. However, the uncached example below is\n> now slower with streaming read than on master -- it looks to be\n> because it is doing twice as many WAL writes and syncs. I'm still\n> investigating why that is.\n\nThat makes sense to me. We have 256kB of buffers in our ring, but now\nwe're trying to read ahead 128kB at a time, so it works out that we\ncan only flush the WAL accumulated while dirtying half the blocks at a\ntime, so we flush twice as often.\n\nIf I change the ring size to 384kB, allowing for that read-ahead\nwindow, I see approximately the same WAL flushes. Surely we'd never\nbe able to get the behaviour to match *and* keep the same ring size?\nWe simply need those 16 extra buffers to have a chance of accumulating\n32 dirty buffers, and the associated WAL. 
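(Spelling the arithmetic out, assuming 8kB blocks: the 256kB ring is 32
buffers, and a 128kB read-ahead window keeps 16 of them occupied ahead
of the scan, so only about 16 buffers' worth of WAL can accumulate
before the oldest dirty buffer must be reused and its WAL flushed --
half the batch size of the unpatched case.  A 384kB ring is 48 buffers,
which puts 48 - 16 = 32 back within reach.)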
Do you see the same result,\nor do you think something more than that is wrong here?\n\nHere are some system call traces using your test that helped me see\nthe behaviour:\n\n1. Unpatched, ie no streaming read, we flush 90kB of WAL generated by\n32 pages before we write them out one at a time just before we read in\ntheir replacements. One flush covers the LSNs of all the pages that\nwill be written, even though it's only called for the first page to be\nwritten. That's because XLogFlush(lsn), if it decides to do anything,\nflushes as far as it can... IOW when we hit the *oldest* dirty block,\nthat's when we write out the WAL up to where we dirtied the *newest*\nblock, which covers the 32 pwrite() calls here:\n\npwrite(30,...,90112,0xf90000) = 90112 (0x16000)\nfdatasync(30) = 0 (0x0)\npwrite(27,...,8192,0x0) = 8192 (0x2000)\npread(27,...,8192,0x40000) = 8192 (0x2000)\npwrite(27,...,8192,0x2000) = 8192 (0x2000)\npread(27,...,8192,0x42000) = 8192 (0x2000)\npwrite(27,...,8192,0x4000) = 8192 (0x2000)\npread(27,...,8192,0x44000) = 8192 (0x2000)\npwrite(27,...,8192,0x6000) = 8192 (0x2000)\npread(27,...,8192,0x46000) = 8192 (0x2000)\npwrite(27,...,8192,0x8000) = 8192 (0x2000)\npread(27,...,8192,0x48000) = 8192 (0x2000)\npwrite(27,...,8192,0xa000) = 8192 (0x2000)\npread(27,...,8192,0x4a000) = 8192 (0x2000)\npwrite(27,...,8192,0xc000) = 8192 (0x2000)\npread(27,...,8192,0x4c000) = 8192 (0x2000)\npwrite(27,...,8192,0xe000) = 8192 (0x2000)\npread(27,...,8192,0x4e000) = 8192 (0x2000)\npwrite(27,...,8192,0x10000) = 8192 (0x2000)\npread(27,...,8192,0x50000) = 8192 (0x2000)\npwrite(27,...,8192,0x12000) = 8192 (0x2000)\npread(27,...,8192,0x52000) = 8192 (0x2000)\npwrite(27,...,8192,0x14000) = 8192 (0x2000)\npread(27,...,8192,0x54000) = 8192 (0x2000)\npwrite(27,...,8192,0x16000) = 8192 (0x2000)\npread(27,...,8192,0x56000) = 8192 (0x2000)\npwrite(27,...,8192,0x18000) = 8192 (0x2000)\npread(27,...,8192,0x58000) = 8192 (0x2000)\npwrite(27,...,8192,0x1a000) = 8192 (0x2000)\npread(27,...,8192,0x5a000) = 8192 (0x2000)\npwrite(27,...,8192,0x1c000) = 8192 (0x2000)\npread(27,...,8192,0x5c000) = 8192 (0x2000)\npwrite(27,...,8192,0x1e000) = 8192 (0x2000)\npread(27,...,8192,0x5e000) = 8192 (0x2000)\npwrite(27,...,8192,0x20000) = 8192 (0x2000)\npread(27,...,8192,0x60000) = 8192 (0x2000)\npwrite(27,...,8192,0x22000) = 8192 (0x2000)\npread(27,...,8192,0x62000) = 8192 (0x2000)\npwrite(27,...,8192,0x24000) = 8192 (0x2000)\npread(27,...,8192,0x64000) = 8192 (0x2000)\npwrite(27,...,8192,0x26000) = 8192 (0x2000)\npread(27,...,8192,0x66000) = 8192 (0x2000)\npwrite(27,...,8192,0x28000) = 8192 (0x2000)\npread(27,...,8192,0x68000) = 8192 (0x2000)\npwrite(27,...,8192,0x2a000) = 8192 (0x2000)\npread(27,...,8192,0x6a000) = 8192 (0x2000)\npwrite(27,...,8192,0x2c000) = 8192 (0x2000)\npread(27,...,8192,0x6c000) = 8192 (0x2000)\npwrite(27,...,8192,0x2e000) = 8192 (0x2000)\npread(27,...,8192,0x6e000) = 8192 (0x2000)\npwrite(27,...,8192,0x30000) = 8192 (0x2000)\npread(27,...,8192,0x70000) = 8192 (0x2000)\npwrite(27,...,8192,0x32000) = 8192 (0x2000)\npread(27,...,8192,0x72000) = 8192 (0x2000)\npwrite(27,...,8192,0x34000) = 8192 (0x2000)\npread(27,...,8192,0x74000) = 8192 (0x2000)\npwrite(27,...,8192,0x36000) = 8192 (0x2000)\npread(27,...,8192,0x76000) = 8192 (0x2000)\npwrite(27,...,8192,0x38000) = 8192 (0x2000)\npread(27,...,8192,0x78000) = 8192 (0x2000)\npwrite(27,...,8192,0x3a000) = 8192 (0x2000)\npread(27,...,8192,0x7a000) = 8192 (0x2000)\npwrite(27,...,8192,0x3c000) = 8192 (0x2000)\npread(27,...,8192,0x7c000) = 8192 
(0x2000)\npwrite(27,...,8192,0x3e000) = 8192 (0x2000)\npread(27,...,8192,0x7e000) = 8192 (0x2000)\n\n(Digression: this alternative tail-write-head-read pattern defeats the\nread-ahead and write-behind on a bunch of OSes, but not Linux because\nit only seems to worry about the reads, while other Unixes have\nwrite-behind detection too, and I believe at least some are confused\nby this pattern of tiny writes following along some distance behind\ntiny reads; Andrew Gierth figured that out after noticing poor ring\nbuffer performance, and we eventually got that fixed for one such\nsystem[1], separating the sequence detection for reads and writes.)\n\n2. With your patches, we replace all those little pread calls with\nnice wide calls, yay!, but now we only manage to write out about half\nthe amount of WAL at a time as you discovered. The repeating blocks\nof system calls now look like this, but there are twice as many of\nthem:\n\npwrite(32,...,40960,0x224000) = 40960 (0xa000)\nfdatasync(32) = 0 (0x0)\npwrite(27,...,8192,0x5c000) = 8192 (0x2000)\npreadv(27,[...],3,0x7e000) = 131072 (0x20000)\npwrite(27,...,8192,0x5e000) = 8192 (0x2000)\npwrite(27,...,8192,0x60000) = 8192 (0x2000)\npwrite(27,...,8192,0x62000) = 8192 (0x2000)\npwrite(27,...,8192,0x64000) = 8192 (0x2000)\npwrite(27,...,8192,0x66000) = 8192 (0x2000)\npwrite(27,...,8192,0x68000) = 8192 (0x2000)\npwrite(27,...,8192,0x6a000) = 8192 (0x2000)\npwrite(27,...,8192,0x6c000) = 8192 (0x2000)\npwrite(27,...,8192,0x6e000) = 8192 (0x2000)\npwrite(27,...,8192,0x70000) = 8192 (0x2000)\npwrite(27,...,8192,0x72000) = 8192 (0x2000)\npwrite(27,...,8192,0x74000) = 8192 (0x2000)\npwrite(27,...,8192,0x76000) = 8192 (0x2000)\npwrite(27,...,8192,0x78000) = 8192 (0x2000)\npwrite(27,...,8192,0x7a000) = 8192 (0x2000)\n\n3. 
With your patches and test but this time using VACUUM\n(BUFFER_USAGE_LIMIT = '384kB'), the repeating block grows bigger and\nwe get the larger WAL flushes back again, because now we're able to\ncollect 32 blocks' worth of WAL up front again:\n\npwrite(32,...,90112,0x50c000) = 90112 (0x16000)\nfdatasync(32) = 0 (0x0)\npwrite(27,...,8192,0x1dc000) = 8192 (0x2000)\npread(27,...,131072,0x21e000) = 131072 (0x20000)\npwrite(27,...,8192,0x1de000) = 8192 (0x2000)\npwrite(27,...,8192,0x1e0000) = 8192 (0x2000)\npwrite(27,...,8192,0x1e2000) = 8192 (0x2000)\npwrite(27,...,8192,0x1e4000) = 8192 (0x2000)\npwrite(27,...,8192,0x1e6000) = 8192 (0x2000)\npwrite(27,...,8192,0x1e8000) = 8192 (0x2000)\npwrite(27,...,8192,0x1ea000) = 8192 (0x2000)\npwrite(27,...,8192,0x1ec000) = 8192 (0x2000)\npwrite(27,...,8192,0x1ee000) = 8192 (0x2000)\npwrite(27,...,8192,0x1f0000) = 8192 (0x2000)\npwrite(27,...,8192,0x1f2000) = 8192 (0x2000)\npwrite(27,...,8192,0x1f4000) = 8192 (0x2000)\npwrite(27,...,8192,0x1f6000) = 8192 (0x2000)\npwrite(27,...,8192,0x1f8000) = 8192 (0x2000)\npwrite(27,...,8192,0x1fa000) = 8192 (0x2000)\npwrite(27,...,8192,0x1fc000) = 8192 (0x2000)\npreadv(27,[...],3,0x23e000) = 131072 (0x20000)\npwrite(27,...,8192,0x1fe000) = 8192 (0x2000)\npwrite(27,...,8192,0x200000) = 8192 (0x2000)\npwrite(27,...,8192,0x202000) = 8192 (0x2000)\npwrite(27,...,8192,0x204000) = 8192 (0x2000)\npwrite(27,...,8192,0x206000) = 8192 (0x2000)\npwrite(27,...,8192,0x208000) = 8192 (0x2000)\npwrite(27,...,8192,0x20a000) = 8192 (0x2000)\npwrite(27,...,8192,0x20c000) = 8192 (0x2000)\npwrite(27,...,8192,0x20e000) = 8192 (0x2000)\npwrite(27,...,8192,0x210000) = 8192 (0x2000)\npwrite(27,...,8192,0x212000) = 8192 (0x2000)\npwrite(27,...,8192,0x214000) = 8192 (0x2000)\npwrite(27,...,8192,0x216000) = 8192 (0x2000)\npwrite(27,...,8192,0x218000) = 8192 (0x2000)\npwrite(27,...,8192,0x21a000) = 8192 (0x2000)\n\n4. For learning/exploration only, I rebased my experimental vectored\nFlushBuffers() patch, which teaches the checkpointer to write relation\ndata out using smgrwritev(). The checkpointer explicitly sorts\nblocks, but I think ring buffers should naturally often contain\nconsecutive blocks in ring order. Highly experimental POC code pushed\nto a public branch[2], but I am not proposing anything here, just\ntrying to understand things. The nicest looking system call trace was\nwith BUFFER_USAGE_LIMIT set to 512kB, so it could do its writes, reads\nand WAL writes 128kB at a time:\n\npwrite(32,...,131072,0xfc6000) = 131072 (0x20000)\nfdatasync(32) = 0 (0x0)\npwrite(27,...,131072,0x6c0000) = 131072 (0x20000)\npread(27,...,131072,0x73e000) = 131072 (0x20000)\npwrite(27,...,131072,0x6e0000) = 131072 (0x20000)\npread(27,...,131072,0x75e000) = 131072 (0x20000)\npwritev(27,[...],3,0x77e000) = 131072 (0x20000)\npreadv(27,[...],3,0x77e000) = 131072 (0x20000)\n\nThat was a fun experiment, but... I recognise that efficient cleaning\nof ring buffers is a Hard Problem requiring more concurrency: it's\njust too late to be flushing that WAL. But we also don't want to\nstart writing back data immediately after dirtying pages (cf. OS\nwrite-behind for big sequential writes in traditional Unixes), because\nwe're not allowed to write data out without writing the WAL first and\nwe currently need to build up bigger WAL writes to do so efficiently\n(cf. some other systems that can write out fragments of WAL\nconcurrently so the latency-vs-throughput trade-off doesn't have to be\nso extreme). So we want to defer writing it, but not too long. 
We\nneed something cleaning our buffers (or at least flushing the\nassociated WAL, but preferably also writing the data) not too late and\nnot too early, and more in sync with our scan than the WAL writer is.\nWhat that machinery should look like I don't know (but I believe\nAndres has ideas).\n\n[1] https://github.com/freebsd/freebsd-src/commit/f2706588730a5d3b9a687ba8d4269e386650cc4f\n[2] https://github.com/macdice/postgres/tree/vectored-ring-buffer\n\n\n", "msg_date": "Sun, 17 Mar 2024 19:53:10 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Sun, Mar 17, 2024 at 2:53 AM Thomas Munro <[email protected]> wrote:\n>\n> On Tue, Mar 12, 2024 at 10:03 AM Melanie Plageman\n> <[email protected]> wrote:\n> > I've rebased the attached v10 over top of the changes to\n> > lazy_scan_heap() Heikki just committed and over the v6 streaming read\n> > patch set. I started testing them and see that you are right, we no\n> > longer pin too many buffers. However, the uncached example below is\n> > now slower with streaming read than on master -- it looks to be\n> > because it is doing twice as many WAL writes and syncs. I'm still\n> > investigating why that is.\n\n--snip--\n\n> 4. For learning/exploration only, I rebased my experimental vectored\n> FlushBuffers() patch, which teaches the checkpointer to write relation\n> data out using smgrwritev(). The checkpointer explicitly sorts\n> blocks, but I think ring buffers should naturally often contain\n> consecutive blocks in ring order. Highly experimental POC code pushed\n> to a public branch[2], but I am not proposing anything here, just\n> trying to understand things. The nicest looking system call trace was\n> with BUFFER_USAGE_LIMIT set to 512kB, so it could do its writes, reads\n> and WAL writes 128kB at a time:\n>\n> pwrite(32,...,131072,0xfc6000) = 131072 (0x20000)\n> fdatasync(32) = 0 (0x0)\n> pwrite(27,...,131072,0x6c0000) = 131072 (0x20000)\n> pread(27,...,131072,0x73e000) = 131072 (0x20000)\n> pwrite(27,...,131072,0x6e0000) = 131072 (0x20000)\n> pread(27,...,131072,0x75e000) = 131072 (0x20000)\n> pwritev(27,[...],3,0x77e000) = 131072 (0x20000)\n> preadv(27,[...],3,0x77e000) = 131072 (0x20000)\n>\n> That was a fun experiment, but... I recognise that efficient cleaning\n> of ring buffers is a Hard Problem requiring more concurrency: it's\n> just too late to be flushing that WAL. But we also don't want to\n> start writing back data immediately after dirtying pages (cf. OS\n> write-behind for big sequential writes in traditional Unixes), because\n> we're not allowed to write data out without writing the WAL first and\n> we currently need to build up bigger WAL writes to do so efficiently\n> (cf. some other systems that can write out fragments of WAL\n> concurrently so the latency-vs-throughput trade-off doesn't have to be\n> so extreme). So we want to defer writing it, but not too long. We\n> need something cleaning our buffers (or at least flushing the\n> associated WAL, but preferably also writing the data) not too late and\n> not too early, and more in sync with our scan than the WAL writer is.\n> What that machinery should look like I don't know (but I believe\n> Andres has ideas).\n\nI've attached a WIP v11 streaming vacuum patch set here that is\nrebased over master (by Thomas), so that I could add a CF entry for\nit. It still has the problem with the extra WAL write and fsync calls\ninvestigated by Thomas above. 
Thomas has some work in progress doing\nstreaming write-behind to alleviate the issues with the buffer access\nstrategy and streaming reads. When he gets a version of that ready to\nshare, he will start a new \"Streaming Vacuum\" thread.\n\n- Melanie", "msg_date": "Fri, 28 Jun 2024 17:36:25 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Fri, Jun 28, 2024 at 05:36:25PM -0400, Melanie Plageman wrote:\n> I've attached a WIP v11 streaming vacuum patch set here that is\n> rebased over master (by Thomas), so that I could add a CF entry for\n> it. It still has the problem with the extra WAL write and fsync calls\n> investigated by Thomas above. Thomas has some work in progress doing\n> streaming write-behind to alleviate the issues with the buffer access\n> strategy and streaming reads. When he gets a version of that ready to\n> share, he will start a new \"Streaming Vacuum\" thread.\n\nTo avoid reviewing the wrong patch, I'm writing to verify the status here.\nThis is Needs Review in the commitfest. I think one of these two holds:\n\n1. Needs Review is valid.\n2. It's actually Waiting on Author. You're commissioning a review of the\n future-thread patch, not this one.\n\nIf it's (1), given the WIP marking, what is the scope of the review you seek?\nI'm guessing performance is out of scope; what else is in or out of scope?\n\n\n", "msg_date": "Sun, 7 Jul 2024 07:49:44 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Sun, Jul 7, 2024 at 10:49 AM Noah Misch <[email protected]> wrote:\n>\n> On Fri, Jun 28, 2024 at 05:36:25PM -0400, Melanie Plageman wrote:\n> > I've attached a WIP v11 streaming vacuum patch set here that is\n> > rebased over master (by Thomas), so that I could add a CF entry for\n> > it. It still has the problem with the extra WAL write and fsync calls\n> > investigated by Thomas above. Thomas has some work in progress doing\n> > streaming write-behind to alleviate the issues with the buffer access\n> > strategy and streaming reads. When he gets a version of that ready to\n> > share, he will start a new \"Streaming Vacuum\" thread.\n>\n> To avoid reviewing the wrong patch, I'm writing to verify the status here.\n> This is Needs Review in the commitfest. I think one of these two holds:\n>\n> 1. Needs Review is valid.\n> 2. It's actually Waiting on Author. You're commissioning a review of the\n> future-thread patch, not this one.\n>\n> If it's (1), given the WIP marking, what is the scope of the review you seek?\n> I'm guessing performance is out of scope; what else is in or out of scope?\n\nAh, you're right. I moved it to \"Waiting on Author\" as we are waiting\non Thomas' version which has a fix for the extra WAL write/sync\nbehavior.\n\nSorry for the \"Needs Review\" noise!\n\n- Melanie\n\n\n", "msg_date": "Mon, 8 Jul 2024 14:16:13 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Mon, Jul 8, 2024 at 2:49 AM Noah Misch <[email protected]> wrote:\n> what is the scope of the review you seek?\n\nThe patch \"Refactor tidstore.c memory management.\" could definitely\nuse some review. I wasn't sure if that should be proposed in a new\nthread of its own, but then the need for it comes from this\nstreamifying project, so... 
The basic problem was that we want to\nbuild up a stream of block to be vacuumed (so that we can perform the\nI/O combining etc) + some extra data attached to each buffer, in this\ncase the TID list, but the implementation of tidstore.c in master\nwould require us to make an extra intermediate copy of the TIDs,\nbecause it keeps overwriting its internal buffer. The proposal here\nis to make it so that you can get get a tiny copyable object that can\nlater be used to retrieve the data into a caller-supplied buffer, so\nthat tidstore.c's iterator machinery doesn't have to have its own\ninternal buffer at all, and then calling code can safely queue up a\nfew of these at once.\n\n\n", "msg_date": "Mon, 15 Jul 2024 15:26:32 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Mon, Jul 15, 2024 at 03:26:32PM +1200, Thomas Munro wrote:\n> On Mon, Jul 8, 2024 at 2:49 AM Noah Misch <[email protected]> wrote:\n> > what is the scope of the review you seek?\n> \n> The patch \"Refactor tidstore.c memory management.\" could definitely\n> use some review.\n\nThat's reasonable. radixtree already forbids mutations concurrent with\niteration, so there's no new concurrency hazard. One alternative is\nper_buffer_data big enough for MaxOffsetNumber, but that might thrash caches\nmeasurably. That patch is good to go apart from these trivialities:\n\n> -\treturn &(iter->output);\n> +\treturn &iter->output;\n\nThis cosmetic change is orthogonal to the patch's mission.\n\n> -\t\tfor (wordnum = 0; wordnum < page->header.nwords; wordnum++)\n> +\t\tfor (int wordnum = 0; wordnum < page->header.nwords; wordnum++)\n\nLikewise.\n\n\n", "msg_date": "Mon, 15 Jul 2024 18:52:26 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" }, { "msg_contents": "On Tue, Jul 16, 2024 at 1:52 PM Noah Misch <[email protected]> wrote:\n> On Mon, Jul 15, 2024 at 03:26:32PM +1200, Thomas Munro wrote:\n> That's reasonable. radixtree already forbids mutations concurrent with\n> iteration, so there's no new concurrency hazard. One alternative is\n> per_buffer_data big enough for MaxOffsetNumber, but that might thrash caches\n> measurably. That patch is good to go apart from these trivialities:\n\nThanks! I have pushed that patch, without those changes you didn't like.\n\nHere's are Melanie's patches again. They work, and the WAL flush\nfrequency problem is mostly gone since we increased the BAS_VACUUM\ndefault ring size (commit 98f320eb), but I'm still looking into how\nthis read-ahead and the write-behind generated by vacuum (using\npatches not yet posted) should interact with each other and the ring\nsystem, and bouncing ideas around about that with my colleagues. More\non that soon, hopefully. I suspect that there won't be changes to\nthese patches as a result, but I still want to hold off for a bit.", "msg_date": "Wed, 24 Jul 2024 17:40:12 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confine vacuum skip logic to lazy_scan_skip" } ]
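The tidstore change Thomas sketches at the end of the vacuum thread above is easier to picture with a small standalone example. The names and data below are hypothetical (this is not the tidstore.c or radix tree API); it is only meant to show the shape of the interface he describes: iteration hands back a tiny copyable result instead of filling one internal buffer, and the per-block offsets are retrieved later into storage the caller owns, so several results can be queued up (for example, one per in-flight buffer in a read stream) without clobbering each other.

```c
#include <stdio.h>

#define MAX_OFFSETS 8

/* toy stand-in for the shared store being iterated */
typedef struct
{
    unsigned       blockno;
    int            noffsets;
    unsigned short offsets[MAX_OFFSETS];
} StoreBlock;

static const StoreBlock store[] = {
    {10, 2, {1, 4}},
    {11, 3, {2, 5, 7}},
    {12, 1, {6}},
};

/* tiny, cheaply copyable iteration result: no pointer into a shared buffer */
typedef struct
{
    unsigned blockno;
    int      slot;              /* where in the store to find the offsets later */
} IterResult;

static IterResult
iter_next(int step)
{
    IterResult r = {store[step].blockno, step};

    return r;
}

/* later: retrieve the offsets into storage the caller owns */
static int
get_block_offsets(const IterResult *r, unsigned short *out, int max)
{
    const StoreBlock *b = &store[r->slot];
    int         n = b->noffsets < max ? b->noffsets : max;

    for (int i = 0; i < n; i++)
        out[i] = b->offsets[i];
    return n;
}

int
main(void)
{
    IterResult  queued[3];
    unsigned short offsets[MAX_OFFSETS];

    /* queue several results up front, as streaming read-ahead would */
    for (int i = 0; i < 3; i++)
        queued[i] = iter_next(i);

    /* consume them later; nothing was overwritten in the meantime */
    for (int i = 0; i < 3; i++)
    {
        int         n = get_block_offsets(&queued[i], offsets, MAX_OFFSETS);

        printf("block %u: %d dead item offsets\n", queued[i].blockno, n);
    }
    return 0;
}
```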
[ { "msg_contents": "Hi,\n(CC-ing Bruce)\n\nAs of this new year, and in the vein of c8e1ba736b2b. Bruce, are you\nplanning an update of the copyright dates with a run of\n./src/tools/copyright.pl on HEAD, and the smallish updates of the back\nbranches?\n\nAnd of course, Happy New Year to all!\n--\nMichael", "msg_date": "Mon, 1 Jan 2024 19:25:16 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Update for copyright messages to 2024 (Happy New Year!)" }, { "msg_contents": "On Mon, Jan 1, 2024 at 07:25:16PM +0900, Michael Paquier wrote:\n> Hi,\n> (CC-ing Bruce)\n> \n> As of this new year, and in the vein of c8e1ba736b2b. Bruce, are you\n> planning an update of the copyright dates with a run of\n> ./src/tools/copyright.pl on HEAD, and the smallish updates of the back\n> branches?\n> \n> And of course, Happy New Year to all!\n\nDone, sorry for the delay.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 3 Jan 2024 20:49:17 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update for copyright messages to 2024 (Happy New Year!)" } ]
[ { "msg_contents": "I happened to notice that there is a not-quite-theoretical crash\nhazard in spcache_init(). If we see that SPCACHE_RESET_THRESHOLD\nis exceeded and decide to reset the cache, but then nsphash_create\nfails for some reason (perhaps OOM), an error will be thrown\nleaving the SearchPathCache pointer pointing at already-freed\nmemory. Next time through, we'll try to dereference that dangling\npointer, potentially causing SIGSEGV, or worse we might find a\nvalue less than SPCACHE_RESET_THRESHOLD and decide that the cache\nis okay despite having been freed.\n\nThe fix of course is to make sure we reset the pointer variables\n*before* the MemoryContextReset.\n\nI also observed that the code seems to have been run through\npgindent without fixing typedefs.list, making various places\nuglier than they should be.\n\nThe attached proposed cleanup patch fixes those things and in\npassing improves (IMO anyway) some comments. I assume it wasn't\nintentional to leave two copies of the same comment block in\ncheck_search_path().\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 01 Jan 2024 16:38:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Minor cleanup for search path cache" }, { "msg_contents": "Hi,\n\n\nZhang Mingli\nwww.hashdata.xyz\nOn Jan 2, 2024 at 05:38 +0800, Tom Lane <[email protected]>, wrote:\n> I happened to notice that there is a not-quite-theoretical crash\n> hazard in spcache_init(). If we see that SPCACHE_RESET_THRESHOLD\n> is exceeded and decide to reset the cache, but then nsphash_create\n> fails for some reason (perhaps OOM), an error will be thrown\n> leaving the SearchPathCache pointer pointing at already-freed\n> memory. Next time through, we'll try to dereference that dangling\n> pointer, potentially causing SIGSEGV, or worse we might find a\n> value less than SPCACHE_RESET_THRESHOLD and decide that the cache\n> is okay despite having been freed.\n>\n> The fix of course is to make sure we reset the pointer variables\n> *before* the MemoryContextReset.\n>\n> I also observed that the code seems to have been run through\n> pgindent without fixing typedefs.list, making various places\n> uglier than they should be.\n>\n> The attached proposed cleanup patch fixes those things and in\n> passing improves (IMO anyway) some comments. I assume it wasn't\n> intentional to leave two copies of the same comment block in\n> check_search_path().\n>\n> regards, tom lane\n>\nOnly me?\n\nzml@localhashdata postgres % git apply minor-search-path-cache-cleanup.patch\nerror: patch failed: src/backend/catalog/namespace.c:156\nerror: src/backend/catalog/namespace.c: patch does not apply\nerror: patch failed: src/tools/pgindent/typedefs.list:2479\nerror: src/tools/pgindent/typedefs.list: patch does not apply\n\nI’m in commit 9a17be1e24 Allow upgrades to preserve the full subscription's state\n\n\n\n\n\n\n\nHi,\n\n\n\n\nZhang Mingli\nwww.hashdata.xyz\n\n\n\nOn Jan 2, 2024 at 05:38 +0800, Tom Lane <[email protected]>, wrote:\nI happened to notice that there is a not-quite-theoretical crash\nhazard in spcache_init(). If we see that SPCACHE_RESET_THRESHOLD\nis exceeded and decide to reset the cache, but then nsphash_create\nfails for some reason (perhaps OOM), an error will be thrown\nleaving the SearchPathCache pointer pointing at already-freed\nmemory. 
Next time through, we'll try to dereference that dangling\npointer, potentially causing SIGSEGV, or worse we might find a\nvalue less than SPCACHE_RESET_THRESHOLD and decide that the cache\nis okay despite having been freed.\n\nThe fix of course is to make sure we reset the pointer variables\n*before* the MemoryContextReset.\n\nI also observed that the code seems to have been run through\npgindent without fixing typedefs.list, making various places\nuglier than they should be.\n\nThe attached proposed cleanup patch fixes those things and in\npassing improves (IMO anyway) some comments. I assume it wasn't\nintentional to leave two copies of the same comment block in\ncheck_search_path().\n\nregards, tom lane\n\nOnly me?\n\nzml@localhashdata postgres % git apply minor-search-path-cache-cleanup.patch\nerror: patch failed: src/backend/catalog/namespace.c:156\nerror: src/backend/catalog/namespace.c: patch does not apply\nerror: patch failed: src/tools/pgindent/typedefs.list:2479\nerror: src/tools/pgindent/typedefs.list: patch does not apply\n\nI’m in commit 9a17be1e24 Allow upgrades to preserve the full subscription's state", "msg_date": "Tue, 2 Jan 2024 13:12:22 +0800", "msg_from": "Zhang Mingli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Minor cleanup for search path cache" }, { "msg_contents": "Zhang Mingli <[email protected]> writes:\n> Only me?\n\n> zml@localhashdata postgres % git apply minor-search-path-cache-cleanup.patch\n> error: patch failed: src/backend/catalog/namespace.c:156\n> error: src/backend/catalog/namespace.c: patch does not apply\n> error: patch failed: src/tools/pgindent/typedefs.list:2479\n> error: src/tools/pgindent/typedefs.list: patch does not apply\n\nUse patch(1). git-apply is extremely fragile.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jan 2024 00:20:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Minor cleanup for search path cache" }, { "msg_contents": "On Mon, 2024-01-01 at 16:38 -0500, Tom Lane wrote:\n> I happened to notice that there is a not-quite-theoretical crash\n> hazard in spcache_init().  If we see that SPCACHE_RESET_THRESHOLD\n> is exceeded and decide to reset the cache, but then nsphash_create\n> fails for some reason (perhaps OOM), an error will be thrown\n> leaving the SearchPathCache pointer pointing at already-freed\n> memory.\n\nGood catch, thank you. I tried to avoid OOM hazards (e.g. b282fa88df,\n8efa301532), but I missed this one.\n\n> I also observed that the code seems to have been run through\n> pgindent without fixing typedefs.list, making various places\n> uglier than they should be.\n> \n> The attached proposed cleanup patch fixes those things and in\n> passing improves (IMO anyway) some comments.  I assume it wasn't\n> intentional to leave two copies of the same comment block in\n> check_search_path().\n\nLooks good to me.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 02 Jan 2024 11:20:24 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Minor cleanup for search path cache" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> Looks good to me.\n\nThanks for reviewing, will push shortly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jan 2024 14:30:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Minor cleanup for search path cache" } ]
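The fix described in the search-path-cache thread above comes down to an ordering rule that is easy to get wrong in any cache kept in a resettable memory region. The standalone C toy below uses hypothetical names: free() stands in for MemoryContextReset(), and the real failure path is a nonlocal elog(ERROR) jump rather than a NULL return. It only illustrates the shape of the safe ordering: forget the cache pointer first, release the old memory, and only then try to rebuild, so a failure while rebuilding cannot leave a pointer to freed memory behind.

```c
#include <stdlib.h>

struct cache
{
    int         nentries;
};

static struct cache *SearchPathCache = NULL;    /* hypothetical stand-in */

/* may fail, like nsphash_create() under memory pressure */
static struct cache *
cache_create(void)
{
    return calloc(1, sizeof(struct cache));
}

static void
cache_init(void)
{
    if (SearchPathCache != NULL)
    {
        struct cache *old = SearchPathCache;

        /*
         * Forget the pointer before releasing the memory.  In PostgreSQL the
         * failure below would be reported with a nonlocal ERROR jump, so if
         * this assignment happened after the reset instead, the stale
         * pointer would survive into the next call.
         */
        SearchPathCache = NULL;
        free(old);              /* stand-in for MemoryContextReset() */
    }

    SearchPathCache = cache_create();
    if (SearchPathCache == NULL)
        return;                 /* error path: no dangling pointer left */

    SearchPathCache->nentries = 0;
}

int
main(void)
{
    cache_init();
    cache_init();               /* safe to call again, even after a failure */
    free(SearchPathCache);
    return 0;
}
```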
[ { "msg_contents": "Hello postgres hackers:\nI recently came across a scenario involving system catalog \"pg_default_acl\"\nwhere a tuple contains a NULL value for the \"defaclacl\" attribute. This can cause\nconfusion while dropping a role whose default ACL has been changed.\nHere is a way to reproduce that:\n\n``` example\npostgres=# create user adminuser;\nCREATE ROLE\npostgres=# create user normaluser;\nCREATE ROLE\npostgres=# alter default privileges for role adminuser grant all on tables to normaluser;\nALTER DEFAULT PRIVILEGES\npostgres=# alter default privileges for role adminuser revoke all ON tables from adminuser;\nALTER DEFAULT PRIVILEGES\npostgres=# alter default privileges for role adminuser revoke all ON tables from normaluser;\nALTER DEFAULT PRIVILEGES\npostgres=# select * from pg_default_acl where pg_get_userbyid(defaclrole) = 'adminuser';\n oid | defaclrole | defaclnamespace | defaclobjtype | defaclacl\n-------+------------+-----------------+---------------+-----------\n 16396 | 16394 | 0 | r | {}\n(1 row)\npostgres=# drop user adminuser ;\nERROR: role \"adminuser\" cannot be dropped because some objects depend on it\nDETAIL: owner of default privileges on new relations belonging to role adminuser\n```\n\nI believe this is a bug since the tuple can be deleted if we revoke from \"normaluser\"\nfirst. Besides, according to the document:\nhttps://www.postgresql.org/docs/current/sql-alterdefaultprivileges.html\n> If you wish to drop a role for which the default privileges have been altered,\n> it is necessary to reverse the changes in its default privileges or use DROP OWNED BY\n> to get rid of the default privileges entry for the role.\nThere must be a way to \"reverse the changes\", but NULL value of \"defaclacl\"\nprevents it. Luckily, \"DROP OWNED BY\" works well.\n\nThe code-level reason could be that the function \"SetDefaultACL\" doesn't handle\nthe situation where \"new_acl\" is NULL. So I present a simple patch here.\n\ndiff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c\nindex 01ff575b093..0e313526b28 100644\n--- a/src/backend/catalog/aclchk.c\n+++ b/src/backend/catalog/aclchk.c\n@@ -1335,7 +1335,7 @@ SetDefaultACL(InternalDefaultACL *iacls)\n */\n aclitemsort(new_acl);\n aclitemsort(def_acl);\n- if (aclequal(new_acl, def_acl))\n+ if (aclequal(new_acl, def_acl) || ACL_NUM(new_acl) == 0)\n {\n /* delete old entry, if indeed there is one */\n if (!isNew)\n\nBest regards,\nBoyu Yang\nHello postgres hackers:I recently came across a scenario involving system catalog \"pg_default_acl\"where a tuple contains a NULL value for the \"defaclacl\" attribute. 
This can causeconfusion while dropping a role whose default ACL has been changed.Here is a way to reproduce that:``` examplepostgres=# create user adminuser;CREATE ROLEpostgres=# create user normaluser;CREATE ROLEpostgres=# alter default privileges for role adminuser grant all on tables to normaluser;ALTER DEFAULT PRIVILEGESpostgres=# alter default privileges for role adminuser revoke all ON tables from adminuser;ALTER DEFAULT PRIVILEGESpostgres=# alter default privileges for role adminuser revoke all ON tables from normaluser;ALTER DEFAULT PRIVILEGESpostgres=# select * from pg_default_acl where pg_get_userbyid(defaclrole) = 'adminuser';  oid  | defaclrole | defaclnamespace | defaclobjtype | defaclacl-------+------------+-----------------+---------------+----------- 16396 |      16394 |               0 | r             | {}(1 row)postgres=# drop user adminuser ;ERROR:  role \"adminuser\" cannot be dropped because some objects depend on itDETAIL:  owner of default privileges on new relations belonging to role adminuser```I believe this is a bug since the tuple can be deleted if we revoke from \"normaluser\"first. Besides, according to the document:https://www.postgresql.org/docs/current/sql-alterdefaultprivileges.html> If you wish to drop a role for which the default privileges have been altered,> it is necessary to reverse the changes in its default privileges or use DROP OWNED BY> to get rid of the default privileges entry for the role.There must be a way to \"reverse the changes\", but NULL value of \"defaclacl\"prevents it. Luckily, \"DROP OWNED BY\" works well.The code-level reason could be that the function \"SetDefaultACL\" doesn't handlethe situation where \"new_acl\" is NULL. So I present a simple patch here.diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.cindex 01ff575b093..0e313526b28 100644--- a/src/backend/catalog/aclchk.c+++ b/src/backend/catalog/aclchk.c@@ -1335,7 +1335,7 @@ SetDefaultACL(InternalDefaultACL *iacls)   */  aclitemsort(new_acl);  aclitemsort(def_acl);- if (aclequal(new_acl, def_acl))+ if (aclequal(new_acl, def_acl) || ACL_NUM(new_acl) == 0)  {   /* delete old entry, if indeed there is one */   if (!isNew)Best regards,Boyu Yang", "msg_date": "Tue, 02 Jan 2024 14:33:07 +0800", "msg_from": "\"=?UTF-8?B?5p2o5Lyv5a6HKOmVv+Wggik=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?VGhlIHByZXNlbmNlIG9mIGEgTlVMTCAiZGVmYWNsYWNsIiB2YWx1ZSBpbiBwZ19kZWZhdWx0?=\n =?UTF-8?B?X2FjbCBwcmV2ZW50cyB0aGUgZHJvcHBpbmcgb2YgYSByb2xlLg==?=" }, { "msg_contents": "\"=?UTF-8?B?5p2o5Lyv5a6HKOmVv+Wggik=?=\" <[email protected]> writes:\n> postgres=# create user adminuser;\n> CREATE ROLE\n> postgres=# create user normaluser;\n> CREATE ROLE\n> postgres=# alter default privileges for role adminuser grant all on tables to normaluser;\n> ALTER DEFAULT PRIVILEGES\n> postgres=# alter default privileges for role adminuser revoke all ON tables from adminuser;\n> ALTER DEFAULT PRIVILEGES\n> postgres=# alter default privileges for role adminuser revoke all ON tables from normaluser;\n> ALTER DEFAULT PRIVILEGES\n> postgres=# select * from pg_default_acl where pg_get_userbyid(defaclrole) = 'adminuser';\n> oid | defaclrole | defaclnamespace | defaclobjtype | defaclacl\n> -------+------------+-----------------+---------------+-----------\n> 16396 | 16394 | 0 | r | {}\n> (1 row)\n> postgres=# drop user adminuser ;\n> ERROR: role \"adminuser\" cannot be dropped because some objects depend on it\n> DETAIL: owner of default privileges on new relations belonging to 
role adminuser\n\nThis looks perfectly normal to me: the privileges for 'adminuser'\nitself are not at the default state. If you then do\n\nregression=# alter default privileges for role adminuser grant all on tables to adminuser ;\nALTER DEFAULT PRIVILEGES\n\nthen things are back to normal, and the pg_default_acl entry goes away:\n\nregression=# select * from pg_default_acl;\n oid | defaclrole | defaclnamespace | defaclobjtype | defaclacl \n-----+------------+-----------------+---------------+-----------\n(0 rows)\n\nand you can drop the user:\n\nregression=# drop user adminuser ;\nDROP ROLE\n\nYou could argue that there's no need to be picky about an entry that\nonly controls privileges for the user-to-be-dropped, but it is working\nas designed and documented.\n\nI fear your proposed patch is likely to break more things than it fixes.\nIn particular it looks like it would forget the existence of the\nuser's self-revocation altogether, even before the drop of the user.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Jan 2024 11:56:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The presence of a NULL \"defaclacl\" value in pg_default_acl\n prevents the dropping of a role." } ]
[ { "msg_contents": "In the following paragraph in information_schema:\n\n <term>character encoding form</term>\n <listitem>\n <para>\n An encoding of some character repertoire. Most older character\n repertoires only use one encoding form, and so there are no\n separate names for them (e.g., <literal>LATIN1</literal> is an\n encoding form applicable to the <literal>LATIN1</literal>\n repertoire). But for example Unicode has the encoding forms\n <literal>UTF8</literal>, <literal>UTF16</literal>, etc. (not\n all supported by PostgreSQL). Encoding forms are not exposed\n as an SQL object, but are visible in this view.\n\nThis claims that the LATIN1 repertoire only uses one encoding form,\nbut actually LATIN1 can be encoded in another form: ISO-2022-JP-2 (a 7\nbit encoding. See RFC 1554\n(https://datatracker.ietf.org/doc/html/rfc1554) for more details).\n\nIf we still want to list a use-one-encoding-form example, probably we\ncould use LATIN2 instead or others that are not supported by\nISO-2022-JP-2 (ISO-2022-JP-2 supports LATIN1 and LATIN7).\n\nAttached is the patch that does this.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 02 Jan 2024 15:39:25 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "INFORMATION_SCHEMA node" }, { "msg_contents": "(typo in the subject fixed)\n\n> In the following paragraph in information_schema:\n> \n> <term>character encoding form</term>\n> <listitem>\n> <para>\n> An encoding of some character repertoire. Most older character\n> repertoires only use one encoding form, and so there are no\n> separate names for them (e.g., <literal>LATIN1</literal> is an\n> encoding form applicable to the <literal>LATIN1</literal>\n> repertoire). But for example Unicode has the encoding forms\n> <literal>UTF8</literal>, <literal>UTF16</literal>, etc. (not\n> all supported by PostgreSQL). Encoding forms are not exposed\n> as an SQL object, but are visible in this view.\n> \n> This claims that the LATIN1 repertoire only uses one encoding form,\n> but actually LATIN1 can be encoded in another form: ISO-2022-JP-2 (a 7\n> bit encoding. 
See RFC 1554\n> (https://datatracker.ietf.org/doc/html/rfc1554) for more details).\n> \n> If we still want to list a use-one-encoding-form example, probably we\n> could use LATIN2 instead or others that are not supported by\n> ISO-2022-JP-2 (ISO-2022-JP-2 supports LATIN1 and LATIN7).\n> \n> Attached is the patch that does this.\n\nAny objection?\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Thu, 04 Jan 2024 21:39:46 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: INFORMATION_SCHEMA note" }, { "msg_contents": "> On 4 Jan 2024, at 13:39, Tatsuo Ishii <[email protected]> wrote:\n\n>> Attached is the patch that does this.\n\nI don't think the patch was attached?\n\n> Any objection?\n\nI didn't study the RFC in depth but as expected it seems to back up your change\nso the change seems reasonable.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 8 Jan 2024 13:48:54 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INFORMATION_SCHEMA note" }, { "msg_contents": ">> On 4 Jan 2024, at 13:39, Tatsuo Ishii <[email protected]> wrote:\n> \n>>> Attached is the patch that does this.\n> \n> I don't think the patch was attached?\n> \n>> Any objection?\n> \n> I didn't study the RFC in depth but as expected it seems to back up your change\n> so the change seems reasonable.\n\nOops. Sorry. Patch attached.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Tue, 09 Jan 2024 08:54:11 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: INFORMATION_SCHEMA note" }, { "msg_contents": "> On 9 Jan 2024, at 00:54, Tatsuo Ishii <[email protected]> wrote:\n> \n>>> On 4 Jan 2024, at 13:39, Tatsuo Ishii <[email protected]> wrote:\n>> \n>>>> Attached is the patch that does this.\n>> \n>> I don't think the patch was attached?\n>> \n>>> Any objection?\n>> \n>> I didn't study the RFC in depth but as expected it seems to back up your change\n>> so the change seems reasonable.\n> \n> Oops. Sorry. Patch attached.\n\nThat's exactly what I expected it to be, and it LGTM.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 9 Jan 2024 08:26:29 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INFORMATION_SCHEMA note" }, { "msg_contents": ">> On 9 Jan 2024, at 00:54, Tatsuo Ishii <[email protected]> wrote:\n>> \n>>>> On 4 Jan 2024, at 13:39, Tatsuo Ishii <[email protected]> wrote:\n>>> \n>>>>> Attached is the patch that does this.\n>>> \n>>> I don't think the patch was attached?\n>>> \n>>>> Any objection?\n>>> \n>>> I didn't study the RFC in depth but as expected it seems to back up your change\n>>> so the change seems reasonable.\n>> \n>> Oops. Sorry. Patch attached.\n> \n> That's exactly what I expected it to be, and it LGTM.\n\nThanks for looking into it. Pushed to all supported branches.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Tue, 09 Jan 2024 20:08:45 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: INFORMATION_SCHEMA note" } ]
[ { "msg_contents": "(new thread)\n\nOn Tue, Jan 02, 2024 at 10:34:11AM -0500, Robert Haas wrote:\n> On Wed, Dec 27, 2023 at 10:36 AM Nathan Bossart\n> <[email protected]> wrote:\n>> Thanks! I also noticed that WALSummarizerLock probably needs a mention in\n>> wait_event_names.txt.\n> \n> Fixed.\n\nI think we're supposed to omit the \"Lock\" suffix in wait_event_names.txt.\n\n> It seems like it would be good if there were an automated cross-check\n> between lwlocknames.txt and wait_event_names.txt.\n\n+1. Here's a hastily-thrown-together patch for that. I basically copied\n003_check_guc.pl and adjusted it for this purpose. This test only checks\nthat everything in lwlocknames.txt has a matching entry in\nwait_event_names.txt. It doesn't check that everything in the predefined\nLWLock section of wait_event_names.txt has an entry in lwlocknames.txt.\nAFAICT that would be a little more difficult because you can't distinguish\nbetween the two in pg_wait_events.\n\nEven with this test, I worry that we could easily forget to add entries in\nwait_event_names.txt for the non-predefined locks, but I don't presently\nhave a proposal for how to prevent that.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 2 Jan 2024 11:31:20 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "On Tue, Jan 2, 2024 at 12:31 PM Nathan Bossart <[email protected]> wrote:\n> I think we're supposed to omit the \"Lock\" suffix in wait_event_names.txt.\n\nUgh, sorry. But also, why in the world?\n\n> > It seems like it would be good if there were an automated cross-check\n> > between lwlocknames.txt and wait_event_names.txt.\n>\n> +1. Here's a hastily-thrown-together patch for that. I basically copied\n> 003_check_guc.pl and adjusted it for this purpose. This test only checks\n> that everything in lwlocknames.txt has a matching entry in\n> wait_event_names.txt. It doesn't check that everything in the predefined\n> LWLock section of wait_event_names.txt has an entry in lwlocknames.txt.\n> AFAICT that would be a little more difficult because you can't distinguish\n> between the two in pg_wait_events.\n>\n> Even with this test, I worry that we could easily forget to add entries in\n> wait_event_names.txt for the non-predefined locks, but I don't presently\n> have a proposal for how to prevent that.\n\nIt certainly seems better to check what we can than to check nothing.\n\nSuggestions:\n\n- Check in both directions instead of just one?\n\n- Verify ordering?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jan 2024 13:13:16 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "On Tue, Jan 02, 2024 at 01:13:16PM -0500, Robert Haas wrote:\n> On Tue, Jan 2, 2024 at 12:31 PM Nathan Bossart <[email protected]> wrote:\n>> I think we're supposed to omit the \"Lock\" suffix in wait_event_names.txt.\n> \n> Ugh, sorry. But also, why in the world?\n\nThat seems to date back to commit 14a9101. I can agree that the suffix is\nsomewhat redundant since these are already marked as type \"LWLock\", but\nI'll admit I've been surprised by this before, too. 
IMHO it makes this\nproposed test more important because you can't just grep for a different\nlock to find all the places you need to update.\n\n> - Check in both directions instead of just one?\n> \n> - Verify ordering?\n\nTo do those things, I'd probably move the test to one of the scripts that\ngenerates the documentation or header file (pg_wait_events doesn't tell us\nwhether a lock is predefined or what order it's listed in). That'd cause\nfailures at build time instead of during testing, which might be kind of\nnice, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 2 Jan 2024 15:45:44 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "On Tue, Jan 02, 2024 at 11:31:20AM -0600, Nathan Bossart wrote:\n> +# Find the location of lwlocknames.h.\n> +my $include_dir = $node->config_data('--includedir');\n> +my $lwlocknames_file = \"$include_dir/server/storage/lwlocknames.h\";\n\nI am afraid that this is incorrect because an installation could\ndecide to install server-side headers in a different path than\n$include/server/. Using --includedir-server would be the correct\nanswer, appending \"storage/lwlocknames.h\" to the path retrieved from\npg_config.\n--\nMichael", "msg_date": "Wed, 3 Jan 2024 11:34:25 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "On Tue, Jan 2, 2024 at 4:45 PM Nathan Bossart <[email protected]> wrote:\n> That seems to date back to commit 14a9101. I can agree that the suffix is\n> somewhat redundant since these are already marked as type \"LWLock\", but\n> I'll admit I've been surprised by this before, too. IMHO it makes this\n> proposed test more important because you can't just grep for a different\n> lock to find all the places you need to update.\n\nI agree. I am pretty sure that the reason this happened in the first\nplace is that I grepped for the name of some other LWLock and adjusted\nthings for the new lock at every place where that found a hit.\n\n> > - Check in both directions instead of just one?\n> >\n> > - Verify ordering?\n>\n> To do those things, I'd probably move the test to one of the scripts that\n> generates the documentation or header file (pg_wait_events doesn't tell us\n> whether a lock is predefined or what order it's listed in). That'd cause\n> failures at build time instead of during testing, which might be kind of\n> nice, too.\n\nYeah, I think that would be better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 2 Jan 2024 22:49:03 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "Hi,\n\nOn Tue, Jan 02, 2024 at 10:49:03PM -0500, Robert Haas wrote:\n> On Tue, Jan 2, 2024 at 4:45 PM Nathan Bossart <[email protected]> wrote:\n> > That seems to date back to commit 14a9101. I can agree that the suffix is\n> > somewhat redundant since these are already marked as type \"LWLock\", but\n> > I'll admit I've been surprised by this before, too. IMHO it makes this\n> > proposed test more important because you can't just grep for a different\n> > lock to find all the places you need to update.\n> \n> I agree. 
I am pretty sure that the reason this happened in the first\n> place is that I grepped for the name of some other LWLock and adjusted\n> things for the new lock at every place where that found a hit.\n> \n> > > - Check in both directions instead of just one?\n> > >\n> > > - Verify ordering?\n> >\n> > To do those things, I'd probably move the test to one of the scripts that\n> > generates the documentation or header file (pg_wait_events doesn't tell us\n> > whether a lock is predefined or what order it's listed in). That'd cause\n> > failures at build time instead of during testing, which might be kind of\n> > nice, too.\n> \n> Yeah, I think that would be better.\n\n+1 to add a test and put in a place that would produce failures at build time.\nI think that having the test in the script that generates the header file is more\nappropriate (as building the documentation looks less usual to me when working on\na patch).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 3 Jan 2024 07:59:45 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "On Wed, Jan 03, 2024 at 07:59:45AM +0000, Bertrand Drouvot wrote:\n> +1 to add a test and put in a place that would produce failures at build time.\n> I think that having the test in the script that generates the header file is more\n> appropriate (as building the documentation looks less usual to me when working on\n> a patch).\n\nOkay, I did that in v2.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 5 Jan 2024 00:11:44 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "On Wed, Jan 03, 2024 at 11:34:25AM +0900, Michael Paquier wrote:\n> On Tue, Jan 02, 2024 at 11:31:20AM -0600, Nathan Bossart wrote:\n>> +# Find the location of lwlocknames.h.\n>> +my $include_dir = $node->config_data('--includedir');\n>> +my $lwlocknames_file = \"$include_dir/server/storage/lwlocknames.h\";\n> \n> I am afraid that this is incorrect because an installation could\n> decide to install server-side headers in a different path than\n> $include/server/. 
Using --includedir-server would be the correct\n> answer, appending \"storage/lwlocknames.h\" to the path retrieved from\n> pg_config.\n\nAh, good to know, thanks.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 00:14:15 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 05, 2024 at 12:11:44AM -0600, Nathan Bossart wrote:\n> On Wed, Jan 03, 2024 at 07:59:45AM +0000, Bertrand Drouvot wrote:\n> > +1 to add a test and put in a place that would produce failures at build time.\n> > I think that having the test in the script that generates the header file is more\n> > appropriate (as building the documentation looks less usual to me when working on\n> > a patch).\n> \n> Okay, I did that in v2.\n\nThanks!\n\n> +# NB: Predefined locks (those declared in lwlocknames.txt) must be listed in\n> +# the top section of locks and must be listed in the same order as in\n> +# lwlocknames.txt.\n> +#\n> \n> Section: ClassName - WaitEventLWLock\n> \n> @@ -326,6 +330,12 @@ NotifyQueueTail\t\"Waiting to update limit on <command>NOTIFY</command> message st\n> WaitEventExtension\t\"Waiting to read or update custom wait events information for extensions.\"\n> WALSummarizer\t\"Waiting to read or update WAL summarization state.\"\n> \n> +#\n> +# Predefined LWLocks (those declared in lwlocknames.txt) must be listed in the\n> +# section above and must be listed in the same order as in lwlocknames.txt.\n> +# Other LWLocks must be listed in the section below.\n> +#\n> +\n\nAnother option could be to create a sub-section for predefined LWLocks that are\npart of lwlocknames.txt and then sort both list (the one in the sub-section and\nthe one in lwlocknames.txt). That would avoid the \"must be listed in the same order\"\nconstraint. 
That said, I think the way it is done in the patch is fine because\nif one does not follow the constraint then the build would fail.\n\nI did a few tests leading to:\n\nCommitBDTTsSLRALock defined in lwlocknames.txt but missing from wait_event_names.txt at ./generate-lwlocknames.pl line 107, <$lwlocknames> line 58.\n\nOR \n\nlists of predefined LWLocks in lwlocknames.txt and wait_event_names.txt do not match at ./generate-lwlocknames.pl line 109, <$lwlocknames> line 46.\n\nOR\n\nCommitBDTTsSLRALock defined in wait_event_names.txt but missing from lwlocknames.txt at ./generate-lwlocknames.pl line 126, <$lwlocknames> line 57.\n\nSo, that looks good to me except one remark regarding:\n\n+ die \"lists of predefined LWLocks in lwlocknames.txt and wait_event_names.txt do not match\"\n+ unless $wait_event_lwlocks[$i] eq $lockname;\n\nWhat about printing $wait_event_lwlocks[$i] and $lockname in the error message?\nSomething like?\n\n\"\n die \"lists of predefined LWLocks in lwlocknames.txt and wait_event_names.txt do not match (comparing $lockname and $wait_event_lwlocks[$i])\"\n unless $wait_event_lwlocks[$i] eq $lockname;\n\"\n\nI think that would give more clues for debugging purpose.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 07:39:39 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "Thanks for reviewing.\n\nOn Fri, Jan 05, 2024 at 07:39:39AM +0000, Bertrand Drouvot wrote:\n> Another option could be to create a sub-section for predefined LWLocks that are\n> part of lwlocknames.txt and then sort both list (the one in the sub-section and\n> the one in lwlocknames.txt). That would avoid the \"must be listed in the same order\"\n> constraint. That said, I think the way it is done in the patch is fine because\n> if one does not follow the constraint then the build would fail.\n\nIMHO the ordering constraint makes it easier for humans to verify the lists\nmatch.\n\n> + die \"lists of predefined LWLocks in lwlocknames.txt and wait_event_names.txt do not match\"\n> + unless $wait_event_lwlocks[$i] eq $lockname;\n> \n> What about printing $wait_event_lwlocks[$i] and $lockname in the error message?\n> Something like?\n> \n> \"\n> die \"lists of predefined LWLocks in lwlocknames.txt and wait_event_names.txt do not match (comparing $lockname and $wait_event_lwlocks[$i])\"\n> unless $wait_event_lwlocks[$i] eq $lockname;\n> \"\n> \n> I think that would give more clues for debugging purpose.\n\nSure, I'll add something like that. 
I think this particular scenario is\nless likely, but that's not a reason to make the error message hard to\ndecipher.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 10:42:03 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "On Fri, Jan 05, 2024 at 10:42:03AM -0600, Nathan Bossart wrote:\n> On Fri, Jan 05, 2024 at 07:39:39AM +0000, Bertrand Drouvot wrote:\n>> + die \"lists of predefined LWLocks in lwlocknames.txt and wait_event_names.txt do not match\"\n>> + unless $wait_event_lwlocks[$i] eq $lockname;\n>> \n>> What about printing $wait_event_lwlocks[$i] and $lockname in the error message?\n>> Something like?\n>> \n>> \"\n>> die \"lists of predefined LWLocks in lwlocknames.txt and wait_event_names.txt do not match (comparing $lockname and $wait_event_lwlocks[$i])\"\n>> unless $wait_event_lwlocks[$i] eq $lockname;\n>> \"\n>> \n>> I think that would give more clues for debugging purpose.\n> \n> Sure, I'll add something like that. I think this particular scenario is\n> less likely, but that's not a reason to make the error message hard to\n> decipher.\n\nHere is a new version of the patch with this change.\n\nI also tried to make the verification logic less fragile. Instead of\ndepending on the exact location of empty lines in wait_event_names.txt, v3\nadds a marker comment below the list that clearly indicates it should not\nbe changed. This simplifies the verification code a bit, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 5 Jan 2024 11:46:20 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 05, 2024 at 11:46:20AM -0600, Nathan Bossart wrote:\n> On Fri, Jan 05, 2024 at 10:42:03AM -0600, Nathan Bossart wrote:\n> > On Fri, Jan 05, 2024 at 07:39:39AM +0000, Bertrand Drouvot wrote:\n> >> + die \"lists of predefined LWLocks in lwlocknames.txt and wait_event_names.txt do not match\"\n> >> + unless $wait_event_lwlocks[$i] eq $lockname;\n> >> \n> >> What about printing $wait_event_lwlocks[$i] and $lockname in the error message?\n> >> Something like?\n> >> \n> >> \"\n> >> die \"lists of predefined LWLocks in lwlocknames.txt and wait_event_names.txt do not match (comparing $lockname and $wait_event_lwlocks[$i])\"\n> >> unless $wait_event_lwlocks[$i] eq $lockname;\n> >> \"\n> >> \n> >> I think that would give more clues for debugging purpose.\n> > \n> > Sure, I'll add something like that. I think this particular scenario is\n> > less likely, but that's not a reason to make the error message hard to\n> > decipher.\n> \n> Here is a new version of the patch with this change.\n\nThanks!\n\n> I also tried to make the verification logic less fragile. Instead of\n> depending on the exact location of empty lines in wait_event_names.txt, v3\n> adds a marker comment below the list that clearly indicates it should not\n> be changed. 
This simplifies the verification code a bit, too.\n\nYeah, good idea, I think that's easier to read.\n\nSorry, I missed this in my first review, but instead of:\n\n- input: files('../../backend/storage/lmgr/lwlocknames.txt'),\n+ input: [files('../../backend/storage/lmgr/lwlocknames.txt'), files('../../backend/utils/activity/wait_event_names.txt')],\n\nwhat about?\n\n input: files(\n '../../backend/storage/lmgr/lwlocknames.txt',\n '../../backend/utils/activity/wait_event_names.txt',\n ),\n\nIt's done that way in doc/src/sgml/meson.build for example.\n\nExcept for the above, the patch looks good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 6 Jan 2024 09:03:52 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "On Sat, Jan 06, 2024 at 09:03:52AM +0000, Bertrand Drouvot wrote:\n> Sorry, I missed this in my first review, but instead of:\n> \n> - input: files('../../backend/storage/lmgr/lwlocknames.txt'),\n> + input: [files('../../backend/storage/lmgr/lwlocknames.txt'), files('../../backend/utils/activity/wait_event_names.txt')],\n> \n> what about?\n> \n> input: files(\n> '../../backend/storage/lmgr/lwlocknames.txt',\n> '../../backend/utils/activity/wait_event_names.txt',\n> ),\n> \n> It's done that way in doc/src/sgml/meson.build for example.\n\nI fixed this in v4.\n\n> Except for the above, the patch looks good to me.\n\nThanks for reviewing!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 6 Jan 2024 10:18:52 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "Hi,\n\nOn Sat, Jan 06, 2024 at 10:18:52AM -0600, Nathan Bossart wrote:\n> On Sat, Jan 06, 2024 at 09:03:52AM +0000, Bertrand Drouvot wrote:\n> > Sorry, I missed this in my first review, but instead of:\n> > \n> > - input: files('../../backend/storage/lmgr/lwlocknames.txt'),\n> > + input: [files('../../backend/storage/lmgr/lwlocknames.txt'), files('../../backend/utils/activity/wait_event_names.txt')],\n> > \n> > what about?\n> > \n> > input: files(\n> > '../../backend/storage/lmgr/lwlocknames.txt',\n> > '../../backend/utils/activity/wait_event_names.txt',\n> > ),\n> > \n> > It's done that way in doc/src/sgml/meson.build for example.\n> \n> I fixed this in v4.\n\nThanks!\n\n+ input: [files(\n+ '../../backend/storage/lmgr/lwlocknames.txt',\n+ '../../backend/utils/activity/wait_event_names.txt')],\n\nI think the \"[\" and \"]\" are not needed here.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 Jan 2024 07:59:10 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "On Mon, Jan 08, 2024 at 07:59:10AM +0000, Bertrand Drouvot wrote:\n> + input: [files(\n> + '../../backend/storage/lmgr/lwlocknames.txt',\n> + '../../backend/utils/activity/wait_event_names.txt')],\n> \n> I think the \"[\" and \"]\" are not needed here.\n\nD'oh! 
Fixed in v5.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 8 Jan 2024 14:11:30 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "Sorry for the noise. I spent some more time tidying this up for commit,\nwhich I am hoping to do in the next day or two.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 8 Jan 2024 16:02:12 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "Hi,\n\nOn Mon, Jan 08, 2024 at 04:02:12PM -0600, Nathan Bossart wrote:\n> Sorry for the noise. I spent some more time tidying this up for commit,\n> which I am hoping to do in the next day or two.\n\nThanks! v6 looks good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 9 Jan 2024 04:55:07 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "On Tue, Jan 09, 2024 at 04:55:07AM +0000, Bertrand Drouvot wrote:\n> Thanks! v6 looks good to me.\n\nWFM. Thanks for putting in place this sanity check when compiling.\n--\nMichael", "msg_date": "Tue, 9 Jan 2024 14:26:20 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" }, { "msg_contents": "On Tue, Jan 09, 2024 at 02:26:20PM +0900, Michael Paquier wrote:\n> On Tue, Jan 09, 2024 at 04:55:07AM +0000, Bertrand Drouvot wrote:\n>> Thanks! v6 looks good to me.\n> \n> WFM. Thanks for putting in place this sanity check when compiling.\n\nCommitted. Thanks for reviewing!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 9 Jan 2024 11:11:12 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: verify predefined LWLocks have entries in wait_event_names.txt" } ]
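The cross-check committed at the end of the thread above lives in the Perl generator script and runs at build time, so the C program below is not the real mechanism; it is a standalone sketch (hypothetical, hard-coded lists) of the rule being enforced: the predefined LWLocks must appear in wait_event_names.txt in exactly the order lwlocknames.txt declares them (with the "Lock" suffix dropped), and a mismatch should name both offending entries so it is obvious which file needs fixing.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* names from lwlocknames.txt, with the "Lock" suffix stripped */
static const char *lwlocknames[] = {
    "ShmemIndex", "OidGen", "XidGen", "WALSummarizer",
};

/* predefined-LWLock section of wait_event_names.txt, in file order */
static const char *wait_event_lwlocks[] = {
    "ShmemIndex", "OidGen", "XidGen", "WALSummarizer",
};

int
main(void)
{
    int         nlocks = sizeof(lwlocknames) / sizeof(lwlocknames[0]);
    int         nevents = sizeof(wait_event_lwlocks) / sizeof(wait_event_lwlocks[0]);
    int         n = nlocks < nevents ? nlocks : nevents;

    /* compare the two lists pairwise, in order */
    for (int i = 0; i < n; i++)
    {
        if (strcmp(lwlocknames[i], wait_event_lwlocks[i]) != 0)
        {
            fprintf(stderr,
                    "lists of predefined LWLocks do not match "
                    "(comparing %s and %s)\n",
                    lwlocknames[i], wait_event_lwlocks[i]);
            exit(1);
        }
    }

    /* neither list may have leftover entries */
    if (nlocks != nevents)
    {
        fprintf(stderr, "one list has extra predefined LWLock entries\n");
        exit(1);
    }

    printf("predefined LWLock lists match\n");
    return 0;
}
```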
[ { "msg_contents": "Hi hackers,\n\nAfter discussing this with David offlist, I decided to reinitiate this\ndiscussion that has already been raised and discussed several times in the\npast. [1] [2]\n\nCurrently, if JIT is enabled, the decision for JIT compilation is purely\ntied to the total cost of the query. The number of expressions to be JIT\ncompiled is not taken into consideration, however the time spent JITing\nalso depends on that number. This may cause the cost of JITing to become\ntoo much that it hurts rather than improves anything.\nAn example case would be that you have many partitions and run a query that\ntouches one, or only a few, of those partitions. If there is no partition\npruning done in planning time, all 1000 partitions are JIT compiled while\nmost will not even be executed.\n\nProposed patch (based on the patch from [1]) simply changes consideration\nof JIT from plan level to per-plan-node level. Instead of depending on the\ntotal cost, we decide whether to perform JIT on a node or not by\nconsidering only that node's cost. This allows us to only JIT compile plan\nnodes with high costs.\n\nHere is a small test case to see the issue and the benefit of the patch:\n\nCREATE TABLE listp(a int, b int) PARTITION BY LIST(a);\nSELECT 'CREATE TABLE listp'|| x || ' PARTITION OF listp FOR VALUES IN\n('||x||');' FROM generate_Series(1,1000) x; \\gexec\nINSERT INTO listp SELECT 1,x FROM generate_series(1,10000000) x;\n\nEXPLAIN (VERBOSE, ANALYZE) SELECT COUNT(*) FROM listp WHERE b < 0;\n\nmaster jit=off:\n Planning Time: 25.113 ms\n Execution Time: 315.896 ms\n\nmaster jit=on:\n Planning Time: 24.664 ms\n JIT:\n Functions: 9008\n Options: Inlining false, Optimization false, Expressions true, Deforming\ntrue\n Timing: Generation 290.705 ms (Deform 108.274 ms), Inlining 0.000 ms,\nOptimization 165.991 ms, Emission 3232.775 ms, Total 3689.472 ms\n Execution Time: 1612.817 ms\n\npatch jit=on:\n Planning Time: 24.055 ms\n JIT:\n Functions: 17\n Options: Inlining false, Optimization false, Expressions true, Deforming\ntrue\n Timing: Generation 1.463 ms (Deform 0.232 ms), Inlining 0.000 ms,\nOptimization 0.766 ms, Emission 11.609 ms, Total 13.837 ms\n Execution Time: 299.721 ms\n\n\nA bit more on what this patch does:\n- It introduces a new context to keep track of the number of estimated\ncalls and if JIT is decided for each node that the context applies.\n- The number of estimated calls are especially useful where a node is\nexpected to be rescanned, such as Gather. Gather Merge, Memoize and Nested\nLoop. Knowing the estimated number of calls for a node allows us to rely on\ntotal cost multiplied by the estimated calls instead of only total cost for\nthe node.\n- For each node, the planner considers if the node should be JITed. If the\ncost of the node * the number of estimated calls is greater than\njit_above_cost, it's decided to be JIT compiled. Note that this changes\nthe meaning of jit_above_cost, it's now a threshold for a single plan node\nand not the whole query. Additionally, this change in JIT consideration is\nonly for JIT compilations. Inlining and optimizations continue to be for\nthe whole query and based on the overall cost of the query.\n- EXPLAIN shows estimated number of \"loops\" and whether JIT is true or not\nfor the node. For text format, JIT=true/false information is shown only if\nit's VERBOSE. (no reason to not show this info even if not VERBOSE. 
Showing\nfor only VERBOSE just requires less changes in tests, so I did this for\nsimplicity at the moment).\n\n\nThere are also some things that I'm not sure of:\n- What are other places where a node is likely to be rescanned, thus we\nneed to take estimated calls into account properly? Maybe recursive CTEs?\n- This change can make jit_above_cost mean something different. Should we\nrename it or introduce a new GUC? If it'll be kept as it is now, then it\nwould probably be better to change its default value at least.\n- What can we do for inlining and optimization? AFAIU performing those per\nnode may be very costly and not make that much sense.But I'm not sure about\nhow to handle those operations.\n- What about parallel queries? Total cost of the node is divided by the\nnumber of workers, which can seem like the cost reduced quite a bit. The\npatch amplifies the cost by the number of workers (by setting estimated\ncalls to the number of workers) to make it more likely to perform JIT for\nGather/Gather Merge nodes. OTOH JIT compilations are performed per worker\nand this can make workers decide JIT compile when it's not really needed.\n\n\nI'd appreciate any thought/feedback.\n\nThanks,\nMelih Mutlu\nMicrosoft\n\n[1]\nhttps://www.postgresql.org/message-id/CAApHDvpQJqLrNOSi8P1JLM8YE2C%2BksKFpSdZg%3Dq6sTbtQ-v%3Daw%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CAApHDvrEoQ5p61NjDCKVgEWaH0qm1KprYw2-7m8-6ZGGJ8A2Dw%40mail.gmail.com", "msg_date": "Tue, 2 Jan 2024 22:50:17 +0300", "msg_from": "Melih Mutlu <[email protected]>", "msg_from_op": true, "msg_subject": "JIT compilation per plan node" }, { "msg_contents": "Hi Melih,\n\nOn 1/2/24 20:50, Melih Mutlu wrote:\n> Hi hackers,\n> \n> After discussing this with David offlist, I decided to reinitiate this\n> discussion that has already been raised and discussed several times in the\n> past. [1] [2]\n> \n> Currently, if JIT is enabled, the decision for JIT compilation is purely\n> tied to the total cost of the query. The number of expressions to be JIT\n> compiled is not taken into consideration, however the time spent JITing\n> also depends on that number. This may cause the cost of JITing to become\n> too much that it hurts rather than improves anything.\n> An example case would be that you have many partitions and run a query that\n> touches one, or only a few, of those partitions. If there is no partition\n> pruning done in planning time, all 1000 partitions are JIT compiled while\n> most will not even be executed.\n> \n> Proposed patch (based on the patch from [1]) simply changes consideration\n> of JIT from plan level to per-plan-node level. Instead of depending on the\n> total cost, we decide whether to perform JIT on a node or not by\n> considering only that node's cost. 
This allows us to only JIT compile plan\n> nodes with high costs.\n> \n> Here is a small test case to see the issue and the benefit of the patch:\n> \n> CREATE TABLE listp(a int, b int) PARTITION BY LIST(a);\n> SELECT 'CREATE TABLE listp'|| x || ' PARTITION OF listp FOR VALUES IN\n> ('||x||');' FROM generate_Series(1,1000) x; \\gexec\n> INSERT INTO listp SELECT 1,x FROM generate_series(1,10000000) x;\n> \n> EXPLAIN (VERBOSE, ANALYZE) SELECT COUNT(*) FROM listp WHERE b < 0;\n> \n> master jit=off:\n> Planning Time: 25.113 ms\n> Execution Time: 315.896 ms\n> \n> master jit=on:\n> Planning Time: 24.664 ms\n> JIT:\n> Functions: 9008\n> Options: Inlining false, Optimization false, Expressions true, Deforming\n> true\n> Timing: Generation 290.705 ms (Deform 108.274 ms), Inlining 0.000 ms,\n> Optimization 165.991 ms, Emission 3232.775 ms, Total 3689.472 ms\n> Execution Time: 1612.817 ms\n> \n> patch jit=on:\n> Planning Time: 24.055 ms\n> JIT:\n> Functions: 17\n> Options: Inlining false, Optimization false, Expressions true, Deforming\n> true\n> Timing: Generation 1.463 ms (Deform 0.232 ms), Inlining 0.000 ms,\n> Optimization 0.766 ms, Emission 11.609 ms, Total 13.837 ms\n> Execution Time: 299.721 ms\n> \n\nThanks for the updated / refreshed patch.\n\nI think one of the main challenges this patch faces is that there's a\ncouple old threads with previous attempts, and the current thread simply\nbuilds on top of them, without explaining stuff fully. But people either\ndon't realize that, or don't have time to read old threads just in case,\nso can't follow some of the decisions :-(\n\nI think it'd be good to maybe try to explain some of the problems and\nsolutions more thoroughly, or at least point to the relevant places in\nthe old threads ...\n\n> \n> A bit more on what this patch does:\n> - It introduces a new context to keep track of the number of estimated\n> calls and if JIT is decided for each node that the context applies.\n\nAFAIK this is an attempt to deal with passing the necessary information\nwhile constructing the plan, which David originally tried [1] doing by\npassing est_calls during create_plan ...\n\nI doubt CreatePlanContext is a great way to achieve this. For one, it\nbreaks the long-standing custom that PlannerInfo is the first parameter,\nusually followed by RelOptInfo, etc. CreatePlanContext is added to some\nfunctions (but not all), which makes it ... unpredictable.\n\nFWIW it's not clear to me if/how this solves the problem with early\ncreate_plan() calls for subplans. Or is it still the same?\n\nWouldn't it be simpler to just build the plan as we do now, and then\nhave an expression_tree_walker that walks the complete plan top-down,\ninspects the nodes, enables JIT where appropriate and so on? That can\nhave arbitrary context, no problem with that.\n\nConsidering we decide JIT pretty late anyway (long after costing and\nother stuff that might affect the plan selection), the result should be\nexactly the same, without the extensive createplan.c disruption ...\n\n(usual caveat: I haven't tried, maybe there's something that means this\ncan't work)\n\n\n> - The number of estimated calls are especially useful where a node is\n> expected to be rescanned, such as Gather. Gather Merge, Memoize and Nested\n> Loop. Knowing the estimated number of calls for a node allows us to rely on\n> total cost multiplied by the estimated calls instead of only total cost for\n> the node.\n\nNot sure I follow. 
Why would be these nodes (Gather, Gather Merge, ...)\nmore likely to be rescanned compared to other nodes?\n\n> - For each node, the planner considers if the node should be JITed. If the\n> cost of the node * the number of estimated calls is greater than\n> jit_above_cost, it's decided to be JIT compiled. Note that this changes\n> the meaning of jit_above_cost, it's now a threshold for a single plan node\n> and not the whole query. Additionally, this change in JIT consideration is\n> only for JIT compilations. Inlining and optimizations continue to be for\n> the whole query and based on the overall cost of the query.\n\nIt's not clear to me why JIT compilation is decided for each node, while\nthe inlining/optimization is decided for the plan as a whole. I'm not\nfamiliar with the JIT stuff, so maybe it's obvious to others ...\n\n> - EXPLAIN shows estimated number of \"loops\" and whether JIT is true or not\n> for the node. For text format, JIT=true/false information is shown only if\n> it's VERBOSE. (no reason to not show this info even if not VERBOSE. Showing\n> for only VERBOSE just requires less changes in tests, so I did this for\n> simplicity at the moment).\n> \n\ntypo in sgml docs: ovarall\n\n> \n> There are also some things that I'm not sure of:\n> - What are other places where a node is likely to be rescanned, thus we\n> need to take estimated calls into account properly? Maybe recursive CTEs?\n\nWhy would it matter if a node is more/less likely to be rescanned?\nEither the node is rescanned in the plan or not, and we have nloops to\nknow how many rescans to expect.\n\n> - This change can make jit_above_cost mean something different. Should we\n> rename it or introduce a new GUC? If it'll be kept as it is now, then it\n> would probably be better to change its default value at least.\n\nYou mean it changes from per-query to per-node threshold? In general I\nthink we should not repurpose GUCs (because people are likely to copy\nand keep the old config, not realizing it works differently). But I'm\nnot sure this change is different enough to be an issue. And it's in the\nopposite direction than usually causes problems (i.e. it would disable\nJIT in cases where it was enabled before).\n\n> - What can we do for inlining and optimization? AFAIU performing those per\n> node may be very costly and not make that much sense.But I'm not sure about\n> how to handle those operations.\n\nNot sure. I don't think I understand JIT details enough to have a good\nopinion on this.\n\n> - What about parallel queries? Total cost of the node is divided by the\n> number of workers, which can seem like the cost reduced quite a bit. The\n> patch amplifies the cost by the number of workers (by setting estimated\n> calls to the number of workers) to make it more likely to perform JIT for\n> Gather/Gather Merge nodes. OTOH JIT compilations are performed per worker\n> and this can make workers decide JIT compile when it's not really needed.\n> \n\nUsing the number of workers as \"number of calls\" seems wrong to me. Why\nshouldn't each worker do the JIT decision on it's own, as if it was the\nonly worker running (but seeing only it's fraction of the data)? 
Kinda\nas if there were multiple independent backends running \"small\" queries?\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAApHDvoq5VhV%3D2euyjgBN2bC8Bds9Dtr0bG7R%3DreeefJWKJRXA%40mail.gmail.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 19 Feb 2024 17:26:21 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT compilation per plan node" }, { "msg_contents": "On Tue, 20 Feb 2024 at 05:26, Tomas Vondra\n<[email protected]> wrote:\n> I doubt CreatePlanContext is a great way to achieve this. For one, it\n> breaks the long-standing custom that PlannerInfo is the first parameter,\n> usually followed by RelOptInfo, etc. CreatePlanContext is added to some\n> functions (but not all), which makes it ... unpredictable.\n\nI suggested this to Melih as a way to do this based on what Andy wrote\nin [1]. I agree with Andy that it's not great to add est_calls to\nevery function in createplan.c. I feel that CreatePlanContext is a way\nto take the hit of adding a parameter once and hopefully never having\nto do it again to this degree. I wondered if PlannerInfo could be a\nfield in CreatePlanContext.\n\n> FWIW it's not clear to me if/how this solves the problem with early\n> create_plan() calls for subplans. Or is it still the same?\n>\n> Wouldn't it be simpler to just build the plan as we do now, and then\n> have an expression_tree_walker that walks the complete plan top-down,\n> inspects the nodes, enables JIT where appropriate and so on? That can\n> have arbitrary context, no problem with that.\n\nWhy walk the entire plan tree again to do something we could do when\nbuilding it in the first place? Recursively walking trees isn't great\nfrom a performance point of view. It would be nice to avoid this if we\ncan find some other way to handle subplans. I do have a few other\nreasons up my sleeve that subplan creation should be delayed until\nlater, so maybe we should fix that to unblock those issues.\n\n> Considering we decide JIT pretty late anyway (long after costing and\n> other stuff that might affect the plan selection), the result should be\n> exactly the same, without the extensive createplan.c disruption ...\n>\n> (usual caveat: I haven't tried, maybe there's something that means this\n> can't work)\n\nIt's not like we can look at the top-node's cost as a pre-check to\nskip the recursive step for cheap plans as it's perfectly valid that a\nnode closer to the root of the plan tree have a lower total cost than\nthat node's subnodes. e.g LIMIT 1.\n\n> > - The number of estimated calls are especially useful where a node is\n> > expected to be rescanned, such as Gather. Gather Merge, Memoize and Nested\n> > Loop. Knowing the estimated number of calls for a node allows us to rely on\n> > total cost multiplied by the estimated calls instead of only total cost for\n> > the node.\n>\n> Not sure I follow. Why would be these nodes (Gather, Gather Merge, ...)\n> more likely to be rescanned compared to other nodes?\n\nI think Melih is listing nodes that can change the est_calls. Any\nnode can be rescanned, but only a subset of nodes can adjust the\nnumber of times they call their subnode vs how many times they\nthemselves are called.\n\n> > - For each node, the planner considers if the node should be JITed. If the\n> > cost of the node * the number of estimated calls is greater than\n> > jit_above_cost, it's decided to be JIT compiled. 
Note that this changes\n> > the meaning of jit_above_cost, it's now a threshold for a single plan node\n> > and not the whole query. Additionally, this change in JIT consideration is\n> > only for JIT compilations. Inlining and optimizations continue to be for\n> > the whole query and based on the overall cost of the query.\n>\n> It's not clear to me why JIT compilation is decided for each node, while\n> the inlining/optimization is decided for the plan as a whole. I'm not\n> familiar with the JIT stuff, so maybe it's obvious to others ...\n\nThis is a problem with LLVM, IIRC. The problem is it's a decision\nthat has to be made for an entire compilation unit and it can't be\ndecided at the expression level. This is pretty annoying as it's\npretty hard to decide the best logic to use to enable optimisations\nand inlining :-(\n\nI think the best thing we could discuss right now is, is this the best\nway to fix the JIT costing problem. In [2] I did link to a complaint\nabout the JIT costings. See [3]. The OP there wanted to keep the plan\nprivate, but I did get to see it and described the problem on the\nlist.\n\nAlso, I don't happen to think the decision about JITting per plan node\nis perfect as the node's total costs can be high for reasons other\nthan the cost of evaluating expressions. Also, the number of times a\ngiven expression is evaluated can vary quite a bit based on when the\nexpression is evaluated. For example, a foreign table scan that does\nmost of the filtering remotely, but has a non-shippable expr that\nneeds to be evaluated locally. The foreign scan might be very\nexpensive, especially if lots of filtering is done by a Seq Scan and\nnot many rows might make it back to the local server to benefit from\nJITting the non-shippable expression.\n\nA counter-example is the join condition of a non-parameterized nested\nloop. Those get evaluated n_outer_rows * n_inner_rows times.\n\nI think the savings JIT gives us on evaluation of expressions is going\nto be more closely tied to the number of times an expression is\nevaluated than the total cost of the node. However, it's likely more\ncomplex for optimisations and inlining as I imagine the size and\ncomplexity of the comparison function matters too.\n\nIt would be good to all agree on how we're going to fix this problem\nexactly before Melih gets in too deep fixing the finer details of the\npatch. If anyone isn't convinced enough there's a problem with the JIT\ncostings then I can see if I can dig up other threads where this is\nbeing complained about.\n\nDoes anyone want to disagree with the general idea of making the\ncompilation decision based on the total cost of the node? Or have a\nbetter idea?\n\nDavid\n\n[1] https://postgr.es/m/CAKU4AWqqSAi%2B-1ZaFawY300WknH79J9dhx%3DpU5%2BbyAbShjUjCQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAApHDvpQJqLrNOSi8P1JLM8YE2C%2BksKFpSdZg%3Dq6sTbtQ-v%3Daw%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/[email protected]\n\n\n", "msg_date": "Tue, 20 Feb 2024 18:14:57 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT compilation per plan node" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Tue, 20 Feb 2024 at 05:26, Tomas Vondra\n> <[email protected]> wrote:\n>> Wouldn't it be simpler to just build the plan as we do now, and then\n>> have an expression_tree_walker that walks the complete plan top-down,\n>> inspects the nodes, enables JIT where appropriate and so on? 
That can\n>> have arbitrary context, no problem with that.\n\n> Why walk the entire plan tree again to do something we could do when\n> building it in the first place?\n\nFWIW, I seriously doubt that an extra walk of the plan tree is even\nmeasurable compared to the number of cycles JIT compilation will\nexpend if it's called. So I don't buy your argument here.\nWe would be better off to do this in a way that's clean and doesn't\nadd overhead for non-JIT-enabled builds.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Feb 2024 00:31:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT compilation per plan node" }, { "msg_contents": "On Tue, 20 Feb 2024 at 18:31, Tom Lane <[email protected]> wrote:\n> FWIW, I seriously doubt that an extra walk of the plan tree is even\n> measurable compared to the number of cycles JIT compilation will\n> expend if it's called. So I don't buy your argument here.\n> We would be better off to do this in a way that's clean and doesn't\n> add overhead for non-JIT-enabled builds.\n\nThe extra walk of the tree would need to be done for every plan, not\njust the ones where we do JIT. I'd rather find a way to not add this\nextra plan tree walk, especially since the vast majority of cases on\nan average instance won't be doing any JIT.\n\nDavid\n\n\n", "msg_date": "Tue, 20 Feb 2024 18:38:15 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT compilation per plan node" }, { "msg_contents": "On 2/20/24 06:38, David Rowley wrote:\n> On Tue, 20 Feb 2024 at 18:31, Tom Lane <[email protected]> wrote:\n>> FWIW, I seriously doubt that an extra walk of the plan tree is even\n>> measurable compared to the number of cycles JIT compilation will\n>> expend if it's called. So I don't buy your argument here.\n>> We would be better off to do this in a way that's clean and doesn't\n>> add overhead for non-JIT-enabled builds.\n> \n> The extra walk of the tree would need to be done for every plan, not\n> just the ones where we do JIT. I'd rather find a way to not add this\n> extra plan tree walk, especially since the vast majority of cases on\n> an average instance won't be doing any JIT.\n> \n\nI believe Tom was talking about non-JIT-enabled-builds, i.e. builds that\neither don't support JIT at all, or where jit=off. Those would certainly\nnot need the extra walk.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 20 Feb 2024 11:04:21 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT compilation per plan node" }, { "msg_contents": "On Tue, 20 Feb 2024 at 23:04, Tomas Vondra\n<[email protected]> wrote:\n>\n> On 2/20/24 06:38, David Rowley wrote:\n> > On Tue, 20 Feb 2024 at 18:31, Tom Lane <[email protected]> wrote:\n> >> FWIW, I seriously doubt that an extra walk of the plan tree is even\n> >> measurable compared to the number of cycles JIT compilation will\n> >> expend if it's called. So I don't buy your argument here.\n> >> We would be better off to do this in a way that's clean and doesn't\n> >> add overhead for non-JIT-enabled builds.\n> >\n> > The extra walk of the tree would need to be done for every plan, not\n> > just the ones where we do JIT. 
I'd rather find a way to not add this\n> > extra plan tree walk, especially since the vast majority of cases on\n> > an average instance won't be doing any JIT.\n> >\n>\n> I believe Tom was talking about non-JIT-enabled-builds, i.e. builds that\n> either don't support JIT at all, or where jit=off. Those would certainly\n> not need the extra walk.\n\nI don't believe so as he talked about the fact that the JIT cycles\nwould drown out the tree walk. There are no JIT cycles when the cost\nthreshold isn't met, but we still incur the cost of walking the plan\ntree.\n\nDavid\n\n\n", "msg_date": "Tue, 20 Feb 2024 23:26:59 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT compilation per plan node" }, { "msg_contents": "\n\nOn 2/20/24 06:14, David Rowley wrote:\n> On Tue, 20 Feb 2024 at 05:26, Tomas Vondra\n> <[email protected]> wrote:\n>> I doubt CreatePlanContext is a great way to achieve this. For one, it\n>> breaks the long-standing custom that PlannerInfo is the first parameter,\n>> usually followed by RelOptInfo, etc. CreatePlanContext is added to some\n>> functions (but not all), which makes it ... unpredictable.\n> \n> I suggested this to Melih as a way to do this based on what Andy wrote\n> in [1]. I agree with Andy that it's not great to add est_calls to\n> every function in createplan.c. I feel that CreatePlanContext is a way\n> to take the hit of adding a parameter once and hopefully never having\n> to do it again to this degree. I wondered if PlannerInfo could be a\n> field in CreatePlanContext.\n> \n\nYou mean we'd be adding more parameters to CreatePlanContext in the\nfuture? Not sure that's a great idea, there are reasons why we have\nseparate arguments in function signature and not a single struct.\n\nI think contexts are nice to group attributes with a particular purpose,\nnot as a replacement for arguments to make function signatures simpler.\n\n>> FWIW it's not clear to me if/how this solves the problem with early\n>> create_plan() calls for subplans. Or is it still the same?\n>>\n>> Wouldn't it be simpler to just build the plan as we do now, and then\n>> have an expression_tree_walker that walks the complete plan top-down,\n>> inspects the nodes, enables JIT where appropriate and so on? That can\n>> have arbitrary context, no problem with that.\n> \n> Why walk the entire plan tree again to do something we could do when\n> building it in the first place? Recursively walking trees isn't great\n> from a performance point of view. It would be nice to avoid this if we\n> can find some other way to handle subplans. I do have a few other\n> reasons up my sleeve that subplan creation should be delayed until\n> later, so maybe we should fix that to unblock those issues.\n> \n>> Considering we decide JIT pretty late anyway (long after costing and\n>> other stuff that might affect the plan selection), the result should be\n>> exactly the same, without the extensive createplan.c disruption ...\n>>\n>> (usual caveat: I haven't tried, maybe there's something that means this\n>> can't work)\n> \n> It's not like we can look at the top-node's cost as a pre-check to\n> skip the recursive step for cheap plans as it's perfectly valid that a\n> node closer to the root of the plan tree have a lower total cost than\n> that node's subnodes. e.g LIMIT 1.\n> \n\nI'd argue that's actually a reason to do the precheck, exactly because\nof the LIMIT. The fact that some node has high total cost does not\nmatter if there is LIMIT 1 above it. 
What matters is which fraction of\nthe plan we execute, not the total cost.\n\nImagine you have something like\n\n -> Limit 1 (cost=0..1 rows=1 ...)\n -> Seqscan (cost=0..100000000 rows=1000000 ...)\n\nI'd argue JIT-ing the seqscan is likely pointless, because on average\nwe'll execute ~1/1000000 of the scan, and the actual cost will be ~100.\n\n>>> - The number of estimated calls are especially useful where a node is\n>>> expected to be rescanned, such as Gather. Gather Merge, Memoize and Nested\n>>> Loop. Knowing the estimated number of calls for a node allows us to rely on\n>>> total cost multiplied by the estimated calls instead of only total cost for\n>>> the node.\n>>\n>> Not sure I follow. Why would be these nodes (Gather, Gather Merge, ...)\n>> more likely to be rescanned compared to other nodes?\n> \n> I think Melih is listing nodes that can change the est_calls. Any\n> node can be rescanned, but only a subset of nodes can adjust the\n> number of times they call their subnode vs how many times they\n> themselves are called.\n> \n\nOK\n\n>>> - For each node, the planner considers if the node should be JITed. If the\n>>> cost of the node * the number of estimated calls is greater than\n>>> jit_above_cost, it's decided to be JIT compiled. Note that this changes\n>>> the meaning of jit_above_cost, it's now a threshold for a single plan node\n>>> and not the whole query. Additionally, this change in JIT consideration is\n>>> only for JIT compilations. Inlining and optimizations continue to be for\n>>> the whole query and based on the overall cost of the query.\n>>\n>> It's not clear to me why JIT compilation is decided for each node, while\n>> the inlining/optimization is decided for the plan as a whole. I'm not\n>> familiar with the JIT stuff, so maybe it's obvious to others ...\n> \n> This is a problem with LLVM, IIRC. The problem is it's a decision\n> that has to be made for an entire compilation unit and it can't be\n> decided at the expression level. This is pretty annoying as it's\n> pretty hard to decide the best logic to use to enable optimisations\n> and inlining :-(\n> \n> I think the best thing we could discuss right now is, is this the best\n> way to fix the JIT costing problem. In [2] I did link to a complaint\n> about the JIT costings. See [3]. The OP there wanted to keep the plan\n> private, but I did get to see it and described the problem on the\n> list.\n> \n> Also, I don't happen to think the decision about JITting per plan node\n> is perfect as the node's total costs can be high for reasons other\n> than the cost of evaluating expressions. Also, the number of times a\n> given expression is evaluated can vary quite a bit based on when the\n> expression is evaluated. For example, a foreign table scan that does\n> most of the filtering remotely, but has a non-shippable expr that\n> needs to be evaluated locally. The foreign scan might be very\n> expensive, especially if lots of filtering is done by a Seq Scan and\n> not many rows might make it back to the local server to benefit from\n> JITting the non-shippable expression.\n> \n> A counter-example is the join condition of a non-parameterized nested\n> loop. Those get evaluated n_outer_rows * n_inner_rows times.\n> \n> I think the savings JIT gives us on evaluation of expressions is going\n> to be more closely tied to the number of times an expression is\n> evaluated than the total cost of the node. 
However, it's likely more\n> complex for optimisations and inlining as I imagine the size and\n> complexity of the comparison function matters too.\n> \n> It would be good to all agree on how we're going to fix this problem\n> exactly before Melih gets in too deep fixing the finer details of the\n> patch. If anyone isn't convinced enough there's a problem with the JIT\n> costings then I can see if I can dig up other threads where this is\n> being complained about.\n> \n> Does anyone want to disagree with the general idea of making the\n> compilation decision based on the total cost of the node? Or have a\n> better idea?\n> \n\nI certainly agree that the current JIT costing is quite crude, and we've\nall seen cases where the decision turns out to not be great. And I think\nthe plan to make the decisions at the node level makes sense, so +1 to\nthat in general.\n\nAnd I think you're right that looking just at the node total cost may\nnot be sufficient - that we may need a better cost model, considering\nhow many times an expression is executed and so on. But I think we\nshould try to do this in smaller steps, meaningful on their own,\notherwise we won't move at all. The two threads linked by Melih are ~4y\nold and *nothing* changed since then, AFAIK.\n\nI think it's reasonable to start by moving the decision to the node\nlevel - it's where the JIT happens, anyway. It may not be perfect, but\nit seems like a clear improvement. And if we then choose to improve the\n\"JIT cost model\" to address some of the issues you pointed out, surely\nthat would need to happen at the node level too ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 20 Feb 2024 11:31:12 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT compilation per plan node" }, { "msg_contents": "On Tue, Feb 20, 2024 at 5:31 AM Tomas Vondra\n<[email protected]> wrote:\n> I certainly agree that the current JIT costing is quite crude, and we've\n> all seen cases where the decision turns out to not be great. And I think\n> the plan to make the decisions at the node level makes sense, so +1 to\n> that in general.\n\nSeems reasonable to me also.\n\n> And I think you're right that looking just at the node total cost may\n> not be sufficient - that we may need a better cost model, considering\n> how many times an expression is executed and so on. But I think we\n> should try to do this in smaller steps, meaningful on their own,\n> otherwise we won't move at all. The two threads linked by Melih are ~4y\n> old and *nothing* changed since then, AFAIK.\n>\n> I think it's reasonable to start by moving the decision to the node\n> level - it's where the JIT happens, anyway. It may not be perfect, but\n> it seems like a clear improvement. And if we then choose to improve the\n> \"JIT cost model\" to address some of the issues you pointed out, surely\n> that would need to happen at the node level too ...\n\nI'm not sure I understand whether you (Tomas) think that this patch is\na good idea or a bad idea as it stands. I read the first of these two\nparagraphs to suggest that the patch hasn't really evolved much in the\nlast few years, perhaps suggesting that if it wasn't good enough to\ncommit back then, it still isn't now. 
But the second of these two\nparagraphs seems more supportive.\n\n From my own point of view, I definitely agree with David's statement\nthat what we really want to know is how many times each expression\nwill be evaluated. If we had that information, or just an estimate, I\nthink we could make much better decisions in this area. But we don't\nhave that infrastructure now, and it doesn't seem easy to create, so\nit seems to me that what we have to decide now is whether applying a\ncost threshold on a per-plan-node basis will produce better or worse\nresults than what making one decision for the whole plan. David's\nprovided an example of where it does indeed work better back in\nhttps://www.postgresql.org/message-id/CAApHDvpQJqLrNOSi8P1JLM8YE2C%2BksKFpSdZg%3Dq6sTbtQ-v%3Daw%40mail.gmail.com\n- but could there be enough cases where the opposite happens to make\nus think that the patch is overall a bad idea?\n\nI personally find that a bit unlikely, although not impossible. I see\na couple of ways that using the per-node cost can distort things -- it\nseems like it will tend to heavily feature JIT for \"interior\" plan\nnodes because the cost of a plan node includes it's children -- and as\nwas mentioned previously, it doesn't really care whether the node cost\nis high because of expression evaluation or something else. But\nneither of those things seem like they'd be bad enough to make this a\nbad way forward over all. For the patch to lose, it seems like we'd\nneed a case where the overall plan cost would have been high enough to\ntrigger JIT pre-patch, but most of the benefit would have come from\nrelatively low-cost nodes that don't get JITted post-patch. The\neasiest way for that to happen is if the planner's estimates are off,\nbut that's not really an argument against this patch as much as it is\nan argument that query planning is hard in general.\n\nA slightly subtler way the patch could lose is if the new threshold is\nharder to adjust than the old one. For example, imagine that you have\na query that does a Cartesian join. That makes the cost of the input\nnodes rather small compared to the cost of the join node, and it also\nmeans that JITting the inner join child in particular is probably\nrather important. But if you set join_above_cost low enough to JIT\nthat node post-patch, then maybe you'll also JIT a bunch of things\nthat aren't on the inner side of a nested loop and which might\ntherefore not really need JIT. Unless I'm missing something, this is a\nfairly realistic example of where this patch's approach to costing\ncould turn out to be painful ... but it's not like the current system\nis pain-free either.\n\nI don't really know what's best here, but I'm mildly inclined to\nbelieve that the patch might be a change for the better. I have not\nreviewed the implementation and have no comment on whether it's good\nor bad from that point of view.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Mar 2024 15:14:09 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT compilation per plan node" }, { "msg_contents": "Thanks for chipping in here.\n\nOn Fri, 15 Mar 2024 at 08:14, Robert Haas <[email protected]> wrote:\n> A slightly subtler way the patch could lose is if the new threshold is\n> harder to adjust than the old one. For example, imagine that you have\n> a query that does a Cartesian join. 
That makes the cost of the input\n> nodes rather small compared to the cost of the join node, and it also\n> means that JITting the inner join child in particular is probably\n> rather important. But if you set join_above_cost low enough to JIT\n> that node post-patch, then maybe you'll also JIT a bunch of things\n> that aren't on the inner side of a nested loop and which might\n> therefore not really need JIT. Unless I'm missing something, this is a\n> fairly realistic example of where this patch's approach to costing\n> could turn out to be painful ... but it's not like the current system\n> is pain-free either.\n\nI think this case would be covered as the cost of the inner side of\nthe join would be multiplied by the estimated outer-side rows.\nEffectively, making this part work is the bulk of the patch as we\ncurrently don't know the estimated number of loops of a node during\ncreate plan.\n\nDavid\n\n\n", "msg_date": "Fri, 15 Mar 2024 08:54:18 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT compilation per plan node" }, { "msg_contents": "\n\nOn 3/14/24 20:14, Robert Haas wrote:\n> On Tue, Feb 20, 2024 at 5:31 AM Tomas Vondra\n> <[email protected]> wrote:\n>> I certainly agree that the current JIT costing is quite crude, and we've\n>> all seen cases where the decision turns out to not be great. And I think\n>> the plan to make the decisions at the node level makes sense, so +1 to\n>> that in general.\n> \n> Seems reasonable to me also.\n> \n>> And I think you're right that looking just at the node total cost may\n>> not be sufficient - that we may need a better cost model, considering\n>> how many times an expression is executed and so on. But I think we\n>> should try to do this in smaller steps, meaningful on their own,\n>> otherwise we won't move at all. The two threads linked by Melih are ~4y\n>> old and *nothing* changed since then, AFAIK.\n>>\n>> I think it's reasonable to start by moving the decision to the node\n>> level - it's where the JIT happens, anyway. It may not be perfect, but\n>> it seems like a clear improvement. And if we then choose to improve the\n>> \"JIT cost model\" to address some of the issues you pointed out, surely\n>> that would need to happen at the node level too ...\n> \n> I'm not sure I understand whether you (Tomas) think that this patch is\n> a good idea or a bad idea as it stands. I read the first of these two\n> paragraphs to suggest that the patch hasn't really evolved much in the\n> last few years, perhaps suggesting that if it wasn't good enough to\n> commit back then, it still isn't now. But the second of these two\n> paragraphs seems more supportive.\n> \n\nTo clarify, I think the patch is a step in the right direction, and a\nmeaningful improvement. It may not be the perfect solution we imagine\n(but who knows how far we are from that), but AFAIK moving these\ndecisions to the node level is something the ideal solution would need\nto do too.\n\nThe reference to the 4y old patches was meant to support this patch as\nan improvement - perhaps incomplete, but still an improvement. We keep\nimagining \"perfect solutions\" and then end up doing nothing.\n\nI recognize there's a risk we may never get to have the ideal solution\n(e.g. because it requires information we don't possess). 
But I still\nthink moving the decision to the node level would allow us to do better\ndecisions compared to just doing it for the query as a whole.\n\n> From my own point of view, I definitely agree with David's statement\n> that what we really want to know is how many times each expression\n> will be evaluated. If we had that information, or just an estimate, I\n> think we could make much better decisions in this area. But we don't\n> have that infrastructure now, and it doesn't seem easy to create, so\n> it seems to me that what we have to decide now is whether applying a\n> cost threshold on a per-plan-node basis will produce better or worse\n> results than what making one decision for the whole plan. David's\n> provided an example of where it does indeed work better back in\n> https://www.postgresql.org/message-id/CAApHDvpQJqLrNOSi8P1JLM8YE2C%2BksKFpSdZg%3Dq6sTbtQ-v%3Daw%40mail.gmail.com\n> - but could there be enough cases where the opposite happens to make\n> us think that the patch is overall a bad idea?\n> \n\nRight, this risk or regression is always there, and I'm sure it'd be\npossible to construct such cases. But considering how crude the current\ncosting is, I'd be surprised if this ends up being a net negative.\n\nAlso, is the number of executions really the thing we're missing? Surely\nwe know the number of rows the node is dealing with, so we could use\nthis (yes, I realize there are issues, but we deal with that when\ncosting quals too). Isn't it much bigger issue that we have pretty much\nno cost model for the actual JIT (compilation/optimization) depending on\nhow many expressions it deals with?\n\n> I personally find that a bit unlikely, although not impossible. I see\n> a couple of ways that using the per-node cost can distort things -- it\n> seems like it will tend to heavily feature JIT for \"interior\" plan\n> nodes because the cost of a plan node includes it's children -- and as\n> was mentioned previously, it doesn't really care whether the node cost\n> is high because of expression evaluation or something else. But\n> neither of those things seem like they'd be bad enough to make this a\n> bad way forward over all. For the patch to lose, it seems like we'd\n> need a case where the overall plan cost would have been high enough to\n> trigger JIT pre-patch, but most of the benefit would have come from\n> relatively low-cost nodes that don't get JITted post-patch. The\n> easiest way for that to happen is if the planner's estimates are off,\n> but that's not really an argument against this patch as much as it is\n> an argument that query planning is hard in general.\n> \n> A slightly subtler way the patch could lose is if the new threshold is\n> harder to adjust than the old one. For example, imagine that you have\n> a query that does a Cartesian join. That makes the cost of the input\n> nodes rather small compared to the cost of the join node, and it also\n> means that JITting the inner join child in particular is probably\n> rather important. But if you set join_above_cost low enough to JIT\n> that node post-patch, then maybe you'll also JIT a bunch of things\n> that aren't on the inner side of a nested loop and which might\n> therefore not really need JIT. Unless I'm missing something, this is a\n> fairly realistic example of where this patch's approach to costing\n> could turn out to be painful ... 
but it's not like the current system\n> is pain-free either.\n> \n> I don't really know what's best here, but I'm mildly inclined to\n> believe that the patch might be a change for the better. I have not\n> reviewed the implementation and have no comment on whether it's good\n> or bad from that point of view.\n> \n\nI think it would be good to construct a bunch of cases where we think\nthis approach would misbehave, and see if the current code handles that\nany better and/or if there's a way to improve that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 14 Mar 2024 22:13:23 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT compilation per plan node" }, { "msg_contents": "On Fri, 15 Mar 2024 at 10:13, Tomas Vondra\n<[email protected]> wrote:\n> To clarify, I think the patch is a step in the right direction, and a\n> meaningful improvement. It may not be the perfect solution we imagine\n> (but who knows how far we are from that), but AFAIK moving these\n> decisions to the node level is something the ideal solution would need\n> to do too.\n\nI got thinking about this patch again while working on [1]. I want to\nwrite this down as I don't quite have time to get fully back into this\nright now...\n\nCurrently, during execution, ExecCreateExprSetupSteps() traverses the\nNode tree of the Expr to figure out the max varattno of for each slot.\nThat's done so all of the tuple deforming happens at once rather than\nincrementally. Figuring out the max varattno is a price we have to pay\nfor every execution of the query. I think we'd be better off doing\nthat in the planner.\n\nTo do this, I thought that setrefs.c could do this processing in\nfix_join_expr / fix_upper_expr and wrap up the expression in a new\nNode type that stores the max varattno for each special var type.\n\nThis idea is related to this discussion because another thing that\ncould be stored in the very same struct is the \"num_exec\" value. I\nfeel the number of executions of an ExprState is a better gauge of how\nuseful JIT will be than the cost of the plan node. Now, looking at\nset_join_references(), the execution estimates are not exactly\nperfect. For example;\n\n#define NUM_EXEC_QUAL(parentplan) ((parentplan)->plan_rows * 2.0)\n\nthat's not a great estimate for a Nested Loop's joinqual, but we could\neasily make efforts to improve those and that could likely be done\nindependently and concurrently with other work to make JIT more\ngranular.\n\nThe problem with doing this is that there's just a huge amount of code\nchurn in the executor. I am keen to come up with a prototype so I can\nget a better understanding of if this is a practical solution. 
I\ndon't want to go down that path if it's just me that thinks the number\nof times an ExprState is evaluated is a better measure to go on for\nJIT vs no JIT than the plan node's total cost.\n\nDoes anyone have any thoughts on that idea?\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvoexAxgQFNQD_GRkr2O_eJUD1-wUGm=m0L+Gc=T=kEa4g@mail.gmail.com\n\n\n", "msg_date": "Tue, 14 May 2024 16:10:52 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT compilation per plan node" }, { "msg_contents": "On Tue, 20 Feb 2024 at 06:38, David Rowley <[email protected]> wrote:\n>\n> On Tue, 20 Feb 2024 at 18:31, Tom Lane <[email protected]> wrote:\n> > FWIW, I seriously doubt that an extra walk of the plan tree is even\n> > measurable compared to the number of cycles JIT compilation will\n> > expend if it's called. So I don't buy your argument here.\n> > We would be better off to do this in a way that's clean and doesn't\n> > add overhead for non-JIT-enabled builds.\n>\n> The extra walk of the tree would need to be done for every plan, not\n> just the ones where we do JIT. I'd rather find a way to not add this\n> extra plan tree walk, especially since the vast majority of cases on\n> an average instance won't be doing any JIT.\n\nI'm not saying I'd prefer the extra walk, but I don't think you'd need\nto do this extra walk for all plans. Afaict you could skip the extra\nwalk when top_plan->total_cost < jit_above_cost. i.e. only doing the\nextra walk to determine which exact nodes to JIT for cases where we\ncurrently JIT all nodes. That would limit the extra walk overhead to\ncases where we currently already spend significant resources on JITing\nstuff.\n\n\n", "msg_date": "Tue, 14 May 2024 09:55:48 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT compilation per plan node" }, { "msg_contents": "On Tue, 14 May 2024 at 19:56, Jelte Fennema-Nio <[email protected]> wrote:\n> I'm not saying I'd prefer the extra walk, but I don't think you'd need\n> to do this extra walk for all plans. Afaict you could skip the extra\n> walk when top_plan->total_cost < jit_above_cost. i.e. only doing the\n> extra walk to determine which exact nodes to JIT for cases where we\n> currently JIT all nodes. That would limit the extra walk overhead to\n> cases where we currently already spend significant resources on JITing\n> stuff.\n\nYou could do that, but wouldn't it just cause us to sometimes miss\ndoing JIT for plan nodes that have a total cost above the top node's?\nTo me, it seems like a shortcut that someone might complain about one\nday and fixing it might require removing the short cut, which would\nlead to traversing the whole plan tree.\n\nHere's a plan where the total cost of a subnode is greater than the\ntotal cost of the top node:\n\nset max_parallel_workers_per_gather=0;\ncreate table t0 as select a from generate_Series(1,1000000)a;\nanalyze t0;\nexplain select * from t0 order by a limit 1;\n QUERY PLAN\n------------------------------------------------------------------------\n Limit (cost=19480.00..19480.00 rows=1 width=4)\n -> Sort (cost=19480.00..21980.00 rows=1000000 width=4)\n Sort Key: a\n -> Seq Scan on t0 (cost=0.00..14480.00 rows=1000000 width=4)\n\nAnyway, I don't think it's worth talking in detail about specifics\nabout implementation for the total cost of the node idea when the\nwhole replacement costing model design is still undecided. 
It feels\nlike we're trying to decide what colour to paint the bathroom when we\nhaven't even come up with a design for the house yet.\n\nI'd be interested to hear your thoughts on using the estimated number\nof invocations of the function to drive the JIT flags on a\nper-expression level.\n\nDavid\n\n\n", "msg_date": "Tue, 14 May 2024 20:18:47 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT compilation per plan node" }, { "msg_contents": "On Tue, 14 May 2024 at 10:19, David Rowley <[email protected]> wrote:\n> Here's a plan where the total cost of a subnode is greater than the\n> total cost of the top node:\n\nAh I didn't realize it was possible for that to happen. **reads up on\nplan costs**\n\nThis actually makes me think that using total_cost of the sub-nodes is\nnot the enough to determine determining if the node should be JITet.\nWe wouldn't want to start jitting plans like this, i.e. introducing\nall the JIT setup overhead for just a single row:\n\nset max_parallel_workers_per_gather=0;\ncreate table t0 as select a from generate_Series(1,1000000)a;\nanalyze t0;\nexplain select a+a*a+a*a+a from t0 limit 1;\n QUERY PLAN\n-----------------------------------------------------\n Limit (cost=0.00..0.03 rows=1 width=4)\n -> Seq Scan on t0 (cost=0.00..26980.00 rows=1000000 width=4)\n\nAn easy way to work around that issue I guess is by using the minimum\ntotal_cost of all the total_costs from the current sub-node up to the\nroot node. The current minimum could be passed along as a part of the\ncontext I guess.\n\n\n", "msg_date": "Tue, 14 May 2024 13:31:13 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JIT compilation per plan node" } ]
[ { "msg_contents": "LLVM 16 provided a new function name[1], and LLVM 18 (not shipped yet)\nhas started complaining[2] about the old spelling.\n\nHere's a patch.\n\n[1] https://github.com/llvm/llvm-project/commit/1b97645e56bf321b06d1353024339958b64fd242\n[2] https://github.com/llvm/llvm-project/commit/5ac12951b4e9bbfcc5791282d0961ec2b65575e9", "msg_date": "Wed, 3 Jan 2024 18:04:17 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "LLVM 18" }, { "msg_contents": "On Wed, Jan 3, 2024 at 6:04 PM Thomas Munro <[email protected]> wrote:\n> LLVM 16 provided a new function name[1], and LLVM 18 (not shipped yet)\n> has started complaining[2] about the old spelling.\n>\n> Here's a patch.\n\nAnd pushed.\n\nJust in case anyone else is confused by this, be aware that they've\nchanged their numbering scheme. The 18.1 schedule visible on llvm.org\ndoesn't imply that 18.0 has already shipped, it's just that they've\ndecided to start at X.1.\n\nBy the way, while testing on my Debian system with apt.llvm.org\npackages, I discovered that we crash with its latest llvm-18 package,\nnamely:\n\nllvm-18_1%3a18~++20240122112312+ad01447d30ed-1~exp1~20240122112329.478_amd64.deb\n\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x00007f033e73f5f8 in llvm::InlineFunction(llvm::CallBase&,\nllvm::InlineFunctionInfo&, bool, llvm::AAResults*, bool,\nllvm::Function*) () from /lib/x86_64-linux-gnu/libLLVM-18.so.1\n\n... so I re-confirmed that I wasn't hallucinating and it did work\nbefore I disappeared for the holidays by downgrading to the one before\nthat from my /var/cache/apt/archives, namely:\n\nllvm-18_1%3a18~++20231218112348+a4deb14e353c-1~exp1~20231218112405.407_amd64.deb\n\nSo I built the tip of their release/18.x branch so I could try to get\nsome more information out of my debugger and perhaps their assertions,\nbut it worked. So I have to assume that something was broken at their\ncommit ad01447d30ed and has been fixed in the past few days, but I\ndidn't have time to dig further, and will re-check a bit later when a\nfresh package shows up.\n\n\n", "msg_date": "Thu, 25 Jan 2024 14:17:31 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LLVM 18" }, { "msg_contents": "Re: Thomas Munro\n> By the way, while testing on my Debian system with apt.llvm.org\n> packages, I discovered that we crash with its latest llvm-18 package,\n> namely:\n\nUbuntu in their infinite wisdom have switched to LLVM 18 as default\nfor their upcoming 24.04 \"noble\" LTS release while Debian is still\ndefaulting to 16. I'm now seeing LLVM crashes on the 4 architectures\nwe support on noble.\n\nShould LLVM 18 be supported by now?\n\nChristoph\n\n\n", "msg_date": "Fri, 29 Mar 2024 19:07:31 +0100", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LLVM 18" }, { "msg_contents": "On Sat, Mar 30, 2024 at 7:07 AM Christoph Berg <[email protected]> wrote:\n> Ubuntu in their infinite wisdom have switched to LLVM 18 as default\n> for their upcoming 24.04 \"noble\" LTS release while Debian is still\n> defaulting to 16. I'm now seeing LLVM crashes on the 4 architectures\n> we support on noble.\n>\n> Should LLVM 18 be supported by now?\n\nHi Christoph,\n\nSeems there is a bug somewhere, probably (?) not in our code, but\nperhaps we should be looking for a workaround... 
here's the thread:\n\nhttps://www.postgresql.org/message-id/flat/CAFj8pRACpVFr7LMdVYENUkScG5FCYMZDDdSGNU-tch%2Bw98OxYg%40mail.gmail.com\n\n\n", "msg_date": "Sat, 30 Mar 2024 12:02:11 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LLVM 18" } ]
[ { "msg_contents": "Hi,\n\nI wonder what you think of making pg_prewarm use recent addition on \nsmgrprefetch and readv ?\n\n\nIn order to try, I did it anyway in the attached patches. They contain \nno doc update, but I will proceed if it is of interest.\n\nIn summary:\n\n1. The first one adds a new check on parameters (checking last block is \nindeed not before first block).\nConsequence is an ERROR is raised instead of silently doing nothing.\n\n2. The second one does implement smgrprefetch with range and loops by \ndefault per segment to still have a check for interrupts.\n\n3. The third one provides smgrreadv instead of smgrread,  by default on \na range of 8 buffers. I am absolutely unsure that I used readv correctly...\n\nQ: posix_fadvise may not work exactly the way you think it does, or does \nit ?\n\n\nIn details, and for the question:\n\nIt's not so obvious that the \"feature\" is really required or wanted, \ndepending on what are the expectations from user point of view.\n\nThe kernel decides on what to do with posix_fadvise calls, and how we \npass parameters does impact the decision.\nWith the current situation where prefetch is done step by step, block by \nblock, they are very probably most of the time all loaded even if those \nfrom the beginning of the relation can be discarded at the end of the \nprefetch.\n\nHowever,  if instead you provide a real range, or the magic len=0 to \nposix_fadvise, then blocks are \"more\" loaded according to effective vm \npressure (which is not the case on the previous example).\nAs a result only a small part of the relation might be loaded, and this \nis probably not what end-users expect despite being probably a good \nchoice (you can still free cache beforehand to help the kernel).\n\nAn example, below I'm using vm_relation_cachestat() which provides linux \ncachestat output, and vm_relation_fadvise() to unload cache, and \npg_prewarm for the demo:\n\n# clear cache: (nr_cache is the number of file system pages in cache, \nnot postgres blocks)\n\n```\npostgres=# select block_start, block_count, nr_pages, nr_cache from \nvm_relation_cachestat('foo',range:=1024*32);\nblock_start | block_count | nr_pages | nr_cache\n-------------+-------------+----------+----------\n           0 |       32768 |    65536 |        0\n       32768 |       32768 |    65536 |        0\n       65536 |       32768 |    65536 |        0\n       98304 |       32768 |    65536 |        0\n      131072 |        1672 |     3344 |        0\n```\n\n# load full relation with pg_prewarm (patched)\n\n```\npostgres=# select pg_prewarm('foo','prefetch');\npg_prewarm\n------------\n     132744\n(1 row)\n```\n\n# Checking results:\n\n```\npostgres=# select block_start, block_count, nr_pages, nr_cache from \nvm_relation_cachestat('foo',range:=1024*32);\nblock_start | block_count | nr_pages | nr_cache\n-------------+-------------+----------+----------\n           0 |       32768 |    65536 |      320\n       32768 |       32768 |    65536 |        0\n       65536 |       32768 |    65536 |        0\n       98304 |       32768 |    65536 |        0\n      131072 |        1672 |     3344 |      320  <-- segment 1\n\n```\n\n# Load block by block and check:\n\n```\npostgres=# select from generate_series(0, 132743) g(n), lateral \npg_prewarm('foo','prefetch', 'main', n, n);\npostgres=# select block_start, block_count, nr_pages, nr_cache from \nvm_relation_cachestat('foo',range:=1024*32);\nblock_start | block_count | nr_pages | 
nr_cache\n-------------+-------------+----------+----------\n           0 |       32768 |    65536 |    65536\n       32768 |       32768 |    65536 |    65536\n       65536 |       32768 |    65536 |    65536\n       98304 |       32768 |    65536 |    65536\n      131072 |        1672 |     3344 |     3344\n\n```\n\nThe duration of the last example is also really significant: full \nrelation is 0.3ms and block by block is 1550ms!\nYou might think it's because of generate_series or whatever, but I have \nthe exact same behavior with pgfincore.\nI can compare loading and unloading duration for similar \"async\" work, \nhere each call is from block 0 with len of 132744 and a range of 1 block \n(i.e. posix_fadvise on 8kB at a time).\nSo they have exactly the same number of operations doing DONTNEED or \nWILLNEED, but distinct duration on the first \"load\":\n\n```\n\npostgres=# select * from \nvm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_DONTNEED');\nvm_relation_fadvise\n---------------------\n\n(1 row)\n\nTime: 25.202 ms\npostgres=# select * from \nvm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_WILLNEED');\nvm_relation_fadvise\n---------------------\n\n(1 row)\n\nTime: 1523.636 ms (00:01.524) <----- not free !\npostgres=# select * from \nvm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_WILLNEED');\nvm_relation_fadvise\n---------------------\n\n(1 row)\n\nTime: 24.967 ms\n```\n\nThank you for your time reading this longer than expected email.\n\nComments ?\n\n\n---\nCédric Villemain +33 (0)6 20 30 22 52\nhttps://Data-Bene.io\nPostgreSQL Expertise, Support, Training, R&D", "msg_date": "Thu, 4 Jan 2024 00:23:43 +0100", "msg_from": "Cedric Villemain <[email protected]>", "msg_from_op": true, "msg_subject": "Change prefetch and read strategies to use range in pg_prewarm ...\n and raise a question about posix_fadvise WILLNEED" }, { "msg_contents": "Hi,\n\nThanks for working on this!\n\nThe patches are cleanly applied on top of the current master and all\ntests are passed.\n\nOn Thu, 4 Jan 2024 at 02:23, Cedric Villemain\n<[email protected]> wrote:\n>\n> Hi,\n>\n> I wonder what you think of making pg_prewarm use recent addition on\n> smgrprefetch and readv ?\n>\n>\n> In order to try, I did it anyway in the attached patches. They contain\n> no doc update, but I will proceed if it is of interest.\n>\n> In summary:\n>\n> 1. The first one adds a new check on parameters (checking last block is\n> indeed not before first block).\n> Consequence is an ERROR is raised instead of silently doing nothing.\n\nThis is a general improvement and can be committed without other patches.\n\n> 2. The second one does implement smgrprefetch with range and loops by\n> default per segment to still have a check for interrupts.\n\nIt looks good codewise but RELSEG_SIZE is too big to prefetch. Man\npage of posix_fadvise [1] states that: \"The amount of data read may be\ndecreased by the kernel depending on virtual memory load. (A few\nmegabytes will usually be fully satisfied, and more is rarely\nuseful.)\". It is trying to prefetch 1GB data now. That could explain\nyour observation about differences between nr_cache numbers.\n\n> 3. The third one provides smgrreadv instead of smgrread, by default on\n> a range of 8 buffers. 
I am absolutely unsure that I used readv correctly...\n\nLooks good to me.\n\n> Q: posix_fadvise may not work exactly the way you think it does, or does\n> it ?\n>\n>\n> In details, and for the question:\n>\n> It's not so obvious that the \"feature\" is really required or wanted,\n> depending on what are the expectations from user point of view.\n>\n> The kernel decides on what to do with posix_fadvise calls, and how we\n> pass parameters does impact the decision.\n> With the current situation where prefetch is done step by step, block by\n> block, they are very probably most of the time all loaded even if those\n> from the beginning of the relation can be discarded at the end of the\n> prefetch.\n>\n> However, if instead you provide a real range, or the magic len=0 to\n> posix_fadvise, then blocks are \"more\" loaded according to effective vm\n> pressure (which is not the case on the previous example).\n> As a result only a small part of the relation might be loaded, and this\n> is probably not what end-users expect despite being probably a good\n> choice (you can still free cache beforehand to help the kernel).\n>\n> An example, below I'm using vm_relation_cachestat() which provides linux\n> cachestat output, and vm_relation_fadvise() to unload cache, and\n> pg_prewarm for the demo:\n>\n> # clear cache: (nr_cache is the number of file system pages in cache,\n> not postgres blocks)\n>\n> ```\n> postgres=# select block_start, block_count, nr_pages, nr_cache from\n> vm_relation_cachestat('foo',range:=1024*32);\n> block_start | block_count | nr_pages | nr_cache\n> -------------+-------------+----------+----------\n> 0 | 32768 | 65536 | 0\n> 32768 | 32768 | 65536 | 0\n> 65536 | 32768 | 65536 | 0\n> 98304 | 32768 | 65536 | 0\n> 131072 | 1672 | 3344 | 0\n> ```\n>\n> # load full relation with pg_prewarm (patched)\n>\n> ```\n> postgres=# select pg_prewarm('foo','prefetch');\n> pg_prewarm\n> ------------\n> 132744\n> (1 row)\n> ```\n>\n> # Checking results:\n>\n> ```\n> postgres=# select block_start, block_count, nr_pages, nr_cache from\n> vm_relation_cachestat('foo',range:=1024*32);\n> block_start | block_count | nr_pages | nr_cache\n> -------------+-------------+----------+----------\n> 0 | 32768 | 65536 | 320\n> 32768 | 32768 | 65536 | 0\n> 65536 | 32768 | 65536 | 0\n> 98304 | 32768 | 65536 | 0\n> 131072 | 1672 | 3344 | 320 <-- segment 1\n>\n> ```\n>\n> # Load block by block and check:\n>\n> ```\n> postgres=# select from generate_series(0, 132743) g(n), lateral\n> pg_prewarm('foo','prefetch', 'main', n, n);\n> postgres=# select block_start, block_count, nr_pages, nr_cache from\n> vm_relation_cachestat('foo',range:=1024*32);\n> block_start | block_count | nr_pages | nr_cache\n> -------------+-------------+----------+----------\n> 0 | 32768 | 65536 | 65536\n> 32768 | 32768 | 65536 | 65536\n> 65536 | 32768 | 65536 | 65536\n> 98304 | 32768 | 65536 | 65536\n> 131072 | 1672 | 3344 | 3344\n>\n> ```\n>\n> The duration of the last example is also really significant: full\n> relation is 0.3ms and block by block is 1550ms!\n> You might think it's because of generate_series or whatever, but I have\n> the exact same behavior with pgfincore.\n> I can compare loading and unloading duration for similar \"async\" work,\n> here each call is from block 0 with len of 132744 and a range of 1 block\n> (i.e. 
posix_fadvise on 8kB at a time).\n> So they have exactly the same number of operations doing DONTNEED or\n> WILLNEED, but distinct duration on the first \"load\":\n>\n> ```\n>\n> postgres=# select * from\n> vm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_DONTNEED');\n> vm_relation_fadvise\n> ---------------------\n>\n> (1 row)\n>\n> Time: 25.202 ms\n> postgres=# select * from\n> vm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_WILLNEED');\n> vm_relation_fadvise\n> ---------------------\n>\n> (1 row)\n>\n> Time: 1523.636 ms (00:01.524) <----- not free !\n> postgres=# select * from\n> vm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_WILLNEED');\n> vm_relation_fadvise\n> ---------------------\n>\n> (1 row)\n>\n> Time: 24.967 ms\n> ```\n\nI confirm that there is a time difference between calling pg_prewarm\nby full relation and block by block, but IMO this is expected. When\npg_prewarm is called by full relation, it does the initialization part\njust once but when it is called block by block, it does initialization\nfor each call, right?\n\nI run 'select pg_prewarm('foo','prefetch', 'main', n, n) FROM\ngenerate_series(0, 132744)n;' a couple of times consecutively but I\ncould not see the time difference between first run (first load) and\nthe consecutive runs. Am I doing something wrong?\n\n[1] https://man7.org/linux/man-pages/man2/posix_fadvise.2.html#DESCRIPTION\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Tue, 5 Mar 2024 14:07:01 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Change prefetch and read strategies to use range in pg_prewarm\n ... and raise a question about posix_fadvise WILLNEED" }, { "msg_contents": "Hi Nazir,\n\n\nthank you for your review. I comment below.\n\n\nOn 05/03/2024 12:07, Nazir Bilal Yavuz wrote:\n>> 2. The second one does implement smgrprefetch with range and loops by\n>> default per segment to still have a check for interrupts.\n> It looks good codewise but RELSEG_SIZE is too big to prefetch. Man\n> page of posix_fadvise [1] states that: \"The amount of data read may be\n> decreased by the kernel depending on virtual memory load. (A few\n> megabytes will usually be fully satisfied, and more is rarely\n> useful.)\". It is trying to prefetch 1GB data now. That could explain\n> your observation about differences between nr_cache numbers.\n\n From an \"adminsys\" point of view I will find beneficial to get a single \nsyscall per file, respecting the logic and behavior of underlying system \ncall.\n\nThe behavior is 100% OK, and in fact it might a bad idea to prefetch \nblock by block as the result is just to put more pressure on a system if \nit is already under pressure.\n\nThough there are use cases and it's nice to be able to do that too at \nthis per page level.\n\nAbout [1], it's very old statement about resources. 
And Linux manages a \npart of the problem for us here I think [2]:\n\n/*\n  * Chunk the readahead into 2 megabyte units, so that we don't pin too much\n  * memory at once.\n  */\nvoid force_page_cache_ra(....)\n\n>> Q: posix_fadvise may not work exactly the way you think it does, or does\n>> it ?\n>>\n>>\n>> In details, and for the question:\n>>\n>> However, if instead you provide a real range, or the magic len=0 to\n>> posix_fadvise, then blocks are \"more\" loaded according to effective vm\n>> pressure (which is not the case on the previous example).\n>> As a result only a small part of the relation might be loaded, and this\n>> is probably not what end-users expect despite being probably a good\n>> choice (you can still free cache beforehand to help the kernel).\n\nI think it's a matter of documenting well the feature, and if at all \npossible, as usual, not let users be negatively impacted by default.\n\n\n>> An example, below I'm using vm_relation_cachestat() which provides linux\n>> cachestat output, and vm_relation_fadvise() to unload cache, and\n>> pg_prewarm for the demo:\n>>\n>> # clear cache: (nr_cache is the number of file system pages in cache,\n>> not postgres blocks)\n>>\n>> ```\n>> postgres=# select block_start, block_count, nr_pages, nr_cache from\n>> vm_relation_cachestat('foo',range:=1024*32);\n>> block_start | block_count | nr_pages | nr_cache\n>> -------------+-------------+----------+----------\n>> 0 | 32768 | 65536 | 0\n>> 32768 | 32768 | 65536 | 0\n>> 65536 | 32768 | 65536 | 0\n>> 98304 | 32768 | 65536 | 0\n>> 131072 | 1672 | 3344 | 0\n>> ```\n>>\n>> # load full relation with pg_prewarm (patched)\n>>\n>> ```\n>> postgres=# select pg_prewarm('foo','prefetch');\n>> pg_prewarm\n>> ------------\n>> 132744\n>> (1 row)\n>> ```\n>>\n>> # Checking results:\n>>\n>> ```\n>> postgres=# select block_start, block_count, nr_pages, nr_cache from\n>> vm_relation_cachestat('foo',range:=1024*32);\n>> block_start | block_count | nr_pages | nr_cache\n>> -------------+-------------+----------+----------\n>> 0 | 32768 | 65536 | 320\n>> 32768 | 32768 | 65536 | 0\n>> 65536 | 32768 | 65536 | 0\n>> 98304 | 32768 | 65536 | 0\n>> 131072 | 1672 | 3344 | 320 <-- segment 1\n>>\n>> ```\n>>\n>> # Load block by block and check:\n>>\n>> ```\n>> postgres=# select from generate_series(0, 132743) g(n), lateral\n>> pg_prewarm('foo','prefetch', 'main', n, n);\n>> postgres=# select block_start, block_count, nr_pages, nr_cache from\n>> vm_relation_cachestat('foo',range:=1024*32);\n>> block_start | block_count | nr_pages | nr_cache\n>> -------------+-------------+----------+----------\n>> 0 | 32768 | 65536 | 65536\n>> 32768 | 32768 | 65536 | 65536\n>> 65536 | 32768 | 65536 | 65536\n>> 98304 | 32768 | 65536 | 65536\n>> 131072 | 1672 | 3344 | 3344\n>>\n>> ```\n>>\n>> The duration of the last example is also really significant: full\n>> relation is 0.3ms and block by block is 1550ms!\n>> You might think it's because of generate_series or whatever, but I have\n>> the exact same behavior with pgfincore.\n>> I can compare loading and unloading duration for similar \"async\" work,\n>> here each call is from block 0 with len of 132744 and a range of 1 block\n>> (i.e. 
posix_fadvise on 8kB at a time).\n>> So they have exactly the same number of operations doing DONTNEED or\n>> WILLNEED, but distinct duration on the first \"load\":\n>>\n>> ```\n>>\n>> postgres=# select * from\n>> vm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_DONTNEED');\n>> vm_relation_fadvise\n>> ---------------------\n>>\n>> (1 row)\n>>\n>> Time: 25.202 ms\n>> postgres=# select * from\n>> vm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_WILLNEED');\n>> vm_relation_fadvise\n>> ---------------------\n>>\n>> (1 row)\n>>\n>> Time: 1523.636 ms (00:01.524) <----- not free !\n>> postgres=# select * from\n>> vm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_WILLNEED');\n>> vm_relation_fadvise\n>> ---------------------\n>>\n>> (1 row)\n>>\n>> Time: 24.967 ms\n>> ```\n> I confirm that there is a time difference between calling pg_prewarm\n> by full relation and block by block, but IMO this is expected. When\n> pg_prewarm is called by full relation, it does the initialization part\n> just once but when it is called block by block, it does initialization\n> for each call, right?\n\n\nNot sure what initialization is here exactly, in my example with \nWILLNEED/DONTNEED there are exactly the same code pattern and syscall \nrequest(s), just the flag is distinct, so initialization cost are \nexpected to be very similar.\nI'll try to move forward on those vm_relation functions into pgfincore \nso it'll be easier to run similar tests and compare.\n\n\n>\n> I run 'select pg_prewarm('foo','prefetch', 'main', n, n) FROM\n> generate_series(0, 132744)n;' a couple of times consecutively but I\n> could not see the time difference between first run (first load) and\n> the consecutive runs. Am I doing something wrong?\n\n\nMaybe the system is overloaded and thus by the time you're done \nprefetching tail blocks, the heads ones have been dropped already. So \nlooping on that leads to similar duration.\nIf it's already in cache and not removed from it, execution time is \nstable. This point (in cache or not) is hard to guess right until you do \ncheck the status, or you ensure to clean it first.\n\n> [1] https://man7.org/linux/man-pages/man2/posix_fadvise.2.html#DESCRIPTION\n\n[2] https://elixir.bootlin.com/linux/latest/source/mm/readahead.c#L303\n\nMy apologize about the email address with sub-address which leads to \nundelivered email. Please update with the current one.\n\n---\nCédric Villemain +33 (0)6 20 30 22 52\nhttps://Data-Bene.io\nPostgreSQL Expertise, Support, Training, R&D\n\n\n\n", "msg_date": "Wed, 6 Mar 2024 16:23:01 +0100", "msg_from": "=?UTF-8?Q?C=C3=A9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Change prefetch and read strategies to use range in pg_prewarm\n ... and raise a question about posix_fadvise WILLNEED" }, { "msg_contents": "Hi,\n\nOn Wed, 6 Mar 2024 at 18:23, Cédric Villemain\n<[email protected]> wrote:\n>\n> Hi Nazir,\n>\n>\n> thank you for your review. I comment below.\n>\n>\n> On 05/03/2024 12:07, Nazir Bilal Yavuz wrote:\n> >> 2. The second one does implement smgrprefetch with range and loops by\n> >> default per segment to still have a check for interrupts.\n> > It looks good codewise but RELSEG_SIZE is too big to prefetch. Man\n> > page of posix_fadvise [1] states that: \"The amount of data read may be\n> > decreased by the kernel depending on virtual memory load. (A few\n> > megabytes will usually be fully satisfied, and more is rarely\n> > useful.)\". It is trying to prefetch 1GB data now. 
That could explain\n> > your observation about differences between nr_cache numbers.\n>\n> From an \"adminsys\" point of view I will find beneficial to get a single\n> syscall per file, respecting the logic and behavior of underlying system\n> call.\n\nI agree.\n\n> The behavior is 100% OK, and in fact it might a bad idea to prefetch\n> block by block as the result is just to put more pressure on a system if\n> it is already under pressure.\n>\n> Though there are use cases and it's nice to be able to do that too at\n> this per page level.\n\nYes, I do not know which one is more important, cache more blocks but\ncreate more pressure or create less pressure but cache less blocks.\nAlso, pg_prewarm is designed to be run at startup so I guess there\nwill not be much pressure.\n\n> About [1], it's very old statement about resources. And Linux manages a\n> part of the problem for us here I think [2]:\n>\n> /*\n> * Chunk the readahead into 2 megabyte units, so that we don't pin too much\n> * memory at once.\n> */\n> void force_page_cache_ra(....)\n\nThanks for pointing out the actual code. Yes, it looks like the kernel\nis already doing that. I would like to do more testing when you\nforward vm_relation functions into pgfincore.\n\n> >> Q: posix_fadvise may not work exactly the way you think it does, or does\n> >> it ?\n> >>\n> >>\n> >> In details, and for the question:\n> >>\n> >> However, if instead you provide a real range, or the magic len=0 to\n> >> posix_fadvise, then blocks are \"more\" loaded according to effective vm\n> >> pressure (which is not the case on the previous example).\n> >> As a result only a small part of the relation might be loaded, and this\n> >> is probably not what end-users expect despite being probably a good\n> >> choice (you can still free cache beforehand to help the kernel).\n>\n> I think it's a matter of documenting well the feature, and if at all\n> possible, as usual, not let users be negatively impacted by default.\n>\n>\n> >> An example, below I'm using vm_relation_cachestat() which provides linux\n> >> cachestat output, and vm_relation_fadvise() to unload cache, and\n> >> pg_prewarm for the demo:\n> >>\n> >> # clear cache: (nr_cache is the number of file system pages in cache,\n> >> not postgres blocks)\n> >>\n> >> ```\n> >> postgres=# select block_start, block_count, nr_pages, nr_cache from\n> >> vm_relation_cachestat('foo',range:=1024*32);\n> >> block_start | block_count | nr_pages | nr_cache\n> >> -------------+-------------+----------+----------\n> >> 0 | 32768 | 65536 | 0\n> >> 32768 | 32768 | 65536 | 0\n> >> 65536 | 32768 | 65536 | 0\n> >> 98304 | 32768 | 65536 | 0\n> >> 131072 | 1672 | 3344 | 0\n> >> ```\n> >>\n> >> # load full relation with pg_prewarm (patched)\n> >>\n> >> ```\n> >> postgres=# select pg_prewarm('foo','prefetch');\n> >> pg_prewarm\n> >> ------------\n> >> 132744\n> >> (1 row)\n> >> ```\n> >>\n> >> # Checking results:\n> >>\n> >> ```\n> >> postgres=# select block_start, block_count, nr_pages, nr_cache from\n> >> vm_relation_cachestat('foo',range:=1024*32);\n> >> block_start | block_count | nr_pages | nr_cache\n> >> -------------+-------------+----------+----------\n> >> 0 | 32768 | 65536 | 320\n> >> 32768 | 32768 | 65536 | 0\n> >> 65536 | 32768 | 65536 | 0\n> >> 98304 | 32768 | 65536 | 0\n> >> 131072 | 1672 | 3344 | 320 <-- segment 1\n> >>\n> >> ```\n> >>\n> >> # Load block by block and check:\n> >>\n> >> ```\n> >> postgres=# select from generate_series(0, 132743) g(n), lateral\n> >> pg_prewarm('foo','prefetch', 'main', n, n);\n> >> 
postgres=# select block_start, block_count, nr_pages, nr_cache from\n> >> vm_relation_cachestat('foo',range:=1024*32);\n> >> block_start | block_count | nr_pages | nr_cache\n> >> -------------+-------------+----------+----------\n> >> 0 | 32768 | 65536 | 65536\n> >> 32768 | 32768 | 65536 | 65536\n> >> 65536 | 32768 | 65536 | 65536\n> >> 98304 | 32768 | 65536 | 65536\n> >> 131072 | 1672 | 3344 | 3344\n> >>\n> >> ```\n> >>\n> >> The duration of the last example is also really significant: full\n> >> relation is 0.3ms and block by block is 1550ms!\n> >> You might think it's because of generate_series or whatever, but I have\n> >> the exact same behavior with pgfincore.\n> >> I can compare loading and unloading duration for similar \"async\" work,\n> >> here each call is from block 0 with len of 132744 and a range of 1 block\n> >> (i.e. posix_fadvise on 8kB at a time).\n> >> So they have exactly the same number of operations doing DONTNEED or\n> >> WILLNEED, but distinct duration on the first \"load\":\n> >>\n> >> ```\n> >>\n> >> postgres=# select * from\n> >> vm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_DONTNEED');\n> >> vm_relation_fadvise\n> >> ---------------------\n> >>\n> >> (1 row)\n> >>\n> >> Time: 25.202 ms\n> >> postgres=# select * from\n> >> vm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_WILLNEED');\n> >> vm_relation_fadvise\n> >> ---------------------\n> >>\n> >> (1 row)\n> >>\n> >> Time: 1523.636 ms (00:01.524) <----- not free !\n> >> postgres=# select * from\n> >> vm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_WILLNEED');\n> >> vm_relation_fadvise\n> >> ---------------------\n> >>\n> >> (1 row)\n> >>\n> >> Time: 24.967 ms\n> >> ```\n> > I confirm that there is a time difference between calling pg_prewarm\n> > by full relation and block by block, but IMO this is expected. When\n> > pg_prewarm is called by full relation, it does the initialization part\n> > just once but when it is called block by block, it does initialization\n> > for each call, right?\n>\n>\n> Not sure what initialization is here exactly, in my example with\n> WILLNEED/DONTNEED there are exactly the same code pattern and syscall\n> request(s), just the flag is distinct, so initialization cost are\n> expected to be very similar.\n\nSorry, there was a miscommunication. I was talking about pg_prewarm's\ninitialization, meaning if the pg_prewarm is called block by block (by\nusing generate_series); it will make block_count times initialization\nand if it is called by full relation it will just do it once but it\nseems that is not the case, see below.\n\n> I'll try to move forward on those vm_relation functions into pgfincore\n> so it'll be easier to run similar tests and compare.\n\nThanks, that will be helpful for the testing.\n\n> >\n> > I run 'select pg_prewarm('foo','prefetch', 'main', n, n) FROM\n> > generate_series(0, 132744)n;' a couple of times consecutively but I\n> > could not see the time difference between first run (first load) and\n> > the consecutive runs. Am I doing something wrong?\n>\n>\n> Maybe the system is overloaded and thus by the time you're done\n> prefetching tail blocks, the heads ones have been dropped already. So\n> looping on that leads to similar duration.\n> If it's already in cache and not removed from it, execution time is\n> stable. This point (in cache or not) is hard to guess right until you do\n> check the status, or you ensure to clean it first.\n\nMy bad. I was trying to drop buffers from the postgres cache, not from\nthe kernel cache. 
See my results now:\n\npatched | prefetch test\n\n$ create_the_data [3]\n$ drop_kernel_cache [4]\n$ first_run_full_relation_prefetch [5] -> Time: 11.395 ms\n$ second_run_full_relation_prefetch [5] -> Time: 0.887 ms\n\nmaster | prefetch test\n\n$ create_the_data [3]\n$ drop_kernel_cache [4]\n$ first_run_full_relation_prefetch [5] -> Time: 3208.944 ms\n$ second_run_full_relation_prefetch [5] -> Time: 283.905 ms\n\nI did more perf tests about comparison between first and second run\nfor the prefetch and found this on master:\n\nfirst run:\n- 86.40% generic_fadvise\n - 86.24% force_page_cache_ra\n - 85.99% page_cache_ra_unbounded\n + 37.36% filemap_add_folio\n + 34.14% read_pages\n + 8.31% folio_alloc\n + 4.55% up_read\n 0.77% xa_load\n\nsecond run:\n- 20.64% generic_fadvise\n - 18.64% force_page_cache_ra\n - 17.46% page_cache_ra_unbounded\n + 8.54% xa_load\n 2.82% down_read\n 2.29% read_pages\n 1.45% up_read\n\nSo, it looks like the difference between the first and the second run\ncomes from kernel optimization that does not do prefetch if the page\nis already in the cache [6]. Saying that, I do not know the difference\nbetween WILLNEED/DONTNEED and I do not have enough materials to test\nit but I guess it is something similar.\n\nI did not test read performance but I am planning to do that soon.\n\n> > [1] https://man7.org/linux/man-pages/man2/posix_fadvise.2.html#DESCRIPTION\n>\n> [2] https://elixir.bootlin.com/linux/latest/source/mm/readahead.c#L303\n\n[3]\nCREATE EXTENSION pg_prewarm;\ndrop table if exists foo;\ncreate table foo ( id int, c text) with (autovacuum_enabled=false);\ninsert into foo select i, repeat('a', 1000) from generate_series(1,10000000)i;\n\n[4] echo 3 | sudo tee /proc/sys/vm/drop_caches\n\n[5] select pg_prewarm('foo', 'prefetch', 'main');\n\n[6] https://elixir.bootlin.com/linux/latest/source/mm/readahead.c#L232\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Thu, 7 Mar 2024 14:19:16 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Change prefetch and read strategies to use range in pg_prewarm\n ... and raise a question about posix_fadvise WILLNEED" }, { "msg_contents": "Hi Nazir,\n\nOn 07/03/2024 12:19, Nazir Bilal Yavuz wrote:\n> On Wed, 6 Mar 2024 at 18:23, Cédric Villemain\n> <[email protected]> wrote:\n>> The behavior is 100% OK, and in fact it might a bad idea to prefetch\n>> block by block as the result is just to put more pressure on a system if\n>> it is already under pressure.\n>>\n>> Though there are use cases and it's nice to be able to do that too at\n>> this per page level.\n> Yes, I do not know which one is more important, cache more blocks but\n> create more pressure or create less pressure but cache less blocks.\n> Also, pg_prewarm is designed to be run at startup so I guess there\n> will not be much pressure.\n\nautowarm is designed for that purpose but pg_prewarm is free to use when \nneeed.\n\n>> About [1], it's very old statement about resources. And Linux manages a\n>> part of the problem for us here I think [2]:\n>>\n>> /*\n>> * Chunk the readahead into 2 megabyte units, so that we don't pin too much\n>> * memory at once.\n>> */\n>> void force_page_cache_ra(....)\n> Thanks for pointing out the actual code. Yes, it looks like the kernel\n> is already doing that. 
I would like to do more testing when you\n> forward vm_relation functions into pgfincore.\n\n\nI hope to be able to get back there next week max.\n\n\n>>>> An example, below I'm using vm_relation_cachestat() which provides linux\n>>>> cachestat output, and vm_relation_fadvise() to unload cache, and\n>>>> pg_prewarm for the demo:\n>>>>\n>>>> # clear cache: (nr_cache is the number of file system pages in cache,\n>>>> not postgres blocks)\n>>>>\n>>>> ```\n>>>> postgres=# select block_start, block_count, nr_pages, nr_cache from\n>>>> vm_relation_cachestat('foo',range:=1024*32);\n>>>> block_start | block_count | nr_pages | nr_cache\n>>>> -------------+-------------+----------+----------\n>>>> 0 | 32768 | 65536 | 0\n>>>> 32768 | 32768 | 65536 | 0\n>>>> 65536 | 32768 | 65536 | 0\n>>>> 98304 | 32768 | 65536 | 0\n>>>> 131072 | 1672 | 3344 | 0\n>>>> ```\n>>>>\n>>>> # load full relation with pg_prewarm (patched)\n>>>>\n>>>> ```\n>>>> postgres=# select pg_prewarm('foo','prefetch');\n>>>> pg_prewarm\n>>>> ------------\n>>>> 132744\n>>>> (1 row)\n>>>> ```\n>>>>\n>>>> # Checking results:\n>>>>\n>>>> ```\n>>>> postgres=# select block_start, block_count, nr_pages, nr_cache from\n>>>> vm_relation_cachestat('foo',range:=1024*32);\n>>>> block_start | block_count | nr_pages | nr_cache\n>>>> -------------+-------------+----------+----------\n>>>> 0 | 32768 | 65536 | 320\n>>>> 32768 | 32768 | 65536 | 0\n>>>> 65536 | 32768 | 65536 | 0\n>>>> 98304 | 32768 | 65536 | 0\n>>>> 131072 | 1672 | 3344 | 320 <-- segment 1\n>>>>\n>>>> ```\n>>>>\n>>>> # Load block by block and check:\n>>>>\n>>>> ```\n>>>> postgres=# select from generate_series(0, 132743) g(n), lateral\n>>>> pg_prewarm('foo','prefetch', 'main', n, n);\n>>>> postgres=# select block_start, block_count, nr_pages, nr_cache from\n>>>> vm_relation_cachestat('foo',range:=1024*32);\n>>>> block_start | block_count | nr_pages | nr_cache\n>>>> -------------+-------------+----------+----------\n>>>> 0 | 32768 | 65536 | 65536\n>>>> 32768 | 32768 | 65536 | 65536\n>>>> 65536 | 32768 | 65536 | 65536\n>>>> 98304 | 32768 | 65536 | 65536\n>>>> 131072 | 1672 | 3344 | 3344\n>>>>\n>>>> ```\n>>>>\n>>>> The duration of the last example is also really significant: full\n>>>> relation is 0.3ms and block by block is 1550ms!\n>>>> You might think it's because of generate_series or whatever, but I have\n>>>> the exact same behavior with pgfincore.\n>>>> I can compare loading and unloading duration for similar \"async\" work,\n>>>> here each call is from block 0 with len of 132744 and a range of 1 block\n>>>> (i.e. 
posix_fadvise on 8kB at a time).\n>>>> So they have exactly the same number of operations doing DONTNEED or\n>>>> WILLNEED, but distinct duration on the first \"load\":\n>>>>\n>>>> ```\n>>>>\n>>>> postgres=# select * from\n>>>> vm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_DONTNEED');\n>>>> vm_relation_fadvise\n>>>> ---------------------\n>>>>\n>>>> (1 row)\n>>>>\n>>>> Time: 25.202 ms\n>>>> postgres=# select * from\n>>>> vm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_WILLNEED');\n>>>> vm_relation_fadvise\n>>>> ---------------------\n>>>>\n>>>> (1 row)\n>>>>\n>>>> Time: 1523.636 ms (00:01.524) <----- not free !\n>>>> postgres=# select * from\n>>>> vm_relation_fadvise('foo','main',0,132744,1,'POSIX_FADV_WILLNEED');\n>>>> vm_relation_fadvise\n>>>> ---------------------\n>>>>\n>>>> (1 row)\n>>>>\n>>>> Time: 24.967 ms\n>>>> ```\n>>> I confirm that there is a time difference between calling pg_prewarm\n>>> by full relation and block by block, but IMO this is expected. When\n>>> pg_prewarm is called by full relation, it does the initialization part\n>>> just once but when it is called block by block, it does initialization\n>>> for each call, right?\n>>\n>> Not sure what initialization is here exactly, in my example with\n>> WILLNEED/DONTNEED there are exactly the same code pattern and syscall\n>> request(s), just the flag is distinct, so initialization cost are\n>> expected to be very similar.\n> Sorry, there was a miscommunication. I was talking about pg_prewarm's\n> initialization, meaning if the pg_prewarm is called block by block (by\n> using generate_series); it will make block_count times initialization\n> and if it is called by full relation it will just do it once but it\n> seems that is not the case, see below.\n\n\nOK.\n\n>> I'll try to move forward on those vm_relation functions into pgfincore\n>> so it'll be easier to run similar tests and compare.\n> Thanks, that will be helpful for the testing.\n>\n>>> I run 'select pg_prewarm('foo','prefetch', 'main', n, n) FROM\n>>> generate_series(0, 132744)n;' a couple of times consecutively but I\n>>> could not see the time difference between first run (first load) and\n>>> the consecutive runs. Am I doing something wrong?\n>>\n>> Maybe the system is overloaded and thus by the time you're done\n>> prefetching tail blocks, the heads ones have been dropped already. So\n>> looping on that leads to similar duration.\n>> If it's already in cache and not removed from it, execution time is\n>> stable. This point (in cache or not) is hard to guess right until you do\n>> check the status, or you ensure to clean it first.\n> My bad. I was trying to drop buffers from the postgres cache, not from\n> the kernel cache. 
See my results now:\n>\n> patched | prefetch test\n>\n> $ create_the_data [3]\n> $ drop_kernel_cache [4]\n> $ first_run_full_relation_prefetch [5] -> Time: 11.395 ms\n> $ second_run_full_relation_prefetch [5] -> Time: 0.887 ms\n>\n> master | prefetch test\n>\n> $ create_the_data [3]\n> $ drop_kernel_cache [4]\n> $ first_run_full_relation_prefetch [5] -> Time: 3208.944 ms\n> $ second_run_full_relation_prefetch [5] -> Time: 283.905 ms\n>\n> I did more perf tests about comparison between first and second run\n> for the prefetch and found this on master:\n>\n> first run:\n> - 86.40% generic_fadvise\n> - 86.24% force_page_cache_ra\n> - 85.99% page_cache_ra_unbounded\n> + 37.36% filemap_add_folio\n> + 34.14% read_pages\n> + 8.31% folio_alloc\n> + 4.55% up_read\n> 0.77% xa_load\n>\n> second run:\n> - 20.64% generic_fadvise\n> - 18.64% force_page_cache_ra\n> - 17.46% page_cache_ra_unbounded\n> + 8.54% xa_load\n> 2.82% down_read\n> 2.29% read_pages\n> 1.45% up_read\n>\n> So, it looks like the difference between the first and the second run\n> comes from kernel optimization that does not do prefetch if the page\n> is already in the cache [6]. Saying that, I do not know the difference\n> between WILLNEED/DONTNEED and I do not have enough materials to test\n> it but I guess it is something similar.\n\nPatched: Clearly, only a small part has been read and put into VM during \nthe first pass, but still some pages, and the second one probably did \nnothing at all.\nMaster: Apparently it takes around 3.2 seconds to read all (which \noutlines that the first pass, patched, read few). On the second pass \nit's already in cache, so it goes fast. you're correct. But given it \nstill required 2803ms, there is something.\nYou may want to test the status with vm_relation_cachestat() [7], it's \nin a branch, not main or master. It requires linux 6.5, but allows to \nget information about memory eviction, which is super handy (and super \nfast)!\nIt returns:\n  - nr_cache is Number of cached pages\n  - nr_dirty is Number of dirty pages\n  - nr_writeback is Number of pages marked for writeback\n  - nr_evicted is Number of pages evicted from the cache\n  - nr_recently_evicted is Number of pages recently evicted from the cache\n/*\n  * A page is recently evicted if its last eviction was recent enough \nthat its\n  * reentry to the cache would indicate that it is actively being used \nby the\n  * system, and that there is memory pressure on the system.\n  */\n\nWILLNEED posix fadvise flag leads to what used to be call \"prefetch\": \nreading from disk, and put into VM. 
(it's not as simple, but this is the \nidea).\nDONTNEED flushes from VM.\n\nMight be interesting to compare with prewarm called on each block of the \nrelation, one way to do it with current path is to change the constant:\n#define PREWARM_PREFETCH_RANGE    RELSEG_SIZE\n\nRELSEG_SIZE is 131071 IIRC\n\nHere you can set to 1 and you'll have prewarm working on all pages, one \nby one, which should be similar to current behavior.\nIn pgfincore I have a \"range\" parameter for that purpose so end-user can \nadjust exactly as desired.\nI was not looking after change to prewarm function parameters but if \nit's better...\n\n> I did not test read performance but I am planning to do that soon.\n\n\nNice, thank you for the effort!\n\n>>> [1] https://man7.org/linux/man-pages/man2/posix_fadvise.2.html#DESCRIPTION\n>> [2] https://elixir.bootlin.com/linux/latest/source/mm/readahead.c#L303\n> [3]\n> CREATE EXTENSION pg_prewarm;\n> drop table if exists foo;\n> create table foo ( id int, c text) with (autovacuum_enabled=false);\n> insert into foo select i, repeat('a', 1000) from generate_series(1,10000000)i;\n>\n> [4] echo 3 | sudo tee /proc/sys/vm/drop_caches\n>\n> [5] select pg_prewarm('foo', 'prefetch', 'main');\n>\n> [6] https://elixir.bootlin.com/linux/latest/source/mm/readahead.c#L232\n\n[7] \nhttps://github.com/klando/pgfincore/blob/vm_relation_cachestat/pgfincore--1.3.1--1.4.0.sql#L54\n\n---\nCédric Villemain +33 (0)6 20 30 22 52\nhttps://Data-Bene.io\nPostgreSQL Expertise, Support, Training, R&D\n\n\n\n", "msg_date": "Thu, 7 Mar 2024 13:26:00 +0100", "msg_from": "=?UTF-8?Q?C=C3=A9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Change prefetch and read strategies to use range in pg_prewarm\n ... and raise a question about posix_fadvise WILLNEED" }, { "msg_contents": "Hi,\n\nOn Thu, 7 Mar 2024 at 15:26, Cédric Villemain\n<[email protected]> wrote:\n>\n> On 07/03/2024 12:19, Nazir Bilal Yavuz wrote:\n> >\n> > I did not test read performance but I am planning to do that soon.\n\nI did not have the time to check other things you mentioned but I\ntested the read performance. The table size is 5.5GB, I did 20 runs in\ntotal.\n\nWhen the OS cache is cleared:\n\nMaster -> Median: 2266.293ms, Avg: 2265.5038ms\nPatched -> Median: 2166.493ms, Avg: 2183.6208ms\n\nWhen the buffers are in the OS cache:\n\nMaster -> Median: 585.719ms, Avg: 583.5032ms\nPatched -> Median: 533.071ms, Avg: 532.7119ms\n\nPatched version is better on both. ~4% when the OS cache is cleared,\n~%9 when the buffers are in the OS cache.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Fri, 15 Mar 2024 15:12:31 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Change prefetch and read strategies to use range in pg_prewarm\n ... and raise a question about posix_fadvise WILLNEED" }, { "msg_contents": "\n\n> On 15 Mar 2024, at 17:12, Nazir Bilal Yavuz <[email protected]> wrote:\n> \n> I did not have the time to check other things you mentioned but I\n> tested the read performance. The table size is 5.5GB, I did 20 runs in\n> total.\n\nHi Nazir!\n\nDo you plan to review anything else? Or do you think it worth to look at by someone else? Or is the patch Ready for Committer? If so, please swap CF entry [0] to status accordingly, currently it's \"Waiting on Author\".\n\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/47/4763/\n\n", "msg_date": "Sun, 7 Apr 2024 10:29:40 +0500", "msg_from": "\"Andrey M. 
Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Change prefetch and read strategies to use range in pg_prewarm\n ... and raise a question about posix_fadvise WILLNEED" }, { "msg_contents": "Hi Andrey,\n\nOn Sun, 7 Apr 2024 at 08:29, Andrey M. Borodin <[email protected]> wrote:\n>\n>\n>\n> > On 15 Mar 2024, at 17:12, Nazir Bilal Yavuz <[email protected]> wrote:\n> >\n> > I did not have the time to check other things you mentioned but I\n> > tested the read performance. The table size is 5.5GB, I did 20 runs in\n> > total.\n>\n> Hi Nazir!\n>\n> Do you plan to review anything else? Or do you think it worth to look at by someone else? Or is the patch Ready for Committer? If so, please swap CF entry [0] to status accordingly, currently it's \"Waiting on Author\".\n\nThanks for reminding me! I think this needs review by someone else\n(especially the prefetch part) so I changed it to 'Needs review'.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Sun, 7 Apr 2024 11:26:34 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Change prefetch and read strategies to use range in pg_prewarm\n ... and raise a question about posix_fadvise WILLNEED" } ]
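A minimal standalone C sketch of the two advice patterns compared in this thread: one range-sized posix_fadvise(POSIX_FADV_WILLNEED) call versus one call per 8kB block. This is not PostgreSQL or pg_prewarm code; the file path and the 8kB block size are illustrative assumptions only.

```c
/* Hypothetical standalone demo, not PostgreSQL code. */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

#define BLCKSZ 8192				/* assumed block size */

int
main(void)
{
	const char *path = "/path/to/relation/segment";	/* assumption */
	int			fd = open(path, O_RDONLY);
	struct stat st;

	if (fd < 0 || fstat(fd, &st) != 0)
	{
		perror("open/fstat");
		return 1;
	}

	/* Range form: a single advice call; the kernel chunks the readahead. */
	(void) posix_fadvise(fd, 0, st.st_size, POSIX_FADV_WILLNEED);

	/* Block-by-block form: one advice call per 8kB block. */
	for (off_t off = 0; off < st.st_size; off += BLCKSZ)
		(void) posix_fadvise(fd, off, BLCKSZ, POSIX_FADV_WILLNEED);

	close(fd);
	return 0;
}
```

With the range form the kernel is free to chunk the readahead (the 2MB units in force_page_cache_ra quoted above) and to trim it under memory pressure, while the per-block loop issues one syscall per block regardless of load; the timings quoted earlier in the thread compare exactly these two patterns.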
[ { "msg_contents": "Hi,\n\nfor 15 years pgfincore has been sitting quietly and being used in large \nsetups to help in HA and resources management.\nIt can perfectly stay as is, to be honest I was expecting to one day \ninclude a windows support and propose that to PostgreSQL, it appears \ngetting support on linux and BSD is more than enough today.\n\nSo I wonder if there are interest for having virtual memory snapshot and \nrestore operations with, for example, pg_prewarm/autowarm ?\n\nSome usecases covered: snapshot/restore cache around cronjobs, around \ndumps, switchover, failover, on stop/start of postgres (think kernel \nupgrade with a cold restart), ...\n\npgfincore also provides some nice information with mincore (on FreeBSD \nmincore is more interesting) or cachestat, again it can remain as an out \nof tree extension but I will be happy to add to commitfest if there are \ninterest from the community.\nAn example of cachestat output:\n\npostgres=# select *from vm_relation_cachestat('foo',range:=1024*32);\nblock_start | block_count | nr_pages | nr_cache | nr_dirty | \nnr_writeback | nr_evicted | nr_recently_evicted\n-------------+-------------+----------+----------+----------+--------------+------------+--------------------- \n\n           0 |       32768 |    65536 |    62294 |        0 | \n            0 |       3242 |                3242\n       32768 |       32768 |    65536 |    39279 |        0 | \n            0 |      26257 |               26257\n       65536 |       32768 |    65536 |    22516 |        0 | \n            0 |      43020 |               43020\n       98304 |       32768 |    65536 |    24944 |        0 | \n            0 |      40592 |               40592\n      131072 |        1672 |     3344 |      487 |        0 | \n            0 |       2857 |                2857\n\n\nComments?\n\n---\nCédric Villemain +33 (0)6 20 30 22 52\nhttps://Data-Bene.io\nPostgreSQL Expertise, Support, Training, R&D", "msg_date": "Thu, 4 Jan 2024 00:57:33 +0100", "msg_from": "Cedric Villemain <[email protected]>", "msg_from_op": true, "msg_subject": "doing also VM cache snapshot and restore with pg_prewarm, having more\n information of the VM inside PostgreSQL" }, { "msg_contents": "\n\n\n\n\nOn 1/3/24 5:57 PM, Cedric Villemain\n wrote:\n\n\n\nfor 15 years pgfincore has been sitting quietly and being used\n in large setups to help in HA and resources management.\n It can perfectly stay as is, to be honest I was expecting to one\n day include a windows support and propose that to PostgreSQL, it\n appears getting support on linux and BSD is more than enough\n today.\nSo I wonder if there are interest for having virtual memory\n snapshot and restore operations with, for example,\n pg_prewarm/autowarm ?\n\n\nIMHO, to improve the user experience here we'd need something\n that combined the abilities of all these extensions into a\n cohesive interface that allowed users to simply say \"please get\n this data into cache\". Simply moving pgfincore into core Postgres\n wouldn't satisfy that need.\nSo I think the real question is whether the community feels\n spport for better cache (buffercache + filesystem) management is a\n worthwhile feature to add to Postgres.\nMicromanaging cache contents for periodic jobs seems almost like\n a mis-feature. While it's a useful tool to have in the toolbox,\n it's also a non-trivial layer of complexity. IMHO not something\n we'd want to add. Though, there might be smaller items that would\n make creating tools to do that easier, such as some ability to see\n what blocks a backend is accessing (perhaps via a hook).\nOn the surface, improving RTO via cache warming sounds\n interesting ... but I'm not sure how useful it would be in\n reality. Users that care about RTO would almost always have some\n form of hot-standby, and generally those will already have a lot\n of data in cache. While they won't have read-only data in cache, I\n have to wonder if the answer to that problem is allowing writers\n to tell a replica what blocks are being read, so the replica can\n keep them in cache. Also, most (all?) 
operations that require a\n restart could be handled via a failover, so I'm not sure how much\n cache management moves the needle there.\n\n\n \nSome usecases covered: snapshot/restore cache around cronjobs,\n around dumps, switchover, failover, on stop/start of postgres\n (think kernel upgrade with a cold restart), ...\n\n\npgfincore also provides some nice information with mincore (on\n FreeBSD mincore is more interesting) or cachestat, again it can\n remain as an out of tree extension but I will be happy to add to\n commitfest if there are interest from the community.\n An example of cachestat output:\npostgres=#\n select *from vm_relation_cachestat('foo',range:=1024*32); \n block_start | block_count | nr_pages | nr_cache | nr_dirty |\n nr_writeback | nr_evicted | nr_recently_evicted  \n-------------+-------------+----------+----------+----------+--------------+------------+---------------------\n \n           0 |       32768 |    65536 |    62294 |        0 |\n            0 |       3242 |                3242 \n       32768 |       32768 |    65536 |    39279 |        0 |\n            0 |      26257 |               26257 \n       65536 |       32768 |    65536 |    22516 |        0 |\n            0 |      43020 |               43020 \n       98304 |       32768 |    65536 |    24944 |        0 |\n            0 |      40592 |               40592 \n      131072 |        1672 |     3344 |      487 |        0 |\n            0 |       2857 |                2857\n\n\n\n\nComments?\n\n---\nCédric Villemain +33 (0)6 20 30 22 52\nhttps://Data-Bene.io\nPostgreSQL Expertise, Support, Training, R&D\n\n\n\n-- \nJim Nasby, Data Architect, Austin TX\n\n\n", "msg_date": "Thu, 4 Jan 2024 16:41:55 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doing also VM cache snapshot and restore with pg_prewarm, having\n more information of the VM inside PostgreSQL" }, { "msg_contents": "Le 04/01/2024 à 23:41, Jim Nasby a écrit :\n> On 1/3/24 5:57 PM, Cedric Villemain wrote:\n>>\n>> for 15 years pgfincore has been sitting quietly and being used in \n>> large setups to help in HA and resources management.\n>> It can perfectly stay as is, to be honest I was expecting to one day \n>> include a windows support and propose that to PostgreSQL, it appears \n>> getting support on linux and BSD is more than enough today.\n>>\n>> So I wonder if there are interest for having virtual memory snapshot \n>> and restore operations with, for example, pg_prewarm/autowarm ?\n>>\n> IMHO, to improve the user experience here we'd need something that \n> combined the abilities of all these extensions into a cohesive interface \n> that allowed users to simply say \"please get this data into cache\". \n> Simply moving pgfincore into core Postgres wouldn't satisfy that need.\n\nThis is exactly why I proposed those additions to pg_prewarm and autowarm.\n\n> So I think the real question is whether the community feels spport for \n> better cache (buffercache + filesystem) management is a worthwhile \n> feature to add to Postgres.\n\nAgreed, to add in an extension more probably and only the \"filesystem\" \npart as the buffercache is done already.\n\n> Micromanaging cache contents for periodic jobs seems almost like a \n> mis-feature. While it's a useful tool to have in the toolbox, it's also \n> a non-trivial layer of complexity. IMHO not something we'd want to add. 
\n> Though, there might be smaller items that would make creating tools to \n> do that easier, such as some ability to see what blocks a backend is \n> accessing (perhaps via a hook).\n\n From my point of view it's not so complex, but this is subjective and I \nwon't argue in this area.\n\nI confirm that having someway to get feedback on current position/block \nactivity (and distance/context: heap scan, index build, analyze, ...) \nwould be super useful to allow external management. Maybe the \"progress\" \nfacilities can be used for that. Maybe worth looking at that for another \nproposal than the current one.\n\nTo be clear I am not proposing that PostgreSQL handles those tasks \ntransparently or itself, but offering options to the users via \nextensions like we do with pg_prewarm and pg_buffercache.\nIt's just the same for virtual memory/filesystem.\n\n---\nCédric Villemain +33 (0)6 20 30 22 52\nhttps://Data-Bene.io\nPostgreSQL Expertise, Support, Training, R&D\n\n\n\n", "msg_date": "Fri, 5 Jan 2024 10:32:53 +0100", "msg_from": "=?UTF-8?Q?C=C3=A9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doing also VM cache snapshot and restore with pg_prewarm, having\n more information of the VM inside PostgreSQL" } ]
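A minimal standalone C sketch of the mincore() primitive behind the cache inspection discussed in this thread: it maps a file and counts how many of its pages are resident in the OS page cache. This is not pgfincore code; the file path is an illustrative assumption and error handling is kept to a bare minimum.

```c
/* Hypothetical standalone demo, not pgfincore code. */
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int
main(void)
{
	const char *path = "/path/to/relation/segment";	/* assumption */
	long		pagesize = sysconf(_SC_PAGESIZE);
	int			fd = open(path, O_RDONLY);
	struct stat st;

	if (fd < 0 || fstat(fd, &st) != 0 || st.st_size == 0)
	{
		perror("open/fstat");
		return 1;
	}

	size_t		npages = (st.st_size + pagesize - 1) / pagesize;
	unsigned char *vec = malloc(npages);
	void	   *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);

	if (vec == NULL || map == MAP_FAILED || mincore(map, st.st_size, vec) != 0)
	{
		perror("mmap/mincore");
		return 1;
	}

	size_t		resident = 0;

	for (size_t i = 0; i < npages; i++)
		if (vec[i] & 1)			/* low bit set: page is resident */
			resident++;

	printf("%zu of %zu pages are in the OS page cache\n", resident, npages);

	munmap(map, st.st_size);
	free(vec);
	close(fd);
	return 0;
}
```

A snapshot/restore workflow like the one proposed above could persist the offsets of the resident pages reported this way, and later feed them back to posix_fadvise(POSIX_FADV_WILLNEED) to rewarm the file-system cache after a restart.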
[ { "msg_contents": "Hi,\n\nfrom src/backend/storage/lmgr/README:\n\n\"\"\"\nSpinlocks. These are intended for *very* short-term locks. If a lock\nis to be held more than a few dozen instructions, or across any sort of\nkernel call (or even a call to a nontrivial subroutine), don't use a\nspinlock. Spinlocks are primarily used as infrastructure for lightweight\nlocks.\n\"\"\"\n\nI totally agree with this and IIUC spin lock is usually used with the\nfollowing functions.\n\n#define init_local_spin_delay(status) ..\nvoid perform_spin_delay(SpinDelayStatus *status);\nvoid finish_spin_delay(SpinDelayStatus *status);\n\nDuring the perform_spin_delay, we have the following codes:\n\nvoid\nperform_spin_delay(SpinDelayStatus *status)\n\n\t/* Block the process every spins_per_delay tries */\n\tif (++(status->spins) >= spins_per_delay)\n\t{\n\t\tif (++(status->delays) > NUM_DELAYS)\n\t\t\ts_lock_stuck(status->file, status->line, status->func);\n\nthe s_lock_stuck will PAINC the entire system.\n\nMy question is if someone doesn't obey the rule by mistake (everyone\ncan make mistake), shall we PANIC on a production environment? IMO I\nthink it can be a WARNING on a production environment and be a stuck\nwhen 'ifdef USE_ASSERT_CHECKING'.\n\nPeople may think spin lock may consume too much CPU, but it is not true\nin the discussed scene since perform_spin_delay have pg_usleep in it,\nand the MAX_DELAY_USEC is 1 second and MIN_DELAY_USEC is 0.001s.\n\nI notice this issue actually because of the patch \"Cache relation\nsizes?\" from Thomas Munro [1], where the latest patch[2] still have the \nfollowing code. \n+\t\tsr = smgr_alloc_sr(); <-- HERE a spin lock is hold\n+\n+\t\t/* Upgrade to exclusive lock so we can create a mapping. */\n+\t\tLWLockAcquire(mapping_lock, LW_EXCLUSIVE); <-- HERE a complex\n operation is needed. it may take a long time.\n\nOur internal testing system found more misuses on our own PG version.\n\nI think a experienced engineer like Thomas can make this mistake and the\npatch was reviewed by 3 peoples, the bug is still there. It is not easy\nto say just don't do it. \n\nthe attached code show the prototype in my mind. Any feedback is welcome. \n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BhUKGJg%2BgqCs0dgo94L%3D1J9pDp5hKkotji9A05k2nhYQhF4%2Bw%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/attachment/123659/v5-0001-WIP-Track-relation-sizes-in-shared-memory.patch \n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 04 Jan 2024 14:59:06 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "On Thu, 4 Jan 2024 at 08:09, Andy Fan <[email protected]> wrote:\n>\n> My question is if someone doesn't obey the rule by mistake (everyone\n> can make mistake), shall we PANIC on a production environment? IMO I\n> think it can be a WARNING on a production environment and be a stuck\n> when 'ifdef USE_ASSERT_CHECKING'.\n> [...]\n> I think a experienced engineer like Thomas can make this mistake and the\n> patch was reviewed by 3 peoples, the bug is still there. It is not easy\n> to say just don't do it.\n>\n> the attached code show the prototype in my mind. 
Any feedback is welcome.\n\nWhile I understand your point and could maybe agree with the change\nitself (a crash isn't great), I don't think it's an appropriate fix\nfor the problem of holding a spinlock while waiting for a LwLock, as\nspin.h specifically mentions the following (and you quoted the same):\n\n\"\"\"\nKeep in mind the coding rule that spinlocks must not be held for more\nthan a few instructions.\n\"\"\"\n\nI suspect that we'd be better off with stronger protections against\nwaiting for LwLocks while we hold any spin lock. More specifically,\nI'm thinking about something like tracking how many spin locks we\nhold, and Assert()-ing that we don't hold any such locks when we start\nto wait for an LwLock or run CHECK_FOR_INTERRUPTS-related code (with\npotential manual contextual overrides in specific areas of code where\nspecific care has been taken to make it safe to hold spin locks while\ndoing those operations - although I consider their existence unlikely\nI can't rule them out as I've yet to go through all lock-touching\ncode). This would probably work in a similar manner as\nSTART_CRIT_SECTION/END_CRIT_SECTION.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 4 Jan 2024 11:23:48 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "On Thu, Jan 4, 2024 at 2:09 AM Andy Fan <[email protected]> wrote:\n> My question is if someone doesn't obey the rule by mistake (everyone\n> can make mistake), shall we PANIC on a production environment? IMO I\n> think it can be a WARNING on a production environment and be a stuck\n> when 'ifdef USE_ASSERT_CHECKING'.\n>\n> People may think spin lock may consume too much CPU, but it is not true\n> in the discussed scene since perform_spin_delay have pg_usleep in it,\n> and the MAX_DELAY_USEC is 1 second and MIN_DELAY_USEC is 0.001s.\n>\n> I notice this issue actually because of the patch \"Cache relation\n> sizes?\" from Thomas Munro [1], where the latest patch[2] still have the\n> following code.\n> + sr = smgr_alloc_sr(); <-- HERE a spin lock is hold\n> +\n> + /* Upgrade to exclusive lock so we can create a mapping. */\n> + LWLockAcquire(mapping_lock, LW_EXCLUSIVE); <-- HERE a complex\n> operation is needed. it may take a long time.\n\nI'm not sure that the approach this patch takes is correct in detail,\nbut I kind of agree with you about the overall point. I mean, the idea\nof the PANIC is to avoid having the system just sit there in a state\nfrom which it will never recover ... but it can also have the effect\nof killing a system that wasn't really dead. I'm not sure what the\nbest thing to do here is, but it's worth talking about, IMHO.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Jan 2024 08:35:53 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi Matthias and Robert,\n\nMatthias van de Meent <[email protected]> writes:\n\n> On Thu, 4 Jan 2024 at 08:09, Andy Fan <[email protected]> wrote:\n>>\n>> My question is if someone doesn't obey the rule by mistake (everyone\n>> can make mistake), shall we PANIC on a production environment? 
IMO I\n>> think it can be a WARNING on a production environment and be a stuck\n>> when 'ifdef USE_ASSERT_CHECKING'.\n>> [...]\n>> I think a experienced engineer like Thomas can make this mistake and the\n>> patch was reviewed by 3 peoples, the bug is still there. It is not easy\n>> to say just don't do it.\n>>\n>> the attached code show the prototype in my mind. Any feedback is welcome.\n>\n> While I understand your point and could maybe agree with the change\n> itself (a crash isn't great),\n\nIt's great that both of you agree that the crash is not great. \n\n> I don't think it's an appropriate fix\n> for the problem of holding a spinlock while waiting for a LwLock, as\n> spin.h specifically mentions the following (and you quoted the same):\n>\n> \"\"\"\n> Keep in mind the coding rule that spinlocks must not be held for more\n> than a few instructions.\n> \"\"\"\n\nYes, I agree that the known [LW]LockAcquire after holding a Spin lock\nshould be fixed at the first chance rather than pander to it with my\nprevious patch. My previous patch just take care of the *unknown*\ncases (and I cced thomas in the hope that he knows the bug). I also\nagree that the special case about [LW]LockAcquire should be detected\nmore effective as you suggested below. So v2 comes and commit 2 is for\nthis suggestion. \n\n>\n> I suspect that we'd be better off with stronger protections against\n> waiting for LwLocks while we hold any spin lock. More specifically,\n> I'm thinking about something like tracking how many spin locks we\n> hold, and Assert()-ing that we don't hold any such locks when we start\n> to wait for an LwLock or run CHECK_FOR_INTERRUPTS-related code (with\n> potential manual contextual overrides in specific areas of code where\n> specific care has been taken to make it safe to hold spin locks while\n> doing those operations - although I consider their existence unlikely\n> I can't rule them out as I've yet to go through all lock-touching\n> code). This would probably work in a similar manner as\n> START_CRIT_SECTION/END_CRIT_SECTION.\n>\n> Kind regards,\n>\n> Matthias van de Meent\n> Neon (https://neon.tech)\n\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 04 Jan 2024 22:24:50 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> I'm not sure that the approach this patch takes is correct in detail,\n> but I kind of agree with you about the overall point. I mean, the idea\n> of the PANIC is to avoid having the system just sit there in a state\n> from which it will never recover ... but it can also have the effect\n> of killing a system that wasn't really dead. I'm not sure what the\n> best thing to do here is, but it's worth talking about, IMHO.\n\nI'm not a fan of adding overhead to such a performance-critical\nthing as spinlocks in order to catch coding errors that are easily\ndetectable statically. IMV the correct usage of spinlocks is that\nthey should only be held across *short, straight line* code segments.\nWe should be making an effort to ban coding patterns like\n\"return with spinlock still held\", because they're just too prone\nto errors similar to this one. 
Note that trying to take another\nlock is far from the only bad thing that can happen if you're\nnot very conservative about what code can execute with a spinlock\nheld.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jan 2024 10:22:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "On Thu, Jan 4, 2024 at 10:22 AM Tom Lane <[email protected]> wrote:\n> I'm not a fan of adding overhead to such a performance-critical\n> thing as spinlocks in order to catch coding errors that are easily\n> detectable statically. IMV the correct usage of spinlocks is that\n> they should only be held across *short, straight line* code segments.\n> We should be making an effort to ban coding patterns like\n> \"return with spinlock still held\", because they're just too prone\n> to errors similar to this one. Note that trying to take another\n> lock is far from the only bad thing that can happen if you're\n> not very conservative about what code can execute with a spinlock\n> held.\n\nI agree that we don't want to add overhead, and also about how\nspinlocks should be used, but I dispute \"easily detectable\nstatically.\" I mean, if you or I look at some code that uses a\nspinlock, we'll know whether the pattern that you mention is being\nfollowed or not, modulo differences of opinion in debatable cases. But\nyou and I cannot be there to look at all the code all the time. If we\nhad a static checking tool that was run as part of every build, or in\nthe buildfarm, or by cfbot, or somewhere else that raised the alarm if\nthis rule was violated, then we could claim to be effectively\nenforcing this rule. But with 20-30 active committers and ~100 active\ndevelopers at any given point in time, any system that relies on every\nrelevant person knowing all the rules and remembering to enforce them\non every commit is bound to be less than 100% effective. Some people\nwon't know what the rule is, some people will decide that their\nparticular situation is Very Special, some people will just forget to\ncheck for violations, and some people will check for violations but\nmiss something.\n\nI think the question we should be asking here is what the purpose of\nthe PANIC is. I can think of two possible purposes. It could be either\n(a) an attempt to prevent real-world harm by turning database hangs\ninto database panics, so that at least the system will restart and get\nmoving again instead of sitting there stuck for all eternity or (b) an\nattempt to punish people for writing bad code by turning coding rule\nviolations into panics on production systems. If it's (a), that's\ndefensible, though we can still ask whether it does more harm than\ngood. If it's (b), that's not a good way of handling that problem,\nbecause (b1) it affects production builds and not just development\nbuilds, (b2) many coding rule violations are vanishingly unlikely to\ntrigger that PANIC in practice, and (b3) if the PANIC does fire, it\ngives you basically zero help in figuring out where the actual problem\nis. 
The PostgreSQL code base is way too big for \"ERROR: you screwed\nup\" to be an acceptable diagnostic.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Jan 2024 11:06:15 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Jan 4, 2024 at 10:22 AM Tom Lane <[email protected]> wrote:\n>> We should be making an effort to ban coding patterns like\n>> \"return with spinlock still held\", because they're just too prone\n>> to errors similar to this one.\n\n> I agree that we don't want to add overhead, and also about how\n> spinlocks should be used, but I dispute \"easily detectable\n> statically.\" I mean, if you or I look at some code that uses a\n> spinlock, we'll know whether the pattern that you mention is being\n> followed or not, modulo differences of opinion in debatable cases. But\n> you and I cannot be there to look at all the code all the time. If we\n> had a static checking tool that was run as part of every build, or in\n> the buildfarm, or by cfbot, or somewhere else that raised the alarm if\n> this rule was violated, then we could claim to be effectively\n> enforcing this rule.\n\nI was indeed suggesting that maybe we could find a way to detect\nsuch things automatically. While I've not been paying close\nattention, I recall there's been some discussions of using LLVM/clang\ninfrastructure for customized static analysis, so maybe it'd be\npossible without an undue amount of effort.\n\n> I think the question we should be asking here is what the purpose of\n> the PANIC is. I can think of two possible purposes. It could be either\n> (a) an attempt to prevent real-world harm by turning database hangs\n> into database panics, so that at least the system will restart and get\n> moving again instead of sitting there stuck for all eternity or (b) an\n> attempt to punish people for writing bad code by turning coding rule\n> violations into panics on production systems.\n\nI believe it's (a). No matter what the reason for a stuck spinlock\nis, the only reliable method of getting out of the problem is to\nblow things up and start over. The patch proposed at the top of this\nthread would leave the system unable to recover on its own, with the\nonly recourse being for the DBA to manually force a crash/restart ...\nonce she figured out that that was necessary, which might take a long\nwhile if the only external evidence is an occasional WARNING that\nmight not even be making it to the postmaster log. How's that better?\n\n> ... (b3) if the PANIC does fire, it\n> gives you basically zero help in figuring out where the actual problem\n> is. The PostgreSQL code base is way too big for \"ERROR: you screwed\n> up\" to be an acceptable diagnostic.\n\nIdeally I agree with the latter, but that doesn't mean that doing\nbetter is easy or even possible. (The proposed patch certainly does\nnothing to help diagnose such issues.) As for the former point,\npanicking here at least offers the chance of getting a stack trace,\nwhich might help a developer find the problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Jan 2024 11:33:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "On Thu, Jan 4, 2024 at 11:33 AM Tom Lane <[email protected]> wrote:\n> I believe it's (a). 
No matter what the reason for a stuck spinlock\n> is, the only reliable method of getting out of the problem is to\n> blow things up and start over. The patch proposed at the top of this\n> thread would leave the system unable to recover on its own, with the\n> only recourse being for the DBA to manually force a crash/restart ...\n> once she figured out that that was necessary, which might take a long\n> while if the only external evidence is an occasional WARNING that\n> might not even be making it to the postmaster log. How's that better?\n\nIt's a fair question. I think you're correct if we assume that\neveryone's following the coding rule ... at least assuming that the\ntarget system isn't too insanely slow, and I've seen some pretty\ncrazily overloaded machines. But if the coding rule is not being\nfollowed, then \"the only reliable method of getting out of the problem\nis to blow things up and start over\" becomes a dubious conclusion.\n\nAlso, I wonder if many or even all uses of spinlocks uses ought to be\nreplaced with either LWLocks or atomics. An LWLock might be slightly\nslower when contention is low, but it scales better when contention is\nhigh, displays a wait event so that you can see that you have\ncontention if you do, and doesn't PANIC the system if the contention\ngets too bad. And an atomic will just be faster, in the cases where\nit's adequate.\n\nThe trouble with trying to do a replacement is that some of the\nspinlock-using code is ancient and quite hairy. info_lck in particular\nlooks like a hot mess -- it's used in complex ways and in performance\ncritical paths, with terrifying comments like this:\n\n * To read XLogCtl->LogwrtResult, you must hold either info_lck or\n * WALWriteLock. To update it, you need to hold both locks. The point of\n * this arrangement is that the value can be examined by code that already\n * holds WALWriteLock without needing to grab info_lck as well. In addition\n * to the shared variable, each backend has a private copy of LogwrtResult,\n * which is updated when convenient.\n *\n * The request bookkeeping is simpler: there is a shared XLogCtl->LogwrtRqst\n * (protected by info_lck), but we don't need to cache any copies of it.\n *\n * info_lck is only held long enough to read/update the protected variables,\n * so it's a plain spinlock. The other locks are held longer (potentially\n * over I/O operations), so we use LWLocks for them. These locks are:\n\nBut info_lck was introduced in 1999 and this scheme was introduced in\n2012, and a lot has changed since then. Whatever benchmarking was done\nto validate this locking regime is probably obsolete at this point.\nBack then, LWLocks were built on top of spinlocks, and were, I\nbelieve, a lot slower than they are now. Plus CPU performance\ncharacteristics have changed a lot. So who really knows if the way\nwe're doing things here makes any sense at all these days? But one\ndoesn't want to do a naive replacement and pessimize things, either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 4 Jan 2024 12:04:07 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi,\n\nOn 2024-01-04 12:04:07 -0500, Robert Haas wrote:\n> On Thu, Jan 4, 2024 at 11:33 AM Tom Lane <[email protected]> wrote:\n> > I believe it's (a). 
No matter what the reason for a stuck spinlock\n> > is, the only reliable method of getting out of the problem is to\n> > blow things up and start over. The patch proposed at the top of this\n> > thread would leave the system unable to recover on its own, with the\n> > only recourse being for the DBA to manually force a crash/restart ...\n> > once she figured out that that was necessary, which might take a long\n> > while if the only external evidence is an occasional WARNING that\n> > might not even be making it to the postmaster log. How's that better?\n>\n> It's a fair question. I think you're correct if we assume that\n> everyone's following the coding rule ... at least assuming that the\n> target system isn't too insanely slow, and I've seen some pretty\n> crazily overloaded machines. But if the coding rule is not being\n> followed, then \"the only reliable method of getting out of the problem\n> is to blow things up and start over\" becomes a dubious conclusion.\n\nIf the coding rule isn't being followed, a crash restart is the least of ones\nproblems... But that doesn't mean we shouldn't add infrastructure to make it\neasier to detect violations of the spinlock rules - we've had lots of buglets\naround this over the years ourselves, so we hardly can expect extension\nauthors to get this right. Particularly because we don't even document the\nrules well, afair.\n\nI think we should add cassert-only infrastructure tracking whether we\ncurrently hold spinlocks, are in a signal handler and perhaps a few other\nstates. That'd allow us to add assertions like:\n\n- no memory allocations when holding spinlocks or in signal handlers\n- no lwlocks while holding spinlocks\n- no lwlocks or spinlocks while in signal handlers\n\n\n\n> Also, I wonder if many or even all uses of spinlocks uses ought to be\n> replaced with either LWLocks or atomics. An LWLock might be slightly\n> slower when contention is low, but it scales better when contention is\n> high, displays a wait event so that you can see that you have\n> contention if you do, and doesn't PANIC the system if the contention\n> gets too bad. And an atomic will just be faster, in the cases where\n> it's adequate.\n\nI tried to replace all - unfortunately the results were not great. The problem\nisn't primarily the lack of spinning (although it might be worth adding that\nto lwlocks) or the cost of error recovery, the problem is that a reader-writer\nlock are inherently more expensive than simpler locks that don't have multiple\nlevels.\n\nOne example of such increased overhead is that on x86 an lwlock unlock has to\nbe an atomic operation (to maintain the lock count), whereas as spinlock\nunlock can just be a write + compiler barrier. Unfortunately the added atomic\noperation turns out to matter in some performance critical cases like the\ninsertpos_lck.\n\nI think we ought to split lwlocks into reader/writer and simpler mutex. The\nsimpler mutex still will be slower than s_lock in some relevant cases,\ne.g. 
due to the error recovery handling, but it'd be \"local\" overhead, rather\nthan something affecting scalability.\n\n\n\nFWIW, these days spinlocks do report a wait event when in perform_spin_delay()\n- albeit without detail which lock is being held.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Jan 2024 14:54:03 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "\n\n\n\n\nOn 1/4/24 10:33 AM, Tom Lane wrote:\n\n\nRobert Haas <[email protected]> writes:\nOn Thu, Jan 4, 2024 at 10:22 AM Tom Lane <[email protected]> wrote:\nWe should be making an effort to ban coding patterns like\n\"return with spinlock still held\", because they're just too prone\nto errors similar to this one.\nI agree that we don't want to add overhead, and also about how\nspinlocks should be used, but I dispute \"easily detectable\nstatically.\" I mean, if you or I look at some code that uses a\nspinlock, we'll know whether the pattern that you mention is being\nfollowed or not, modulo differences of opinion in debatable cases. But\nyou and I cannot be there to look at all the code all the time. If we\nhad a static checking tool that was run as part of every build, or in\nthe buildfarm, or by cfbot, or somewhere else that raised the alarm if\nthis rule was violated, then we could claim to be effectively\nenforcing this rule.\nI was indeed suggesting that maybe we could find a way to detect\nsuch things automatically. While I've not been paying close\nattention, I recall there's been some discussions of using LLVM/clang\ninfrastructure for customized static analysis, so maybe it'd be\npossible without an undue amount of effort.\n\nFWIW, the lackey[1] tool in Valgrind is able to do some kinds of\n instruction counting, so it might be possible to measure how many\n instructions are actualyl being executed while holding a spinlock.\n Might be easier than code analysis.\nAnother possibility might be using the CPUs timestamp counter.\n\n1: https://valgrind.org/docs/manual/lk-manual.html\n\n-- \nJim Nasby, Data Architect, Austin TX\n\n\n", "msg_date": "Thu, 4 Jan 2024 17:03:18 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi,\n\nOn 2024-01-04 14:59:06 +0800, Andy Fan wrote:\n> My question is if someone doesn't obey the rule by mistake (everyone\n> can make mistake), shall we PANIC on a production environment? IMO I\n> think it can be a WARNING on a production environment and be a stuck\n> when 'ifdef USE_ASSERT_CHECKING'.\n>\n> [...]\n>\n> I notice this issue actually because of the patch \"Cache relation\n> sizes?\" from Thomas Munro [1], where the latest patch[2] still have the\n> following code.\n> +\t\tsr = smgr_alloc_sr(); <-- HERE a spin lock is hold\n> +\n> +\t\t/* Upgrade to exclusive lock so we can create a mapping. */\n> +\t\tLWLockAcquire(mapping_lock, LW_EXCLUSIVE); <-- HERE a complex\n> operation is needed. it may take a long time.\n>\n> Our internal testing system found more misuses on our own PG version.\n\n> I think a experienced engineer like Thomas can make this mistake and the\n> patch was reviewed by 3 peoples, the bug is still there. 
It is not easy\n> to say just don't do it.\n\nI don't follow this argument - just ignoring the problem, which emitting a\nWARNING basically is, doesn't reduce the impact of the bug, it *increases* the\nimpact, because now the system will not recover from the bug without explicit\noperator intervention. During that time the system might be essentially\nunresponsive, because all backends end up contending for some spinlock, which\nmakes investigating such issues very hard.\n\n\nI think we should add infrastructure to detect bugs like this during\ndevelopment, but not PANICing when this happens in production seems completely\nnon-viable.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Jan 2024 15:06:28 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "\nHi,\n\nAndres Freund <[email protected]> writes:\n\n>\n> On 2024-01-04 14:59:06 +0800, Andy Fan wrote:\n>> My question is if someone doesn't obey the rule by mistake (everyone\n>> can make mistake), shall we PANIC on a production environment? IMO I\n>> think it can be a WARNING on a production environment and be a stuck\n>> when 'ifdef USE_ASSERT_CHECKING'.\n>>\n>> [...]\n>>\n>> I notice this issue actually because of the patch \"Cache relation\n>> sizes?\" from Thomas Munro [1], where the latest patch[2] still have the\n>> following code.\n>> +\t\tsr = smgr_alloc_sr(); <-- HERE a spin lock is hold\n>> +\n>> +\t\t/* Upgrade to exclusive lock so we can create a mapping. */\n>> +\t\tLWLockAcquire(mapping_lock, LW_EXCLUSIVE); <-- HERE a complex\n>> operation is needed. it may take a long time.\n>>\n>> Our internal testing system found more misuses on our own PG version.\n>\n>> I think a experienced engineer like Thomas can make this mistake and the\n>> patch was reviewed by 3 peoples, the bug is still there. It is not easy\n>> to say just don't do it.\n>\n> I don't follow this argument - just ignoring the problem,\n\nI agree with you but I'm feeling you ignored my post at [1], where I\nsaid for the known issue, it should be fixed at the first chance.\n\n> which emitting a\n> WARNING basically is, doesn't reduce the impact of the bug, it *increases* the\n> impact, because now the system will not recover from the bug without explicit\n> operator intervention. During that time the system might be essentially\n> unresponsive, because all backends end up contending for some spinlock, which\n> makes investigating such issues very hard.\n\nAcutally they are doing pg_usleep at the most time.\n\nBesides what Robert said, one more reason to question PANIC is that: PAINC\ncan't always make the system recovery faster because: a). In the most\nsystem, PANIC makes a core dump which take times and spaces. b). After\nthe reboot, all the caches like relcache, plancache, fdcache need to be\nrebuit. c). Customer needs to handle failure better or else they will be\nhurt *more often*. All of such sense cause slowness as well.\n\n>\n> I think we should add infrastructure to detect bugs like this during\n> development,\n\nThe commit 2 in [1] does something like this. for the details, I missed the\ncheck for memory allocation case as you suggested at [2], but checked\nheavyweight lock as well. others should be same IIUC.\n\n> but not PANICing when this happens in production seems completely\n> non-viable.\n>\n\nNot sure what does *this* exactly means. 
If it means the bug in Thomas's \npatch, I absoluately agree with you(since it is a known bug and it\nshould be fixed). If it means the general *unknown* case, it's something\nwe talked above.\n\nI'm also agree that some LLVM static checker should be pretty good\nideally, it just requires more knowledge base and future maintain\neffort. I am willing to have a try shortly. \n\n[1] https://www.postgresql.org/message-id/871qaxp3ly.fsf%40163.com\n[2]\nhttps://www.postgresql.org/message-id/20240104225403.dgmbbfffmm3srpgq%40awork3.anarazel.de\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Fri, 05 Jan 2024 10:20:39 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi,\n\nOn 2024-01-04 17:03:18 -0600, Jim Nasby wrote:\n> On 1/4/24 10:33 AM, Tom Lane wrote:\n> \n> Robert Haas <[email protected]> writes:\n> \n> On Thu, Jan 4, 2024 at 10:22 AM Tom Lane <[email protected]> wrote:\n> \n> We should be making an effort to ban coding patterns like\n> \"return with spinlock still held\", because they're just too prone\n> to errors similar to this one.\n> \n> I agree that we don't want to add overhead, and also about how\n> spinlocks should be used, but I dispute \"easily detectable\n> statically.\" I mean, if you or I look at some code that uses a\n> spinlock, we'll know whether the pattern that you mention is being\n> followed or not, modulo differences of opinion in debatable cases. But\n> you and I cannot be there to look at all the code all the time. If we\n> had a static checking tool that was run as part of every build, or in\n> the buildfarm, or by cfbot, or somewhere else that raised the alarm if\n> this rule was violated, then we could claim to be effectively\n> enforcing this rule.\n> \n> I was indeed suggesting that maybe we could find a way to detect\n> such things automatically. While I've not been paying close\n> attention, I recall there's been some discussions of using LLVM/clang\n> infrastructure for customized static analysis, so maybe it'd be\n> possible without an undue amount of effort.\n\nI played around with this a while back. One reference with a link to a\nplayground to experiment with attributes:\nhttps://www.postgresql.org/message-id/20200616233105.sm5bvodo6unigno7%40alap3.anarazel.de\n\nUnfortunately clang's thread safety analysis doesn't handle conditionally\nacquired locks, which made it far less promising than I initially thought.\n\nI think there might be some other approaches, but they will all suffer from\nnot understanding \"danger\" encountered indirectly, via function calls doing\ndangerous things. Which we would like to exclude, but I don't think that's\ntrivial either.\n\n\n> FWIW, the lackey[1] tool in Valgrind is able to do some kinds of instruction\n> counting, so it might be possible to measure how many instructions are actualyl\n> being executed while holding a spinlock. Might be easier than code analysis.\n\nI don't think that's particularly promising. Lackey is *slow*. And it requires\nactually reaching problematic states. Consider e.g. 
the case that was reported\nupthread, an lwlock acquired within a spinlock protected section - most of the\ntime that's not going to result in a lot of cycles, because the lwlock is\nfree.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 4 Jan 2024 18:21:07 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "On Thu, Jan 4, 2024 at 6:06 PM Andres Freund <[email protected]> wrote:\n> I think we should add infrastructure to detect bugs like this during\n> development, but not PANICing when this happens in production seems completely\n> non-viable.\n\nI mean +1 for the infrastructure, but \"completely non-viable\"? Why?\n\nI've only very rarely seen this PANIC occur, and in the few cases\nwhere I've seen it, it was entirely unclear that the problem was due\nto a bug where somebody failed to release a spinlock. It seemed more\nlikely that the machine was just not really functioning, and the PANIC\nwas a symptom of processes not getting scheduled rather than a PG bug.\nAnd every time I tell a user that they might need to use a debugger\nto, say, set VacuumCostActive = false, or to get a backtrace, or any\nother reason, I have to tell them to make sure to detach the debugger\nin under 60 seconds, because in the unlikely event that they attach\nwhile the process is holding a spinlock, failure to detach in under 60\nseconds will take their production system down for no reason. Now, if\nyou're about to say that people shouldn't need to use a debugger on\ntheir production instance, I entirely agree ... but in the world I\ninhabit, that's often the only way to solve a customer problem, and it\nprobably will continue to be until we have much better ways of getting\nbacktraces without using a debugger than is currently the case.\n\nHave you seen real cases where this PANIC prevents a hangup? If yes,\nthat PANIC traced back to a bug in PostgreSQL? And why didn't the user\njust keep hitting the same bug over and PANICing in an endless loop?\n\nI feel like this is one of those things that has just been this way\nforever and we don't question it because it's become an article of\nfaith that it's something we have to have. But I have a very hard time\nexplaining why it's even a net positive, let alone the unquestionable\ngood that you seem to think.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 08:51:53 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi,\n\nOn 2024-01-05 08:51:53 -0500, Robert Haas wrote:\n> On Thu, Jan 4, 2024 at 6:06 PM Andres Freund <[email protected]> wrote:\n> > I think we should add infrastructure to detect bugs like this during\n> > development, but not PANICing when this happens in production seems completely\n> > non-viable.\n> \n> I mean +1 for the infrastructure, but \"completely non-viable\"? Why?\n> \n> I've only very rarely seen this PANIC occur, and in the few cases\n> where I've seen it, it was entirely unclear that the problem was due\n> to a bug where somebody failed to release a spinlock.\n\nI see it fairly regularly. 
Including finding several related bugs that lead to\nstuck systems last year (signal handlers are a menace).\n\n\n> It seemed more likely that the machine was just not really functioning, and\n> the PANIC was a symptom of processes not getting scheduled rather than a PG\n> bug.\n\nIf processes don't get scheduled for that long a crash-restart doesn't seem\nthat bad anymore :)\n\n\n> And every time I tell a user that they might need to use a debugger to, say,\n> set VacuumCostActive = false, or to get a backtrace, or any other reason, I\n> have to tell them to make sure to detach the debugger in under 60 seconds,\n> because in the unlikely event that they attach while the process is holding\n> a spinlock, failure to detach in under 60 seconds will take their production\n> system down for no reason.\n\nHm - isn't the stuck lock timeout more like 900s (MAX_DELAY_USEC * NUM_DELAYS\n= 1000s, but we start at a lower delay)? One issue with the code as-is is\nthat interrupted sleeps count towards to the timeout, despite possibly\nsleeping much shorter. We should probably fix that, and also report the time\nthe lock was stuck for in s_lock_stuck().\n\n\n> Now, if you're about to say that people shouldn't need to use a debugger on\n> their production instance, I entirely agree ... but in the world I inhabit,\n> that's often the only way to solve a customer problem, and it probably will\n> continue to be until we have much better ways of getting backtraces without\n> using a debugger than is currently the case.\n> \n> Have you seen real cases where this PANIC prevents a hangup? If yes,\n> that PANIC traced back to a bug in PostgreSQL? And why didn't the user\n> just keep hitting the same bug over and PANICing in an endless loop?\n\nMany, as hinted above. Some bugs in postgres, more bugs in extensions. IME\nthese bugs aren't hit commonly, so a crash-restart at least allows to hobble\nalong. The big issue with not crash-restarting is that often the system ends\nup inaccessible, which makes it very hard to investigate the issue.\n\n\n> I feel like this is one of those things that has just been this way\n> forever and we don't question it because it's become an article of\n> faith that it's something we have to have. But I have a very hard time\n> explaining why it's even a net positive, let alone the unquestionable\n> good that you seem to think.\n\nI don't think it's an unquestionable good, I just think the alternative of\njust endlessly spinning is way worse.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 Jan 2024 11:11:39 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "On Fri, Jan 5, 2024 at 2:11 PM Andres Freund <[email protected]> wrote:\n> I see it fairly regularly. Including finding several related bugs that lead to\n> stuck systems last year (signal handlers are a menace).\n\nIn that case, I think this proposal is dead. I can't personally\ntestify to this code being a force for good, but it sounds like you\ncan. 
So be it!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Jan 2024 14:19:23 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi,\n\nOn 2024-01-05 10:20:39 +0800, Andy Fan wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2024-01-04 14:59:06 +0800, Andy Fan wrote:\n> >> My question is if someone doesn't obey the rule by mistake (everyone\n> >> can make mistake), shall we PANIC on a production environment? IMO I\n> >> think it can be a WARNING on a production environment and be a stuck\n> >> when 'ifdef USE_ASSERT_CHECKING'.\n> >>\n> >> [...]\n> >>\n> >> I notice this issue actually because of the patch \"Cache relation\n> >> sizes?\" from Thomas Munro [1], where the latest patch[2] still have the\n> >> following code.\n> >> +\t\tsr = smgr_alloc_sr(); <-- HERE a spin lock is hold\n> >> +\n> >> +\t\t/* Upgrade to exclusive lock so we can create a mapping. */\n> >> +\t\tLWLockAcquire(mapping_lock, LW_EXCLUSIVE); <-- HERE a complex\n> >> operation is needed. it may take a long time.\n> >>\n> >> Our internal testing system found more misuses on our own PG version.\n> >\n> >> I think a experienced engineer like Thomas can make this mistake and the\n> >> patch was reviewed by 3 peoples, the bug is still there. It is not easy\n> >> to say just don't do it.\n> >\n> > I don't follow this argument - just ignoring the problem,\n>\n> I agree with you but I'm feeling you ignored my post at [1], where I\n> said for the known issue, it should be fixed at the first chance.\n\nWith \"ignoring the problem\" I was referencing emitting a WARNING instead of\ncrash-restart.\n\nIME stuck spinlocks are caused by issues like not releasing a spinlock,\npossibly due to returning early due to an error or such, having lock-nesting\nissues leading to deadlocks, acquiring spinlocks or lwlocks in signal\nhandlers, blocking in signal handlers while holding a spinlock outside of the\nsignal handers and many variations of those. The common theme of these causes\nis that they don't resolve after some time. The only way out of the situation\nis to crash-restart, either \"automatically\" or by operator intervention.\n\n\n> > which emitting a\n> > WARNING basically is, doesn't reduce the impact of the bug, it *increases* the\n> > impact, because now the system will not recover from the bug without explicit\n> > operator intervention. During that time the system might be essentially\n> > unresponsive, because all backends end up contending for some spinlock, which\n> > makes investigating such issues very hard.\n>\n> Acutally they are doing pg_usleep at the most time.\n\nSure - but that changes nothing about the problem. The concern isn't CPU\nusage, the concern is that there's often no possible forward progress. To take\na recent-ish production issue I looked at, a buggy signal handler lead to a\nbackend sleeping while holding a spinlock. Soon after the entire system got\nstuck, because they also acquired the spinlock. The person initially\ninvestigating the issue at first contacted me because they couldn't even log\ninto the system, because connection establishment also acquired the spinlock\n(I'm not sure anymore which spinlock it was, possibly xlog.c's info_lck?).\n\n\n> > but not PANICing when this happens in production seems completely\n> > non-viable.\n> >\n>\n> Not sure what does *this* exactly means. 
If it means the bug in Thomas's\n> patch, I absoluately agree with you(since it is a known bug and it\n> should be fixed). If it means the general *unknown* case, it's something\n> we talked above.\n\nI mean that I think that not PANICing anymore would be a seriously bad idea\nand cause way more problems than the PANIC.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 Jan 2024 11:27:13 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi,\n\nOn 2024-01-05 14:19:23 -0500, Robert Haas wrote:\n> On Fri, Jan 5, 2024 at 2:11 PM Andres Freund <[email protected]> wrote:\n> > I see it fairly regularly. Including finding several related bugs that lead to\n> > stuck systems last year (signal handlers are a menace).\n> \n> In that case, I think this proposal is dead. I can't personally\n> testify to this code being a force for good, but it sounds like you\n> can. So be it!\n\nI think the proposal to make it a WARNING shouldn't go anywhere, but I think\nthere are several improvements that could come out of this discussion:\n\n- assertion checks against doing dangerous stuff\n- compile time help for detecting bad stuff without hitting it at runtime\n- make the stuck lock message more informative, e.g. by including the length\n of time the lock was stuck for\n- make sure that interrupts can't trigger the stuck lock much quicker, which\n afaict can happen today\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 Jan 2024 11:33:18 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi,\n\n\n> On 2024-01-05 14:19:23 -0500, Robert Haas wrote:\n>> On Fri, Jan 5, 2024 at 2:11 PM Andres Freund <[email protected]> wrote:\n>> > I see it fairly regularly. Including finding several related bugs that lead to\n>> > stuck systems last year (signal handlers are a menace).\n>> \n>> In that case, I think this proposal is dead. I can't personally\n>> testify to this code being a force for good, but it sounds like you\n>> can. So be it!\n>\n> I think the proposal to make it a WARNING shouldn't go anywhere,\n\nOK, I give up the WARNING method as well.\n\n> but I think\n> there are several improvements that could come out of this discussion:\n>\n> - assertion checks against doing dangerous stuff\n> - make the stuck lock message more informative, e.g. by including the length\n> of time the lock was stuck for\n\nCould you check the attached to see if it is something similar in your\nmind?\n\ncommit e264da3050285cffd4885637ee97b2326d2f3938 SHOULD **NOT** BE COMMITTED.\nAuthor: yizhi.fzh <[email protected]>\nDate: Sun Jan 7 15:06:14 2024 +0800\n\n simple code to prove previously commit works.\n\ncommit 80cf987d1abe2cdae195bd5eea520e28142885b4\nAuthor: yizhi.fzh <[email protected]>\nDate: Thu Jan 4 22:19:50 2024 +0800\n\n Detect more misuse of spin lock automatically\n \n spin lock are intended for *very* short-term locks, but it is possible\n to be misused in many cases. e.g. Acquiring another LWLocks or regular\n locks, memory allocation. 
In this patch, all of such cases will be\n automatically detected in an ASSERT_CHECKING build.\n \n Signal handle should be avoided when holding a spin lock because it is\n nearly impossible to release the spin lock correctly if that happens.\n\n\nLuckly after applying the patch, there is no failure when run 'make\ncheck-world'.\n\n> - make sure that interrupts can't trigger the stuck lock much quicker, which\n> afaict can happen today\n\nI can't follow this, do you mind explain more about this a bit?\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Sun, 07 Jan 2024 15:09:24 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "\nHi!\n\n>\n> I think we should add cassert-only infrastructure tracking whether we\n> currently hold spinlocks, are in a signal handler and perhaps a few other\n> states. That'd allow us to add assertions like:\n..\n> - no lwlocks or ... while in signal handlers\n\nI *wish* lwlocks should *not* be held while in signal handlers since it\ninspired me for a direction of a low-frequency internal bug where a\nbackend acuqire a LWLock when it has acuqired it before. However when I\nread more document and code, I am thinking this should not be a\nproblem.\n\nper: src/backend/storage/lmgr/README\n\n\"\"\"\nLWLock manager will automatically release held LWLocks during elog()\nrecovery, so it is safe to raise an error while holding LWLocks.\n\"\"\"\n\nThe code shows us after we acquire a LWLock, such information will be\nadded into a global variable named held_lwlocks, and whenever we want to\nrelease all the them, we can just call LWLockReleaseAll.\nLWLockReleaseAll is called in AbortTransaction, AbortSubTransaction, \nProcKill, AuxiliaryProcKill and so on. the code is same with what the\nREADME said. So suppose we have some codes like this:\n\nLWLockAcquire(...);\nCHECK_FOR_INTERRUPTS();\nLWLockRelease();\n\nEven we got ERROR/FATAL in the CHECK_FOR_INTERRUPTS, I think the LWLock\nare suppose to be released because of the above statement. Am I missing\nanything? \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Mon, 08 Jan 2024 10:41:01 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "On Sun, Jan 7, 2024 at 9:52 PM Andy Fan <[email protected]> wrote:\n> > I think we should add cassert-only infrastructure tracking whether we\n> > currently hold spinlocks, are in a signal handler and perhaps a few other\n> > states. That'd allow us to add assertions like:\n> ..\n> > - no lwlocks or ... while in signal handlers\n>\n> I *wish* lwlocks should *not* be held while in signal handlers since it\n> inspired me for a direction of a low-frequency internal bug where a\n> backend acuqire a LWLock when it has acuqired it before. However when I\n> read more document and code, I am thinking this should not be a\n> problem.\n\nIt's not safe to acquire an LWLock in a signal handler unless we know\nthat the code that the signal handler is interrupting can't already be\ndoing that. Consider this code from LWLockAcquire:\n\n /* Add lock to list of locks held by this backend */\n held_lwlocks[num_held_lwlocks].lock = lock;\n held_lwlocks[num_held_lwlocks++].mode = mode;\n\nImagine that we're executing this code and we get to the point where\nwe've set held_lwlocks[num_held_lwlocks].lock = lock and\nheld_lwlock[num_held_lwlocks].mode = mode, but we haven't yet\nincremented num_held_lwlocks. 
Then a signal arrives and we jump into\nthe signal handler, which also calls LWLockAcquire(). Hopefully you\ncan see that the new lock and mode will be written over the old one,\nand now held_lwlocks[] is corrupted. Oops.\n\nBecause the PostgreSQL code relies extensively on global variables, it\nhas problems like this all over the place. Another example is the\nerror-reporting infrastructure. ereport(yadda yadda...) fills out a\nbunch of global variables before really going and doing anything. If a\nsignal handler were to fire and start another ereport(...), chaos\nwould ensue, for similar reasons as here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Jan 2024 14:57:34 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "\nHi,\nRobert Haas <[email protected]> writes:\n\n> On Sun, Jan 7, 2024 at 9:52 PM Andy Fan <[email protected]> wrote:\n>> > I think we should add cassert-only infrastructure tracking whether we\n>> > currently hold spinlocks, are in a signal handler and perhaps a few other\n>> > states. That'd allow us to add assertions like:\n>> ..\n>> > - no lwlocks or ... while in signal handlers\n>>\n>> I *wish* lwlocks should *not* be held while in signal handlers since it\n>> inspired me for a direction of a low-frequency internal bug where a\n>> backend acuqire a LWLock when it has acuqired it before. However when I\n>> read more document and code, I am thinking this should not be a\n>> problem.\n>\n> It's not safe to acquire an LWLock in a signal handler unless we know\n> that the code that the signal handler is interrupting can't already be\n> doing that. Consider this code from LWLockAcquire:\n\nThanks for the explaination! I can follow the sistuation you descirbe\nhere, then I found I asked a bad question because I didn't clarify what\n\"signal handlers\" I was refering to, sorry about that!\n\nIn your statement, I guess you are talking about the signal handler from\nLinux. However I *assumed* such handlers are doing pretty similar stuff\nlike set a 'GlobalVarialbe=true'. If my assumption was right, I think\nthat should not be take cared. For example:\n\nspin_or_lwlock_acquire();\n... (linux signal handler may be invovked here no matther what ... code is)\nspin_or_lwlock_relase()\n\nSince the linux signal hander are pretty simply, so it can come back to\n'spin_or_lwlock_relase' anyway. (However my assumption may be wrong and\nthanks for highlight this, and it is helpful for me to debug my internal\nbug!)\n\nThe singler handler I was refering to is 'CHECK_FOR_INTERRUPTS', Based\non this, spin_lock and lwlock are acted pretty differently. \n\nspin_lock_acuqire();\nCHECK_FOR_INTERRUPT();\nspin_lock_release();\n\nSince CHECK_FOR_INTERRUPT usually goes to the ERROR system which makes it\nis hard to go back to 'spin_lock_release()', then spin lock leaks! so\nCHECK_FOR_INTERRUPT is the place I Assert *spin lock* should not be\nhandled in my patch. and I understood what Andres was talking about is\nthe same thing. 
(Of course I can add the \"Assert no spin lock is held\"\ninto every linux single handler as well).\n\nBased on the above, I asked my question in my previous post, where I am\nnot sure if we should do the same('Assert no-lwlock should be held') for\n*lwlock* in CHECK_FOR_INTERRUPT since lwlocks can be released no matter\nwhere CHECK_FOR_INTERRUPT jump to.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 09 Jan 2024 10:01:59 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "On Mon, Jan 8, 2024 at 9:40 PM Andy Fan <[email protected]> wrote:\n> The singler handler I was refering to is 'CHECK_FOR_INTERRUPTS', Based\n> on this, spin_lock and lwlock are acted pretty differently.\n\nCHECK_FOR_INTERRUPTS() is not a signal handler, and it's OK to acquire\nand release spin locks or lwlocks there. We have had (and I think\nstill do have) cases where signal handlers do non-trivial work,\nresulting in serious problems in some cases. A bunch of that stuff has\nbeen rewritten to just set a flag and then let the calling code sort\nit out, but not everything has been rewritten that way (I think) and\nthere's always a danger of future hackers introducing new problem\ncases.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Jan 2024 10:44:49 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi, \n\nRobert Haas <[email protected]> writes:\n\n> On Mon, Jan 8, 2024 at 9:40 PM Andy Fan <[email protected]> wrote:\n>> The singler handler I was refering to is 'CHECK_FOR_INTERRUPTS', Based\n>> on this, spin_lock and lwlock are acted pretty differently.\n>\n> CHECK_FOR_INTERRUPTS() is not a signal handler,\n\nhmm, I knew this but .... I think we haven't big difference in mind\nactually. \n\nSince all of them agreed that we should do something in infrastructure\nto detect some misuse of spin. I want to know if Andres or you have plan\nto do some code review. I don't expect this would happen very soon, just\nwant to make sure this will not happen that both of you think the other\none will do, but actually none of them does it in fact. a commit fest\n[1] has been added for this. \n\nThere is a test code show the bad practice which is detected by this\npatch in [2]\n\n[1] https://commitfest.postgresql.org/47/4768/\n[2] https://www.postgresql.org/message-id/87le91obp7.fsf%40163.com.\n-- \nBest Regards\nAndy Fan", "msg_date": "Wed, 10 Jan 2024 09:26:50 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "On Wed, 10 Jan 2024 at 02:44, Andy Fan <[email protected]> wrote:\n> Hi,\n>\n> I want to know if Andres or you have plan\n> to do some code review. I don't expect this would happen very soon, just\n> want to make sure this will not happen that both of you think the other\n> one will do, but actually none of them does it in fact. 
a commit fest\n> [1] has been added for this.\n\n\n> +++ b/src/backend/storage/buffer/bufmgr.c\n> @@ -5419,6 +5419,7 @@ LockBufHdr(BufferDesc *desc)\n> perform_spin_delay(&delayStatus);\n> }\n> finish_spin_delay(&delayStatus);\n> + START_SPIN_LOCK();\n> return old_buf_state | BM_LOCKED;\n> }\n\nI think that we need to 'arm' the checks just before we lock the spin\nlock, and 'disarm' the checks just after we unlock the spin lock,\nrather than after and before, respectively. That way, we won't have a\nchance of false negatives: with your current patch it is possible that\nan interrupt fires between the acquisition of the lock and the code in\nSTART_SPIN_LOCK() marking the thread as holding a spin lock, which\nwould cause any check in that signal handler to incorrectly read that\nwe don't hold any spin locks.\n\n> +++ b/src/backend/storage/lmgr/lock.c\n> @@ -776,6 +776,8 @@ LockAcquireExtended(const LOCKTAG *locktag,\n> bool found_conflict;\n> bool log_lock = false;\n>\n> + Assert(SpinLockCount == 0);\n> +\n\nI'm not 100% sure on the policy of this, but theoretically you could\nuse LockAquireExtended(dontWait=true) while holding a spin lock, as\nthat would not have an unknown duration. Then again, this function\nalso does elog/ereport, which would cause issues, still, so this code\nmay be the better option.\n\n> + elog(PANIC, \"stuck spinlock detected at %s, %s:%d after waiting for %u ms\",\n> + func, file, line, delay_ms);\n\npg_usleep doesn't actually guarantee that we'll wait for exactly that\nduration; depending on signals received while spinning and/or OS\nscheduling decisions it may be off by orders of magnitude.\n\n> +++ b/src/common/scram-common.c\n\nThis is unrelated to the main patchset.\n\n> +++ b/src/include/storage/spin.h\n\nMinor: I think these changes could better be included in miscadmin, or\nat least the definition for SpinLockCount should be moved there: The\nspin lock system itself shouldn't be needed in places where we need to\nmake sure that we don't hold any spinlocks, and miscadmin.h already\nholds things related to \"System interrupt and critical section\nhandling\", which seems quite related.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Wed, 10 Jan 2024 14:03:28 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Thanks for jumping in with a review, Matthias!\n\nOn Wed, Jan 10, 2024 at 8:03 AM Matthias van de Meent\n<[email protected]> wrote:\n> I'm not 100% sure on the policy of this, but theoretically you could\n> use LockAquireExtended(dontWait=true) while holding a spin lock, as\n> that would not have an unknown duration. Then again, this function\n> also does elog/ereport, which would cause issues, still, so this code\n> may be the better option.\n\nThis is definitely not allowable, and anybody who is thinking about\ndoing it should replace the spinlock with an LWLock.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Jan 2024 11:13:10 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi Matthias,\n\nThanks for the review!\n\nMatthias van de Meent <[email protected]> writes:\n\n> On Wed, 10 Jan 2024 at 02:44, Andy Fan <[email protected]> wrote:\n>> Hi,\n>>\n>> I want to know if Andres or you have plan\n>> to do some code review. 
I don't expect this would happen very soon, just\n>> want to make sure this will not happen that both of you think the other\n>> one will do, but actually none of them does it in fact. a commit fest\n>> [1] has been added for this.\n>\n>\n>> +++ b/src/backend/storage/buffer/bufmgr.c\n>> @@ -5419,6 +5419,7 @@ LockBufHdr(BufferDesc *desc)\n>> perform_spin_delay(&delayStatus);\n>> }\n>> finish_spin_delay(&delayStatus);\n>> + START_SPIN_LOCK();\n>> return old_buf_state | BM_LOCKED;\n>> }\n>\n> I think that we need to 'arm' the checks just before we lock the spin\n> lock, and 'disarm' the checks just after we unlock the spin lock,\n> rather than after and before, respectively. That way, we won't have a\n> chance of false negatives: with your current patch it is possible that\n> an interrupt fires between the acquisition of the lock and the code in\n> START_SPIN_LOCK() marking the thread as holding a spin lock, which\n> would cause any check in that signal handler to incorrectly read that\n> we don't hold any spin locks.\n\nThat's a good idea. fixed in v2.\n\n>\n>> +++ b/src/backend/storage/lmgr/lock.c\n>> @@ -776,6 +776,8 @@ LockAcquireExtended(const LOCKTAG *locktag,\n>> bool found_conflict;\n>> bool log_lock = false;\n>>\n>> + Assert(SpinLockCount == 0);\n>> +\n>\n> I'm not 100% sure on the policy of this, but theoretically you could\n> use LockAquireExtended(dontWait=true) while holding a spin lock, as\n> that would not have an unknown duration. Then again, this function\n> also does elog/ereport, which would cause issues, still, so this code\n> may be the better option.\n\nI thought this statement as \"keeping the current patch as it is\" since\n\"not waiting\" doesn't means the a few dozen in this case. please\ncorrect me if anything wrong.\n\n>\n>> + elog(PANIC, \"stuck spinlock detected at %s, %s:%d after waiting for %u ms\",\n>> + func, file, line, delay_ms);\n>\n> pg_usleep doesn't actually guarantee that we'll wait for exactly that\n> duration; depending on signals received while spinning and/or OS\n> scheduling decisions it may be off by orders of magnitude.\n\nTrue, but I did this for two reasons. a). the other soluation needs call\n'time' syscall twice, I didn't want to pay this run-time effort. b). the\npossiblity of geting a signals during pg_usleep should be low and\neven that happens, because the message is just for human, we don't need\na absolutely accurate number. what do you think?\n\n>\n>> +++ b/src/common/scram-common.c\n>\n> This is unrelated to the main patchset\n\nFixed in v2.\n\n>\n>> +++ b/src/include/storage/spin.h\n>\n> Minor: I think these changes could better be included in miscadmin, or\n> at least the definition for SpinLockCount should be moved there: The\n> spin lock system itself shouldn't be needed in places where we need to\n> make sure that we don't hold any spinlocks, and miscadmin.h already\n> holds things related to \"System interrupt and critical section\n> handling\", which seems quite related.\n\nfixed in v2. 
\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 11 Jan 2024 09:55:18 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "\nRobert Haas <[email protected]> writes:\n\n> Thanks for jumping in with a review, Matthias!\n\nFWIW, Matthias is also the first one for this proposal at this\nthread, thanks for that as well!\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Thu, 11 Jan 2024 11:17:52 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "On Wed, Jan 10, 2024 at 10:17 PM Andy Fan <[email protected]> wrote:\n> fixed in v2.\n\nTiming the spinlock wait seems like a separate patch from the new sanity checks.\n\nI suspect that the new sanity checks should only be armed in\nassert-enabled builds.\n\nI'm doubtful that this method of tracking the wait time will be\naccurate. And I don't know that we can make it accurate with\nreasonable overhead. But I don't think we can assume that the time we\ntried to wait for and the time that we were actually descheduled are\nthe same.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Jan 2024 10:45:33 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n\n> On Wed, Jan 10, 2024 at 10:17 PM Andy Fan <[email protected]> wrote:\n>> fixed in v2.\n>\n> Timing the spinlock wait seems like a separate patch from the new sanity checks.\n\nYes, a separate patch would be better, so removed it from v4.\n\n> I suspect that the new sanity checks should only be armed in\n> assert-enabled builds.\n\nThere are 2 changes in v4. a). Make sure every code is only armed in\nassert-enabled builds. Previously there was some counter++ in non\nassert-enabled build. b). Record the location of spin lock so that\nwhenever the Assert failure, we know which spin lock it is. In our\ninternal testing, that helps a lot.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 15 Jan 2024 13:19:56 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi Matthias / Robert:\n\nDo you still have interest with making some progress on this topic?\n\n> Robert Haas <[email protected]> writes:\n>\n>> On Wed, Jan 10, 2024 at 10:17 PM Andy Fan <[email protected]> wrote:\n>>> fixed in v2.\n>>\n>> Timing the spinlock wait seems like a separate patch from the new sanity checks.\n>\n> Yes, a separate patch would be better, so removed it from v4.\n>\n>> I suspect that the new sanity checks should only be armed in\n>> assert-enabled builds.\n>\n> There are 2 changes in v4. a). Make sure every code is only armed in\n> assert-enabled builds. Previously there was some counter++ in non\n> assert-enabled build. b). Record the location of spin lock so that\n> whenever the Assert failure, we know which spin lock it is. 
In our\n> internal testing, that helps a lot.\n\nv5 attached for fix the linking issue on Windows.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 18 Jan 2024 20:54:30 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "On Thu, Jan 18, 2024 at 7:56 AM Andy Fan <[email protected]> wrote:\n> Do you still have interest with making some progress on this topic?\n\nSome, but it's definitely not my top priority. I wish I could give as\nmuch attention as everyone would like to everyone's patches, but I\ncan't even come close.\n\nI think that the stack that you set up in START_SPIN_LOCK() is crazy.\nThat would only be necessary if it were legal to acquire multiple\nspinlocks at the same time, which it definitely isn't. Also, doing\nmemory allocation and deallocation here appears highly undesirable -\neven if we did need to support multiple spinlocks, it would be better\nto handle this using something like the approach we already use for\nlwlocks, where there is a fixed size array and we blow up if it\noverflows.\n\nASSERT_NO_SPIN_LOCK() looks odd, because I would expect it to be\nspelled Assert(!AnySpinLockHeld()). But looking deeper, I see that it\ndoesn't internally Assert() but rather does something else. Maybe the\nname needs some workshopping. SpinLockMustNotBeHeldHere()?\nVerifyNoSpinLocksHeld()?\n\nI think we should check that no spinlock is held in a few additional\nplaces: the start of SpinLockAcquire(), and the start of errstart().\n\nYou added an #include to dynahash.c despite making no other changes to the file.\n\nI don't know whether the choice to treat buffer header locks as\nspinlocks is correct. It seems like it at least deserves a comment,\nand possibly some discussion on this mailing list about whether that's\nthe right idea. I'm not sure that we have all the same restrictions\nfor buffer header locks as we do for spinlocks in general, but I'm\nalso not sure that we don't.\n\nOn a related note, the patch overall has 0 comments. I don't know that\nit needs a lot, but 0 isn't many at all.\n\nmiscadmin.h doesn't seem like a good place for this. It's a\nwidely-included header file and these checks should be needed in\nrelatively few places; also, they're not really related to most of\nwhat's in that file, IIRC. I also wonder why we're using macros\ninstead of static inline functions.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Jan 2024 09:16:39 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "\nHi Robert,\n\nThanks for your attention!\n\n> On Thu, Jan 18, 2024 at 7:56 AM Andy Fan <[email protected]> wrote:\n>> Do you still have interest with making some progress on this topic?\n>\n> Some, but it's definitely not my top priority. I wish I could give as\n> much attention as everyone would like to everyone's patches, but I\n> can't even come close.\n\nYour point is fair enough.\n\nAfter reading your comments, I decide to talk more before sending the\nnext version.\n\n>\n> I think that the stack that you set up in START_SPIN_LOCK() is crazy.\n> That would only be necessary if it were legal to acquire multiple\n> spinlocks at the same time, which it definitely isn't. 
Also, doing\n> memory allocation and deallocation here appears highly undesirable -\n> even if we did need to support multiple spinlocks, it would be better\n> to handle this using something like the approach we already use for\n> lwlocks, where there is a fixed size array and we blow up if it\n> overflows.\n\nI wanted to disallow to acquire multiple spinlocks at the same time in\nthe first version, but later I thought that is beyond of the scope of\nthis patch. Now I prefer to disallow that. if there is no objection in\nthe following days, I will do this in next version. After this, we don't\nneed malloc at all.\n\n>\n> ASSERT_NO_SPIN_LOCK() looks odd, because I would expect it to be\n> spelled Assert(!AnySpinLockHeld()). But looking deeper, I see that it\n> doesn't internally Assert() but rather does something else. Maybe the\n> name needs some workshopping. SpinLockMustNotBeHeldHere()?\n> VerifyNoSpinLocksHeld()?\n\nYes, it is not a Assert since I want to provide more information about\nwhere the SpinLock was held. Assert doesn't have such capacity but\nelog(PANIC, ...) can put more information before the PANIC.\n\nVerifyNoSpinLocksHeld looks much more professional than\nASSERT_NO_SPIN_LOCK; I will use this in the next version.\n\n\n> I think we should check that no spinlock is held in a few additional\n> places: the start of SpinLockAcquire(), and the start of errstart().\n\nAgreed.\n\n> You added an #include to dynahash.c despite making no other changes to\n> the file.\n\nThat's mainly because I put the code into miscadmin.h and spin.h depends\non miscadmin.h with MACROs.\n\n>\n> I don't know whether the choice to treat buffer header locks as\n> spinlocks is correct. It seems like it at least deserves a comment,\n> and possibly some discussion on this mailing list about whether that's\n> the right idea. I'm not sure that we have all the same restrictions\n> for buffer header locks as we do for spinlocks in general, but I'm\n> also not sure that we don't.\n\nThe LockBufHdr also used init_local_spin_delay / perform_spin_delay\ninfrastruce and then it has the same issue like ${subject}, it is pretty\nlike the code in s_lock; Based on my current knowledge, I think we\nshould add the check there.\n\n>\n> On a related note, the patch overall has 0 comments. I don't know that\n> it needs a lot, but 0 isn't many at all.\n\nhmm, I tried to write a good commit message, but comments do need some\nimprovement, thanks for highlighting this!\n\n>\n> miscadmin.h doesn't seem like a good place for this. It's a\n> widely-included header file and these checks should be needed in\n> relatively few places; also, they're not really related to most of\n> what's in that file, IIRC.\n\nthey were put into spin.h in v1 but later move to miscadmin.h at [1]. \n\n> I also wonder why we're using macros instead of static inline\n> functions.\n\nSTART_SPIN_LOCK need to be macro since it use __FILE__ and __LINE__ to\nnote where the SpinLock is held. for others, just for consistent\npurpose. I think they can be changed to inline function, at least for\nVerifyNoSpinLocksHeld. 
\n\n[1]\nhttps://www.postgresql.org/message-id/CAEze2WggP-2Dhocmdhp-LxBzic%3DMXRgGA_tmv1G_9n-PDt2MQg%40mail.gmail.com\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Fri, 19 Jan 2024 01:51:56 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "On Thu, Jan 18, 2024 at 1:30 PM Andy Fan <[email protected]> wrote:\n> > You added an #include to dynahash.c despite making no other changes to\n> > the file.\n>\n> That's mainly because I put the code into miscadmin.h and spin.h depends\n> on miscadmin.h with MACROs.\n\nThat's not a good reason. Headers need to include headers on which\nthey depend; a .c file shouldn't be required to include one header\nbecause it includes another.\n\n> The LockBufHdr also used init_local_spin_delay / perform_spin_delay\n> infrastruce and then it has the same issue like ${subject}, it is pretty\n> like the code in s_lock; Based on my current knowledge, I think we\n> should add the check there.\n\nI'd like to hear from Andres, if possible. @Andres: Should these\nsanity checks apply only to spin locks per se, or also to buffer\nheader locks?\n\n> they were put into spin.h in v1 but later move to miscadmin.h at [1].\n> [1]\n> https://www.postgresql.org/message-id/CAEze2WggP-2Dhocmdhp-LxBzic%3DMXRgGA_tmv1G_9n-PDt2MQg%40mail.gmail.com\n\nI'm not entirely sure what the right thing to do is here, and the\nanswer may depend on the previous question. But I disagree with\nMatthias -- I don't think miscadmin.h can be the right answer\nregardless.\n\n> START_SPIN_LOCK need to be macro since it use __FILE__ and __LINE__ to\n> note where the SpinLock is held. for others, just for consistent\n> purpose. I think they can be changed to inline function, at least for\n> VerifyNoSpinLocksHeld.\n\nGood point about __FILE__ and __LINE__.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Jan 2024 14:00:58 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "\nHi,\n\nHere is the summary of the open-items, it would be great that Andres and\nMatthias have a look at this when they have time.\n\n1. Shall we treat the LockBufHdr as a SpinLock usage.\n\n>> The LockBufHdr also used init_local_spin_delay / perform_spin_delay\n>> infrastruce and then it has the same issue like ${subject}, it is pretty\n>> like the code in s_lock; Based on my current knowledge, I think we\n>> should add the check there.\n>\n> I'd like to hear from Andres, if possible. @Andres: Should these\n> sanity checks apply only to spin locks per se, or also to buffer\n> header locks?\n>\n\n2. Where shall we put the START/END_SPIN_LOCK() into? Personally I'd\nlike spin.h. One of the reasons is 'misc' usually makes me think they\nare something not well categoried, and hence many different stuffs are\nput together. \n\n>> they were put into spin.h in v1 but later move to miscadmin.h at [1].\n>> [1]\n>> https://www.postgresql.org/message-id/CAEze2WggP-2Dhocmdhp-LxBzic%3DMXRgGA_tmv1G_9n-PDt2MQg%40mail.gmail.com\n>\n> I'm not entirely sure what the right thing to do is here, and the\n> answer may depend on the previous question. 
But I disagree with\n> Matthias -- I don't think miscadmin.h can be the right answer\n> regardless.\n>\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Fri, 19 Jan 2024 10:51:43 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi,\n\nv6 attached which addressed all the items Robert suggested except the\nfollowing 2 open items. They are handled differently.\n\n>\n> Here is the summary of the open-items, it would be great that Andres and\n> Matthias have a look at this when they have time.\n>\n>>> The LockBufHdr also used init_local_spin_delay / perform_spin_delay\n>>> infrastruce and then it has the same issue like ${subject}, it is pretty\n>>> like the code in s_lock; Based on my current knowledge, I think we\n>>> should add the check there.\n>>\n>> I'd like to hear from Andres, if possible. @Andres: Should these\n>> sanity checks apply only to spin locks per se, or also to buffer\n>> header locks?\n\nv6 is splitted into 2 commits, one for normal SpinLock and one for\nLockBufHdr lock.\n\ncommit 6276d2f66b0760053e3fdfe259971be3abba3c63\nAuthor: yizhi.fzh <[email protected]>\nDate: Fri Jan 19 13:52:07 2024 +0800\n\n Detect more misuse of spin lock automatically\n \n Spin lock are intended for *very* short-term locks, but it is possible\n to be misused in many cases. e.g. Acquiring another LWLocks or regular\n locks, memory allocation, errstart when holding a spin lock. this patch\n would detect such misuse automatically in a USE_ASSERT_CHECKING build.\n \n CHECK_FOR_INTERRUPTS should be avoided as well when holding a spin lock.\n Depends on what signals are left to handle, PG may raise error/fatal\n which would cause the code jump to some other places which is hardly to\n release the spin lock anyway.\n\ncommit 590a0c6f767f62f6c83289d55de99973bc7da417 (HEAD -> s_stuck_v3)\nAuthor: yizhi.fzh <[email protected]>\nDate: Fri Jan 19 13:57:46 2024 +0800\n\n Treat (un)LockBufHdr as a SpinLock.\n \n The LockBufHdr also used init_local_spin_delay / perform_spin_delay\n infrastructure and so it is also possible that PANIC the system\n when it can't be acquired in a short time, and its code is pretty\n similar with s_lock. so treat it same as SPIN lock when regarding to\n misuse of spinlock detection.\n\n>>> they were put into spin.h in v1 but later move to miscadmin.h at [1].\n>>> [1]\n>>> https://www.postgresql.org/message-id/CAEze2WggP-2Dhocmdhp-LxBzic%3DMXRgGA_tmv1G_9n-PDt2MQg%40mail.gmail.com\n>>\n>> I'm not entirely sure what the right thing to do is here, and the\n>> answer may depend on the previous question. But I disagree with\n>> Matthias -- I don't think miscadmin.h can be the right answer\n>> regardless.\n\nI put it into spin.h this time in commit 1, and include the extern\nfunction VerifyNoSpinLocksHeld in spin.c into miscadmin.h like what we\ndid for ProcessInterrupts. This will easy the miscadmin dependency. the\nchanges for '#include xxx' looks better than before.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Fri, 19 Jan 2024 14:17:13 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi,\n\nAndy Fan <[email protected]> writes:\n\n> Hi,\n>\n> v6 attached which addressed all the items Robert suggested except the\n> following 2 open items. 
They are handled differently.\n>\n>>\n>> Here is the summary of the open-items, it would be great that Andres and\n>> Matthias have a look at this when they have time.\n>>\n>>>> The LockBufHdr also used init_local_spin_delay / perform_spin_delay\n>>>> infrastruce and then it has the same issue like ${subject}, it is pretty\n>>>> like the code in s_lock; Based on my current knowledge, I think we\n>>>> should add the check there.\n>>>\n>>> I'd like to hear from Andres, if possible. @Andres: Should these\n>>> sanity checks apply only to spin locks per se, or also to buffer\n>>> header locks?\n>\n> v6 is splitted into 2 commits, one for normal SpinLock and one for\n> LockBufHdr lock.\n>\n> commit 6276d2f66b0760053e3fdfe259971be3abba3c63\n> Author: yizhi.fzh <[email protected]>\n> Date: Fri Jan 19 13:52:07 2024 +0800\n>\n> Detect more misuse of spin lock automatically\n> \n> Spin lock are intended for *very* short-term locks, but it is possible\n> to be misused in many cases. e.g. Acquiring another LWLocks or regular\n> locks, memory allocation, errstart when holding a spin lock. this patch\n> would detect such misuse automatically in a USE_ASSERT_CHECKING build.\n> \n> CHECK_FOR_INTERRUPTS should be avoided as well when holding a spin lock.\n> Depends on what signals are left to handle, PG may raise error/fatal\n> which would cause the code jump to some other places which is hardly to\n> release the spin lock anyway.\n>\n> commit 590a0c6f767f62f6c83289d55de99973bc7da417 (HEAD -> s_stuck_v3)\n> Author: yizhi.fzh <[email protected]>\n> Date: Fri Jan 19 13:57:46 2024 +0800\n>\n> Treat (un)LockBufHdr as a SpinLock.\n> \n> The LockBufHdr also used init_local_spin_delay / perform_spin_delay\n> infrastructure and so it is also possible that PANIC the system\n> when it can't be acquired in a short time, and its code is pretty\n> similar with s_lock. so treat it same as SPIN lock when regarding to\n> misuse of spinlock detection.\n>\n>>>> they were put into spin.h in v1 but later move to miscadmin.h at [1].\n>>>> [1]\n>>>> https://www.postgresql.org/message-id/CAEze2WggP-2Dhocmdhp-LxBzic%3DMXRgGA_tmv1G_9n-PDt2MQg%40mail.gmail.com\n>>>\n>>> I'm not entirely sure what the right thing to do is here, and the\n>>> answer may depend on the previous question. But I disagree with\n>>> Matthias -- I don't think miscadmin.h can be the right answer\n>>> regardless.\n>\n> I put it into spin.h this time in commit 1, and include the extern\n> function VerifyNoSpinLocksHeld in spin.c into miscadmin.h like what we\n> did for ProcessInterrupts. This will easy the miscadmin dependency. the\n> changes for '#include xxx' looks better than before.\n\nI found a speical case about checking it in errstart. So commit 3 in v7\nis added. I'm not intent to commit 1 and commit 3 should be 2 sperate\ncommits, but making them 2 will be easy for discussion.\n\ncommit 757c67c1d4895ce6a523bcf5217af8eb2351e2a1 (HEAD -> s_stuck_v3)\nAuthor: yizhi.fzh <[email protected]>\nDate: Mon Jan 22 07:14:29 2024 +0800\n\n Bypass SpinLock checking in SIGQUIT signal hander\n \n When a process receives a SIGQUIT signal, it indicates the system has a\n crash time. It's possible that the process is just holding a Spin\n lock. By our current checking, this process will PANIC with a misuse of\n spinlock which is pretty prone to misunderstanding. so we need to bypass\n the spin lock holding checking in this case. 
It is safe since the\n overall system will be restarted.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 22 Jan 2024 07:26:04 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Andy Fan <[email protected]> writes:\n\n> I found a speical case about checking it in errstart. So commit 3 in v7\n> is added. \n>\n> commit 757c67c1d4895ce6a523bcf5217af8eb2351e2a1 (HEAD -> s_stuck_v3)\n> Author: yizhi.fzh <[email protected]>\n> Date: Mon Jan 22 07:14:29 2024 +0800\n>\n> Bypass SpinLock checking in SIGQUIT signal hander\n> \n\nI used sigismember(&BlockSig, SIGQUIT) to detect if a process is doing a\nquickdie, however this is bad not only because it doesn't work on\nWindows, but also it has too poor performance even it impacts on\nUSE_ASSERT_CHECKING build only. In v8, I introduced a new global\nvariable quickDieInProgress to handle this.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 22 Jan 2024 15:18:35 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "On Mon, Jan 22, 2024 at 2:22 AM Andy Fan <[email protected]> wrote:\n> I used sigismember(&BlockSig, SIGQUIT) to detect if a process is doing a\n> quickdie, however this is bad not only because it doesn't work on\n> Windows, but also it has too poor performance even it impacts on\n> USE_ASSERT_CHECKING build only. In v8, I introduced a new global\n> variable quickDieInProgress to handle this.\n\nOK, I like the split between 0001 and 0002. I still think 0001 has\ncosmetic problems, but if some committer wants to take it forward,\nthey can decide what to do about that; you and I going back and forth\ndoesn't seem like the right approach to sorting that out. Whether or\nnot 0002 is adopted might affect what we do about the cosmetics in\n0001, too.\n\n0003 seems ... unfortunate. It seems like an admission that 0001 is\nwrong. Surely it *isn't* right to ignore the spinlock restrictions in\nquickdie() in general. For example, we could self-deadlock if we try\nto acquire a spinlock we already hold. If the problem here is merely\nthe call in errstart() then maybe we need to rethink that particular\ncall. If it goes any deeper than that, maybe we've got actual bugs we\nneed to fix.\n\n+ * It's likely to check the BlockSig to know if it is doing a quickdie\n+ * with sigismember, but it is too expensive in test, so introduce\n+ * quickDieInProgress to avoid that.\n\nThis isn't very good English -- I realize that can sometimes be hard\n-- but also -- I don't think it likely that a future hacker would\nwonder why this isn't done that way. A static variable is normal for\nPostgreSQL; checking the signal mask would be a completely novel\napproach. So I think this comment is missing the mark topically. 
If\nthis patch is right at all, the comment here should focus on why\ndisabling these checks in quickdie() is necessary and appropriate, not\nwhy it's coded to match everything else in the system.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Jan 2024 10:35:53 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "\nRobert Haas <[email protected]> writes:\n\n> On Mon, Jan 22, 2024 at 2:22 AM Andy Fan <[email protected]> wrote:\n>> I used sigismember(&BlockSig, SIGQUIT) to detect if a process is doing a\n>> quickdie, however this is bad not only because it doesn't work on\n>> Windows, but also it has too poor performance even it impacts on\n>> USE_ASSERT_CHECKING build only. In v8, I introduced a new global\n>> variable quickDieInProgress to handle this.\n>\n> OK, I like the split between 0001 and 0002. I still think 0001 has\n> cosmetic problems, but if some committer wants to take it forward,\n> they can decide what to do about that; you and I going back and forth\n> doesn't seem like the right approach to sorting that out. Whether or\n> not 0002 is adopted might affect what we do about the cosmetics in\n> 0001, too.\n\nReplacing ASSERT_NO_SPIN_LOCK with VerifyNoSpinLocksHeld or adding more\ncomments to each call of VerifyNoSpinLocksHeld really makes me happy\nsince they make things more cosmetic and practical. So I'd be absolutely\nwilling to do more stuff like this. Thanks for such suggestions!\n\nThen I can't understand the left cosmetic problems... since you are\nsaying it may related to 0002, I guess you are talking about the naming\nof START_SPIN_LOCK and END_SPIN_LOCK?\n\n>\n> 0003 seems ... unfortunate. It seems like an admission that 0001 is\n> wrong.\n\nYes, that's what I was thinking. I doubted if I should merge 0003 to\n0001 directly during this discussion, and finally I made it separate for\neasier dicussion.\n\n> Surely it *isn't* right to ignore the spinlock restrictions in\n> quickdie() in general. For example, we could self-deadlock if we try\n> to acquire a spinlock we already hold. If the problem here is merely\n> the call in errstart() then maybe we need to rethink that particular\n> call. If it goes any deeper than that, maybe we've got actual bugs we\n> need to fix.\n\nI get your point! Acquiring an already held spinlock in quickdie is\nunlikely to happen, but since our existing infrastructure can handle it,\nthen there is no reason to bypass it. Since the problem here is just\nerrstart, we can do a if(!quickDieInProgress) VerifyNoSpinLocksHeld();\nin errstart only. Another place besides the errstart is the\nCHECK_FOR_INTERRUPTS in errfinish. I think we can add the same check for\nthe VerifyNoSpinLocksHeld in CHECK_FOR_INTERRUPTS.\n\n>\n> + * It's likely to check the BlockSig to know if it is doing a quickdie\n> + * with sigismember, but it is too expensive in test, so introduce\n> + * quickDieInProgress to avoid that.\n>\n> This isn't very good English -- I realize that can sometimes be hard\n> -- but also -- I don't think it likely that a future hacker would\n> wonder why this isn't done that way. A static variable is normal for\n> PostgreSQL; checking the signal mask would be a completely novel\n> approach. So I think this comment is missing the mark topically.\n\nI was wrong to think sigismember is a syscall, but now I see it is just\na function in glibc. 
Even I can't get the source code of it, I think it\nshould just be some bit-operation based on the definition of __sigset_t. \n\nAnother badness of sigismember is it is not avaiable in windows. It is\nstill unclear to me why sigaddset in quickdie can \ncompile in windows. (I have a sigismember version and get a linker\nerror at windows). This should be a blocker for me to submit the next\nversion of patch.\n\n> If\n> this patch is right at all, the comment here should focus on why\n> disabling these checks in quickdie() is necessary and appropriate, not\n> why it's coded to match everything else in the system.\n\nI agree, and I think the patch 0003 is not right at all:(\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 23 Jan 2024 00:16:10 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "On Mon, Jan 22, 2024 at 11:58 AM Andy Fan <[email protected]> wrote:\n> I get your point! Acquiring an already held spinlock in quickdie is\n> unlikely to happen, but since our existing infrastructure can handle it,\n> then there is no reason to bypass it.\n\nNo, the existing infrastructure cannot handle that at all.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Jan 2024 12:01:29 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "\nRobert Haas <[email protected]> writes:\n\n> On Mon, Jan 22, 2024 at 11:58 AM Andy Fan <[email protected]> wrote:\n>> I get your point! Acquiring an already held spinlock in quickdie is\n>> unlikely to happen, but since our existing infrastructure can handle it,\n>> then there is no reason to bypass it.\n>\n> No, the existing infrastructure cannot handle that at all.\n\nActually I mean we can handle it without 0003. am I still wrong?\nWithout the 0003, if we acquiring the spin lock which is held by\nourself already. VerifyNoSpinLocksHeld in SpinLockAcquire should catch\nit. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 23 Jan 2024 01:10:17 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "On Mon, Jan 22, 2024 at 12:13 PM Andy Fan <[email protected]> wrote:\n> > On Mon, Jan 22, 2024 at 11:58 AM Andy Fan <[email protected]> wrote:\n> >> I get your point! Acquiring an already held spinlock in quickdie is\n> >> unlikely to happen, but since our existing infrastructure can handle it,\n> >> then there is no reason to bypass it.\n> >\n> > No, the existing infrastructure cannot handle that at all.\n>\n> Actually I mean we can handle it without 0003. am I still wrong?\n> Without the 0003, if we acquiring the spin lock which is held by\n> ourself already. VerifyNoSpinLocksHeld in SpinLockAcquire should catch\n> it.\n\nBut that's only going to run in assert-only builds. 
The whole point of\nthe patch set is to tell developers that there are bugs in the code\nthat need fixing, not to catch problems that actually occur in\nproduction.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Jan 2024 12:25:14 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "\nRobert Haas <[email protected]> writes:\n\n> On Mon, Jan 22, 2024 at 12:13 PM Andy Fan <[email protected]> wrote:\n>> > On Mon, Jan 22, 2024 at 11:58 AM Andy Fan <[email protected]> wrote:\n>> >> I get your point! Acquiring an already held spinlock in quickdie is\n>> >> unlikely to happen, but since our existing infrastructure can handle it,\n>> >> then there is no reason to bypass it.\n>> >\n>> > No, the existing infrastructure cannot handle that at all.\n>>\n>> Actually I mean we can handle it without 0003. am I still wrong?\n>> Without the 0003, if we acquiring the spin lock which is held by\n>> ourself already. VerifyNoSpinLocksHeld in SpinLockAcquire should catch\n>> it.\n>\n> But that's only going to run in assert-only builds. The whole point of\n> the patch set is to tell developers that there are bugs in the code\n> that need fixing, not to catch problems that actually occur in\n> production.\n\nI see. As to this aspect, then yes.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 23 Jan 2024 01:31:59 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi,\n\nOn 2024-01-18 14:00:58 -0500, Robert Haas wrote:\n> > The LockBufHdr also used init_local_spin_delay / perform_spin_delay\n> > infrastruce and then it has the same issue like ${subject}, it is pretty\n> > like the code in s_lock; Based on my current knowledge, I think we\n> > should add the check there.\n> \n> I'd like to hear from Andres, if possible. @Andres: Should these\n> sanity checks apply only to spin locks per se, or also to buffer\n> header locks?\n\nThey also should apply to buffer header locks. The exact same dangers apply\nthere. The only reason this isn't using a plain spinlock is that this way we\ncan modify more state with a single atomic operation. But all the dangers of\nusing spinlocks apply just as well.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 Jan 2024 12:07:04 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi,\n\nOn 2024-01-22 15:18:35 +0800, Andy Fan wrote:\n> I used sigismember(&BlockSig, SIGQUIT) to detect if a process is doing a\n> quickdie, however this is bad not only because it doesn't work on\n> Windows, but also it has too poor performance even it impacts on\n> USE_ASSERT_CHECKING build only. In v8, I introduced a new global\n> variable quickDieInProgress to handle this.\n\nFor reasons you already noted, using sigismember() isn't viable. But I am not\nconvinced by quickDieInProgress either. I think we could just reset the state\nfor the spinlock check in the code handling PANIC, perhaps via a helper\nfunction in spin.c.\n\n\n> +void\n> +VerifyNoSpinLocksHeld(void)\n> +{\n> +#ifdef USE_ASSERT_CHECKING\n> +\tif (last_spin_lock_file != NULL)\n> +\t\telog(PANIC, \"A spin lock has been held at %s:%d\",\n> +\t\t\t last_spin_lock_file, last_spin_lock_lineno);\n> +#endif\n> +}\n\nI think the #ifdef for this needs to be in the header, not here. 
Otherwise we\nadd a pointless external function call to a bunch of performance critical\ncode.\n\n\n> From f09518df76572adca85cba5008ea0cae5074603a Mon Sep 17 00:00:00 2001\n> From: \"yizhi.fzh\" <[email protected]>\n> Date: Fri, 19 Jan 2024 13:57:46 +0800\n> Subject: [PATCH v8 2/3] Treat (un)LockBufHdr as a SpinLock.\n> \n> The LockBufHdr also used init_local_spin_delay / perform_spin_delay\n> infrastructure and so it is also possible that PANIC the system\n> when it can't be acquired in a short time, and its code is pretty\n> similar with s_lock. so treat it same as SPIN lock when regarding to\n> misuse of spinlock detection.\n> ---\n> src/backend/storage/buffer/bufmgr.c | 1 +\n> src/include/storage/buf_internals.h | 1 +\n> 2 files changed, 2 insertions(+)\n> \n> diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\n> index 7d601bef6d..c600a113cf 100644\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n> @@ -5409,6 +5409,7 @@ LockBufHdr(BufferDesc *desc)\n> \n> \tinit_local_spin_delay(&delayStatus);\n> \n> +\tSTART_SPIN_LOCK();\n> \twhile (true)\n> \t{\n> \t\t/* set BM_LOCKED flag */\n\nSeems pretty odd that we now need init_local_spin_delay() and\nSTART_SPIN_LOCK(). Note that init_local_spin_delay() also wraps handling of\n__FILE__, __LINE__ etc, so it seems we're duplicating state here.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 Jan 2024 12:15:12 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi, \n\n> Hi,\n>\n> On 2024-01-22 15:18:35 +0800, Andy Fan wrote:\n>> I used sigismember(&BlockSig, SIGQUIT) to detect if a process is doing a\n>> quickdie, however this is bad not only because it doesn't work on\n>> Windows, but also it has too poor performance even it impacts on\n>> USE_ASSERT_CHECKING build only. In v8, I introduced a new global\n>> variable quickDieInProgress to handle this.\n>\n> For reasons you already noted, using sigismember() isn't viable. But I am not\n> convinced by quickDieInProgress either. I think we could just reset the state\n> for the spinlock check in the code handling PANIC, perhaps via a helper\n> function in spin.c.\n\nHandled with the action for your suggestion #3. \n\n>> +void\n>> +VerifyNoSpinLocksHeld(void)\n>> +{\n>> +#ifdef USE_ASSERT_CHECKING\n>> +\tif (last_spin_lock_file != NULL)\n>> +\t\telog(PANIC, \"A spin lock has been held at %s:%d\",\n>> +\t\t\t last_spin_lock_file, last_spin_lock_lineno);\n>> +#endif\n>> +}\n>\n> I think the #ifdef for this needs to be in the header, not here. Otherwise we\n> add a pointless external function call to a bunch of performance critical\n> code.\n>\nDone.\n\n>> diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\n>> index 7d601bef6d..c600a113cf 100644\n>> --- a/src/backend/storage/buffer/bufmgr.c\n>> +++ b/src/backend/storage/buffer/bufmgr.c\n>> @@ -5409,6 +5409,7 @@ LockBufHdr(BufferDesc *desc)\n>> \n>> \tinit_local_spin_delay(&delayStatus);\n>> \n>> +\tSTART_SPIN_LOCK();\n>> \twhile (true)\n>> \t{\n>> \t\t/* set BM_LOCKED flag */\n>\n> Seems pretty odd that we now need init_local_spin_delay() and\n> START_SPIN_LOCK(). Note that init_local_spin_delay() also wraps handling of\n> __FILE__, __LINE__ etc, so it seems we're duplicating state here.\n\nThanks for catching this! Based on the feedbacks so far, it is not OK to\nacquire another spin lock when holding one already. 
So I refactor the\ncode like this:\n\n /*\n- * Support for spin delay which is useful in various places where\n- * spinlock-like procedures take place.\n+ * Support for spin delay and spin misuse detection purpose.\n+ *\n+ * spin delay which is useful in various places where spinlock-like\n+ * procedures take place.\n+ *\n+ * spin misuse is based on global spinStatus to know if a spin lock\n+ * is held when a heavy operation is taking.\n */\n typedef struct\n {\n@@ -846,22 +854,40 @@ typedef struct\n \tconst char *file;\n \tint\t\t\tline;\n \tconst char *func;\n-} SpinDelayStatus;\n+\tbool\t\tin_panic; /* works for spin lock misuse purpose. */\n+} SpinLockStatus;\n\n+extern PGDLLIMPORT SpinLockStatus spinStatus;\n\nNow all the init_local_spin_delay, perform_spin_delay, finish_spin_delay\naccess the same global variable spinStatus. and just 2 extra functions\nadded (previously have 3). they are:\n\nextern void VerifyNoSpinLocksHeld(bool check_in_panic);\nextern void ResetSpinLockStatus(void);\n\nThe panic check stuff is still added into spinLockStatus. \n\nv9 attached.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 25 Jan 2024 15:24:17 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "\nAndres Freund <[email protected]> writes:\n\n> On 2024-01-18 14:00:58 -0500, Robert Haas wrote:\n>> > The LockBufHdr also used init_local_spin_delay / perform_spin_delay\n>> > infrastruce and then it has the same issue like ${subject}, it is pretty\n>> > like the code in s_lock; Based on my current knowledge, I think we\n>> > should add the check there.\n>> \n>> I'd like to hear from Andres, if possible. @Andres: Should these\n>> sanity checks apply only to spin locks per se, or also to buffer\n>> header locks?\n>\n> They also should apply to buffer header locks. The exact same dangers apply\n> there. The only reason this isn't using a plain spinlock is that this way we\n> can modify more state with a single atomic operation. But all the dangers of\n> using spinlocks apply just as well.\n\nThanks for speaking on this! \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Thu, 25 Jan 2024 15:36:41 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi,\n\nThere are some similarities between this and\nhttps://www.postgresql.org/message-id/20240207184050.rkvpuudq7huijmkb%40awork3.anarazel.de\nas described in that email.\n\n\nOn 2024-01-25 15:24:17 +0800, Andy Fan wrote:\n> From 4c8fd0ab71299e57fbeb18dec70051bd1d035c7c Mon Sep 17 00:00:00 2001\n> From: \"yizhi.fzh\" <[email protected]>\n> Date: Thu, 25 Jan 2024 15:19:49 +0800\n> Subject: [PATCH v9 1/1] Detect misuse of spin lock automatically\n>\n> Spin lock are intended for very short-term locks, but it is possible\n> to be misused in many cases. e.g. Acquiring another LWLocks or regular\n> locks, memory allocation, errstart when holding a spin lock. 
this patch\n> would detect such misuse automatically in a USE_ASSERT_CHECKING build.\n\n> CHECK_FOR_INTERRUPTS should be avoided as well when holding a spin lock.\n> Depends on what signals are left to handle, PG may raise error/fatal\n> which would cause the code jump to some other places which is hardly to\n> release the spin lock anyway.\n> ---\n> src/backend/storage/buffer/bufmgr.c | 24 +++++++----\n> src/backend/storage/lmgr/lock.c | 6 +++\n> src/backend/storage/lmgr/lwlock.c | 21 +++++++---\n> src/backend/storage/lmgr/s_lock.c | 63 ++++++++++++++++++++---------\n> src/backend/tcop/postgres.c | 6 +++\n> src/backend/utils/error/elog.c | 10 +++++\n> src/backend/utils/mmgr/mcxt.c | 16 ++++++++\n> src/include/miscadmin.h | 21 +++++++++-\n> src/include/storage/buf_internals.h | 1 +\n> src/include/storage/s_lock.h | 56 ++++++++++++++++++-------\n> src/tools/pgindent/typedefs.list | 2 +-\n> 11 files changed, 176 insertions(+), 50 deletions(-)\n>\n> diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\n> index 7d601bef6d..739a94209b 100644\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n> @@ -5402,12 +5402,11 @@ rlocator_comparator(const void *p1, const void *p2)\n> uint32\n> LockBufHdr(BufferDesc *desc)\n> {\n> -\tSpinDelayStatus delayStatus;\n> \tuint32\t\told_buf_state;\n>\n> \tAssert(!BufferIsLocal(BufferDescriptorGetBuffer(desc)));\n>\n> -\tinit_local_spin_delay(&delayStatus);\n> +\tinit_local_spin_delay();\n>\n> \twhile (true)\n> \t{\n\nHm, I think this might make this code a bit more expensive. It's cheaper, both\nin the number of instructions and their cost, to set variables on the stack\nthan in global memory - and it's already performance critical code. I think\nwe need to optimize the code so that we only do init_local_spin_delay() once\nwe are actually spinning, rather than also on uncontended locks.\n\n\n\n> @@ -5432,20 +5431,29 @@ LockBufHdr(BufferDesc *desc)\n> static uint32\n> WaitBufHdrUnlocked(BufferDesc *buf)\n> {\n> -\tSpinDelayStatus delayStatus;\n> \tuint32\t\tbuf_state;\n>\n> -\tinit_local_spin_delay(&delayStatus);\n> +\t/*\n> +\t * Suppose the buf will not be locked for a long time, setup a spin on\n> +\t * this.\n> +\t */\n> +\tinit_local_spin_delay();\n\nI don't know what this comment really means.\n\n\n\n> +#ifdef USE_ASSERT_CHECKING\n> +void\n> +VerifyNoSpinLocksHeld(bool check_in_panic)\n> +{\n> +\tif (!check_in_panic && spinStatus.in_panic)\n> +\t\treturn;\n\nWhy do we need this?\n\n\n\n> diff --git a/src/include/storage/s_lock.h b/src/include/storage/s_lock.h\n> index aa06e49da2..c3fe75a41d 100644\n> --- a/src/include/storage/s_lock.h\n> +++ b/src/include/storage/s_lock.h\n> @@ -652,7 +652,10 @@ tas(volatile slock_t *lock)\n> */\n> #if !defined(S_UNLOCK)\n> #define S_UNLOCK(lock)\t\\\n> -\tdo { __asm__ __volatile__(\"\" : : : \"memory\"); *(lock) = 0; } while (0)\n> +\tdo { __asm__ __volatile__(\"\" : : : \"memory\"); \\\n> +\t\tResetSpinLockStatus(); \\\n> +\t\t*(lock) = 0; \\\n> +} while (0)\n> #endif\n\nThat seems like the wrong place. 
There are other definitions of S_UNLOCK(), so\nwe clearly can't do this here.\n\n\n> /*\n> - * Support for spin delay which is useful in various places where\n> - * spinlock-like procedures take place.\n> + * Support for spin delay and spin misuse detection purpose.\n> + *\n> + * spin delay which is useful in various places where spinlock-like\n> + * procedures take place.\n> + *\n> + * spin misuse is based on global spinStatus to know if a spin lock\n> + * is held when a heavy operation is taking.\n> */\n> typedef struct\n> {\n> @@ -846,22 +854,40 @@ typedef struct\n> \tconst char *file;\n> \tint\t\t\tline;\n> \tconst char *func;\n> -} SpinDelayStatus;\n> +\tbool\t\tin_panic; /* works for spin lock misuse purpose. */\n> +} SpinLockStatus;\n>\n> +extern PGDLLIMPORT SpinLockStatus spinStatus;\n> +\n> +#ifdef USE_ASSERT_CHECKING\n> +extern void VerifyNoSpinLocksHeld(bool check_in_panic);\n> +extern void ResetSpinLockStatus(void);\n> +#else\n> +#define VerifyNoSpinLocksHeld(check_in_panic) ((void) true)\n> +#define ResetSpinLockStatus() ((void) true)\n> +#endif\n> +\n> +/*\n> + * start the spin delay logic and record the places where the spin lock\n> + * is held which is also helpful for spin lock misuse detection purpose.\n> + * init_spin_delay should be called with ResetSpinLockStatus in pair.\n> + */\n> static inline void\n> -init_spin_delay(SpinDelayStatus *status,\n> -\t\t\t\tconst char *file, int line, const char *func)\n> +init_spin_delay(const char *file, int line, const char *func)\n> {\n> -\tstatus->spins = 0;\n> -\tstatus->delays = 0;\n> -\tstatus->cur_delay = 0;\n> -\tstatus->file = file;\n> -\tstatus->line = line;\n> -\tstatus->func = func;\n> +\t/* it is not allowed to spin another lock when holding one already. */\n> +\tVerifyNoSpinLocksHeld(true);\n> +\tspinStatus.spins = 0;\n> +\tspinStatus.delays = 0;\n> +\tspinStatus.cur_delay = 0;\n> +\tspinStatus.file = file;\n> +\tspinStatus.line = line;\n> +\tspinStatus.func = func;\n> +\tspinStatus.in_panic = false;\n> }\n>\n> -#define init_local_spin_delay(status) init_spin_delay(status, __FILE__, __LINE__, __func__)\n> -extern void perform_spin_delay(SpinDelayStatus *status);\n> -extern void finish_spin_delay(SpinDelayStatus *status);\n> +#define init_local_spin_delay() init_spin_delay( __FILE__, __LINE__, __func__)\n> +extern void perform_spin_delay(void);\n> +extern void finish_spin_delay(void);\n\nAs an API this doesn't quite make sense to me. For one, right now an\nuncontended SpinLockAcquire afaict will not trigger this mechanism, as we\nnever call init_spin_delay(). 
It also adds overhead to optimized builds, as we\nnow maintain state in global variables instead of local memory.\n\n\nMaybe we could have\n\n- spinlock_prepare_acquire() - about to acquire a spinlock\n empty in optimized builds, asserts that no other spinlock is held etc\n\n This would get called in SpinLockAcquire(), LockBufHdr() etc.\n\n\n- spinlock_finish_acquire() - have acquired spinlock\n empty in optimized builds, in assert builds sets variable indicating we're\n in spinlock\n\n This would get called in SpinLockRelease() etc.\n\n\n- spinlock_finish_release() - not holding the lock anymore\n\n This would get called by SpinLockRelease(), UnlockBufHdr()\n\n\n- spinlock_prepare_spin() - about to spin waiting for a spinlock\n like the current init_spin_delay()\n\n This would get called in s_lock(), LockBufHdr() etc.\n\n\n- spinlock_finish_spin() - completed waiting for a spinlock\n like the current finish_spin_delay()\n\n This would get called in s_lock(), LockBufHdr() etc.\n\n\nI don't love the spinlock_ prefix, that could end up confusing people. I toyed\nwith \"spinlike_\" but am not in love either.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 7 Feb 2024 11:15:57 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" }, { "msg_contents": "Hi, \n\n> There are some similarities between this and\n> https://www.postgresql.org/message-id/20240207184050.rkvpuudq7huijmkb%40awork3.anarazel.de\n> as described in that email.\n\nThanks for this information.\n\n>\n>\n> Hm, I think this might make this code a bit more expensive. It's cheaper, both\n> in the number of instructions and their cost, to set variables on the stack\n> than in global memory - and it's already performance critical code. I think\n> we need to optimize the code so that we only do init_local_spin_delay() once\n> we are actually spinning, rather than also on uncontended locks.\n\nA great lession learnt, thanks for highlighting this!\n>\n>\n>\n>> @@ -5432,20 +5431,29 @@ LockBufHdr(BufferDesc *desc)\n>> static uint32\n>> WaitBufHdrUnlocked(BufferDesc *buf)\n>> {\n>> -\tSpinDelayStatus delayStatus;\n>> \tuint32\t\tbuf_state;\n>>\n>> -\tinit_local_spin_delay(&delayStatus);\n>> +\t/*\n>> +\t * Suppose the buf will not be locked for a long time, setup a spin on\n>> +\t * this.\n>> +\t */\n>> +\tinit_local_spin_delay();\n>\n> I don't know what this comment really means.\n\nHmm, copy-paste error. Removed in v10.\n\n>\n>\n>> +#ifdef USE_ASSERT_CHECKING\n>> +void\n>> +VerifyNoSpinLocksHeld(bool check_in_panic)\n>> +{\n>> +\tif (!check_in_panic && spinStatus.in_panic)\n>> +\t\treturn;\n>\n> Why do we need this?\n\nWe disallow errstart when a spin lock is held then there are two\nspeical cases need to be handled.\n\na). quickdie signal handler. The reason is explained with the below\ncomments. \n\n/*\n * It is possible that getting here when holding a spin lock already.\n * However current function needs some actions like elog which are\n * disallowed when holding a spin lock by spinlock misuse detection\n * system. So tell that system to treat this specially.\n */\nspinStatus.in_panic = true;\n\nb). 
VerifyNoSpinLocksHeld function.\n\nif (spinStatus.func != NULL)\n{\n\t/*\n\t * Now we have held a spin lock and then errstart is disallow, \n\t * to avoid the endless recursive call of VerifyNoSpinLocksHeld\n\t * because of the VerifyNoSpinLocksHeld checks in errstart,\n\t * set spinStatus.in_panic to true to break the cycle.\n\t */\n\tspinStatus.in_panic = true;\n\telog(PANIC, \"A spin lock has been held at %s:%d\",\n\t\t spinStatus.func, spinStatus.line);\n}\n\n>\n>\n>> diff --git a/src/include/storage/s_lock.h b/src/include/storage/s_lock.h\n>> index aa06e49da2..c3fe75a41d 100644\n>> --- a/src/include/storage/s_lock.h\n>> +++ b/src/include/storage/s_lock.h\n>> @@ -652,7 +652,10 @@ tas(volatile slock_t *lock)\n>> */\n>> #if !defined(S_UNLOCK)\n>> #define S_UNLOCK(lock)\t\\\n>> -\tdo { __asm__ __volatile__(\"\" : : : \"memory\"); *(lock) = 0; } while (0)\n>> +\tdo { __asm__ __volatile__(\"\" : : : \"memory\"); \\\n>> +\t\tResetSpinLockStatus(); \\\n>> +\t\t*(lock) = 0; \\\n>> +} while (0)\n>> #endif\n>\n> That seems like the wrong place. There are other definitions of S_UNLOCK(), so\n> we clearly can't do this here.\n\nTrue, in the v10, the spinlock_finish_release() is called in\nSpinLockRelease. The side effect is if user calls S_UNLOCK directly,\nthere would be something wrong because lack of call\nspinlock_finish_release, I found this usage in regress.c (I changed it\nto SpinLockRelease). but I think it is OK because:\n\n1) in s_lock.h, there is clear comment say:\n\n *\tNOTE: none of the macros in this file are intended to be called directly.\n *\tCall them through the hardware-independent macros in spin.h.\n\n2). If someone breaks the above rule, the issue can be found easily in\nassert build.\n\n3). It has no impact on release build.\n\n>>\n>> -#define init_local_spin_delay(status) init_spin_delay(status, __FILE__, __LINE__, __func__)\n>> -extern void perform_spin_delay(SpinDelayStatus *status);\n>> -extern void finish_spin_delay(SpinDelayStatus *status);\n>> +#define init_local_spin_delay() init_spin_delay( __FILE__, __LINE__, __func__)\n>> +extern void perform_spin_delay(void);\n>> +extern void finish_spin_delay(void);\n>\n> As an API this doesn't quite make sense to me. For one, right now an\n> uncontended SpinLockAcquire afaict will not trigger this mechanism, as we\n> never call init_spin_delay().\n\nAnother great lesssion learnt, thanks for this as well!\n\n>\n>\n> Maybe we could have\n\nI moved on with the below suggestion with some small modification.\n\n>\n> - spinlock_prepare_acquire() - about to acquire a spinlock\n> empty in optimized builds, asserts that no other spinlock is held\n> etc\n>\n> This would get called in SpinLockAcquire(), LockBufHdr() etc.\n\n\"asserts that no other spinlock\" has much more user cases than\nSpinLockAcquire / LockBufHdr, I think sharing the same function will be\nOK which is VerifyNoSpinLocksHeld function for now. \n\n\n> - spinlock_finish_acquire() - have acquired spinlock\n> empty in optimized builds, in assert builds sets variable indicating we're\n> in spinlock\n>\n> This would get called in SpinLockRelease() etc.\n\nI think you mean \"SpinLockAcquire\" here.\n\nMatthias suggested \"we need to 'arm' the checks just before we lock the\nspin lock, and 'disarm' the checks just after we unlock the spin lock\"\nat [1], I'm kind of persuaded by that. so I used\nspinlock_prepare_acquire to set variable indicating we're in\nspinlock. which one do you prefer now? 
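\n\nJust to spell out what the two options mean at the call site (purely illustrative, not the v10 code; today spin.h essentially maps SpinLockAcquire(lock) straight to S_LOCK(lock)):\n\n/* Option A (what v10 does): verify and arm before we even try to take\n * the lock, so a PANIC raised while we are still spinning is covered too. */\n#define SpinLockAcquire(lock)\t(spinlock_prepare_acquire(), S_LOCK(lock))\n\n/* Option B (what the name spinlock_finish_acquire suggests): prepare only\n * asserts that nothing is held; the in-a-spinlock flag is set once the\n * lock is really ours. */\n#define SpinLockAcquire(lock)\t(spinlock_prepare_acquire(), S_LOCK(lock), spinlock_finish_acquire())\n\nOnly one of the two definitions would exist of course; the difference is just whether the flag is set before or after the TAS.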
\n\n>\n> - spinlock_finish_release() - not holding the lock anymore\n>\n> This would get called by SpinLockRelease(), UnlockBufHdr()\n>\n>\n> - spinlock_prepare_spin() - about to spin waiting for a spinlock\n> like the current init_spin_delay()\n>\n> This would get called in s_lock(), LockBufHdr() etc.\n>\n>\n> - spinlock_finish_spin() - completed waiting for a spinlock\n> like the current finish_spin_delay()\n>\n> This would get called in s_lock(), LockBufHdr() etc.\n\nAll done in v10, for consistent purpose, I also renamed\nperform_spin_delay to spinlock_perform_delay. \n\nI have got much more than what I expected before in this review process,\nthank you very much about that!\n\n[1]\nhttps://www.postgresql.org/message-id/CAEze2WggP-2Dhocmdhp-LxBzic%3DMXRgGA_tmv1G_9n-PDt2MQg%40mail.gmail.com\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 08 Feb 2024 21:56:24 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the s_lock_stuck on perform_spin_delay" } ]
[ { "msg_contents": "Hi,\n\nWe have documentation on how to upgrade \"publisher\" and \"subscriber\"\nat [1], but currently we do not have any documentation on how to\nupgrade logical replication clusters.\nHere is a patch to document how to upgrade different logical\nreplication clusters: a) Upgrade 2 node logical replication cluster b)\nUpgrade cascaded logical replication cluster c) Upgrade 2 node\ncircular logical replication cluster.\nThoughts?\n\n[1] - https://www.postgresql.org/docs/devel/pgupgrade.html\n\nRegards,\nVignesh", "msg_date": "Thu, 4 Jan 2024 14:21:51 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Documentation to upgrade logical replication cluster" }, { "msg_contents": "Dear Vignesh,\r\n\r\nThanks for making a patch! Below part is my comments.\r\n\r\n1.\r\nOnly two steps were added an id, but I think it should be for all the steps.\r\nSee [1].\r\n\r\n2.\r\nI'm not sure it should be listed as step 10. I felt that it should be new section.\r\nAt that time other steps like \"Prepare for {publisher|subscriber} upgrades\" can be moved as well.\r\nThought?\r\n\r\n3.\r\n```\r\n+ The prerequisites of publisher upgrade applies to logical Replication\r\n```\r\n\r\nReplication -> replication\r\n\r\n4.\r\n```\r\n+ <para>\r\n+ Let's say publisher is in <literal>node1</literal> and subscriber is\r\n+ in <literal>node2</literal>.\r\n+ </para>\r\n```\r\n\r\nI felt it is more friendly if you added the name of directory for each instance.\r\n\r\n5.\r\nYou did not write the initialization of new node. Was it intentional?\r\n\r\n6.\r\n```\r\n+ <para>\r\n+ Disable all the subscriptions on <literal>node2</literal> that are\r\n+ subscribing the changes from <literal>node1</literal> by using\r\n+ <link linkend=\"sql-altersubscription-params-disable\"><command>ALTER SUBSCRIPTION ... DISABLE</command></link>,\r\n+ for e.g.:\r\n+<programlisting>\r\n+node2=# ALTER SUBSCRIPTION sub1_node1_node2 DISABLE;\r\n+ALTER SUBSCRIPTION\r\n+node2=# ALTER SUBSCRIPTION sub2_node1_node2 DISABLE;\r\n+ALTER SUBSCRIPTION\r\n+</programlisting>\r\n+ </para>\r\n```\r\n\r\nSubscriptions are disabled after stopping a publisher, but it leads ERRORs on the publisher.\r\nI think it's better to swap these steps.\r\n\r\n7.\r\n```\r\n+<programlisting>\r\n+dba@node1:/opt/PostgreSQL/postgres/&majorversion;/bin$ pg_ctl -D /opt/PostgreSQL/pub_data stop -l logfile\r\n+</programlisting>\r\n```\r\n\r\nHmm. I thought you did not have to show the current directory. You were in the\r\nbin dir, but it is not our requirement, right? \r\n\r\n8.\r\n```\r\n+<programlisting>\r\n+dba@node1:/opt/PostgreSQL/postgres/&majorversion;/bin$ pg_upgrade\r\n+ --old-datadir \"/opt/PostgreSQL/postgres/17/pub_data\"\r\n+ --new-datadir \"/opt/PostgreSQL/postgres/&majorversion;/pub_upgraded_data\"\r\n+ --old-bindir \"/opt/PostgreSQL/postgres/17/bin\"\r\n+ --new-bindir \"/opt/PostgreSQL/postgres/&majorversion;/bin\"\r\n+</programlisting>\r\n```\r\n\r\nFor PG17, both old and new bindir look the same. 
Can we use 18 as new-bindir?\r\n\r\n9.\r\n```\r\n+ <para>\r\n+ Create any tables that were created in <literal>node2</literal>\r\n+ between step-2 and now, for e.g.:\r\n+<programlisting>\r\n+node2=# CREATE TABLE distributors (\r\n+node2(# did integer CONSTRAINT no_null NOT NULL,\r\n+node2(# name varchar(40) NOT NULL\r\n+node2(# );\r\n+CREATE TABLE\r\n+</programlisting>\r\n+ </para>\r\n```\r\n\r\nI think this SQLs must be done on node1, because it has not boot between step-2\r\nand step-7.\r\n\r\n10.\r\n```\r\n+ <step>\r\n+ <para>\r\n+ Enable all the subscriptions on <literal>node2</literal> that are\r\n+ subscribing the changes from <literal>node1</literal> by using\r\n+ <link linkend=\"sql-altersubscription-params-enable\"><command>ALTER SUBSCRIPTION ... ENABLE</command></link>,\r\n+ for e.g.:\r\n+<programlisting>\r\n+node2=# ALTER SUBSCRIPTION sub1_node1_node2 ENABLE;\r\n+ALTER SUBSCRIPTION\r\n+node2=# ALTER SUBSCRIPTION sub2_node1_node2 ENABLE;\r\n+ALTER SUBSCRIPTION\r\n+</programlisting>\r\n+ </para>\r\n+ </step>\r\n+\r\n+ <step>\r\n+ <para>\r\n+ Refresh the publications using\r\n+ <link linkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>,\r\n+ for e.g.:\r\n+<programlisting>\r\n+node2=# ALTER SUBSCRIPTION sub1_node1_node2 REFRESH PUBLICATION;\r\n+ALTER SUBSCRIPTION\r\n+node2=# ALTER SUBSCRIPTION sub2_node1_node2 REFRESH PUBLICATION;\r\n+ALTER SUBSCRIPTION\r\n+</programlisting>\r\n+ </para>\r\n+ </step>\r\n```\r\n\r\nI was very confused the location where they would be really do. If my above\r\ncomment is correct, should they be executed on node1 as well? Could you please all\r\nthe notation again?\r\n\r\n11.\r\n```\r\n+ <para>\r\n+ Disable all the subscriptions on <literal>node1</literal> that are\r\n+ subscribing the changes from <literal>node2</literal> by using\r\n+ <link linkend=\"sql-altersubscription-params-disable\"><command>ALTER SUBSCRIPTION ... DISABLE</command></link>,\r\n+ for e.g.:\r\n+<programlisting>\r\n+node2=# ALTER SUBSCRIPTION sub1_node2_node1 DISABLE;\r\n+ALTER SUBSCRIPTION\r\n+node2=# ALTER SUBSCRIPTION sub2_node2_node1 DISABLE;\r\n+ALTER SUBSCRIPTION\r\n+</programlisting>\r\n+ </para>\r\n```\r\n\r\nThey should be on node1, but noted as node2.\r\n\r\n12.\r\n```\r\n+ <para>\r\n+ Enable all the subscriptions on <literal>node1</literal> that are\r\n+ subscribing the changes from <literal>node2</literal> by using\r\n+ <link linkend=\"sql-altersubscription-params-enable\"><command>ALTER SUBSCRIPTION ... ENABLE</command></link>,\r\n+ for e.g.:\r\n+<programlisting>\r\n+node2=# ALTER SUBSCRIPTION sub1_node2_node1 ENABLE;\r\n+ALTER SUBSCRIPTION\r\n+node2=# ALTER SUBSCRIPTION sub2_node2_node1 ENABLE;\r\n+ALTER SUBSCRIPTION\r\n+</programlisting>\r\n+ </para>\r\n```\r\n\r\nYou said that \"enable all the subscription on node1\", but SQLs are done on node2.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB58667AE04D291924671E2051F5879@TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Fri, 5 Jan 2024 03:38:37 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Documentation to upgrade logical replication cluster" }, { "msg_contents": "Here are some review comments for patch v1-0001.\n\n======\ndoc/src/sgml/ref/pgupgrade.sgml\n\n1. 
GENERAL - blank lines\n\nMost (but not all) of your procedure steps are preceded by blank lines\nto make them more readable in the SGML. Add the missing blank lines\nfor the steps that didn't have them.\n\n2. GENERAL - for e.g.:\n\nAll the \"for e.g:\" that precedes the code examples can just say\n\"e.g.:\" like in other examples on this page.\n\n~~~\n3. GENERAL - reference from elsewhere\n\nI was wondering if \"Chapter 30. Logical Replication\" should include a\nsection that references back to all this just to make it easier to\nfind.\n\n~~~\n\n4.\n+ <para>\n+ Migration of logical replication clusters can be done when all the members\n+ of the old logical replication clusters are version 17.0 or later.\n+ </para>\n\n/can be done when/is possible only when/\n\n~~~\n\n5.\n+ <para>\n+ The prerequisites of publisher upgrade applies to logical Replication\n+ cluster upgrades also. See <xref linkend=\"prepare-publisher-upgrades\"/>\n+ for the details of publisher upgrade prerequisites.\n+ </para>\n\n/applies to/apply to/\n/logical Replication/logical replication/\n\n~~~\n\n6.\n+ <para>\n+ The prerequisites of subscriber upgrade applies to logical Replication\n+ cluster upgrades also. See <xref linkend=\"prepare-subscriber-upgrades\"/>\n+ for the details of subscriber upgrade prerequisites.\n+ </para>\n+ </note>\n\n/applies to/apply to/\n/logical Replication/logical replication/\n\n~~~\n\n7.\n+ <para>\n+ The steps to upgrade logical replication clusters in various scenarios are\n+ given below.\n+ </para>\n\nThe 3 titles do not render very prominently, so it is too easy to get\nlost scrolling up and down looking for the different scenarios. If the\ntitle rendering can't be improved, at least a list of 3 links here\n(like a TOC) would be helpful.\n\n~~~\n\n//////////\nSteps to Upgrade 2 node logical replication cluster\n//////////\n\n8. GENERAL - server names\n\nI noticed in this set of steps you called the servers 'pub_data' and\n'pub_upgraded_data' and 'sub_data' and 'sub_upgraded_data'. I see it\nis easy to read like this, it is also different from all the\nsubsequent procedures where the names are just like 'data1', 'data2',\n'data3', and 'data1_upgraded', 'data2_upgraded', 'data3_upgraded'.\n\nI felt maybe it is better to use a consistent naming for all the procedures.\n\n~~~\n\n9.\n+ <step>\n+ <title>Steps to Upgrade 2 node logical replication cluster</title>\n\nSUGGESTION\nSteps to upgrade a two-node logical replication cluster\n\n~~~\n\n10.\n+\n+ <procedure>\n+ <step>\n+ <para>\n+ Let's say publisher is in <literal>node1</literal> and subscriber is\n+ in <literal>node2</literal>.\n+ </para>\n+ </step>\n\n10a.\nThis renders as Step 1. But IMO this should not be a \"step\" at all --\nit's just a description of the scenario.\n\n~\n\n10b.\nThe subsequent steps refer to subscriptions 'sub1_node1_node2' and\n'sub2_node1_node2'. IMO it would help with the example code if those\nare named up front here too. 
e.g.\n\nnode2 has two subscriptions for changes from node1:\nsub1_node1_node2\nsub2_node1_node2\n\n~~~\n\n11.\n+ <step>\n+ <para>\n+ Upgrade the publisher node <literal>node1</literal>'s server to the\n+ required newer version, for e.g.:\n\nThe wording repeating node/node1 seems complicated.\n\nSUGGESTION\nUpgrade the publisher node's server to the required newer version, e.g.:\n\n~~~\n\n12.\n+ <step>\n+ <para>\n+ Start the upgraded publisher node\n<literal>node1</literal>'s server, for e.g.:\n\nIMO better to use the similar wording used for the \"Stop\" step\n\nSUGGESTION\nStart the upgraded publisher server in node1, e.g.:\n\n~~~\n\n13.\n+ <step>\n+ <para>\n+ Upgrade the subscriber node <literal>node2</literal>'s server to\n+ the required new version, for e.g.:\n\nThe wording repeating node/node2 seems complicated.\n\nSUGGESTION\nUpgrade the subscriber node's server to the required newer version, e.g.:\n\n~~~\n\n14.\n+ <step>\n+ <para>\n+ Start the upgraded subscriber node <literal>node2</literal>'s server,\n+ for e.g.:\n\nIMO better to use the similar wording used for the \"Stop\" step\n\nSUGGESTION\nStart the upgraded subscriber server in node2, e.g.:\n\n~~~\n\n15.\n+ <step>\n+ <para>\n+ Create any tables that were created in the upgraded\npublisher <literal>node1</literal>\n+ server between step-5 and now, for e.g.:\n+<programlisting>\n+node2=# CREATE TABLE distributors (\n+node2(# did integer CONSTRAINT no_null NOT NULL,\n+node2(# name varchar(40) NOT NULL\n+node2(# );\n+CREATE TABLE\n+</programlisting>\n+ </para>\n+ </step>\n\n15a\nMaybe it is better to have a link to setp5 instead of just hardwiring\n\"Step-5\" in the text.\n\n~\n\n15b.\nI didn't think it was needed to spread the CREATE TABLE across\nmultiple lines. It is just a dummy example anyway so IMO better to use\nup less space.\n\n~~~\n\n16.\n+ <step>\n+ <para>\n+ Refresh the publications using\n+ <link\nlinkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER\nSUBSCRIPTION ... REFRESH PUBLICATION</command></link>,\n\n/Refresh the publications/Refresh the subscription's publications/\n\n~~~\n\n//////////\nSteps to upgrade cascaded logical replication clusters\n//////////\n\n(these comments are similar to those in the previous procedure, but I\nwill give them all again)\n\n17.\n+ <procedure>\n+ <step>\n+ <title>Steps to upgrade cascaded logical replication clusters</title>\n+ <procedure>\n+ <step>\n+ <para>\n+ Let's say we have a cascaded logical replication setup\n+ <literal>node1</literal>-><literal>node2</literal>-><literal>node3</literal>.\n+ Here <literal>node2</literal> is subscribing the changes from\n+ <literal>node1</literal> and <literal>node3</literal> is subscribing\n+ the changes from <literal>node2</literal>.\n+ </para>\n+ </step>\n\n17a.\nThis renders as Step 1. But IMO this should not be a \"step\" at all --\nit's just a description of the scenario.\n\n~\n\n17b.\nThe subsequent steps refer to subscriptions 'sub1_node1_node2' and\n'sub1_node1_node2' and 'sub1_node2_node3' and 'sub2_node2_node3'. 
IMO\nit would help with the example code if those are named up front here\ntoo, e.g.\n\nnode2 has two subscriptions for changes from node1:\nsub1_node1_node2\nsub2_node1_node2\n\nnode3 has two subscriptions for changes from node2:\nsub1_node2_node3\nsub2_node2_node3\n\n~~~\n\n18.\n+ <step>\n+ <para>\n+ Upgrade the publisher node <literal>node1</literal>'s server to the\n+ required newer version, for e.g.:\n\nI'm not sure it is good to call this the publisher node, because in\nthis scenario node2 is also a publisher node.\n\nSUGGESTION\nUpgrade the node1 server to the required newer version, e.g.:\n\n~~~\n\n19.\n+ <step>\n+ <para>\n+ Start the upgraded node <literal>node1</literal>'s server, for e.g.:\n\nSUGGESTION\nStart the upgraded node1's server, e.g.:\n\n~~~\n\n20.\n+ <step>\n+ <para>\n+ Upgrade the node <literal>node2</literal>'s server to the required\n+ new version, for e.g.:\n\nSUGGESTION\nUpgrade the node2 server to the required newer version, e.g.:\n\n~~~\n\n21.\n+ <step>\n+ <para>\n+ Start the upgraded node <literal>node2</literal>'s server, for e.g.:\n\nSUGGESTION\nStart the upgraded node2's server, e.g.:\n\n~~~\n\n22.\n+ <step>\n+ <para>\n+ Create any tables that were created in the upgraded\npublisher <literal>node1</literal>\n+ server between step-5 and now, for e.g.:\n\n22a\nMaybe this should say \"On node2, create any tables...\"\n\n~\n\n22b.\nMaybe it is better to have a link to step5 instead of just hardwiring\n\"Step-5\" in the text.\n\n~\n\n22c.\nI didn't think it was needed to spread the CREATE TABLE across\nmultiple lines. It is just a dummy example anyway so IMO better to use\nup less space.\n\n~~~\n\n23.\n+ <step>\n+ <para>\n+ Enable all the subscriptions on <literal>node2</literal> that are\n+ subscribing the changes from <literal>node2</literal> by using\n+ <link\nlinkend=\"sql-altersubscription-params-enable\"><command>ALTER\nSUBSCRIPTION ... ENABLE</command></link>,\n+ for e.g.:\n\nTypo: /subscribing the changes from node2/subscribing the changes from node1/\n\n~~~\n\n\n99.\n+ <step>\n+ <para>\n+ Refresh the publications using\n+ <link\nlinkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER\nSUBSCRIPTION ... REFRESH PUBLICATION</command></link>,\n+ for e.g.:\n\nSUGGESTION\nRefresh the node2 subscription's publications using...\n\n~~~\n\n25.\n+ <step>\n+ <para>\n+ Upgrade the node <literal>node3</literal>'s server to the required\n+ new version, for e.g.:\n\nSUGGESTION\nUpgrade the node3 server to the required newer version, e.g.:\n\n~~~\n\n26.\n+ <step>\n+ <para>\n+ Start the upgraded node <literal>node3</literal>'s server, for e.g.:\n\nSUGGESTION\nStart the upgraded node3's server, e.g.:\n\n~~~\n\n27.\n+ <step>\n+ <para>\n+ Create any tables that were created in the upgraded node\n+ <literal>node2</literal> between step-9 and now, for e.g.:\n\n27a.\nSUGGESTION\nOn node3, create any tables that were created in the upgraded node2 between...\n\n~\n\n27b.\nMaybe it is better to have a link to step9 instead of just hardwiring\n\"Step-9\" in the text.\n\n~\n\n27c.\nI didn't think it was needed to spread the CREATE TABLE across\nmultiple lines. It is just a dummy example anyway so IMO better to use\nup less space.\n\n~~~\n\n28.\n+ <step>\n+ <para>\n+ Refresh the publications using\n+ <link\nlinkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER\nSUBSCRIPTION ... 
REFRESH PUBLICATION</command></link>,\n+ for e.g.:\n\nSUGGESTION\nRefresh the node3 subscription's publications using...\n\n//////////\nSteps to Upgrade 2 node circular logical replication cluster</title>\n//////////\n\n(Again, some of these comments are similar to before, but I'll repeat\nthem anyhow)\n\n~~~\n\n29. GENERAL - Should this circular scenario even be mentioned?\n\nIIUC there are no other PG docs for describing how to set up and\nmanage a circular scenario like this. I know you wrote a blog about\nthis topic [1], and I think there was a documentation patch [2] about\nthis but it was never pushed.\n\nSo, I'm not sure it is appropriate to include these docs \"Steps to\nupgrade XXX\" when there are not even any docs about \"Steps to create\nXXX\".\n\n~~~\n\n30.\n+ <procedure>\n+ <step>\n+ <title>Steps to Upgrade 2 node circular logical replication\ncluster</title>\n\nSUGGESTION\nSteps to upgrade a two-node circular logical replication cluster\n\n~~~\n\n31.\n+ <step>\n+ <para>\n+ Let's say we have a circular logical replication setup\n+ <literal>node1</literal>-><literal>node2</literal> and\n+ <literal>node2</literal>-><literal>node1</literal>. Here\n+ <literal>node2</literal> is subscribing the changes from\n+ <literal>node1</literal> and <literal>node1</literal> is subscribing\n+ the changes from <literal>node2</literal>.\n+ </para>\n+ </step>\n\n31a\nThis renders as Step 1. But IMO this should not be a \"step\" at all --\nit's just a description of the scenario.\nREVIEW COMMENT 05/1\n\n~\n\n31b.\nThe subsequent steps refer to subscriptions 'sub1_node1_node2' and\n'sub2_node1_node2' and 'sub1_node2_node1' and 'sub1_node2_node1'. IMO\nit would help with the example code if those are named up front here\ntoo. e.g.\n\nnode1 has two subscriptions for changes from node2:\nsub1_node2_node1\nsub2_node2_node1\n\nnode2 has two subscriptions for changes from node1:\nsub1_node1_node2\nsub2_node1_node2\n\n~~~\n\n32.\n+ <step>\n+ <para>\n+ Upgrade the node <literal>node1</literal>'s server to the required\n+ newer version, for e.g.:\n\nSUGGESTION\nUpgrade the node1 server to the required newer version, e.g.:\n\n~~~\n\n33.\n+ <step>\n+ <para>\n+ Start the upgraded node <literal>node1</literal>'s server, for e.g.:\n\nSUGGESTION\nStart the upgraded node1's server, e.g.:\n\n~~~\n\n34.\n+ <step>\n+ <para>\n+ Wait till all the incremental changes are synchronized.\n+ </para>\n\nAny hint on how to do this?\n\n~~~\n\n35.\n+ <step>\n+ <para>\n+ Create any tables that were created in <literal>node2</literal>\n+ between step-2 and now, for e.g.:\n\n35a.\nThat doesn't seem right.\n- Don't you mean \"created in the upgraded node1\"?\n- Don't you mean \"between step-5\"?\n\nSUGGESTION\nOn node2, create any tables that were created in the upgraded node1\nbetween step5 and...\n\n~\n\n35b.\nMaybe it is better to have a link to step5 instead of just hardwiring\n\"Step-5\" in the text.\n\n~\n\n35c.\nI didn't think it was needed to spread the CREATE TABLE across\nmultiple lines. It is just a dummy example anyway so IMO better to use\nup less space.\n\n~~~\n\n36.\n+ <step>\n+ <para>\n+ Refresh the publications using\n+ <link\nlinkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER\nSUBSCRIPTION ... 
REFRESH PUBLICATION</command></link>,\n+ for e.g.:\n\nSUGGESTION\nRefresh the node2 subscription's publications using...\n\n~~~\n\n37.\n+ <step>\n+ <para>\n+ Disable all the subscriptions on <literal>node1</literal> that are\n+ subscribing the changes from <literal>node2</literal> by using\n+ <link\nlinkend=\"sql-altersubscription-params-disable\"><command>ALTER\nSUBSCRIPTION ... DISABLE</command></link>,\n+ for e.g.:\n+<programlisting>\n+node2=# ALTER SUBSCRIPTION sub1_node2_node1 DISABLE;\n+ALTER SUBSCRIPTION\n+node2=# ALTER SUBSCRIPTION sub2_node2_node1 DISABLE;\n+ALTER SUBSCRIPTION\n+</programlisting>\n+ </para>\n+ </step>\n\nThis example looks wrong. IIUC these commands should be done on node1\nbut the example shows a node2 prompt.\n\n~~~\n\n38.\n+ <step>\n+ <para>\n+ Upgrade the node <literal>node2</literal>'s server to the required\n+ new version, for e.g.:\n\nSUGGESTION\nUpgrade the node2 server to the required newer version, e.g.:\n\n~~~\n\n39.\n+ <step>\n+ <para>\n+ Start the upgraded node <literal>node2</literal>'s server, for e.g.:\n\nSUGGESTION\nStart the upgraded node2's server, e.g.:\n\n~~~\n\n40.\n+ <step>\n+ <para>\n+ Create any tables that were created in the upgraded node\n+ <literal>node1</literal> between step-10 and now, for e.g.:\n+<programlisting>\n+node2=# CREATE TABLE distributors (\n+node2(# did integer CONSTRAINT no_null NOT NULL,\n+node2(# name varchar(40) NOT NULL\n+node2(# );\n+CREATE TABLE\n+</programlisting>\n\n40a.\nThat doesn't seem right.\n- Don't you mean \"created in the upgraded node2\"?\n- Don't you mean \"between step-12\"?\n\nSUGGESTION\nOn node1, create any tables that were created in the upgraded node2\nbetween step12 and...\n\n~\n\n40b.\nMaybe it is better to have a link to step12 instead of just hardwiring\n\"Step-12\" in the text.\n\n~\n\n40c.\nI didn't think it was needed to spread the CREATE TABLE across\nmultiple lines. It is just a dummy example anyway so IMO better to use\nup less space.\n\n~~~\n\n41.\n+ <step>\n+ <para>\n+ Enable all the subscriptions on <literal>node1</literal> that are\n+ subscribing the changes from <literal>node2</literal> by using\n+ <link\nlinkend=\"sql-altersubscription-params-enable\"><command>ALTER\nSUBSCRIPTION ... ENABLE</command></link>,\n+ for e.g.:\n+<programlisting>\n+node2=# ALTER SUBSCRIPTION sub1_node2_node1 ENABLE;\n+ALTER SUBSCRIPTION\n+node2=# ALTER SUBSCRIPTION sub2_node2_node1 ENABLE;\n+ALTER SUBSCRIPTION\n+</programlisting>\n+ </para>\n+ </step>\n\nThe example looks wrong. IIUC these commands should be done on node1\nbut the example shows a node2 prompt.\n\n~~\n\n42.\n+ <step>\n+ <para>\n+ Refresh the publications using\n+ <link\nlinkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER\nSUBSCRIPTION ... REFRESH PUBLICATION</command></link>\n+ for e.g.:\n+<programlisting>\n+node2=# ALTER SUBSCRIPTION sub1_node1_node2 REFRESH PUBLICATION;\n+ALTER SUBSCRIPTION\n+node2=# ALTER SUBSCRIPTION sub2_node1_node2 REFRESH PUBLICATION;\n+ALTER SUBSCRIPTION\n+</programlisting>\n+ </para>\n+ </step>\n\n42a.\nSUGGESTION\nRefresh the node1 subscription's publications using...\n\n~\n\n42b.\nThe example looks wrong. 
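(Coming back to the 'Wait till all the incremental changes are synchronized' wording above -- if that wording stays, perhaps the docs could also hint at one concrete way to check it. E.g. after writes have stopped on the old publisher, something like the query below could be run on the publisher side; this is only a sketch, and the interesting row is the slot used by the subscription:

node1=# SELECT slot_name, confirmed_flush_lsn, pg_current_wal_lsn()
node1-# FROM pg_replication_slots WHERE slot_type = 'logical';

Once confirmed_flush_lsn has (more or less) caught up to pg_current_wal_lsn(), the subscriber has confirmed everything it was sent.)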
IIUC these commands should be done on node1\nbut the example shows a node2 prompt.\n\n======\n[1] https://www.postgresql.fastware.com/blog/bi-directional-replication-using-origin-filtering-in-postgresql\n[2] https://www.postgresql.org/message-id/CALDaNm3tv%2BnWMXO0q39EuwzbXEQyF5thT4Ha1PvfQ%2BfQgSdi_A%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 5 Jan 2024 16:18:57 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Fri, Jan 5, 2024 at 2:38 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n...\n> 2.\n> I'm not sure it should be listed as step 10. I felt that it should be new section.\n> At that time other steps like \"Prepare for {publisher|subscriber} upgrades\" can be moved as well.\n> Thought?\n\nDuring my review, I also felt that step 10 is now so long that it is a\ndistraction from the other content on this page.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 5 Jan 2024 16:25:44 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Thu, Jan 4, 2024 at 2:22 PM vignesh C <[email protected]> wrote:\n>\n> Hi,\n>\n> We have documentation on how to upgrade \"publisher\" and \"subscriber\"\n> at [1], but currently we do not have any documentation on how to\n> upgrade logical replication clusters.\n> Here is a patch to document how to upgrade different logical\n> replication clusters: a) Upgrade 2 node logical replication cluster b)\n> Upgrade cascaded logical replication cluster c) Upgrade 2 node\n> circular logical replication cluster.\n> Thoughts?\n>\n> [1] - https://www.postgresql.org/docs/devel/pgupgrade.html\n\nThanks for this. It really helps developers a lot. In addition to the\ndocs, why can't all of these steps be put into a perl/shell script or\na C tool sitting in the src/bin directory?\n\nI prefer a postgres src/bin tool which takes publisher and subscriber\nconnection strings as the inputs, talks to them and upgrades both\npublisher and subscriber. Of course, one can write such a tool outside\nof postgres in their own programming language, but the capability to\nupgrade postgres servers with logical replication is such an important\ntask one would often require it. 
Therefore, an off-the-shelf tool not\nonly avoids manual efforts but makes it effortless for the users,\nafter all, if any of the steps isn't performed as stated in the docs\nthe servers may end up in an inconsistent state.\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 Jan 2024 12:51:56 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Thu, Jan 4, 2024 at 2:22 PM vignesh C <[email protected]> wrote:\n>\n> We have documentation on how to upgrade \"publisher\" and \"subscriber\"\n> at [1], but currently we do not have any documentation on how to\n> upgrade logical replication clusters.\n> Here is a patch to document how to upgrade different logical\n> replication clusters: a) Upgrade 2 node logical replication cluster b)\n> Upgrade cascaded logical replication cluster c) Upgrade 2 node\n> circular logical replication cluster.\n>\n\nToday, off-list, I had a short discussion on this documentation with\nJonathan and Michael. I was not sure whether we should add this in the\nmain documentation of the upgrade or maintain it as a separate wiki\npage. My primary worry was that this seemed to be taking too much\nspace on pgupgrade page and making the information on that page a bit\nunreadable. Jonathan suggested that we can add this information to the\nlogical replication page [1] and add a reference in the pgupgrade\npage. That suggestion makes sense to me considering we have\nsub-sections like Monitoring, Security, and Configuration Settings on\nthe logical replication page. We can have a new sub-section Upgrade on\nthe same lines. What do you think?\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 10 Jan 2024 15:50:33 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Mon, Jan 8, 2024 at 12:52 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Jan 4, 2024 at 2:22 PM vignesh C <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > We have documentation on how to upgrade \"publisher\" and \"subscriber\"\n> > at [1], but currently we do not have any documentation on how to\n> > upgrade logical replication clusters.\n> > Here is a patch to document how to upgrade different logical\n> > replication clusters: a) Upgrade 2 node logical replication cluster b)\n> > Upgrade cascaded logical replication cluster c) Upgrade 2 node\n> > circular logical replication cluster.\n> > Thoughts?\n> >\n> > [1] - https://www.postgresql.org/docs/devel/pgupgrade.html\n>\n> Thanks for this. It really helps developers a lot. In addition to the\n> docs, why can't all of these steps be put into a perl/shell script or\n> a C tool sitting in the src/bin directory?\n>\n> I prefer a postgres src/bin tool which takes publisher and subscriber\n> connection strings as the inputs, talks to them and upgrades both\n> publisher and subscriber. Of course, one can write such a tool outside\n> of postgres in their own programming language, but the capability to\n> upgrade postgres servers with logical replication is such an important\n> task one would often require it. 
Therefore, an off-the-shelf tool not\n> only avoids manual efforts but makes it effortless for the users,\n> after all, if any of the steps isn't performed as stated in the docs\n> the servers may end up in an inconsistent state.\n>\n\nThis idea has merits but not sure if we just add a few tests that\nusers can refer to if they want or provide a utility as you described.\nI would prefer a test or two for now and if there is a demand then we\ncan consider having such a utility. In either case, I feel it is\nbetter discussed in a separate thread.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 10 Jan 2024 16:03:41 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Wed, 10 Jan 2024 at 15:50, Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jan 4, 2024 at 2:22 PM vignesh C <[email protected]> wrote:\n> >\n> > We have documentation on how to upgrade \"publisher\" and \"subscriber\"\n> > at [1], but currently we do not have any documentation on how to\n> > upgrade logical replication clusters.\n> > Here is a patch to document how to upgrade different logical\n> > replication clusters: a) Upgrade 2 node logical replication cluster b)\n> > Upgrade cascaded logical replication cluster c) Upgrade 2 node\n> > circular logical replication cluster.\n> >\n>\n> Today, off-list, I had a short discussion on this documentation with\n> Jonathan and Michael. I was not sure whether we should add this in the\n> main documentation of the upgrade or maintain it as a separate wiki\n> page. My primary worry was that this seemed to be taking too much\n> space on pgupgrade page and making the information on that page a bit\n> unreadable. Jonathan suggested that we can add this information to the\n> logical replication page [1] and add a reference in the pgupgrade\n> page. That suggestion makes sense to me considering we have\n> sub-sections like Monitoring, Security, and Configuration Settings on\n> the logical replication page. We can have a new sub-section Upgrade on\n> the same lines. What do you think?\n\nI feel that would be better, also others like Kuroda-san had said in\nthe similar lines at comment-2 at [1] and Peter also had similar\nopinion at [2]. I will handle this in the next version.\n\n[1] - https://www.postgresql.org/message-id/TY3PR01MB9889BD1202530E8310AC9B3DF5662%40TY3PR01MB9889.jpnprd01.prod.outlook.com\n[2] - https://www.postgresql.org/message-id/CAHut%2BPs4AtGB9MMK51%3D1Z1JQ1FUK%2BX0oXQuAdEad1kEEuw7%2BkA%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 11 Jan 2024 09:50:19 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Fri, 5 Jan 2024 at 09:08, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Vignesh,\n>\n> Thanks for making a patch! Below part is my comments.\n>\n> 1.\n> Only two steps were added an id, but I think it should be for all the steps.\n> See [1].\n\nI have added wherever it is required as of now.\n\n> 2.\n> I'm not sure it should be listed as step 10. 
I felt that it should be new section.\n> At that time other steps like \"Prepare for {publisher|subscriber} upgrades\" can be moved as well.\n> Thought?\n\nI have moved all of these to a separate page in logical-replication\nunder Upgrade\n\n> 3.\n> ```\n> + The prerequisites of publisher upgrade applies to logical Replication\n> ```\n>\n> Replication -> replication\n\nModified\n\n> 4.\n> ```\n> + <para>\n> + Let's say publisher is in <literal>node1</literal> and subscriber is\n> + in <literal>node2</literal>.\n> + </para>\n> ```\n>\n> I felt it is more friendly if you added the name of directory for each instance.\n\nI have listed this in the pg_upgrade command execution, since it is\nmentioned there I have not added here too.\n\n> 5.\n> You did not write the initialization of new node. Was it intentional?\n\nAdded it now\n\n> 6.\n> ```\n> + <para>\n> + Disable all the subscriptions on <literal>node2</literal> that are\n> + subscribing the changes from <literal>node1</literal> by using\n> + <link linkend=\"sql-altersubscription-params-disable\"><command>ALTER SUBSCRIPTION ... DISABLE</command></link>,\n> + for e.g.:\n> +<programlisting>\n> +node2=# ALTER SUBSCRIPTION sub1_node1_node2 DISABLE;\n> +ALTER SUBSCRIPTION\n> +node2=# ALTER SUBSCRIPTION sub2_node1_node2 DISABLE;\n> +ALTER SUBSCRIPTION\n> +</programlisting>\n> + </para>\n> ```\n>\n> Subscriptions are disabled after stopping a publisher, but it leads ERRORs on the publisher.\n> I think it's better to swap these steps.\n\nModified\n\n> 7.\n> ```\n> +<programlisting>\n> +dba@node1:/opt/PostgreSQL/postgres/&majorversion;/bin$ pg_ctl -D /opt/PostgreSQL/pub_data stop -l logfile\n> +</programlisting>\n> ```\n>\n> Hmm. I thought you did not have to show the current directory. You were in the\n> bin dir, but it is not our requirement, right?\n\nI kept this just to show the version being used\n\n> 8.\n> ```\n> +<programlisting>\n> +dba@node1:/opt/PostgreSQL/postgres/&majorversion;/bin$ pg_upgrade\n> + --old-datadir \"/opt/PostgreSQL/postgres/17/pub_data\"\n> + --new-datadir \"/opt/PostgreSQL/postgres/&majorversion;/pub_upgraded_data\"\n> + --old-bindir \"/opt/PostgreSQL/postgres/17/bin\"\n> + --new-bindir \"/opt/PostgreSQL/postgres/&majorversion;/bin\"\n> +</programlisting>\n> ```\n>\n> For PG17, both old and new bindir look the same. Can we use 18 as new-bindir?\n\nModfied\n\n> 9.\n> ```\n> + <para>\n> + Create any tables that were created in <literal>node2</literal>\n> + between step-2 and now, for e.g.:\n> +<programlisting>\n> +node2=# CREATE TABLE distributors (\n> +node2(# did integer CONSTRAINT no_null NOT NULL,\n> +node2(# name varchar(40) NOT NULL\n> +node2(# );\n> +CREATE TABLE\n> +</programlisting>\n> + </para>\n> ```\n>\n> I think this SQLs must be done on node1, because it has not boot between step-2\n> and step-7.\n\nModified\n\n> 10.\n> ```\n> + <step>\n> + <para>\n> + Enable all the subscriptions on <literal>node2</literal> that are\n> + subscribing the changes from <literal>node1</literal> by using\n> + <link linkend=\"sql-altersubscription-params-enable\"><command>ALTER SUBSCRIPTION ... ENABLE</command></link>,\n> + for e.g.:\n> +<programlisting>\n> +node2=# ALTER SUBSCRIPTION sub1_node1_node2 ENABLE;\n> +ALTER SUBSCRIPTION\n> +node2=# ALTER SUBSCRIPTION sub2_node1_node2 ENABLE;\n> +ALTER SUBSCRIPTION\n> +</programlisting>\n> + </para>\n> + </step>\n> +\n> + <step>\n> + <para>\n> + Refresh the publications using\n> + <link linkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER SUBSCRIPTION ... 
REFRESH PUBLICATION</command></link>,\n> + for e.g.:\n> +<programlisting>\n> +node2=# ALTER SUBSCRIPTION sub1_node1_node2 REFRESH PUBLICATION;\n> +ALTER SUBSCRIPTION\n> +node2=# ALTER SUBSCRIPTION sub2_node1_node2 REFRESH PUBLICATION;\n> +ALTER SUBSCRIPTION\n> +</programlisting>\n> + </para>\n> + </step>\n> ```\n>\n> I was very confused the location where they would be really do. If my above\n> comment is correct, should they be executed on node1 as well? Could you please all\n> the notation again?\n\nModified\n\n> 11.\n> ```\n> + <para>\n> + Disable all the subscriptions on <literal>node1</literal> that are\n> + subscribing the changes from <literal>node2</literal> by using\n> + <link linkend=\"sql-altersubscription-params-disable\"><command>ALTER SUBSCRIPTION ... DISABLE</command></link>,\n> + for e.g.:\n> +<programlisting>\n> +node2=# ALTER SUBSCRIPTION sub1_node2_node1 DISABLE;\n> +ALTER SUBSCRIPTION\n> +node2=# ALTER SUBSCRIPTION sub2_node2_node1 DISABLE;\n> +ALTER SUBSCRIPTION\n> +</programlisting>\n> + </para>\n> ```\n>\n> They should be on node1, but noted as node2.\n\nModified\n\n> 12.\n> ```\n> + <para>\n> + Enable all the subscriptions on <literal>node1</literal> that are\n> + subscribing the changes from <literal>node2</literal> by using\n> + <link linkend=\"sql-altersubscription-params-enable\"><command>ALTER SUBSCRIPTION ... ENABLE</command></link>,\n> + for e.g.:\n> +<programlisting>\n> +node2=# ALTER SUBSCRIPTION sub1_node2_node1 ENABLE;\n> +ALTER SUBSCRIPTION\n> +node2=# ALTER SUBSCRIPTION sub2_node2_node1 ENABLE;\n> +ALTER SUBSCRIPTION\n> +</programlisting>\n> + </para>\n> ```\n>\n> You said that \"enable all the subscription on node1\", but SQLs are done on node2.\n\nModified\n\nThanks for the comments, the attached v2 version patch has the changes\nfor the same.\n\nRegards,\nVignesh", "msg_date": "Sat, 13 Jan 2024 19:07:03 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Fri, 5 Jan 2024 at 10:49, Peter Smith <[email protected]> wrote:\n>\n> Here are some review comments for patch v1-0001.\n>\n> ======\n> doc/src/sgml/ref/pgupgrade.sgml\n>\n> 1. GENERAL - blank lines\n>\n> Most (but not all) of your procedure steps are preceded by blank lines\n> to make them more readable in the SGML. Add the missing blank lines\n> for the steps that didn't have them.\n\nModified\n\n> 2. GENERAL - for e.g.:\n>\n> All the \"for e.g:\" that precedes the code examples can just say\n> \"e.g.:\" like in other examples on this page.\n\nModified\n\n> ~~~\n> 3. GENERAL - reference from elsewhere\n>\n> I was wondering if \"Chapter 30. Logical Replication\" should include a\n> section that references back to all this just to make it easier to\n> find.\n\nI have moved this to Chapter 30 now as it is more applicable there and\nalso based on feedback from Amit at [1].\n\n> ~~~\n>\n> 4.\n> + <para>\n> + Migration of logical replication clusters can be done when all the members\n> + of the old logical replication clusters are version 17.0 or later.\n> + </para>\n>\n> /can be done when/is possible only when/\n\nModified\n\n> ~~~\n>\n> 5.\n> + <para>\n> + The prerequisites of publisher upgrade applies to logical Replication\n> + cluster upgrades also. 
See <xref linkend=\"prepare-publisher-upgrades\"/>\n> + for the details of publisher upgrade prerequisites.\n> + </para>\n>\n> /applies to/apply to/\n> /logical Replication/logical replication/\n\nModified\n\n> ~~~\n>\n> 6.\n> + <para>\n> + The prerequisites of subscriber upgrade applies to logical Replication\n> + cluster upgrades also. See <xref linkend=\"prepare-subscriber-upgrades\"/>\n> + for the details of subscriber upgrade prerequisites.\n> + </para>\n> + </note>\n>\n> /applies to/apply to/\n> /logical Replication/logical replication/\n\nModified\n\n> ~~~\n>\n> 7.\n> + <para>\n> + The steps to upgrade logical replication clusters in various scenarios are\n> + given below.\n> + </para>\n>\n> The 3 titles do not render very prominently, so it is too easy to get\n> lost scrolling up and down looking for the different scenarios. If the\n> title rendering can't be improved, at least a list of 3 links here\n> (like a TOC) would be helpful.\n\nI added a list of these 3 links in the beginning.\n\n> ~~~\n>\n> //////////\n> Steps to Upgrade 2 node logical replication cluster\n> //////////\n>\n> 8. GENERAL - server names\n>\n> I noticed in this set of steps you called the servers 'pub_data' and\n> 'pub_upgraded_data' and 'sub_data' and 'sub_upgraded_data'. I see it\n> is easy to read like this, it is also different from all the\n> subsequent procedures where the names are just like 'data1', 'data2',\n> 'data3', and 'data1_upgraded', 'data2_upgraded', 'data3_upgraded'.\n>\n> I felt maybe it is better to use a consistent naming for all the procedures.\n\nModified\n\n> ~~~\n>\n> 9.\n> + <step>\n> + <title>Steps to Upgrade 2 node logical replication cluster</title>\n>\n> SUGGESTION\n> Steps to upgrade a two-node logical replication cluster\n\nModified\n\n> ~~~\n>\n> 10.\n> +\n> + <procedure>\n> + <step>\n> + <para>\n> + Let's say publisher is in <literal>node1</literal> and subscriber is\n> + in <literal>node2</literal>.\n> + </para>\n> + </step>\n>\n> 10a.\n> This renders as Step 1. But IMO this should not be a \"step\" at all --\n> it's just a description of the scenario.\n\nModified\n\n> ~\n>\n> 10b.\n> The subsequent steps refer to subscriptions 'sub1_node1_node2' and\n> 'sub2_node1_node2'. IMO it would help with the example code if those\n> are named up front here too. 
e.g.\n>\n> node2 has two subscriptions for changes from node1:\n> sub1_node1_node2\n> sub2_node1_node2\n\nModified\n\n> ~~~\n>\n> 11.\n> + <step>\n> + <para>\n> + Upgrade the publisher node <literal>node1</literal>'s server to the\n> + required newer version, for e.g.:\n>\n> The wording repeating node/node1 seems complicated.\n>\n> SUGGESTION\n> Upgrade the publisher node's server to the required newer version, e.g.:\n\nModified\n\n> ~~~\n>\n> 12.\n> + <step>\n> + <para>\n> + Start the upgraded publisher node\n> <literal>node1</literal>'s server, for e.g.:\n>\n> IMO better to use the similar wording used for the \"Stop\" step\n>\n> SUGGESTION\n> Start the upgraded publisher server in node1, e.g.:\n\nModified\n\n> ~~~\n>\n> 13.\n> + <step>\n> + <para>\n> + Upgrade the subscriber node <literal>node2</literal>'s server to\n> + the required new version, for e.g.:\n>\n> The wording repeating node/node2 seems complicated.\n>\n> SUGGESTION\n> Upgrade the subscriber node's server to the required newer version, e.g.:\n\nModified\n\n> ~~~\n>\n> 14.\n> + <step>\n> + <para>\n> + Start the upgraded subscriber node <literal>node2</literal>'s server,\n> + for e.g.:\n>\n> IMO better to use the similar wording used for the \"Stop\" step\n>\n> SUGGESTION\n> Start the upgraded subscriber server in node2, e.g.:\n\nModified\n\n> ~~~\n>\n> 15.\n> + <step>\n> + <para>\n> + Create any tables that were created in the upgraded\n> publisher <literal>node1</literal>\n> + server between step-5 and now, for e.g.:\n> +<programlisting>\n> +node2=# CREATE TABLE distributors (\n> +node2(# did integer CONSTRAINT no_null NOT NULL,\n> +node2(# name varchar(40) NOT NULL\n> +node2(# );\n> +CREATE TABLE\n> +</programlisting>\n> + </para>\n> + </step>\n>\n> 15a\n> Maybe it is better to have a link to setp5 instead of just hardwiring\n> \"Step-5\" in the text.\n\nModified\n\n> ~'\n>\n> 15b.\n> I didn't think it was needed to spread the CREATE TABLE across\n> multiple lines. It is just a dummy example anyway so IMO better to use\n> up less space.\n\nModified\n\n> ~~~\n>\n> 16.\n> + <step>\n> + <para>\n> + Refresh the publications using\n> + <link\n> linkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER\n> SUBSCRIPTION ... REFRESH PUBLICATION</command></link>,\n>\n> /Refresh the publications/Refresh the subscription's publications/\n\nModified\n\n> ~~~\n>\n> //////////\n> Steps to upgrade cascaded logical replication clusters\n> //////////\n>\n> (these comments are similar to those in the previous procedure, but I\n> will give them all again)\n>\n> 17.\n> + <procedure>\n> + <step>\n> + <title>Steps to upgrade cascaded logical replication clusters</title>\n> + <procedure>\n> + <step>\n> + <para>\n> + Let's say we have a cascaded logical replication setup\n> + <literal>node1</literal>-><literal>node2</literal>-><literal>node3</literal>.\n> + Here <literal>node2</literal> is subscribing the changes from\n> + <literal>node1</literal> and <literal>node3</literal> is subscribing\n> + the changes from <literal>node2</literal>.\n> + </para>\n> + </step>\n>\n> 17a.\n> This renders as Step 1. But IMO this should not be a \"step\" at all --\n> it's just a description of the scenario.\n\nModified\n\n> ~\n>\n> 17b.\n> The subsequent steps refer to subscriptions 'sub1_node1_node2' and\n> 'sub1_node1_node2' and 'sub1_node2_node3' and 'sub2_node2_node3'. 
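On the ALTER SUBSCRIPTION ... REFRESH PUBLICATION steps discussed above, one more thing the examples could spell out is the copy_data option, since it decides whether the existing rows of any newly added tables get copied. A rough illustration only, reusing a subscription name from this thread:

node2=# ALTER SUBSCRIPTION sub1_node1_node2 REFRESH PUBLICATION WITH (copy_data = true);
ALTER SUBSCRIPTION

copy_data = true is the default, so the plain REFRESH PUBLICATION form in the patch behaves the same way; passing false would skip the initial copy for the newly added tables.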
IMO\n> it would help with the example code if those are named up front here\n> too, e.g.\n>\n> node2 has two subscriptions for changes from node1:\n> sub1_node1_node2\n> sub2_node1_node2\n>\n> node3 has two subscriptions for changes from node2:\n> sub1_node2_node3\n> sub2_node2_node3\n\nModified\n\n> ~~~\n>\n> 18.\n> + <step>\n> + <para>\n> + Upgrade the publisher node <literal>node1</literal>'s server to the\n> + required newer version, for e.g.:\n>\n> I'm not sure it is good to call this the publisher node, because in\n> this scenario node2 is also a publisher node.\n>\n> SUGGESTION\n> Upgrade the node1 server to the required newer version, e.g.:\n\nModified\n\n> ~~~\n>\n> 19.\n> + <step>\n> + <para>\n> + Start the upgraded node <literal>node1</literal>'s server, for e.g.:\n>\n> SUGGESTION\n> Start the upgraded node1's server, e.g.:\n\nModified\n\n> ~~~\n>\n> 20.\n> + <step>\n> + <para>\n> + Upgrade the node <literal>node2</literal>'s server to the required\n> + new version, for e.g.:\n>\n> SUGGESTION\n> Upgrade the node2 server to the required newer version, e.g.:\n\nModified\n\n> ~~~\n>\n> 21.\n> + <step>\n> + <para>\n> + Start the upgraded node <literal>node2</literal>'s server, for e.g.:\n>\n> SUGGESTION\n> Start the upgraded node2's server, e.g.:\n\nModified\n\n> ~~~\n>\n> 22.\n> + <step>\n> + <para>\n> + Create any tables that were created in the upgraded\n> publisher <literal>node1</literal>\n> + server between step-5 and now, for e.g.:\n>\n> 22a\n> Maybe this should say \"On node2, create any tables...\"\n\nModified\n\n> ~\n>\n> 22b.\n> Maybe it is better to have a link to step5 instead of just hardwiring\n> \"Step-5\" in the text.\n\nModified\n\n> ~\n>\n> 22c.\n> I didn't think it was needed to spread the CREATE TABLE across\n> multiple lines. It is just a dummy example anyway so IMO better to use\n> up less space.\n\nModified\n\n> ~~~\n>\n> 23.\n> + <step>\n> + <para>\n> + Enable all the subscriptions on <literal>node2</literal> that are\n> + subscribing the changes from <literal>node2</literal> by using\n> + <link\n> linkend=\"sql-altersubscription-params-enable\"><command>ALTER\n> SUBSCRIPTION ... ENABLE</command></link>,\n> + for e.g.:\n>\n> Typo: /subscribing the changes from node2/subscribing the changes from node1/\n\nModified\n\n> ~~~\n>\n>\n> 99.\n> + <step>\n> + <para>\n> + Refresh the publications using\n> + <link\n> linkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER\n> SUBSCRIPTION ... 
REFRESH PUBLICATION</command></link>,\n> + for e.g.:\n>\n> SUGGESTION\n> Refresh the node2 subscription's publications using...\n\nModified\n\n> ~~~\n>\n> 25.\n> + <step>\n> + <para>\n> + Upgrade the node <literal>node3</literal>'s server to the required\n> + new version, for e.g.:\n>\n> SUGGESTION\n> Upgrade the node3 server to the required newer version, e.g.:\n\nModified\n\n> ~~~\n>\n> 26.\n> + <step>\n> + <para>\n> + Start the upgraded node <literal>node3</literal>'s server, for e.g.:\n>\n> SUGGESTION\n> Start the upgraded node3's server, e.g.:\n\nModified\n\n> ~~~\n>\n> 27.\n> + <step>\n> + <para>\n> + Create any tables that were created in the upgraded node\n> + <literal>node2</literal> between step-9 and now, for e.g.:\n>\n> 27a.\n> SUGGESTION\n> On node3, create any tables that were created in the upgraded node2 between...\n\nModified\n\n> ~\n>\n> 27b.\n> Maybe it is better to have a link to step9 instead of just hardwiring\n> \"Step-9\" in the text.\n\nModified\n\n> ~\n>\n> 27c.\n> I didn't think it was needed to spread the CREATE TABLE across\n> multiple lines. It is just a dummy example anyway so IMO better to use\n> up less space.\n\nModified\n\n> ~~~\n>\n> 28.\n> + <step>\n> + <para>\n> + Refresh the publications using\n> + <link\n> linkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER\n> SUBSCRIPTION ... REFRESH PUBLICATION</command></link>,\n> + for e.g.:\n>\n> SUGGESTION\n> Refresh the node3 subscription's publications using...\n\nModified\n\n> //////////\n> Steps to Upgrade 2 node circular logical replication cluster</title>\n> //////////\n>\n> (Again, some of these comments are similar to before, but I'll repeat\n> them anyhow)\n>\n> ~~~\n>\n> 29. GENERAL - Should this circular scenario even be mentioned?\n>\n> IIUC there are no other PG docs for describing how to set up and\n> manage a circular scenario like this. I know you wrote a blog about\n> this topic [1], and I think there was a documentation patch [2] about\n> this but it was never pushed.\n>\n> So, I'm not sure it is appropriate to include these docs \"Steps to\n> upgrade XXX\" when there are not even any docs about \"Steps to create\n> XXX\".\n\nI feel we can add this later once this patch reaches a better shape\n\n> ~~~\n>\n> 30.\n> + <procedure>\n> + <step>\n> + <title>Steps to Upgrade 2 node circular logical replication\n> cluster</title>\n>\n> SUGGESTION\n> Steps to upgrade a two-node circular logical replication cluster\n\nModified\n\n> ~~~\n>\n> 31.\n> + <step>\n> + <para>\n> + Let's say we have a circular logical replication setup\n> + <literal>node1</literal>-><literal>node2</literal> and\n> + <literal>node2</literal>-><literal>node1</literal>. Here\n> + <literal>node2</literal> is subscribing the changes from\n> + <literal>node1</literal> and <literal>node1</literal> is subscribing\n> + the changes from <literal>node2</literal>.\n> + </para>\n> + </step>\n>\n> 31a\n> This renders as Step 1. But IMO this should not be a \"step\" at all --\n> it's just a description of the scenario.\n> REVIEW COMMENT 05/1\n\nModified\n\n> ~\n>\n> 31b.\n> The subsequent steps refer to subscriptions 'sub1_node1_node2' and\n> 'sub2_node1_node2' and 'sub1_node2_node1' and 'sub1_node2_node1'. IMO\n> it would help with the example code if those are named up front here\n> too. 
e.g.\n>\n> node1 has two subscriptions for changes from node2:\n> sub1_node2_node1\n> sub2_node2_node1\n>\n> node2 has two subscriptions for changes from node1:\n> sub1_node1_node2\n> sub2_node1_node2\n\nModified\n\n> ~~~\n>\n> 32.\n> + <step>\n> + <para>\n> + Upgrade the node <literal>node1</literal>'s server to the required\n> + newer version, for e.g.:\n>\n> SUGGESTION\n> Upgrade the node1 server to the required newer version, e.g.:\n>\n> ~~~\n>\n> 33.\n> + <step>\n> + <para>\n> + Start the upgraded node <literal>node1</literal>'s server, for e.g.:\n>\n> SUGGESTION\n> Start the upgraded node1's server, e.g.:\n\nModified\n\n> ~~~\n>\n> 34.\n> + <step>\n> + <para>\n> + Wait till all the incremental changes are synchronized.\n> + </para>\n>\n> Any hint on how to do this?\n\nThis is not required as it is already mentioned in the prerequisites\nsection. I have removed this.\n\n> ~~~\n>\n> 35.\n> + <step>\n> + <para>\n> + Create any tables that were created in <literal>node2</literal>\n> + between step-2 and now, for e.g.:\n>\n> 35a.\n> That doesn't seem right.\n> - Don't you mean \"created in the upgraded node1\"?\n> - Don't you mean \"between step-5\"?\n>\n> SUGGESTION\n> On node2, create any tables that were created in the upgraded node1\n> between step5 and...\n\nThis is correct, we need to create the tables since the subscription\nwas disabled\n\n> ~\n>\n> 35b.\n> Maybe it is better to have a link to step5 instead of just hardwiring\n> \"Step-5\" in the text.\n\nModified\n\n> ~\n>\n> 35c.\n> I didn't think it was needed to spread the CREATE TABLE across\n> multiple lines. It is just a dummy example anyway so IMO better to use\n> up less space.\n\nModified\n\n> ~~~\n>\n> 36.\n> + <step>\n> + <para>\n> + Refresh the publications using\n> + <link\n> linkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER\n> SUBSCRIPTION ... REFRESH PUBLICATION</command></link>,\n> + for e.g.:\n>\n> SUGGESTION\n> Refresh the node2 subscription's publications using...\n\nModified\n\n> ~~~\n>\n> 37.\n> + <step>\n> + <para>\n> + Disable all the subscriptions on <literal>node1</literal> that are\n> + subscribing the changes from <literal>node2</literal> by using\n> + <link\n> linkend=\"sql-altersubscription-params-disable\"><command>ALTER\n> SUBSCRIPTION ... DISABLE</command></link>,\n> + for e.g.:\n> +<programlisting>\n> +node2=# ALTER SUBSCRIPTION sub1_node2_node1 DISABLE;\n> +ALTER SUBSCRIPTION\n> +node2=# ALTER SUBSCRIPTION sub2_node2_node1 DISABLE;\n> +ALTER SUBSCRIPTION\n> +</programlisting>\n> + </para>\n> + </step>\n>\n> This example looks wrong. 
IIUC these commands should be done on node1\n> but the example shows a node2 prompt.\n\nModified\n\n> ~~~\n>\n> 38.\n> + <step>\n> + <para>\n> + Upgrade the node <literal>node2</literal>'s server to the required\n> + new version, for e.g.:\n>\n> SUGGESTION\n> Upgrade the node2 server to the required newer version, e.g.:\n\nModified\n\n> ~~~\n>\n> 39.\n> + <step>\n> + <para>\n> + Start the upgraded node <literal>node2</literal>'s server, for e.g.:\n>\n> SUGGESTION\n> Start the upgraded node2's server, e.g.:\n\nModified\n\n> ~~~\n>\n> 40.\n> + <step>\n> + <para>\n> + Create any tables that were created in the upgraded node\n> + <literal>node1</literal> between step-10 and now, for e.g.:\n> +<programlisting>\n> +node2=# CREATE TABLE distributors (\n> +node2(# did integer CONSTRAINT no_null NOT NULL,\n> +node2(# name varchar(40) NOT NULL\n> +node2(# );\n> +CREATE TABLE\n> +</programlisting>\n>\n> 40a.\n> That doesn't seem right.\n> - Don't you mean \"created in the upgraded node2\"?\n> - Don't you mean \"between step-12\"?\n>\n> SUGGESTION\n> On node1, create any tables that were created in the upgraded node2\n> between step12 and...\n\nModified\n\n> ~\n>\n> 40b.\n> Maybe it is better to have a link to step12 instead of just hardwiring\n> \"Step-12\" in the text.\n\nModified\n\n> ~\n>\n> 40c.\n> I didn't think it was needed to spread the CREATE TABLE across\n> multiple lines. It is just a dummy example anyway so IMO better to use\n> up less space.\n\nModified\n\n> ~~~\n>\n> 41.\n> + <step>\n> + <para>\n> + Enable all the subscriptions on <literal>node1</literal> that are\n> + subscribing the changes from <literal>node2</literal> by using\n> + <link\n> linkend=\"sql-altersubscription-params-enable\"><command>ALTER\n> SUBSCRIPTION ... ENABLE</command></link>,\n> + for e.g.:\n> +<programlisting>\n> +node2=# ALTER SUBSCRIPTION sub1_node2_node1 ENABLE;\n> +ALTER SUBSCRIPTION\n> +node2=# ALTER SUBSCRIPTION sub2_node2_node1 ENABLE;\n> +ALTER SUBSCRIPTION\n> +</programlisting>\n> + </para>\n> + </step>\n>\n> The example looks wrong. IIUC these commands should be done on node1\n> but the example shows a node2 prompt.\n\nModified\n\n> ~~\n>\n> 42.\n> + <step>\n> + <para>\n> + Refresh the publications using\n> + <link\n> linkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER\n> SUBSCRIPTION ... REFRESH PUBLICATION</command></link>\n> + for e.g.:\n> +<programlisting>\n> +node2=# ALTER SUBSCRIPTION sub1_node1_node2 REFRESH PUBLICATION;\n> +ALTER SUBSCRIPTION\n> +node2=# ALTER SUBSCRIPTION sub2_node1_node2 REFRESH PUBLICATION;\n> +ALTER SUBSCRIPTION\n> +</programlisting>\n> + </para>\n> + </step>\n>\n> 42a.\n> SUGGESTION\n> Refresh the node1 subscription's publications using...\n\nModified\n\n> ~\n>\n> 42b.\n> The example looks wrong. 
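Since a couple of these examples mixed up which node the commands run on, it may be worth remembering (if only for anyone following along) that each cluster only knows about its own subscriptions, so before disabling/enabling anything one can simply check what exists locally, e.g. something like:

node1=# SELECT subname, subenabled FROM pg_subscription;

If a subscription is not listed there, the DISABLE/ENABLE belongs on the other node.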
IIUC these commands should be done on node1\n> but the example shows a node2 prompt.\n\nModified\n\nThanks for the comments, the v2 version patch attached at [2] has the\nfixes for the same.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1KPFtxOzmkrJDY3LkeCkmWX5hZbSak7JLR57%2BvEq3afjQ%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CALDaNm2PD_eWLkLDs0qQ8MvWvh8j%3Dhee4_n6MX6Zz%3D%2BHosz%3Dpg%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 13 Jan 2024 21:20:26 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "Hi Vignesh, here are some review comments for patch v2-0001.\n\n======\ndoc/src/sgml/ref/pgupgrade.sgml\n\n1.\n+ <step id=\"pgupgrade-step-logical-replication\">\n+ <title>Upgrade logical replication clusters</title>\n+\n+ <para>\n+ Refer <link linkend=\"logical-replication-upgrade\">logical\nreplication upgrade section</link>\n+ for details on upgrading logical replication clusters.\n+ </para>\n+\n+ </step>\n+\n\nThis renders like:\nRefer logical replication upgrade section for details on upgrading\nlogical replication clusters.\n\n~\n\nIMO it would be better to use xref instead of link, which will render\nmore normally like:\nSee Section 30.11 for details on upgrading logical replication clusters.\n\nSUGGESTION\n <para>\n See <xref linkend=\"logical-replication-upgrade\"/>\n for details on upgrading logical replication clusters.\n </para>\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n2. GENERAL - blurb\n\n+ <sect1 id=\"logical-replication-upgrade\">\n+ <title>Upgrade</title>\n+\n+ <procedure>\n+ <step id=\"prepare-publisher-upgrades\">\n+ <title>Prepare for publisher upgrades</title>\n\nI felt there should be a short (1 or 2 sentence) general blurb about\npub/sub upgrade before jumping straight into:\n\n\"1. Prepare for publisher upgrades\"\n\"2. Prepare for subscriber upgrades\"\n\"3. Upgrading logical replication cluster\"\n\n~\n\nSpecifically, at first, it looks strange that the HTML renders as\nsteps 1,2,3 instead of sub-sections (30.11.1, 30.11.2, 30.11.3); Maybe\n\"steps\" are fine, but then at least there needs to be some intro\nsentence saying like \"follow these steps:\"\n\n~~~\n\n3.\n+ <step id=\"upgrading-logical-replication-cluster\">\n+ <title>Upgrading logical replication cluster</title>\n\n/cluster/clusters/\n\n~~~\n\n4.\n+ <para>\n+ The steps to upgrade the following logical replication clusters are\n+ detailed below:\n+ <itemizedlist>\n+ <listitem>\n+ <para>\n+ <link linkend=\"steps-two-node-logical-replication-cluster\">Two-node\nlogical replication cluster.</link>\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ <link linkend=\"steps-cascaded-logical-replication-cluster\">Cascaded\nlogical replication cluster.</link>\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ <link linkend=\"steps-two-node-circular-logical-replication-cluster\">Two-node\ncircular logical replication cluster.</link>\n+ </para>\n+ </listitem>\n+ </itemizedlist>\n+ </para>\n\nIsn't there a better way to accomplish this by using xref and\n'xreflabel' so you don't have to type the link text here?\n\n\n//////////\nSteps to upgrade a two-node logical replication cluster\n//////////\n\n5.\n+ <para>\n+ Let's say publisher is in <literal>node1</literal> and subscriber is\n+ in <literal>node2</literal>. 
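To make the xref/xreflabel idea above a bit more concrete, I was imagining something along these lines (only a sketch -- the id below already exists in your patch, the xreflabel wording is made up):

  <step id=\"two-node-cluster-disable-subscriptions-node2\" xreflabel=\"disabling the subscriptions on node2\">
   ...
  </step>

and then a later step can just write:

  ... created between <xref linkend=\"two-node-cluster-disable-subscriptions-node2\"/> and now ...

so the rendered text comes from the label (or the step number) instead of hand-written link text.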
The subscriber <literal>node2</literal> has\n+ two subscriptions sub1_node1_node2 and sub2_node1_node2 which is\n+ subscribing the changes from <literal>node1</literal>.\n+ </para>\n\n5a\nThose subscription names should also be rendered as literals.\n\n~\n\n5b\n/which is/which are/\n\n~~~\n\n6.\n+ <step>\n+ <para>\n+ Initialize data1_upgraded instance by using the required newer\n+ version.\n+ </para>\n+ </step>\n\ndata1_upgraded should be rendered as literal.\n\n~~~\n\n7.\n+\n+ <step>\n+ <para>\n+ Initialize data2_upgraded instance by using the required newer\n+ version.\n+ </para>\n+ </step>\n\ndata2_upgraded should be rendered as literal.\n\n~~~\n\n8.\n+\n+ <step>\n+ <para>\n+ On <literal>node2</literal>, create any tables that were created in\n+ the upgraded publisher <literal>node1</literal> server between\n+ <link linkend=\"two-node-cluster-disable-subscriptions-node2\">\n+ when the subscriptions where disabled in\n<literal>node2</literal></link>\n+ and now, e.g.:\n+<programlisting>\n+node2=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\n+CREATE TABLE\n+</programlisting>\n+ </para>\n+ </step>\n\n8a.\nThis link to the earlier step renders badly like:\nOn node2, create any tables that were created in the upgraded\npublisher node1 server between when the subscriptions where disabled\nin node2 and now, e.g.:\n\nIMO this link should be like \"Step N\", not some words -- maybe it is\nanother opportunity for using xreflabel?\n\n~\n\n8b.\nAlso has typos \"when the subscriptions where disabled\" (??)\n\n//////////\nSteps to upgrade a cascaded logical replication clusters\n//////////\n\n9.\n+ <procedure>\n+ <step id=\"steps-cascaded-logical-replication-cluster\">\n+ <title>Steps to upgrade a cascaded logical replication clusters</title>\n\nThe title has a strange mix of singular \"a\" and plural \"clusters\"\n\n~~~\n\n10.\n+ <para>\n+ Let's say we have a cascaded logical replication setup\n+ <literal>node1</literal>-><literal>node2</literal>-><literal>node3</literal>.\n+ Here <literal>node2</literal> is subscribing the changes from\n+ <literal>node1</literal> and <literal>node3</literal> is subscribing\n+ the changes from <literal>node2</literal>. The <literal>node2</literal>\n+ has two subscriptions sub1_node1_node2 and sub2_node1_node2 which is\n+ subscribing the changes from <literal>node1</literal>. 
The\n+ <literal>node3</literal> has two subscriptions sub1_node2_node3 and\n+ sub2_node2_node3 which is subscribing the changes from\n+ <literal>node2</literal>.\n+ </para>\n\n10a.\nThose subscription names should also be rendered as literals.\n\n~\n\n10b.\n/which is/which are/ (occurs 2x)\n\n~~~\n\n11.\n+\n+ <step>\n+ <para>\n+ Initialize data1_upgraded instance by using the required\nnewer version.\n+ </para>\n+ </step>\n\ndata1_upgraded should be rendered as literal.\n\n~~~\n\n12.\n+\n+ <step>\n+ <para>\n+ Initialize data2_upgraded instance by using the required\nnewer version.\n+ </para>\n+ </step>\n\ndata2_upgraded should be rendered as literal.\n\n~~~\n\n13.\n+\n+ <step>\n+ <para>\n+ On <literal>node2</literal>, create any tables that were created in\n+ the upgraded publisher <literal>node1</literal> server between\n+ <link linkend=\"cascaded-cluster-disable-sub-node1-node2\">\n+ when the subscriptions where disabled in\n<literal>node2</literal></link>\n+ and now, e.g.:\n+<programlisting>\n+node2=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\n+CREATE TABLE\n+</programlisting>\n+ </para>\n+ </step>\n\n13a.\nThis link to the earlier step renders badly like:\nOn node2, create any tables that were created in the upgraded\npublisher node1 server between when the subscriptions where disabled\nin node2 and now, e.g.:\n\nIMO this link should be like \"Step N\", not some words -- maybe it is\nanother opportunity for using xreflabel?\n\n~\n\n13b\nAlso has typos \"when the subscriptions where disabled\" (??)\n\n~~~\n\n14.\n+\n+ <step>\n+ <para>\n+ Initialize data3_upgraded instance by using the required\nnewer version.\n+ </para>\n+ </step>\n\ndata3_upgraded should be rendered as literal.\n\n~~~\n\n15.\n+\n+ <step>\n+ <para>\n+ On <literal>node3</literal>, create any tables that were created in\n+ the upgraded <literal>node2</literal> between\n+ <link linkend=\"cascaded-cluster-disable-sub-node2-node3\">when the\n+ subscriptions where disabled in <literal>node3</literal></link>\n+ and now, e.g.:\n+<programlisting>\n+node3=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\n+CREATE TABLE\n+</programlisting>\n+ </para>\n+ </step>\n\n15a.\nThis link to the earlier step renders badly like:\nOn node3, create any tables that were created in the upgraded node2\nbetween when the subscriptions where disabled in node3 and now, e.g.:\n\n~\n\n15b.\nAlso has typos \"when the subscriptions where disabled\" (??)\n\n//////////\nSteps to upgrade a two-node circular logical replication cluster\n//////////\n\n16.\n+ <para>\n+ Let's say we have a circular logical replication setup\n+ <literal>node1</literal>-><literal>node2</literal> and\n+ <literal>node2</literal>-><literal>node1</literal>. Here\n+ <literal>node2</literal> is subscribing the changes from\n+ <literal>node1</literal> and <literal>node1</literal> is subscribing\n+ the changes from <literal>node2</literal>. The <literal>node1</literal>\n+ has two subscriptions sub1_node2_node1 and sub2_node2_node1 which is\n+ subscribing the changes from <literal>node2</literal>. 
The\n+ <literal>node2</literal> has two subscriptions sub1_node1_node2 and\n+ sub2_node1_node2 which is subscribing the changes from\n+ <literal>node1</literal>.\n+ </para>\n\n16a\nThose subscription names should also be rendered as literals.\n\n~\n\n16b\n/which is/which are/\n\n~~~\n\n17.\n+\n+ <step>\n+ <para>\n+ Initialize data1_upgraded instance by using the required newer\n+ version.\n+ </para>\n+ </step>\n\ndata1_upgraded should render as literal.\n\n~~~\n\n18.\n+\n+ <step>\n+ <para>\n+ On <literal>node1</literal>, Create any tables that were created in\n+ <literal>node2</literal> between <link\nlinkend=\"circular-cluster-disable-sub-node2\">\n+ when the subscriptions where disabled in\n<literal>node2</literal></link>\n+ and now, e.g.:\n+<programlisting>\n+node1=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\n+CREATE TABLE\n+</programlisting>\n+ </para>\n+ </step>\n\n18a.\nThis link to the earlier step renders badly like:\nOn node1, Create any tables that were created in node2 between when\nthe subscriptions where disabled in node2 and now, e.g.:\n\nIMO this link should be like \"Step N\", not some words -- maybe it is\nanother opportunity for using xreflabel?\n~\n\n18b\nAlso has typos \"when the subscriptions where disabled\" (??)\n\n~\n\n18c.\n/Create any/create any/\n\n~~~\n\n19.\n+\n+ <step>\n+ <para>\n+ Initialize data2_upgraded instance by using the required newer\n+ version.\n+ </para>\n+ </step>\n\ndata2_upgraded should render as literal.\n\n~~~\n\n20.\n+\n+ <step>\n+ <para>\n+ On <literal>node2</literal>, Create any tables that were created in\n+ the upgraded <literal>node1</literal> between <link\nlinkend=\"circular-cluster-disable-sub-node1\">\n+ when the subscriptions where disabled in\n<literal>node1</literal></link>\n+ and now, e.g.:\n+<programlisting>\n+node2=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\n+CREATE TABLE\n+</programlisting>\n+ </para>\n+ </step>\n\n20a.\nThis link to the earlier step renders badly like:\nOn node2, Create any tables that were created in the upgraded node1\nbetween when the subscriptions where disabled in node1 and now, e.g.:\n\n~\n\n20b\nAlso has typos \"when the subscriptions where disabled\" (??)\n\n~\n\n20c.\n/Create any/create any/\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 15 Jan 2024 14:30:45 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "Dear Vignesh,\r\n\r\nThanks for updating the patch!\r\n\r\n> > 7.\r\n> > ```\r\n> > +<programlisting>\r\n> > +dba@node1:/opt/PostgreSQL/postgres/&majorversion;/bin$ pg_ctl -D\r\n> /opt/PostgreSQL/pub_data stop -l logfile\r\n> > +</programlisting>\r\n> > ```\r\n> >\r\n> > Hmm. I thought you did not have to show the current directory. You were in the\r\n> > bin dir, but it is not our requirement, right?\r\n> \r\n> I kept this just to show the version being used\r\n>\r\n\r\nHmm, but by default, the current directory is not set as PATH. 
So this example\r\nlooks strange for me.\r\n\r\nBelow lines are my comments for v2 patch.\r\n\r\n01.\r\n\r\n```\r\n+ <step id=\"pgupgrade-step-logical-replication\">\r\n+ <title>Upgrade logical replication clusters</title>\r\n+\r\n+ <para>\r\n+ Refer <link linkend=\"logical-replication-upgrade\">logical replication upgrade section</link>\r\n+ for details on upgrading logical replication clusters.\r\n+ </para>\r\n+\r\n+ </step>\r\n```\r\n\r\nI think we do not have to write it as one of steps. I think we can move to\r\n\"Usage\" part and modify like:\r\n\r\nThis page only focus on nodes which are not logical replication participant. See\r\n<link linkend=\"logical-replication-upgrade\"> for upgrading such nodes.\r\n\r\n02.\r\n\r\n```\r\n with the primary.) Only logical slots on the primary are copied to the\r\n new standby, but other slots on the old standby are not copied so must\r\n be recreated manually.\r\n```\r\n\r\nA description for logical slots were remained. If you want to keep, we must\r\nsay that it would be done for PG17+.\r\n\r\n03.\r\n\r\nI think the numbering seems bit confusing. sectX sgml tags should be used in\r\nthis case. How about formatting like below?\r\n\r\nUpgrade (sect1)\r\n--- Prerequisites (sect2)\r\n --- For upgrading a publisher node (sect3)\r\n --- For upgrading a subscriber node (sect3)\r\n--- Examples (sect2)\r\n --- Two-node logical replication cluster (sect3)\r\n --- Cascaded logical replication cluster (sect3)\r\n --- Two-node circular logical replication cluster (sect3)\r\n\r\n04. \r\n\r\nMissing introduction in the head of this section. E.g., \r\n\r\nBoth publishers and subscribers can be upgraded, but there are some notes.\r\nBefore reading this section, you should read <xref linkend=\"pgupgrade\"/> page.\r\n\r\n05.\r\n\r\n```\r\n+ <step id=\"prepare-publisher-upgrades\">\r\n+ <title>Prepare for publisher upgrades</title>\r\n...\r\n```\r\n\r\nShould we describe in this page that publications can be upgraded in any\r\nversions?\r\n\r\n06.\r\n\r\n```\r\n+ <step id=\"prepare-subscriber-upgrades\">\r\n+ <title>Prepare for subscriber upgrades</title\r\n```\r\n\r\nSame as 5, should we describe in this page that subscriptions can be upgraded\r\nin any versions?\r\n\r\n07.\r\n\r\nBasic considerations should be described before describing concrete steps.\r\nE.g., publishers must be upgraded first. Also: While upgrading a subscriber,\r\npublisher can accept changes from users.\r\n\r\n08.\r\n\r\n```\r\n+ two subscriptions sub1_node1_node2 and sub2_node1_node2 which is\r\n+ subscribing the changes from <literal>node1</literal>.\r\n```\r\n\r\nBoth \"sub1_node1_node2\" and \"sub2_node1_node2\" must be rendered.\r\n\r\n09.\r\n\r\n```\r\n+ <step>\r\n+ <para>\r\n+ Initialize data1_upgraded instance by using the required newer\r\n+ version.\r\n+ </para>\r\n```\r\n\r\nMissing rendering. All similar paragraphs must be fixed.\r\n\r\n10.\r\n\r\n```\r\n+ On <literal>node2</literal>, create any tables that were created in\r\n+ the upgraded publisher <literal>node1</literal> server between\r\n+ <link linkend=\"two-node-cluster-disable-subscriptions-node2\">\r\n+ when the subscriptions where disabled in <literal>node2</literal></link>\r\n+ and now, e.g.:\r\n```\r\n\r\na.\r\n\r\nI think the link is not correct, it should refer Step 6. 
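To be a bit more concrete about the sectX layout suggested in 03 above, I imagined a skeleton roughly like the below (the ids are only examples, apart from logical-replication-upgrade which the patch already uses):

<sect1 id=\"logical-replication-upgrade\">
 <title>Upgrade</title>
 <sect2 id=\"logical-replication-upgrade-prerequisites\">
  <title>Prerequisites</title>
  <sect3> ... for upgrading a publisher node ... </sect3>
  <sect3> ... for upgrading a subscriber node ... </sect3>
 </sect2>
 <sect2 id=\"logical-replication-upgrade-examples\">
  <title>Examples</title>
  ...
 </sect2>
</sect1>

Each example cluster would then become one sect3 under Examples.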
Can we add the step number?\r\nAll similar paragraphs must be fixed.\r\n\r\nb.\r\n\r\nNot sure, but s/where disabled/were disabled/ ?\r\nAll similar paragraphs must be fixed.\r\n\r\n11.\r\n\r\n```\r\n+ <para>\r\n+ Refresh the <literal>node2</literal> subscription's publications using\r\n+ <link linkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>,\r\n+ e.g.:\r\n+<programlisting>\r\n+node2=# ALTER SUBSCRIPTION sub1_node1_node2 REFRESH PUBLICATION;\r\n+ALTER SUBSCRIPTION\r\n+node2=# ALTER SUBSCRIPTION sub2_node1_node2 REFRESH PUBLICATION;\r\n+ALTER SUBSCRIPTION\r\n+</programlisting>\r\n+ </para>\r\n```\r\n\r\nNot sure, but should we clarify that copy_data must be on?\r\n\r\n12.\r\n\r\n```\r\n+ has two subscriptions sub1_node1_node2 and sub2_node1_node2 which is\r\n+ subscribing the changes from <literal>node1</literal>. The\r\n+ <literal>node3</literal> has two subscriptions sub1_node2_node3 and\r\n+ sub2_node2_node3 which is subscribing the changes from\r\n```\r\n\r\nName of subscriptions must be rendered.\r\n\r\n13.\r\n\r\n```\r\n+ <para>\r\n+ On <literal>node1</literal>, Create any tables that were created in\r\n+ <literal>node2</literal> between <link linkend=\"circular-cluster-disable-sub-node2\">\r\n+ when the subscriptions where disabled in <literal>node2</literal></link>\r\n+ and now, e.g.:\r\n+<programlisting>\r\n+node1=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\r\n+CREATE TABLE\r\n+</programlisting>\r\n+ </para>\r\n...\r\n+ <para>\r\n+ On <literal>node2</literal>, Create any tables that were created in\r\n+ the upgraded <literal>node1</literal> between <link linkend=\"circular-cluster-disable-sub-node1\">\r\n+ when the subscriptions where disabled in <literal>node1</literal></link>\r\n+ and now, e.g.:\r\n+<programlisting>\r\n+node2=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\r\n+CREATE TABLE\r\n+</programlisting>\r\n+ </para>\r\n```\r\n\r\nSame tables were created, they must have another name.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Mon, 15 Jan 2024 09:09:32 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Mon, 15 Jan 2024 at 09:01, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh, here are some review comments for patch v2-0001.\n>\n> ======\n> doc/src/sgml/ref/pgupgrade.sgml\n>\n> 1.\n> + <step id=\"pgupgrade-step-logical-replication\">\n> + <title>Upgrade logical replication clusters</title>\n> +\n> + <para>\n> + Refer <link linkend=\"logical-replication-upgrade\">logical\n> replication upgrade section</link>\n> + for details on upgrading logical replication clusters.\n> + </para>\n> +\n> + </step>\n> +\n>\n> This renders like:\n> Refer logical replication upgrade section for details on upgrading\n> logical replication clusters.\n>\n> ~\n>\n> IMO it would be better to use xref instead of link, which will render\n> more normally like:\n> See Section 30.11 for details on upgrading logical replication clusters.\n>\n> SUGGESTION\n> <para>\n> See <xref linkend=\"logical-replication-upgrade\"/>\n> for details on upgrading logical replication clusters.\n> </para>\n\nModified\n\n> ======\n> doc/src/sgml/logical-replication.sgml\n>\n> 2. 
GENERAL - blurb\n>\n> + <sect1 id=\"logical-replication-upgrade\">\n> + <title>Upgrade</title>\n> +\n> + <procedure>\n> + <step id=\"prepare-publisher-upgrades\">\n> + <title>Prepare for publisher upgrades</title>\n>\n> I felt there should be a short (1 or 2 sentence) general blurb about\n> pub/sub upgrade before jumping straight into:\n>\n> \"1. Prepare for publisher upgrades\"\n> \"2. Prepare for subscriber upgrades\"\n> \"3. Upgrading logical replication cluster\"\n\nAdded\n\n> ~\n>\n> Specifically, at first, it looks strange that the HTML renders as\n> steps 1,2,3 instead of sub-sections (30.11.1, 30.11.2, 30.11.3); Maybe\n> \"steps\" are fine, but then at least there needs to be some intro\n> sentence saying like \"follow these steps:\"\n> ~~~\n\nModified\n\n>\n> 3.\n> + <step id=\"upgrading-logical-replication-cluster\">\n> + <title>Upgrading logical replication cluster</title>\n>\n> /cluster/clusters/\n\nModified\n\n> ~~~\n>\n> 4.\n> + <para>\n> + The steps to upgrade the following logical replication clusters are\n> + detailed below:\n> + <itemizedlist>\n> + <listitem>\n> + <para>\n> + <link linkend=\"steps-two-node-logical-replication-cluster\">Two-node\n> logical replication cluster.</link>\n> + </para>\n> + </listitem>\n> + <listitem>\n> + <para>\n> + <link linkend=\"steps-cascaded-logical-replication-cluster\">Cascaded\n> logical replication cluster.</link>\n> + </para>\n> + </listitem>\n> + <listitem>\n> + <para>\n> + <link linkend=\"steps-two-node-circular-logical-replication-cluster\">Two-node\n> circular logical replication cluster.</link>\n> + </para>\n> + </listitem>\n> + </itemizedlist>\n> + </para>\n>\n> Isn't there a better way to accomplish this by using xref and\n> 'xreflabel' so you don't have to type the link text here?\n\nModified\n\n>\n> //////////\n> Steps to upgrade a two-node logical replication cluster\n> //////////\n>\n> 5.\n> + <para>\n> + Let's say publisher is in <literal>node1</literal> and subscriber is\n> + in <literal>node2</literal>. 
The subscriber <literal>node2</literal> has\n> + two subscriptions sub1_node1_node2 and sub2_node1_node2 which is\n> + subscribing the changes from <literal>node1</literal>.\n> + </para>\n>\n> 5a\n> Those subscription names should also be rendered as literals.\n\nModified\n\n> ~\n>\n> 5b\n> /which is/which are/\n\nModified\n\n> ~~~\n>\n> 6.\n> + <step>\n> + <para>\n> + Initialize data1_upgraded instance by using the required newer\n> + version.\n> + </para>\n> + </step>\n>\n> data1_upgraded should be rendered as literal.\n\nModified\n\n> ~~~\n>\n> 7.\n> +\n> + <step>\n> + <para>\n> + Initialize data2_upgraded instance by using the required newer\n> + version.\n> + </para>\n> + </step>\n>\n> data2_upgraded should be rendered as literal.\n\nModified\n\n> ~~~\n>\n> 8.\n> +\n> + <step>\n> + <para>\n> + On <literal>node2</literal>, create any tables that were created in\n> + the upgraded publisher <literal>node1</literal> server between\n> + <link linkend=\"two-node-cluster-disable-subscriptions-node2\">\n> + when the subscriptions where disabled in\n> <literal>node2</literal></link>\n> + and now, e.g.:\n> +<programlisting>\n> +node2=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\n> +CREATE TABLE\n> +</programlisting>\n> + </para>\n> + </step>\n>\n> 8a.\n> This link to the earlier step renders badly like:\n> On node2, create any tables that were created in the upgraded\n> publisher node1 server between when the subscriptions where disabled\n> in node2 and now, e.g.:\n>\n> IMO this link should be like \"Step N\", not some words -- maybe it is\n> another opportunity for using xreflabel?\n\nModified\n\n> ~\n>\n> 8b.\n> Also has typos \"when the subscriptions where disabled\" (??)\n\nThis is not required after using xref, removed it.\n\n> //////////\n> Steps to upgrade a cascaded logical replication clusters\n> //////////\n>\n> 9.\n> + <procedure>\n> + <step id=\"steps-cascaded-logical-replication-cluster\">\n> + <title>Steps to upgrade a cascaded logical replication clusters</title>\n>\n> The title has a strange mix of singular \"a\" and plural \"clusters\"\n\nChanged it to keep it consistent\n\n> ~~~\n>\n> 10.\n> + <para>\n> + Let's say we have a cascaded logical replication setup\n> + <literal>node1</literal>-><literal>node2</literal>-><literal>node3</literal>.\n> + Here <literal>node2</literal> is subscribing the changes from\n> + <literal>node1</literal> and <literal>node3</literal> is subscribing\n> + the changes from <literal>node2</literal>. The <literal>node2</literal>\n> + has two subscriptions sub1_node1_node2 and sub2_node1_node2 which is\n> + subscribing the changes from <literal>node1</literal>. 
The\n> + <literal>node3</literal> has two subscriptions sub1_node2_node3 and\n> + sub2_node2_node3 which is subscribing the changes from\n> + <literal>node2</literal>.\n> + </para>\n>\n> 10a.\n> Those subscription names should also be rendered as literals.\n\nModified\n\n> ~\n>\n> 10b.\n> /which is/which are/ (occurs 2x)\n\nModified\n\n> ~~~\n>\n> 11.\n> +\n> + <step>\n> + <para>\n> + Initialize data1_upgraded instance by using the required\n> newer version.\n> + </para>\n> + </step>\n>\n> data1_upgraded should be rendered as literal.\n\nModified\n\n> ~~~\n>\n> 12.\n> +\n> + <step>\n> + <para>\n> + Initialize data2_upgraded instance by using the required\n> newer version.\n> + </para>\n> + </step>\n>\n> data2_upgraded should be rendered as literal.\n\nModified\n\n> ~~~\n>\n> 13.\n> +\n> + <step>\n> + <para>\n> + On <literal>node2</literal>, create any tables that were created in\n> + the upgraded publisher <literal>node1</literal> server between\n> + <link linkend=\"cascaded-cluster-disable-sub-node1-node2\">\n> + when the subscriptions where disabled in\n> <literal>node2</literal></link>\n> + and now, e.g.:\n> +<programlisting>\n> +node2=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\n> +CREATE TABLE\n> +</programlisting>\n> + </para>\n> + </step>\n>\n> 13a.\n> This link to the earlier step renders badly like:\n> On node2, create any tables that were created in the upgraded\n> publisher node1 server between when the subscriptions where disabled\n> in node2 and now, e.g.:\n>\n> IMO this link should be like \"Step N\", not some words -- maybe it is\n> another opportunity for using xreflabel?\n\nModified\n\n> ~\n>\n> 13b\n> Also has typos \"when the subscriptions where disabled\" (??)\n\nThis is not required after using xref, removed it.\n\n> ~~~\n>\n> 14.\n> +\n> + <step>\n> + <para>\n> + Initialize data3_upgraded instance by using the required\n> newer version.\n> + </para>\n> + </step>\n>\n> data3_upgraded should be rendered as literal.\n\nModified\n\n> ~~~\n>\n> 15.\n> +\n> + <step>\n> + <para>\n> + On <literal>node3</literal>, create any tables that were created in\n> + the upgraded <literal>node2</literal> between\n> + <link linkend=\"cascaded-cluster-disable-sub-node2-node3\">when the\n> + subscriptions where disabled in <literal>node3</literal></link>\n> + and now, e.g.:\n> +<programlisting>\n> +node3=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\n> +CREATE TABLE\n> +</programlisting>\n> + </para>\n> + </step>\n>\n> 15a.\n> This link to the earlier step renders badly like:\n> On node3, create any tables that were created in the upgraded node2\n> between when the subscriptions where disabled in node3 and now, e.g.:\n\nChanged it to xref.\n\n> ~\n>\n> 15b.\n> Also has typos \"when the subscriptions where disabled\" (??)\n\nThis is not required after using xref, removed it.\n\n> //////////\n> Steps to upgrade a two-node circular logical replication cluster\n> //////////\n>\n> 16.\n> + <para>\n> + Let's say we have a circular logical replication setup\n> + <literal>node1</literal>-><literal>node2</literal> and\n> + <literal>node2</literal>-><literal>node1</literal>. Here\n> + <literal>node2</literal> is subscribing the changes from\n> + <literal>node1</literal> and <literal>node1</literal> is subscribing\n> + the changes from <literal>node2</literal>. The <literal>node1</literal>\n> + has two subscriptions sub1_node2_node1 and sub2_node2_node1 which is\n> + subscribing the changes from <literal>node2</literal>. 
The\n> + <literal>node2</literal> has two subscriptions sub1_node1_node2 and\n> + sub2_node1_node2 which is subscribing the changes from\n> + <literal>node1</literal>.\n> + </para>\n>\n> 16a\n> Those subscription names should also be rendered as literals.\n\nModified\n\n> ~\n>\n> 16b\n> /which is/which are/\n\nModified\n\n> ~~~\n>\n> 17.\n> +\n> + <step>\n> + <para>\n> + Initialize data1_upgraded instance by using the required newer\n> + version.\n> + </para>\n> + </step>\n>\n> data1_upgraded should render as literal.\n\nModified\n\n> ~~~\n>\n> 18.\n> +\n> + <step>\n> + <para>\n> + On <literal>node1</literal>, Create any tables that were created in\n> + <literal>node2</literal> between <link\n> linkend=\"circular-cluster-disable-sub-node2\">\n> + when the subscriptions where disabled in\n> <literal>node2</literal></link>\n> + and now, e.g.:\n> +<programlisting>\n> +node1=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\n> +CREATE TABLE\n> +</programlisting>\n> + </para>\n> + </step>\n>\n> 18a.\n> This link to the earlier step renders badly like:\n> On node1, Create any tables that were created in node2 between when\n> the subscriptions where disabled in node2 and now, e.g.:\n>\n> IMO this link should be like \"Step N\", not some words -- maybe it is\n> another opportunity for using xreflabel?\n\nModified to xref\n\n>\n> 18b\n> Also has typos \"when the subscriptions where disabled\" (??)\n\nThis is not required after using xref, removed it.\n\n> ~\n>\n> 18c.\n> /Create any/create any/\n\nModified\n\n> ~~~\n>\n> 19.\n> +\n> + <step>\n> + <para>\n> + Initialize data2_upgraded instance by using the required newer\n> + version.\n> + </para>\n> + </step>\n>\n> data2_upgraded should render as literal.\n\nModified\n\n> ~~~\n>\n> 20.\n> +\n> + <step>\n> + <para>\n> + On <literal>node2</literal>, Create any tables that were created in\n> + the upgraded <literal>node1</literal> between <link\n> linkend=\"circular-cluster-disable-sub-node1\">\n> + when the subscriptions where disabled in\n> <literal>node1</literal></link>\n> + and now, e.g.:\n> +<programlisting>\n> +node2=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\n> +CREATE TABLE\n> +</programlisting>\n> + </para>\n> + </step>\n>\n> 20a.\n> This link to the earlier step renders badly like:\n> On node2, Create any tables that were created in the upgraded node1\n> between when the subscriptions where disabled in node1 and now, e.g.:\n\nModified to xref\n\n> ~\n>\n> 20b\n> Also has typos \"when the subscriptions where disabled\" (??)\n\nThis is not required after using xref, removed it.\n\nThanks for the comments, the attached v3 version patch has the changes\nfor the same.\n\nRegards,\nVignesh", "msg_date": "Wed, 24 Jan 2024 10:45:39 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Mon, 15 Jan 2024 at 14:39, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Vignesh,\n>\n> Thanks for updating the patch!\n>\n> > > 7.\n> > > ```\n> > > +<programlisting>\n> > > +dba@node1:/opt/PostgreSQL/postgres/&majorversion;/bin$ pg_ctl -D\n> > /opt/PostgreSQL/pub_data stop -l logfile\n> > > +</programlisting>\n> > > ```\n> > >\n> > > Hmm. I thought you did not have to show the current directory. 
You were in the\n> > > bin dir, but it is not our requirement, right?\n> >\n> > I kept this just to show the version being used\n> >\n>\n> Hmm, but by default, the current directory is not set as PATH. So this example\n> looks strange for me.\n\nI have removed the paths shown to avoid confusion.\n\n> Below lines are my comments for v2 patch.\n>\n> 01.\n>\n> ```\n> + <step id=\"pgupgrade-step-logical-replication\">\n> + <title>Upgrade logical replication clusters</title>\n> +\n> + <para>\n> + Refer <link linkend=\"logical-replication-upgrade\">logical replication upgrade section</link>\n> + for details on upgrading logical replication clusters.\n> + </para>\n> +\n> + </step>\n> ```\n>\n> I think we do not have to write it as one of steps. I think we can move to\n> \"Usage\" part and modify like:\n>\n> This page only focus on nodes which are not logical replication participant. See\n> <link linkend=\"logical-replication-upgrade\"> for upgrading such nodes.\n\nI have removed it from usage and moved it to the description section.\n\n> 02.\n>\n> ```\n> with the primary.) Only logical slots on the primary are copied to the\n> new standby, but other slots on the old standby are not copied so must\n> be recreated manually.\n> ```\n>\n> A description for logical slots were remained. If you want to keep, we must\n> say that it would be done for PG17+.\n\nMentioned as 17 or later.\n\n> 03.\n>\n> I think the numbering seems bit confusing. sectX sgml tags should be used in\n> this case. How about formatting like below?\n>\n> Upgrade (sect1)\n> --- Prerequisites (sect2)\n> --- For upgrading a publisher node (sect3)\n> --- For upgrading a subscriber node (sect3)\n> --- Examples (sect2)\n> --- Two-node logical replication cluster (sect3)\n> --- Cascaded logical replication cluster (sect3)\n> --- Two-node circular logical replication cluster (sect3)\n\nI felt this is better and changed it like:\n 30.11. Upgrade\n --- 30.11.1. Prepare for publisher upgrades\n --- 30.11.2. Prepare for subscriber upgrades\n --- 30.11.3. Upgrading logical replication clusters\n --- 30.11.3.1. Steps to upgrade a two-node logical replication cluster\n --- 30.11.3.2. Steps to upgrade a cascaded logical replication cluster\n --- 30.11.3.3. Steps to upgrade a two-node circular logical\nreplication cluster\n\n> 04.\n>\n> Missing introduction in the head of this section. E.g.,\n>\n> Both publishers and subscribers can be upgraded, but there are some notes.\n> Before reading this section, you should read <xref linkend=\"pgupgrade\"/> page.\n\nAdded it with slight changes\n\n> 05.\n>\n> ```\n> + <step id=\"prepare-publisher-upgrades\">\n> + <title>Prepare for publisher upgrades</title>\n> ...\n> ```\n>\n> Should we describe in this page that publications can be upgraded in any\n> versions?\n\nI felt that need not be mentioned, as these are being upgraded from\nearlier versions too\n\n> 06.\n>\n> ```\n> + <step id=\"prepare-subscriber-upgrades\">\n> + <title>Prepare for subscriber upgrades</title\n> ```\n>\n> Same as 5, should we describe in this page that subscriptions can be upgraded\n> in any versions?\n\nI felt that need not be mentioned, as these are being upgraded from\nearlier versions too\n\n> 07.\n>\n> Basic considerations should be described before describing concrete steps.\n\nThe steps clearly mention the order in which it should be upgraded,\nI'm not sure if we should repeat it again.\n\n> E.g., publishers must be upgraded first. 
Also: While upgrading a subscriber,\n> publisher can accept changes from users.\n\nI have added this.\n\n> 08.\n>\n> ```\n> + two subscriptions sub1_node1_node2 and sub2_node1_node2 which is\n> + subscribing the changes from <literal>node1</literal>.\n> ```\n>\n> Both \"sub1_node1_node2\" and \"sub2_node1_node2\" must be rendered.\n\nModified\n\n> 09.\n>\n> ```\n> + <step>\n> + <para>\n> + Initialize data1_upgraded instance by using the required newer\n> + version.\n> + </para>\n> ```\n>\n> Missing rendering. All similar paragraphs must be fixed.\n\nModified\n\n> 10.\n>\n> ```\n> + On <literal>node2</literal>, create any tables that were created in\n> + the upgraded publisher <literal>node1</literal> server between\n> + <link linkend=\"two-node-cluster-disable-subscriptions-node2\">\n> + when the subscriptions where disabled in <literal>node2</literal></link>\n> + and now, e.g.:\n> ```\n>\n> a.\n>\n> I think the link is not correct, it should refer Step 6. Can we add the step number?\n> All similar paragraphs must be fixed.\n\nI have kept it as step1 just in case any table is created before the\nserver is stopped in node1. So I felt it is better to refer to the\nstep of disabled subscription now.\n\n> b.\n>\n> Not sure, but s/where disabled/were disabled/ ?\n> All similar paragraphs must be fixed.\n\nThis is removed\n\n> 11.\n>\n> ```\n> + <para>\n> + Refresh the <literal>node2</literal> subscription's publications using\n> + <link linkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER SUBSCRIPTION ... REFRESH PUBLICATION</command></link>,\n> + e.g.:\n> +<programlisting>\n> +node2=# ALTER SUBSCRIPTION sub1_node1_node2 REFRESH PUBLICATION;\n> +ALTER SUBSCRIPTION\n> +node2=# ALTER SUBSCRIPTION sub2_node1_node2 REFRESH PUBLICATION;\n> +ALTER SUBSCRIPTION\n> +</programlisting>\n> + </para>\n> ```\n>\n> Not sure, but should we clarify that copy_data must be on?\n\nI have not mentioned here as copy_data by default is true in this case\n\n> 12.\n>\n> ```\n> + has two subscriptions sub1_node1_node2 and sub2_node1_node2 which is\n> + subscribing the changes from <literal>node1</literal>. The\n> + <literal>node3</literal> has two subscriptions sub1_node2_node3 and\n> + sub2_node2_node3 which is subscribing the changes from\n> ```\n>\n> Name of subscriptions must be rendered.\n\nModified\n\n> 13.\n>\n> ```\n> + <para>\n> + On <literal>node1</literal>, Create any tables that were created in\n> + <literal>node2</literal> between <link linkend=\"circular-cluster-disable-sub-node2\">\n> + when the subscriptions where disabled in <literal>node2</literal></link>\n> + and now, e.g.:\n> +<programlisting>\n> +node1=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\n> +CREATE TABLE\n> +</programlisting>\n> + </para>\n> ...\n> + <para>\n> + On <literal>node2</literal>, Create any tables that were created in\n> + the upgraded <literal>node1</literal> between <link linkend=\"circular-cluster-disable-sub-node1\">\n> + when the subscriptions where disabled in <literal>node1</literal></link>\n> + and now, e.g.:\n> +<programlisting>\n> +node2=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\n> +CREATE TABLE\n> +</programlisting>\n> + </para>\n> ```\n>\n> Same tables were created, they must have another name.\n\nFor simplicity I used the same tables in all examples. 
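(To be concrete, every node in the examples just reuses the same minimal statement that already appears in the quoted hunks, something like:

CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));

where the table name and columns are only placeholders for illustration; nothing in the upgrade steps depends on them.)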
I felt it should be ok\n\nThe v3 version patch attached at [1] has the changes for the same.\n[1] - https://www.postgresql.org/message-id/CALDaNm0ph5CFZ6ENL9EYiJhz3-xQMYx%2BUKWpFzggiLVfPKJoFw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 24 Jan 2024 10:54:13 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "Dear Vignesh,\r\n\r\nThanks for updating the patch! Basically your patch looks good.\r\nBelow lines are my comments for v3.\r\n\r\n01.\r\n\r\n```\r\n <para>\r\n The output plugins referenced by the slots on the old cluster must be\r\n installed in the new PostgreSQL executable directory.\r\n </para>\r\n```\r\n\r\nPostgreSQL must be marked as <productname>.\r\n\r\n02.\r\n\r\n```\r\n<programlisting>\r\npg_ctl -D /opt/PostgreSQL/data1 stop -l logfile\r\n</programlisting>\r\n```\r\n\r\nI checked that found that -l was no-op when `pg_ctl stop` was specified. Can we remove?\r\nThe documentation is not listed -l for the stop command.\r\nAll the similar lines should be fixed as well.\r\n\r\n03.\r\n\r\n```\r\n On <literal>node3</literal>, create any tables that were created in\r\n the upgraded <literal>node2</literal> between\r\n <xref linkend=\"cascaded-cluster-disable-sub-node2-node3\"/> and now,\r\n```\r\n\r\nIf tables are newly defined on node1 between 1 - 11, they are not defined on node3.\r\nSo they must be defined on node3 as well.\r\n\r\n04.\r\n\r\n```\r\n <step>\r\n <para id=\"cascaded-cluster-disable-sub-node2-node3\">\r\n```\r\n\r\nEven if the referred steps is correct, ID should be allocated to step, not para.\r\nThat's why the rendering is bit a strange.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Wed, 24 Jan 2024 09:45:55 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Documentation to upgrade logical replication cluster" }, { "msg_contents": "Here are some review comments for patch v3.\n\n======\ndoc/src/sgml/ref/pgupgrade.sgml\n\n1.\n+\n+ <para>\n+ This page does not cover steps to upgrade logical replication\nclusters, refer\n+ <xref linkend=\"logical-replication-upgrade\"/> for details on upgrading\n+ logical replication clusters.\n+ </para>\n+\n\nI felt that maybe this note was misplaced. Won't it be better to put\nthis down in the \"Usage\" section of this page?\n\nBEFORE\nThese are the steps to perform an upgrade with pg_upgrade:\n\nSUGGESTION (or something like this)\nBelow are the steps to perform an upgrade with pg_upgrade.\n\nNote, the steps to upgrade logical replication clusters are not\ncovered here; refer to <xref linkend=\"logical-replication-upgrade\"/>\nfor details.\n\n~~~\n\n2.\n Configure the servers for log shipping. (You do not need to run\n <function>pg_backup_start()</function> and\n<function>pg_backup_stop()</function>\n or take a file system backup as the standbys are still synchronized\n- with the primary.) Only logical slots on the primary are copied to the\n- new standby, but other slots on the old standby are not copied so must\n- be recreated manually.\n+ with the primary.) In version 17.0 or later, only logical slots on the\n+ primary are copied to the new standby, but other slots on the\nold standby\n+ are not copied so must be recreated manually.\n </para>\n\nThis para was still unclear to me. What is version 17.0 referring to\n-- the old_cluster version? 
Do you mean something like:\nIf the old cluster is < v17 then logical slots are not copied. If the\nold_cluster is >= v17 then...\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n3.\n+ <para>\n+ While upgrading a subscriber, write operations can be performed in the\n+ publisher, these changes will be replicated to the subscriber once the\n+ subscriber upgradation is completed.\n+ </para>\n\n3a.\n/publisher, these changes/publisher. These changes/\n\n~\n\n3b.\n\"upgradation\" ??. See [1]\n\nmaybe just /upgradation/upgrade/\n\n~~~\n\n4. GENERAL - prompts/paths\n\nI noticed in v3 you removed all the cmd prompts like:\ndba@node1:/opt/PostgreSQL/postgres/17/bin$\ndba@node2:/opt/PostgreSQL/postgres/18/bin$\netc.\n\nI thought those were helpful to disambiguate which server/version was\nbeing operated on. I wonder if there is some way to keep information\nstill but not make it look like a real current directory that\nKuroda-san did not like:\n\ne.g. Maybe something like the below is possible?\n\n(dba@node1: v17) pg_upgrade...\n(dba@node2: v18) pg_upgrade...\n\n======\n[1] https://english.stackexchange.com/questions/192187/upgradation-not-universally-accepted#:~:text=Not%20all%20dictionaries%20(or%20native,by%20most%20non%2DIE%20speakers.\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 25 Jan 2024 11:15:09 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Wed, 24 Jan 2024 at 15:16, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Vignesh,\n>\n> Thanks for updating the patch! Basically your patch looks good.\n> Below lines are my comments for v3.\n>\n> 01.\n>\n> ```\n> <para>\n> The output plugins referenced by the slots on the old cluster must be\n> installed in the new PostgreSQL executable directory.\n> </para>\n> ```\n>\n> PostgreSQL must be marked as <productname>.\n\nModified\n\n> 02.\n>\n> ```\n> <programlisting>\n> pg_ctl -D /opt/PostgreSQL/data1 stop -l logfile\n> </programlisting>\n> ```\n>\n> I checked that found that -l was no-op when `pg_ctl stop` was specified. Can we remove?\n> The documentation is not listed -l for the stop command.\n> All the similar lines should be fixed as well.\n\nModified\n\n> 03.\n>\n> ```\n> On <literal>node3</literal>, create any tables that were created in\n> the upgraded <literal>node2</literal> between\n> <xref linkend=\"cascaded-cluster-disable-sub-node2-node3\"/> and now,\n> ```\n>\n> If tables are newly defined on node1 between 1 - 11, they are not defined on node3.\n> So they must be defined on node3 as well.\n\nThe new node1 tables will be created in node2 in step-11. 
Since we\nhave mentioned that, create the tables that were created between step\n6 and now, all of node1 and node2 tables will get created\n\n> 04.\n>\n> ```\n> <step>\n> <para id=\"cascaded-cluster-disable-sub-node2-node3\">\n> ```\n>\n> Even if the referred steps is correct, ID should be allocated to step, not para.\n> That's why the rendering is bit a strange.\n\nModified\n\nThe attached v4 version patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Thu, 25 Jan 2024 15:09:52 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Thu, 25 Jan 2024 at 05:45, Peter Smith <[email protected]> wrote:\n>\n> Here are some review comments for patch v3.\n>\n> ======\n> doc/src/sgml/ref/pgupgrade.sgml\n>\n> 1.\n> +\n> + <para>\n> + This page does not cover steps to upgrade logical replication\n> clusters, refer\n> + <xref linkend=\"logical-replication-upgrade\"/> for details on upgrading\n> + logical replication clusters.\n> + </para>\n> +\n>\n> I felt that maybe this note was misplaced. Won't it be better to put\n> this down in the \"Usage\" section of this page?\n>\n> BEFORE\n> These are the steps to perform an upgrade with pg_upgrade:\n>\n> SUGGESTION (or something like this)\n> Below are the steps to perform an upgrade with pg_upgrade.\n>\n> Note, the steps to upgrade logical replication clusters are not\n> covered here; refer to <xref linkend=\"logical-replication-upgrade\"/>\n> for details.\n\nModified\n\n> ~~~\n>\n> 2.\n> Configure the servers for log shipping. (You do not need to run\n> <function>pg_backup_start()</function> and\n> <function>pg_backup_stop()</function>\n> or take a file system backup as the standbys are still synchronized\n> - with the primary.) Only logical slots on the primary are copied to the\n> - new standby, but other slots on the old standby are not copied so must\n> - be recreated manually.\n> + with the primary.) In version 17.0 or later, only logical slots on the\n> + primary are copied to the new standby, but other slots on the\n> old standby\n> + are not copied so must be recreated manually.\n> </para>\n>\n> This para was still unclear to me. What is version 17.0 referring to\n> -- the old_cluster version? Do you mean something like:\n> If the old cluster is < v17 then logical slots are not copied. If the\n> old_cluster is >= v17 then...\n\nYes, I have rephrased it now.\n\n> ======\n> doc/src/sgml/logical-replication.sgml\n>\n> 3.\n> + <para>\n> + While upgrading a subscriber, write operations can be performed in the\n> + publisher, these changes will be replicated to the subscriber once the\n> + subscriber upgradation is completed.\n> + </para>\n>\n> 3a.\n> /publisher, these changes/publisher. These changes/\n\nModified\n\n> ~\n>\n> 3b.\n> \"upgradation\" ??. See [1]\n>\n> maybe just /upgradation/upgrade/\n\nModified\n\n> ~~~\n>\n> 4. GENERAL - prompts/paths\n>\n> I noticed in v3 you removed all the cmd prompts like:\n> dba@node1:/opt/PostgreSQL/postgres/17/bin$\n> dba@node2:/opt/PostgreSQL/postgres/18/bin$\n> etc.\n>\n> I thought those were helpful to disambiguate which server/version was\n> being operated on. I wonder if there is some way to keep information\n> still but not make it look like a real current directory that\n> Kuroda-san did not like:\n>\n> e.g. 
Maybe something like the below is possible?\n>\n> (dba@node1: v17) pg_upgrade...\n> (dba@node2: v18) pg_upgrade...\n\nI did not want to add this as our current documentation is consistent\nwith how it is documented in the pg_upgrade page at [1].\n\nThe v4 version patch attached at [2] has the changes for the same.\n\n[1] - https://www.postgresql.org/docs/devel/pgupgrade.html\n[2] - https://www.postgresql.org/message-id/CALDaNm1wCHmBwpLM%3Dd9oBoZqKXOe-TwC-LCcHC9gFy0bazZU6Q%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 25 Jan 2024 15:15:50 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "Hi Vignesh,\n\nHere are some review comments for patch v4.\n\nThese are cosmetic only; otherwise v4 LGTM.\n\n======\ndoc/src/sgml/ref/pgupgrade.sgml\n\n1.\n Configure the servers for log shipping. (You do not need to run\n <function>pg_backup_start()</function> and\n<function>pg_backup_stop()</function>\n or take a file system backup as the standbys are still synchronized\n- with the primary.) Only logical slots on the primary are copied to the\n- new standby, but other slots on the old standby are not copied so must\n- be recreated manually.\n+ with the primary.) If the old cluster is prior to 17.0, then no slots\n+ on the primary are copied to the new standby, so all the slots must be\n+ recreated manually. If the old cluster is 17.0 or later, then only\n+ logical slots on the primary are copied to the new standby, but other\n+ slots on the old standby are not copied so must be recreated manually.\n </para>\n\nPerhaps the part from \"If the old cluster is prior...\" should be in a\nnew paragraph.\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n2.\n+ <para>\n+ Setup the <link linkend=\"logical-replication-config-subscriber\">\n+ subscriber configurations</link> in the new subscriber.\n+ <application>pg_upgrade</application> attempts to migrate subscription\n+ dependencies which includes the subscription's table information present in\n+ <link linkend=\"catalog-pg-subscription-rel\">pg_subscription_rel</link>\n+ system catalog and also the subscription's replication origin. This allows\n+ logical replication on the new subscriber to continue from where the\n+ old subscriber was up to. Migration of subscription dependencies is only\n+ supported when the old cluster is version 17.0 or later. Subscription\n+ dependencies on clusters before version 17.0 will silently be ignored.\n+ </para>\n\nPerhaps the part from \"pg_upgrade attempts...\" should be in a new paragraph.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 29 Jan 2024 12:04:18 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "Dear Vignesh,\r\n\r\nThanks for updating the patch! 
For now, v4 patch LGTM.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n", "msg_date": "Mon, 29 Jan 2024 02:30:38 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Mon, 29 Jan 2024 at 06:34, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh,\n>\n> Here are some review comments for patch v4.\n>\n> These are cosmetic only; otherwise v4 LGTM.\n>\n> ======\n> doc/src/sgml/ref/pgupgrade.sgml\n>\n> 1.\n> Configure the servers for log shipping. (You do not need to run\n> <function>pg_backup_start()</function> and\n> <function>pg_backup_stop()</function>\n> or take a file system backup as the standbys are still synchronized\n> - with the primary.) Only logical slots on the primary are copied to the\n> - new standby, but other slots on the old standby are not copied so must\n> - be recreated manually.\n> + with the primary.) If the old cluster is prior to 17.0, then no slots\n> + on the primary are copied to the new standby, so all the slots must be\n> + recreated manually. If the old cluster is 17.0 or later, then only\n> + logical slots on the primary are copied to the new standby, but other\n> + slots on the old standby are not copied so must be recreated manually.\n> </para>\n>\n> Perhaps the part from \"If the old cluster is prior...\" should be in a\n> new paragraph.\n\nModified\n\n> ======\n> doc/src/sgml/logical-replication.sgml\n>\n> 2.\n> + <para>\n> + Setup the <link linkend=\"logical-replication-config-subscriber\">\n> + subscriber configurations</link> in the new subscriber.\n> + <application>pg_upgrade</application> attempts to migrate subscription\n> + dependencies which includes the subscription's table information present in\n> + <link linkend=\"catalog-pg-subscription-rel\">pg_subscription_rel</link>\n> + system catalog and also the subscription's replication origin. This allows\n> + logical replication on the new subscriber to continue from where the\n> + old subscriber was up to. Migration of subscription dependencies is only\n> + supported when the old cluster is version 17.0 or later. Subscription\n> + dependencies on clusters before version 17.0 will silently be ignored.\n> + </para>\n>\n> Perhaps the part from \"pg_upgrade attempts...\" should be in a new paragraph.\n\nModified\n\nThanks for the comments, the attached v5 version patch has the changes\nfor the same.\n\nRegards,\nVignesh", "msg_date": "Mon, 29 Jan 2024 10:10:16 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Mon, Jan 29, 2024 at 10:10 AM vignesh C <[email protected]> wrote:\n>\n> Thanks for the comments, the attached v5 version patch has the changes\n> for the same.\n\nThanks for working on this. Here are some comments on the v5 patch:\n\n1.\n+ <para>\n+ Migration of logical replication clusters is possible only when all the\n+ members of the old logical replication clusters are version 17.0 or later.\n\nPerhaps define what logical replication cluster is either in glossary\nor within a parenthesis next to the first use in the docs? This will\nhelp developers understand it better and will not confuse them with\npostgres cluster. 
I see it being used for the first time in code\ncomments 9a17be1e2, but this patch uses it for the first time in the\ndocs.\n\n2.\n+ Before reading this section, refer <xref linkend=\"pgupgrade\"/> page for\n+ more details about pg_upgrade.\n+ </para>\n\nThis looks extraneous, we can just link to pg_upgrade on the first use\nof pg_upgrade, change the following\n\n+ <para>\n+ <application>pg_upgrade</application> attempts to migrate logical\n+ slots. This helps avoid the need for manually defining the same\n\nto\n\n+ <para>\n+ <xref linkend=\"pgupgrade\"/> attempts to migrate logical\n+ slots. This helps avoid the need for manually defining the same\n\n3.\n+ transactional, the user is advised to take backups. Backups can be taken\n+ as described in <xref linkend=\"backup-base-backup\"/>.\n+ </para>\n\nHow about simplifying the above to \"the user is advised to take\nbackups as described in <xref linkend=\"backup-base-backup\"/>\" instead\nof two statements?\n\n4.\n subscription is temporarily disabled, by executing\n+ <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION\n... DISABLE</command></link>.\n+ Re-enable the subscription after the upgrade.\n+ </para>\n\nIs it to avoid repeated failures of logical replication apply workers\non the subscribers? Isn't it good to say why subscription needs to be\ndisabled?\n\n5.\n+ <para>\n+ There are some prerequisites for <application>pg_upgrade</application> to\n+ be able to upgrade the logical slots. If these are not met an error\n+ will be reported.\n+ </para>\n\nI think it's better to be \"Following are prerequisites for\n<application>pg_upgrade</application> to..\"?\n\n6.\n+ <listitem>\n+ <para>\n+ The old cluster has replicated all the transactions and logical decoding\n+ messages to subscribers.\n+ </para>\n\nI think it's better to be \"The old cluster must have replicated all\nthe transactions and ....\"?\n\n7.\n+ <para>\n+ The new cluster must not have permanent logical slots, i.e.,\n+ there must be no slots where\n+ <link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>temporary</structfield>\n+ is <literal>false</literal>.\n\nI think we better specify a full SQL query as opposed to just\nspecifying one output column and the view name.\n\n<para>\n The new cluster must not have permanent logical slots, i.e., a query like:\n<programlisting>\nSELECT count(*) FROM pg_replication_slots WHERE slot_type = 'logical'\nAND temporary IS false;\n</programlisting>\n must return 0.\n </para>\n\n8.\n+ If the old cluster is prior to 17.0, then no slots on the primary are\n+ copied to the new standby, so all the slots must be recreated manually.\n+ If the old cluster is 17.0 or later, then only logical slots on the\n\nI think it's better to say \"version 17.0\" instead of just \"17.0\".\n\n9.\n+ primary are copied to the new standby, but other slots on the\nold standby\n\n\"but other slots on the old standby\" - is it slots on the old standby\nor old cluster?\n\nI think it's the other way around: the old cluster needs to be\nreplaced with the old standby in the newly added paragraph.\n\n10.\nChange\n+ primary are copied to the new standby, but other slots on the\nold standby\n+ are not copied so must be recreated manually.\n\nto\n\n+ primary are copied to the new standby, but other slots on the\nold standby\n+ are not copied, so must be recreated manually.\n\n11.\n+ <note>\n+ <para>\n+ The logical replication restrictions apply to logical replication cluster\n+ upgrades also. 
See <xref linkend=\"logical-replication-restrictions\"/> for\n+ the details of logical replication restrictions.\n+ </para>\n\nHow about just say \"See <xref\nlinkend=\"logical-replication-restrictions\"/> for details.\" instead of\nusing logical replication restrictions more than once in the same\npara?\n\n12.\n+ <para>\n+ The prerequisites of publisher upgrade apply to logical replication\n+ cluster upgrades also. See <xref linkend=\"prepare-publisher-upgrades\"/>\n+ for the details of publisher upgrade prerequisites.\n\nHow about just say \"See <xref linkend=\"prepare-publisher-upgrades\"/>\nfor details.\" instead of using publisher upgrade prerequisites more\nthan once in the same para?\n\n13.\n+ <para>\n+ The prerequisites of subscriber upgrade apply to logical replication\n+ cluster upgrades also. See <xref linkend=\"prepare-subscriber-upgrades\"/>\n+ for the details of subscriber upgrade prerequisites.\n+ </para>\n\nHow about just say \"See <xref linkend=\"prepare-subscriber-upgrades\"/>\nfor details.\" instead of using subscriber upgrade prerequisites more\nthan once in the same para?\n\n14.\n+ Upgrading logical replication cluster requires multiple steps to be\n+ performed on various nodes. Because not all operations are\n\nPer comment #1, defining logical replication clusters and nodes helps\nclearly distinguish. For instance, one can get confused with the\nvarious terms in hand - postgres cluster, logical replication cluster,\nnode etc.\n\n15.\n+ two subscriptions <literal>sub1_node1_node2</literal> and\n+ <literal>sub2_node1_node2</literal> which are subscribing the changes\n+ from <literal>node1</literal>.\n\nWhy confluse with subsription names by including node1 and node2 in\nit? We are not creating subscriptions from node1 to node2, are we? I'd\nrecommend using simplified names like mysub1, mysub2 like elsewhere in\nthe documentation.\n\n16.\n+ Let's say publisher is in <literal>node1</literal> and subscriber is\n+ in <literal>node2</literal>.\n\nHow about saying \"publisher is in a database cluster named\n<literal>node1</literal> and subscriber is in database cluster named\n<literal>node2</literal>\"? I think using this terminology helps.\n\n17.\n+ refer to <xref linkend=\"logical-replication-upgrade\"/> for details.\n+ </para>\n+ </note>\n\nIMHO, it could have been better if steps to upgrade the logical\nreplication cluster is specified in pgupgrade.sgml as opposed to\nlogical-replication.sgml. Because, upgrading logical replication\ncluster is a sub-section for pg_upgrade.\n\n18.\n+ <para>\n+ The steps to upgrade the following logical replication clusters are\n+ detailed below:\n+ <itemizedlist>\n+ <listitem>\n+ <para>\n+ Follow the steps specified in\n\nI think we can talk about what advantages upgrading logical\nreplication clusters brings in. We can say that the pg_upgrade makes\nit possible 1) to re-use the logical replication slots post-upgrade,\n2) to re-use the subscribers i.e. now it's not required to re-create\nall the logical subscribers after the upgrade, so no initial table\nsync, no creation of new clusters for subscribers etc.\n\n19. I think we can talk about the possible gotchas i.e. the things\nthat can go wrong while performing any of the prescribed steps. 
What\nhappens if the slot the pg_upgrade interrupts after it upgraded a few\nof the replication slots or if some of the prerequisites are not met\netc.?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 29 Jan 2024 16:00:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Mon, 29 Jan 2024 at 16:01, Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Mon, Jan 29, 2024 at 10:10 AM vignesh C <[email protected]> wrote:\n> >\n> > Thanks for the comments, the attached v5 version patch has the changes\n> > for the same.\n>\n> Thanks for working on this. Here are some comments on the v5 patch:\n>\n> 1.\n> + <para>\n> + Migration of logical replication clusters is possible only when all the\n> + members of the old logical replication clusters are version 17.0 or later.\n>\n> Perhaps define what logical replication cluster is either in glossary\n> or within a parenthesis next to the first use in the docs? This will\n> help developers understand it better and will not confuse them with\n> postgres cluster. I see it being used for the first time in code\n> comments 9a17be1e2, but this patch uses it for the first time in the\n> docs.\n\nI have added it in glossary.\n\n> 2.\n> + Before reading this section, refer <xref linkend=\"pgupgrade\"/> page for\n> + more details about pg_upgrade.\n> + </para>\n>\n> This looks extraneous, we can just link to pg_upgrade on the first use\n> of pg_upgrade, change the following\n>\n> + <para>\n> + <application>pg_upgrade</application> attempts to migrate logical\n> + slots. This helps avoid the need for manually defining the same\n>\n> to\n>\n> + <para>\n> + <xref linkend=\"pgupgrade\"/> attempts to migrate logical\n> + slots. This helps avoid the need for manually defining the same\n\nModified\n\n> 3.\n> + transactional, the user is advised to take backups. Backups can be taken\n> + as described in <xref linkend=\"backup-base-backup\"/>.\n> + </para>\n>\n> How about simplifying the above to \"the user is advised to take\n> backups as described in <xref linkend=\"backup-base-backup\"/>\" instead\n> of two statements?\n\nModified\n\n> 4.\n> subscription is temporarily disabled, by executing\n> + <link linkend=\"sql-altersubscription\"><command>ALTER SUBSCRIPTION\n> ... DISABLE</command></link>.\n> + Re-enable the subscription after the upgrade.\n> + </para>\n>\n> Is it to avoid repeated failures of logical replication apply workers\n> on the subscribers? Isn't it good to say why subscription needs to be\n> disabled?\n\nAdded it\n\n> 5.\n> + <para>\n> + There are some prerequisites for <application>pg_upgrade</application> to\n> + be able to upgrade the logical slots. 
If these are not met an error\n> + will be reported.\n> + </para>\n>\n> I think it's better to be \"Following are prerequisites for\n> <application>pg_upgrade</application> to..\"?\n\nModified\n\n> 6.\n> + <listitem>\n> + <para>\n> + The old cluster has replicated all the transactions and logical decoding\n> + messages to subscribers.\n> + </para>\n>\n> I think it's better to be \"The old cluster must have replicated all\n> the transactions and ....\"?\n\nModified\n\n> 7.\n> + <para>\n> + The new cluster must not have permanent logical slots, i.e.,\n> + there must be no slots where\n> + <link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>temporary</structfield>\n> + is <literal>false</literal>.\n>\n> I think we better specify a full SQL query as opposed to just\n> specifying one output column and the view name.\n>\n> <para>\n> The new cluster must not have permanent logical slots, i.e., a query like:\n> <programlisting>\n> SELECT count(*) FROM pg_replication_slots WHERE slot_type = 'logical'\n> AND temporary IS false;\n> </programlisting>\n> must return 0.\n> </para>\n\nModified\n\n> 8.\n> + If the old cluster is prior to 17.0, then no slots on the primary are\n> + copied to the new standby, so all the slots must be recreated manually.\n> + If the old cluster is 17.0 or later, then only logical slots on the\n>\n> I think it's better to say \"version 17.0\" instead of just \"17.0\".\n\nModified\n\n> 9.\n> + primary are copied to the new standby, but other slots on the\n> old standby\n>\n> \"but other slots on the old standby\" - is it slots on the old standby\n> or old cluster?\n>\n> I think it's the other way around: the old cluster needs to be\n> replaced with the old standby in the newly added paragraph.\n\nModified it to old primary as we upgrade primary and do a rsync\n\n> 10.\n> Change\n> + primary are copied to the new standby, but other slots on the\n> old standby\n> + are not copied so must be recreated manually.\n>\n> to\n>\n> + primary are copied to the new standby, but other slots on the\n> old standby\n> + are not copied, so must be recreated manually.\n\nModified\n\n> 11.\n> + <note>\n> + <para>\n> + The logical replication restrictions apply to logical replication cluster\n> + upgrades also. See <xref linkend=\"logical-replication-restrictions\"/> for\n> + the details of logical replication restrictions.\n> + </para>\n>\n> How about just say \"See <xref\n> linkend=\"logical-replication-restrictions\"/> for details.\" instead of\n> using logical replication restrictions more than once in the same\n> para?\n\nModified\n\n> 12.\n> + <para>\n> + The prerequisites of publisher upgrade apply to logical replication\n> + cluster upgrades also. See <xref linkend=\"prepare-publisher-upgrades\"/>\n> + for the details of publisher upgrade prerequisites.\n>\n> How about just say \"See <xref linkend=\"prepare-publisher-upgrades\"/>\n> for details.\" instead of using publisher upgrade prerequisites more\n> than once in the same para?\n\nModified\n\n> 13.\n> + <para>\n> + The prerequisites of subscriber upgrade apply to logical replication\n> + cluster upgrades also. 
See <xref linkend=\"prepare-subscriber-upgrades\"/>\n> + for the details of subscriber upgrade prerequisites.\n> + </para>\n>\n> How about just say \"See <xref linkend=\"prepare-subscriber-upgrades\"/>\n> for details.\" instead of using subscriber upgrade prerequisites more\n> than once in the same para?\n\nModified\n\n> 14.\n> + Upgrading logical replication cluster requires multiple steps to be\n> + performed on various nodes. Because not all operations are\n>\n> Per comment #1, defining logical replication clusters and nodes helps\n> clearly distinguish. For instance, one can get confused with the\n> various terms in hand - postgres cluster, logical replication cluster,\n> node etc.\n\nI have added \"logical replication clusters\". I felt no need to add\nnode as it is not a new terminology. It is already being used in many\nplaces like in [1], [2] & [3]\n\n> 15.\n> + two subscriptions <literal>sub1_node1_node2</literal> and\n> + <literal>sub2_node1_node2</literal> which are subscribing the changes\n> + from <literal>node1</literal>.\n>\n> Why confluse with subsription names by including node1 and node2 in\n> it? We are not creating subscriptions from node1 to node2, are we? I'd\n> recommend using simplified names like mysub1, mysub2 like elsewhere in\n> the documentation.\n\nI have used the name sub1_node1_node to indicate it is subscribing\nchanges from node1 to node2. I felt this is self explainatory names.\n\n> 16.\n> + Let's say publisher is in <literal>node1</literal> and subscriber is\n> + in <literal>node2</literal>.\n>\n> How about saying \"publisher is in a database cluster named\n> <literal>node1</literal> and subscriber is in database cluster named\n> <literal>node2</literal>\"? I think using this terminology helps.\n\nI felt existing is ok as similar is used in [2] & [3]\n\n> 17.\n> + refer to <xref linkend=\"logical-replication-upgrade\"/> for details.\n> + </para>\n> + </note>\n>\n> IMHO, it could have been better if steps to upgrade the logical\n> replication cluster is specified in pgupgrade.sgml as opposed to\n> logical-replication.sgml. Because, upgrading logical replication\n> cluster is a sub-section for pg_upgrade.\n\nAs the content for logical replication is more, I felt it is better to\nkeep it here and also we have given a link to this in the pg_upgrade\npage. I did not want the upgrade page to become bulky because of the\nlogical replication upgrade section.\n\n> 18.\n> + <para>\n> + The steps to upgrade the following logical replication clusters are\n> + detailed below:\n> + <itemizedlist>\n> + <listitem>\n> + <para>\n> + Follow the steps specified in\n>\n> I think we can talk about what advantages upgrading logical\n> replication clusters brings in. We can say that the pg_upgrade makes\n> it possible 1) to re-use the logical replication slots post-upgrade,\n> 2) to re-use the subscribers i.e. now it's not required to re-create\n> all the logical subscribers after the upgrade, so no initial table\n> sync, no creation of new clusters for subscribers etc.\n\nI felt this is self explanatory, no need to mention about the\ncomplexity involved in the manual steps involved. As the same is not\nmentioned in case of streaming replication too at [4].\n\n> 19. I think we can talk about the possible gotchas i.e. the things\n> that can go wrong while performing any of the prescribed steps. 
What\n> happens if the slot the pg_upgrade interrupts after it upgraded a few\n> of the replication slots or if some of the prerequisites are not met\n> etc.?\n\nThere is the note below when we run pg_upgrade:\n\"If pg_upgrade fails after this point, you must re-initdb the new\ncluster before continuing.\"\nI felt this is kind of self explanatory. Also the pre-requisite\nmentions clearly about the configurations that must be set before\nupgrade is run. So I felt the existing information was enough.\n\nThanks for the comment, the attached v6 version patch has the changes\nfor the same.\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication.html\n[2] - https://www.postgresql.org/docs/devel/logical-replication-publication.html\n[3] - https://www.postgresql.org/docs/devel/logical-replication-subscription.html\n[4] - https://www.postgresql.org/docs/devel/pgupgrade.html\n\nRegards,\nVignesh", "msg_date": "Tue, 30 Jan 2024 15:44:50 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "Dear Vignesh,\r\n\r\nThanks for updating the patch! Here are my comments for v6.\r\n\r\n01.\r\n```\r\n+ <glossterm>Logical replication cluster</glossterm>\r\n+ <glossdef>\r\n+ <para>\r\n+ A set of publisher and subscriber instance with publisher instance\r\n+ replicating changes to the subscriber instance.\r\n+ </para>\r\n+ </glossdef>\r\n```\r\n\r\nShould we say 1:N relationship is allowed?\r\n\r\n02.\r\n```\r\n@@ -70,6 +70,7 @@ PostgreSQL documentation\r\n pg_upgrade supports upgrades from 9.2.X and later to the current\r\n major release of <productname>PostgreSQL</productname>, including snapshot and beta releases.\r\n </para>\r\n+\r\n </refsect1>\r\n```\r\n\r\nUnnecessary blank.\r\n\r\n03.\r\n```\r\n <para>\r\n- These are the steps to perform an upgrade\r\n- with <application>pg_upgrade</application>:\r\n+ Below are the steps to perform an upgrade\r\n+ with <application>pg_upgrade</application>.\r\n </para>\r\n```\r\n\r\nI'm not sure it should be included in this patch.\r\n\r\n04.\r\n```\r\n+ If the old primary is prior to version 17.0, then no slots on the primary\r\n+ are copied to the new standby, so all the slots on the old standby must\r\n+ be recreated manually.\r\n```\r\n\r\nI think that \"all the slots on the old standby\" must be created manually in any\r\ncases. Therefore, the preposition \", so\" seems not correct.\r\n\r\n05.\r\n```\r\nIf the old primary is version 17.0 or later, then\r\n+ only logical slots on the primary are copied to the new standby, but\r\n+ other slots on the old standby are not copied, so must be recreated\r\n+ manually.\r\n```\r\n\r\nHow about replacing this paragraph to below?\r\n\r\n```\r\nAll the slots on the old standby must be recreated manually. If the old primary\r\nis version 17.0 or later, then only logical slots on the primary are copied to the\r\nnew standby.\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n", "msg_date": "Wed, 31 Jan 2024 06:12:17 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Wed, 31 Jan 2024 at 11:42, Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Vignesh,\n>\n> Thanks for updating the patch! 
Here are my comments for v6.\n>\n> 01.\n> ```\n> + <glossterm>Logical replication cluster</glossterm>\n> + <glossdef>\n> + <para>\n> + A set of publisher and subscriber instance with publisher instance\n> + replicating changes to the subscriber instance.\n> + </para>\n> + </glossdef>\n> ```\n>\n> Should we say 1:N relationship is allowed?\n\nI felt this need not be mentioned here, just wanted to give an\nindication wherever this terminology is used, it means a set of\npublisher and subscriber instances. Detail information should be added\nin the logical replication related pages\n\n> 02.\n> ```\n> @@ -70,6 +70,7 @@ PostgreSQL documentation\n> pg_upgrade supports upgrades from 9.2.X and later to the current\n> major release of <productname>PostgreSQL</productname>, including snapshot and beta releases.\n> </para>\n> +\n> </refsect1>\n> ```\n>\n> Unnecessary blank.\n\nRemoved it.\n\n> 03.\n> ```\n> <para>\n> - These are the steps to perform an upgrade\n> - with <application>pg_upgrade</application>:\n> + Below are the steps to perform an upgrade\n> + with <application>pg_upgrade</application>.\n> </para>\n> ```\n>\n> I'm not sure it should be included in this patch.\n\nThis is not required in this patch, removed it.\n\n> 04.\n> ```\n> + If the old primary is prior to version 17.0, then no slots on the primary\n> + are copied to the new standby, so all the slots on the old standby must\n> + be recreated manually.\n> ```\n>\n> I think that \"all the slots on the old standby\" must be created manually in any\n> cases. Therefore, the preposition \", so\" seems not correct.\n\nI felt that this change is not related to this patch. I'm removing\nthese changes from the patch. Let's handle rephrasing of the base code\nchange in a separate thread.\n\n> 05.\n> ```\n> If the old primary is version 17.0 or later, then\n> + only logical slots on the primary are copied to the new standby, but\n> + other slots on the old standby are not copied, so must be recreated\n> + manually.\n> ```\n>\n> How about replacing this paragraph to below?\n>\n> ```\n> All the slots on the old standby must be recreated manually. If the old primary\n> is version 17.0 or later, then only logical slots on the primary are copied to the\n> new standby.\n> ```\n\nI felt that this change is not related to this patch. I'm removing\nthese changes from the patch. Let's handle rephrasing of the base code\nchange in a separate thread.\n\nThanks for the comments, the attached v7 version patch has the changes\nfor the same.\n\nRegards,\nVignesh", "msg_date": "Thu, 1 Feb 2024 14:50:10 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "Here are some review comments for patch v7-0001.\n\n======\ndoc/src/sgml/glossary.sgml\n\n1.\n+ <glossentry id=\"glossary-logical-replication-cluster\">\n+ <glossterm>Logical replication cluster</glossterm>\n+ <glossdef>\n+ <para>\n+ A set of publisher and subscriber instance with publisher instance\n+ replicating changes to the subscriber instance.\n+ </para>\n+ </glossdef>\n+ </glossentry>\n\n1a.\n/instance with/instances with/\n\n~~~\n\n1b.\nThe description then made me want to look up the glossary definition\nof a \"publisher instance\" and \"subscriber instance\", but then I was\nquite surprised that even \"Publisher\" and \"Subscriber\" terms are not\ndescribed in the glossary. 
Should this patch add those, or should we\nstart another thread for adding them?\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n2.\n+ <para>\n+ Migration of logical replication clusters is possible only when all the\n+ members of the old logical replication clusters are version 17.0 or later.\n+ </para>\n\nHere is where \"logical replication clusters\" is mentioned. Shouldn't\nthis first reference be linked to that new to the glossary entry --\ne.g. <glossterm linkend=\"...\">logical replication clusters</glossterm>\n\n~~~\n\n3.\n+ <para>\n+ Following are the prerequisites for <application>pg_upgrade</application>\n+ to be able to upgrade the logical slots. If these are not met an error\n+ will be reported.\n+ </para>\n\nSUGGESTION\nThe following prerequisites are required for ...\n\n~~~\n\n4.\n+ <para>\n+ All slots on the old cluster must be usable, i.e., there are no slots\n+ whose\n+ <link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>conflict_reason</structfield>\n+ is not <literal>NULL</literal>.\n+ </para>\n\nThe double-negative is too tricky \"no slots whose ... not NULL\", needs\nrewording. Maybe it is better to instead use an example as the next\nbullet point does.\n\n~~~\n\n5.\n+\n+ <para>\n+ Following are the prerequisites for\n<application>pg_upgrade</application> to\n+ be able to upgrade the subscriptions. If these are not met an error\n+ will be reported.\n+ </para>\n\nSUGGESTION\nThe following prerequisites are required for ...\n\n======\ndoc/src/sgml/ref/pgupgrade.sgml\n\n6.\n+ <note>\n+ <para>\n+ The steps to upgrade logical replication clusters are not covered here;\n+ refer to <xref linkend=\"logical-replication-upgrade\"/> for details.\n+ </para>\n+ </note>\n\nMaybe here too there should be a link to the glossary term \"logical\nreplication clusters\".\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 9 Feb 2024 17:59:51 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Thu, 1 Feb 2024 at 14:50, vignesh C <[email protected]> wrote:\n>\n> On Wed, 31 Jan 2024 at 11:42, Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Dear Vignesh,\n> >\n> > Thanks for updating the patch! Here are my comments for v6.\n> >\n> > 01.\n> > ```\n> > + <glossterm>Logical replication cluster</glossterm>\n> > + <glossdef>\n> > + <para>\n> > + A set of publisher and subscriber instance with publisher instance\n> > + replicating changes to the subscriber instance.\n> > + </para>\n> > + </glossdef>\n> > ```\n> >\n> > Should we say 1:N relationship is allowed?\n>\n> I felt this need not be mentioned here, just wanted to give an\n> indication wherever this terminology is used, it means a set of\n> publisher and subscriber instances. 
Detail information should be added\n> in the logical replication related pages\n>\n> > 02.\n> > ```\n> > @@ -70,6 +70,7 @@ PostgreSQL documentation\n> > pg_upgrade supports upgrades from 9.2.X and later to the current\n> > major release of <productname>PostgreSQL</productname>, including snapshot and beta releases.\n> > </para>\n> > +\n> > </refsect1>\n> > ```\n> >\n> > Unnecessary blank.\n>\n> Removed it.\n>\n> > 03.\n> > ```\n> > <para>\n> > - These are the steps to perform an upgrade\n> > - with <application>pg_upgrade</application>:\n> > + Below are the steps to perform an upgrade\n> > + with <application>pg_upgrade</application>.\n> > </para>\n> > ```\n> >\n> > I'm not sure it should be included in this patch.\n>\n> This is not required in this patch, removed it.\n>\n> > 04.\n> > ```\n> > + If the old primary is prior to version 17.0, then no slots on the primary\n> > + are copied to the new standby, so all the slots on the old standby must\n> > + be recreated manually.\n> > ```\n> >\n> > I think that \"all the slots on the old standby\" must be created manually in any\n> > cases. Therefore, the preposition \", so\" seems not correct.\n>\n> I felt that this change is not related to this patch. I'm removing\n> these changes from the patch. Let's handle rephrasing of the base code\n> change in a separate thread.\n>\n> > 05.\n> > ```\n> > If the old primary is version 17.0 or later, then\n> > + only logical slots on the primary are copied to the new standby, but\n> > + other slots on the old standby are not copied, so must be recreated\n> > + manually.\n> > ```\n> >\n> > How about replacing this paragraph to below?\n> >\n> > ```\n> > All the slots on the old standby must be recreated manually. If the old primary\n> > is version 17.0 or later, then only logical slots on the primary are copied to the\n> > new standby.\n> > ```\n>\n> I felt that this change is not related to this patch. I'm removing\n> these changes from the patch. Let's handle rephrasing of the base code\n> change in a separate thread.\n\nA new thread is started for the same and a patch is attached at [1].\nLet's discuss this change at the new thread.\n[1] - https://www.postgresql.org/message-id/CAHv8RjJHCw0jpUo9PZxjcguzGt3j2W1_NH%3DQuREoN0nYiVdVeA%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 9 Feb 2024 21:34:30 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Fri, 9 Feb 2024 at 12:30, Peter Smith <[email protected]> wrote:\n>\n> Here are some review comments for patch v7-0001.\n>\n> ======\n> doc/src/sgml/glossary.sgml\n>\n> 1.\n> + <glossentry id=\"glossary-logical-replication-cluster\">\n> + <glossterm>Logical replication cluster</glossterm>\n> + <glossdef>\n> + <para>\n> + A set of publisher and subscriber instance with publisher instance\n> + replicating changes to the subscriber instance.\n> + </para>\n> + </glossdef>\n> + </glossentry>\n>\n> 1a.\n> /instance with/instances with/\n\nModified\n\n> ~~~\n>\n> 1b.\n> The description then made me want to look up the glossary definition\n> of a \"publisher instance\" and \"subscriber instance\", but then I was\n> quite surprised that even \"Publisher\" and \"Subscriber\" terms are not\n> described in the glossary. 
Should this patch add those, or should we\n> start another thread for adding them?\n\n I felt it is better to start a new thread for this\n\n> ======\n> doc/src/sgml/logical-replication.sgml\n>\n> 2.\n> + <para>\n> + Migration of logical replication clusters is possible only when all the\n> + members of the old logical replication clusters are version 17.0 or later.\n> + </para>\n>\n> Here is where \"logical replication clusters\" is mentioned. Shouldn't\n> this first reference be linked to that new to the glossary entry --\n> e.g. <glossterm linkend=\"...\">logical replication clusters</glossterm>\n\nModified\n\n> ~~~\n>\n> 3.\n> + <para>\n> + Following are the prerequisites for <application>pg_upgrade</application>\n> + to be able to upgrade the logical slots. If these are not met an error\n> + will be reported.\n> + </para>\n>\n> SUGGESTION\n> The following prerequisites are required for ...\n>\n> ~~~\n>\n> 4.\n> + <para>\n> + All slots on the old cluster must be usable, i.e., there are no slots\n> + whose\n> + <link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>conflict_reason</structfield>\n> + is not <literal>NULL</literal>.\n> + </para>\n>\n> The double-negative is too tricky \"no slots whose ... not NULL\", needs\n> rewording. Maybe it is better to instead use an example as the next\n> bullet point does.\n\nThe other way is to mention \"all slots should have conflic_reason is\nNULL\", but in this case i feel checking for records is not NULL is\nbetter. So I have kept the wording the same and added an example to\navoid any confusion.\n\n> ~~~\n>\n> 5.\n> +\n> + <para>\n> + Following are the prerequisites for\n> <application>pg_upgrade</application> to\n> + be able to upgrade the subscriptions. If these are not met an error\n> + will be reported.\n> + </para>\n>\n> SUGGESTION\n> The following prerequisites are required for ...\n\nModified\n\n> ======\n> doc/src/sgml/ref/pgupgrade.sgml\n>\n> 6.\n> + <note>\n> + <para>\n> + The steps to upgrade logical replication clusters are not covered here;\n> + refer to <xref linkend=\"logical-replication-upgrade\"/> for details.\n> + </para>\n> + </note>\n>\n> Maybe here too there should be a link to the glossary term \"logical\n> replication clusters\".\n\nModified\n\nThanks for the comments, the attached v8 version patch has the changes\nfor the patch.\n\nRegards,\nVignesh", "msg_date": "Mon, 12 Feb 2024 14:33:59 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Mon, 12 Feb 2024 at 14:33, vignesh C <[email protected]> wrote:\n>\n> On Fri, 9 Feb 2024 at 12:30, Peter Smith <[email protected]> wrote:\n> >\n> > Here are some review comments for patch v7-0001.\n> >\n> > ======\n> > doc/src/sgml/glossary.sgml\n> >\n> > 1.\n> > + <glossentry id=\"glossary-logical-replication-cluster\">\n> > + <glossterm>Logical replication cluster</glossterm>\n> > + <glossdef>\n> > + <para>\n> > + A set of publisher and subscriber instance with publisher instance\n> > + replicating changes to the subscriber instance.\n> > + </para>\n> > + </glossdef>\n> > + </glossentry>\n> >\n> > 1a.\n> > /instance with/instances with/\n>\n> Modified\n>\n> > ~~~\n> >\n> > 1b.\n> > The description then made me want to look up the glossary definition\n> > of a \"publisher instance\" and \"subscriber instance\", but then I was\n> > quite surprised that even \"Publisher\" and \"Subscriber\" terms are not\n> > described in the 
glossary. Should this patch add those, or should we\n> > start another thread for adding them?\n>\n> I felt it is better to start a new thread for this\n\nA new patch has been posted at [1] to address this.\n[1] - https://www.postgresql.org/message-id/CANhcyEXa%3D%2BshzbdS2iW9%3DY%3D_Eh7aRWZbQKJjDHVYiCmuiE1Okw%40mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 13 Feb 2024 08:49:25 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "Here are some minor comments for patch v8-0001.\n\n======\ndoc/src/sgml/glossary.sgml\n\n1.\n+ <glossdef>\n+ <para>\n+ A set of publisher and subscriber instances with publisher instance\n+ replicating changes to the subscriber instance.\n+ </para>\n+ </glossdef>\n\n/with publisher instance/with the publisher instance/\n\n~~~\n\n2.\nThere are 2 SQL fragments but they are wrapped differently (see\nbelow). e.g. it is not clear why is the 2nd fragment wrapped since it\nis shorter than the 1st. OTOH, maybe you want the 1st fragment to\nwrap. Either way, consistency wrapping would be better.\n\n\npostgres=# SELECT slot_name FROM pg_replication_slots WHERE slot_type\n= 'logical' AND conflict_reason IS NOT NULL;\n slot_name\n-----------\n(0 rows)\n\nversus\n\nSELECT count(*) FROM pg_replication_slots WHERE slot_type = 'logical'\nAND temporary IS false;\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 22 Feb 2024 15:05:16 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Thu, 22 Feb 2024 at 09:35, Peter Smith <[email protected]> wrote:\n>\n> Here are some minor comments for patch v8-0001.\n>\n> ======\n> doc/src/sgml/glossary.sgml\n>\n> 1.\n> + <glossdef>\n> + <para>\n> + A set of publisher and subscriber instances with publisher instance\n> + replicating changes to the subscriber instance.\n> + </para>\n> + </glossdef>\n>\n> /with publisher instance/with the publisher instance/\n\nModified\n\n> ~~~\n>\n> 2.\n> There are 2 SQL fragments but they are wrapped differently (see\n> below). e.g. it is not clear why is the 2nd fragment wrapped since it\n> is shorter than the 1st. OTOH, maybe you want the 1st fragment to\n> wrap. Either way, consistency wrapping would be better.\n\nModified\n\nThanks for the comments, the attached v9 version patch has the changes\nfor the same.\n\nI have added a commitfest entry for this:\nhttps://commitfest.postgresql.org/47/4848/\n\nRegards,\nVignesh", "msg_date": "Thu, 22 Feb 2024 19:04:42 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "Hi Vignesh, I have no further comments. Patch v9 LGTM.\n\n==========\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 23 Feb 2024 10:28:10 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Fri, 23 Feb 2024 at 04:58, Peter Smith <[email protected]> wrote:\n>\n> Hi Vignesh, I have no further comments. 
Patch v9 LGTM.\n\nThe v9 version patch was not applying on top of HEAD because of few\ncommits, the updated v10 version patch is rebased on top of HEAD.\n\nRegards,\nVignesh", "msg_date": "Mon, 6 May 2024 10:39:54 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Mon, May 6, 2024 at 10:40 AM vignesh C <[email protected]> wrote:\n>\n> The v9 version patch was not applying on top of HEAD because of few\n> commits, the updated v10 version patch is rebased on top of HEAD.\n>\n\nLet's say publisher is in <literal>node1</literal> and subscriber is\n+ in <literal>node2</literal>. The subscriber <literal>node2</literal> has\n+ two subscriptions <literal>sub1_node1_node2</literal> and\n+ <literal>sub2_node1_node2</literal> which are subscribing the changes\n+ from <literal>node1</literal>.\n\nDo we need to show multiple subscriptions? You are following the same\nsteps for both subscriptions, so it may not add much value to show\nsteps for two subscriptions. You can write steps for one and add a\nnote to say it has to be done for other subscriptions present.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 20 Sep 2024 16:24:15 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Fri, 20 Sept 2024 at 16:24, Amit Kapila <[email protected]> wrote:\n>\n> On Mon, May 6, 2024 at 10:40 AM vignesh C <[email protected]> wrote:\n> >\n> > The v9 version patch was not applying on top of HEAD because of few\n> > commits, the updated v10 version patch is rebased on top of HEAD.\n> >\n>\n> Let's say publisher is in <literal>node1</literal> and subscriber is\n> + in <literal>node2</literal>. The subscriber <literal>node2</literal> has\n> + two subscriptions <literal>sub1_node1_node2</literal> and\n> + <literal>sub2_node1_node2</literal> which are subscribing the changes\n> + from <literal>node1</literal>.\n>\n> Do we need to show multiple subscriptions? You are following the same\n> steps for both subscriptions, so it may not add much value to show\n> steps for two subscriptions. You can write steps for one and add a\n> note to say it has to be done for other subscriptions present.\n\nI didn’t include a note because each disable/enable statement\nspecifies: a) Disable all subscriptions on the node, b) Enable all\nsubscriptions on the node. The attached v11 version patch just to show\nthe examples with one subscription.\n\nRegards,\nVignesh", "msg_date": "Fri, 20 Sep 2024 17:34:52 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Fri, Sep 20, 2024 at 5:46 PM vignesh C <[email protected]> wrote:\n>\n> I didn’t include a note because each disable/enable statement\n> specifies: a) Disable all subscriptions on the node, b) Enable all\n> subscriptions on the node. 
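(As an aside, when a node has many subscriptions, the per-subscription ALTER statements can also be generated instead of typed out, e.g. with a psql sketch along the lines of:\n\nSELECT format('ALTER SUBSCRIPTION %I DISABLE;', subname)\n  FROM pg_subscription\n\\\gexec\n\nand similarly with ENABLE when re-enabling them. This is only an illustration, not wording I am proposing for the docs.)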
The attached v11 version patch just to show\n> the examples with one subscription.\n>\n\nThe following steps in the bi-directional node upgrade have some problems.\n\n+ <para>\n+ On <literal>node1</literal>, create any tables that were created in\n+ <literal>node2</literal> between <xref\nlinkend=\"circular-cluster-disable-sub-node2\"/>\n+ and now, e.g.:\n+<programlisting>\n+node1=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\n+CREATE TABLE\n+</programlisting>\n+ </para>\n+ </step>\n+\n+ <step>\n+ <para>\n+ Enable all the subscriptions on <literal>node2</literal> that are\n+ subscribing the changes from <literal>node1</literal> by using\n+ <link linkend=\"sql-altersubscription-params-enable\"><command>ALTER\nSUBSCRIPTION ... ENABLE</command></link>,\n+ e.g.:\n+<programlisting>\n+node2=# ALTER SUBSCRIPTION sub1_node1_node2 ENABLE;\n+ALTER SUBSCRIPTION\n+</programlisting>\n+ </para>\n+ </step>\n+\n+ <step>\n+ <para>\n+ Refresh the <literal>node2</literal> subscription's publications using\n+ <link linkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER\nSUBSCRIPTION ... REFRESH PUBLICATION</command></link>,\n+ e.g.:\n+<programlisting>\n+node2=# ALTER SUBSCRIPTION sub1_node1_node2 REFRESH PUBLICATION;\n+ALTER SUBSCRIPTION\n+</programlisting>\n+ </para>\n\nIf you are creating missing tables on node-1, won't that node's\nsubscription be refreshed to get the missing data? Also, I suggest\nmoving the step-2 in the above steps to enable subscriptions on node-2\nshould be moved before creating a table on node-1 and then issuing a\nREFRESH command on node-1. The similar steps for other node's upgrade\nfollowing these steps have similar problems.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 24 Sep 2024 16:20:10 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Tue, Sep 24, 2024 at 4:20 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Sep 20, 2024 at 5:46 PM vignesh C <[email protected]> wrote:\n> >\n> > I didn’t include a note because each disable/enable statement\n> > specifies: a) Disable all subscriptions on the node, b) Enable all\n> > subscriptions on the node. The attached v11 version patch just to show\n> > the examples with one subscription.\n> >\n>\n> The following steps in the bi-directional node upgrade have some problems.\n>\n\nOne more point to note is that I am planning to commit this patch only\nfor HEAD. We can have an argument to backpatch this to 17 as well but\nas users would be able to upgrade the slots from 17 onwards, I am\ninclined to push this to HEAD only.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 24 Sep 2024 16:22:52 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Tue, 24 Sept 2024 at 16:20, Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Sep 20, 2024 at 5:46 PM vignesh C <[email protected]> wrote:\n> >\n> > I didn’t include a note because each disable/enable statement\n> > specifies: a) Disable all subscriptions on the node, b) Enable all\n> > subscriptions on the node. 
The attached v11 version patch just to show\n> > the examples with one subscription.\n> >\n>\n> The following steps in the bi-directional node upgrade have some problems.\n>\n> + <para>\n> + On <literal>node1</literal>, create any tables that were created in\n> + <literal>node2</literal> between <xref\n> linkend=\"circular-cluster-disable-sub-node2\"/>\n> + and now, e.g.:\n> +<programlisting>\n> +node1=# CREATE TABLE distributors (did integer PRIMARY KEY, name varchar(40));\n> +CREATE TABLE\n> +</programlisting>\n> + </para>\n> + </step>\n> +\n> + <step>\n> + <para>\n> + Enable all the subscriptions on <literal>node2</literal> that are\n> + subscribing the changes from <literal>node1</literal> by using\n> + <link linkend=\"sql-altersubscription-params-enable\"><command>ALTER\n> SUBSCRIPTION ... ENABLE</command></link>,\n> + e.g.:\n> +<programlisting>\n> +node2=# ALTER SUBSCRIPTION sub1_node1_node2 ENABLE;\n> +ALTER SUBSCRIPTION\n> +</programlisting>\n> + </para>\n> + </step>\n> +\n> + <step>\n> + <para>\n> + Refresh the <literal>node2</literal> subscription's publications using\n> + <link linkend=\"sql-altersubscription-params-refresh-publication\"><command>ALTER\n> SUBSCRIPTION ... REFRESH PUBLICATION</command></link>,\n> + e.g.:\n> +<programlisting>\n> +node2=# ALTER SUBSCRIPTION sub1_node1_node2 REFRESH PUBLICATION;\n> +ALTER SUBSCRIPTION\n> +</programlisting>\n> + </para>\n>\n> If you are creating missing tables on node-1, won't that node's\n> subscription be refreshed to get the missing data? Also, I suggest\n> moving the step-2 in the above steps to enable subscriptions on node-2\n> should be moved before creating a table on node-1 and then issuing a\n> REFRESH command on node-1. The similar steps for other node's upgrade\n> following these steps have similar problems.\n\nReordered the docs to enable the subscription before creating the\ntable. For bi-directional replication, a publication refresh is\nnecessary on both nodes: a) First, refresh the publication on the old\nversion server to set the newly added tables to a ready state in the\npg_subscription_rel catalog. b) Next, refresh the publication on the\nupgraded version server to initiate the initial sync and update the\npg_subscription_rel with the ready state. This change has been\nincorporated into the attached v12 version patch.\n\nRegards,\nVignesh", "msg_date": "Tue, 24 Sep 2024 21:33:11 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Tue, Sep 24, 2024 at 9:42 PM vignesh C <[email protected]> wrote:\n>\n> Reordered the docs to enable the subscription before creating the\n> table. For bi-directional replication, a publication refresh is\n> necessary on both nodes: a) First, refresh the publication on the old\n> version server to set the newly added tables to a ready state in the\n> pg_subscription_rel catalog.\n>\n\nThis is not required for table-specific publications and isn't needed\nfor the examples mentioned in the patch. So, I have removed this part\nand pushed the patch. BTW, you choose to upgrade the publisher first\nbut one can upgrade the subscriber first as well. 
If so, we can add a\nnote to the documentation.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 25 Sep 2024 14:06:44 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Wed, 25 Sept 2024 at 14:06, Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Sep 24, 2024 at 9:42 PM vignesh C <[email protected]> wrote:\n> >\n> > Reordered the docs to enable the subscription before creating the\n> > table. For bi-directional replication, a publication refresh is\n> > necessary on both nodes: a) First, refresh the publication on the old\n> > version server to set the newly added tables to a ready state in the\n> > pg_subscription_rel catalog.\n> >\n>\n> This is not required for table-specific publications and isn't needed\n> for the examples mentioned in the patch. So, I have removed this part\n> and pushed the patch. BTW, you choose to upgrade the publisher first\n> but one can upgrade the subscriber first as well. If so, we can add a\n> note to the documentation.\n\nYes, users can upgrade either the publisher first and then the\nsubscriber, or the subscriber first and then the publisher. I felt\nthis note is necessary only for the \"Steps to upgrade a two-node\nlogical replication cluster,\" as it may confuse users in other types\nof logical replication with questions such as: a) Which subscriber\nshould be upgraded first? b) Which subscriptions should be disabled?\nc) When should each subscription be enabled?\nThe attached patch includes a note for the same.\n\nRegards,\nVignesh", "msg_date": "Wed, 25 Sep 2024 17:58:55 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Documentation to upgrade logical replication cluster" }, { "msg_contents": "On Wed, Sep 25, 2024 at 6:07 PM vignesh C <[email protected]> wrote:\n>\n> Yes, users can upgrade either the publisher first and then the\n> subscriber, or the subscriber first and then the publisher. I felt\n> this note is necessary only for the \"Steps to upgrade a two-node\n> logical replication cluster,\" as it may confuse users in other types\n> of logical replication with questions such as: a) Which subscriber\n> should be upgraded first? b) Which subscriptions should be disabled?\n> c) When should each subscription be enabled?\n> The attached patch includes a note for the same.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 26 Sep 2024 16:28:54 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Documentation to upgrade logical replication cluster" } ]
[ { "msg_contents": "Hi,\n\nI discovered that my patch to add WAL summarization added two new\nSQL-callable functions but failed to document them. 0001 fixes that.\n\nAn outstanding item from the original thread was to write a better\ntest for the not-yet-committed pg_walsummary utility. But I discovered\nthat I couldn't do that because there were some race conditions that\ncouldn't easily be cured. So 0002 therefore adds a new function\npg_get_wal_summarizer_state() which returns various items of in-memory\nstate related to WAL summarization. We had some brief discussion of\nthis being desirable for other reasons; it's nice for users to be able\nto look at this information in case of trouble (say, if the summarizer\nis not keeping up).\n\n0003 then adds the previously-proposed pg_walsummary utility, with\ntests that depend on 0002.\n\n0004 attempts to fix some problems detected by Coverity and subsequent\ncode inspection.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 5 Jan 2024 09:37:21 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "cleanup patches for incremental backup" }, { "msg_contents": "Hello again,\n\nI have now committed 0001.\n\nI got some off-list review of 0004; specifically, Michael Paquier said\nit looked OK, and Tom Lane found another bug. So I've added a fix for\nthat in what's now 0003.\n\nHere's v2. I plan to commit the rest of this fairly soon if there are\nno comments.\n\n...Robert", "msg_date": "Tue, 9 Jan 2024 13:18:51 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Tue, Jan 9, 2024 at 1:18 PM Robert Haas <[email protected]> wrote:\n> Here's v2. I plan to commit the rest of this fairly soon if there are\n> no comments.\n\nDone, with a minor adjustment to 0003.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Jan 2024 13:13:24 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "Hi,\r\nThank you for developing the new tool. I have attached a patch that corrects the spelling of the --individual option in the SGML file.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Robert Haas <[email protected]> \r\nSent: Friday, January 12, 2024 3:13 AM\r\nTo: PostgreSQL Hackers <[email protected]>; Jakub Wartak <[email protected]>\r\nSubject: Re: cleanup patches for incremental backup\r\n\r\nOn Tue, Jan 9, 2024 at 1:18 PM Robert Haas <[email protected]> wrote:\r\n> Here's v2. I plan to commit the rest of this fairly soon if there are \r\n> no comments.\r\n\r\nDone, with a minor adjustment to 0003.\r\n\r\n--\r\nRobert Haas\r\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 12 Jan 2024 00:58:49 +0000", "msg_from": "\"Shinoda, Noriyoshi (HPE Services Japan - FSIP)\"\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "RE: cleanup patches for incremental backup" }, { "msg_contents": "On Thu, Jan 11, 2024 at 8:00 PM Shinoda, Noriyoshi (HPE Services Japan\n- FSIP) <[email protected]> wrote:\n> Thank you for developing the new tool. 
I have attached a patch that corrects the spelling of the --individual option in the SGML file.\n\nThanks, committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 12 Jan 2024 09:50:39 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "Hello Robert,\n\n12.01.2024 17:50, Robert Haas wrote:\n> On Thu, Jan 11, 2024 at 8:00 PM Shinoda, Noriyoshi (HPE Services Japan\n> - FSIP) <[email protected]> wrote:\n>> Thank you for developing the new tool. I have attached a patch that corrects the spelling of the --individual option in the SGML file.\n> Thanks, committed.\n\nI've found one more typo in the sgml:\nsummarized_pid\nAnd one in a comment:\nsumamry\n\nA trivial fix is attached.\n\nBest regards,\nAlexander", "msg_date": "Sat, 13 Jan 2024 21:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Sat, Jan 13, 2024 at 1:00 PM Alexander Lakhin <[email protected]> wrote:\n> I've found one more typo in the sgml:\n> summarized_pid\n> And one in a comment:\n> sumamry\n>\n> A trivial fix is attached.\n\nThanks, committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Jan 2024 11:58:04 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Mon, 15 Jan 2024 at 17:58, Robert Haas <[email protected]> wrote:\n>\n> On Sat, Jan 13, 2024 at 1:00 PM Alexander Lakhin <[email protected]> wrote:\n> > I've found one more typo in the sgml:\n> > summarized_pid\n> > And one in a comment:\n> > sumamry\n> >\n> > A trivial fix is attached.\n>\n> Thanks, committed.\n\nOff-list I was notified that the new WAL summarizer process was not\nyet added to the glossary, so PFA a patch that does that.\nIn passing, it also adds \"incremental backup\" to the glossary, and\nupdates the documented types of backends in monitoring.sgml with the\nnew backend type, too.\n\nKind regards,\n\nMatthias van de Meent.", "msg_date": "Mon, 15 Jan 2024 21:31:18 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Mon, Jan 15, 2024 at 3:31 PM Matthias van de Meent\n<[email protected]> wrote:\n> Off-list I was notified that the new WAL summarizer process was not\n> yet added to the glossary, so PFA a patch that does that.\n> In passing, it also adds \"incremental backup\" to the glossary, and\n> updates the documented types of backends in monitoring.sgml with the\n> new backend type, too.\n\nI wonder if it's possible that you sent the wrong version of this\npatch, because:\n\n(1) The docs don't build with this applied. I'm not sure if it's the\nonly problem, but <glossterm linkend=\"glossary-db-cluster\" is missing\nthe closing >.\n\n(2) The changes to monitoring.sgml contain an unrelated change, about\npg_stat_all_indexes.idx_scan.\n\nAlso, I think the \"For more information, see <xref linkend=\"whatever\"\n/> bit should have a period after the markup tag, as we seem to do in\nother cases.\n\nOne other thought is that the incremental backup only replaces\nrelation files with incremental files, and it never does anything\nabout FSM files. So the statement that it only contains data that was\npotentially changed isn't quite correct. 
It might be better to phrase\nit the other way around i.e. it is like a full backup, except that\nsome files can be replaced by incremental files which omit blocks to\nwhich no WAL-logged changes have been made.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Jan 2024 10:39:10 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Tue, 16 Jan 2024 at 16:39, Robert Haas <[email protected]> wrote:\n>\n> On Mon, Jan 15, 2024 at 3:31 PM Matthias van de Meent\n> <[email protected]> wrote:\n> > Off-list I was notified that the new WAL summarizer process was not\n> > yet added to the glossary, so PFA a patch that does that.\n> > In passing, it also adds \"incremental backup\" to the glossary, and\n> > updates the documented types of backends in monitoring.sgml with the\n> > new backend type, too.\n>\n> I wonder if it's possible that you sent the wrong version of this\n> patch, because:\n>\n> (1) The docs don't build with this applied. I'm not sure if it's the\n> only problem, but <glossterm linkend=\"glossary-db-cluster\" is missing\n> the closing >.\n\nThat's my mistake, I didn't check install-world yet due to unrelated\nissues building the docs. I've since sorted out these issues (this was\na good stick to get that done), so this issue is fixed in the attached\npatch.\n\n> (2) The changes to monitoring.sgml contain an unrelated change, about\n> pg_stat_all_indexes.idx_scan.\n\nThanks for noticing, fixed in attached.\n\n> Also, I think the \"For more information, see <xref linkend=\"whatever\"\n> /> bit should have a period after the markup tag, as we seem to do in\n> other cases.\n\nFixed.\n\n> One other thought is that the incremental backup only replaces\n> relation files with incremental files, and it never does anything\n> about FSM files. So the statement that it only contains data that was\n> potentially changed isn't quite correct. It might be better to phrase\n> it the other way around i.e. it is like a full backup, except that\n> some files can be replaced by incremental files which omit blocks to\n> which no WAL-logged changes have been made.\n\nHow about the attached?\n\n\nKind regards,\n\nMatthias van de Meent", "msg_date": "Tue, 16 Jan 2024 21:22:15 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Tue, Jan 16, 2024 at 3:22 PM Matthias van de Meent\n<[email protected]> wrote:\n> > One other thought is that the incremental backup only replaces\n> > relation files with incremental files, and it never does anything\n> > about FSM files. So the statement that it only contains data that was\n> > potentially changed isn't quite correct. It might be better to phrase\n> > it the other way around i.e. it is like a full backup, except that\n> > some files can be replaced by incremental files which omit blocks to\n> > which no WAL-logged changes have been made.\n>\n> How about the attached?\n\nI like the direction.\n\n+ A special <glossterm linkend=\"glossary-basebackup\">base backup</glossterm>\n+ that for some WAL-logged relations only contains the pages that were\n+ modified since a previous backup, as opposed to the full relation data of\n+ normal base backups. 
Like base backups, it is generated by the tool\n+ <xref linkend=\"app-pgbasebackup\"/>.\n\nCould we say \"that for some files may contain only those pages that\nwere modified since a previous backup, as opposed to the full contents\nof every file\"? My thoughts are (1) there's no hard guarantee that an\nincremental backup will replace even one file with an incremental\nfile, although in practice it is probably almost always going to\nhappen and (2) pg_combinebackup would actually be totally fine with\nany file at all being sent incrementally; it's only that the server\nisn't smart enough to figure out how to do this with e.g. SLRU data\nright now.\n\n+ To restore incremental backups the tool <xref\nlinkend=\"app-pgcombinebackup\"/>\n+ is used, which combines the incremental backups with a base backup and\n+ <glossterm linkend=\"glossary-wal\">WAL</glossterm> to restore a\n+ <glossterm linkend=\"glossary-db-cluster\">database cluster</glossterm> to\n+ a consistent state.\n\nI wondered if this needed to be clearer that the chain of backups\ncould have length > 2. But on further reflection, I think it's fine,\nunless you feel otherwise.\n\nThe rest LGTM.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Jan 2024 15:49:20 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Tue, 16 Jan 2024 at 21:49, Robert Haas <[email protected]> wrote:\n>\n> On Tue, Jan 16, 2024 at 3:22 PM Matthias van de Meent\n> <[email protected]> wrote:\n> + A special <glossterm linkend=\"glossary-basebackup\">base backup</glossterm>\n> + that for some WAL-logged relations only contains the pages that were\n> + modified since a previous backup, as opposed to the full relation data of\n> + normal base backups. Like base backups, it is generated by the tool\n> + <xref linkend=\"app-pgbasebackup\"/>.\n>\n> Could we say \"that for some files may contain only those pages that\n> were modified since a previous backup, as opposed to the full contents\n> of every file\"?\n\nSure, added in attached.\n\n> + To restore incremental backups the tool <pgcombinebackup>\n> + is used, which combines the incremental backups with a base backup and\n> + [...]\n> I wondered if this needed to be clearer that the chain of backups\n> could have length > 2. But on further reflection, I think it's fine,\n> unless you feel otherwise.\n\nI removed \"the\" from the phrasing \"the incremental backups\", which\nmakes it a bit less restricted.\n\n> The rest LGTM.\n\nIn the latest patch I also fixed the casing of \"Incremental Backup\" to\n\"... 
backup\", to be in line with most other multi-word items.\n\nThanks!\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)", "msg_date": "Wed, 17 Jan 2024 19:42:37 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Wed, Jan 17, 2024 at 1:42 PM Matthias van de Meent\n<[email protected]> wrote:\n> Sure, added in attached.\n\nI think this mostly looks good now but I just realized that I think\nthis needs rephrasing:\n\n+ To restore incremental backups the tool <xref\nlinkend=\"app-pgcombinebackup\"/>\n+ is used, which combines incremental backups with a base backup and\n+ <glossterm linkend=\"glossary-wal\">WAL</glossterm> to restore a\n+ <glossterm linkend=\"glossary-db-cluster\">database cluster</glossterm> to\n+ a consistent state.\n\nThe way this is worded, at least to me, it makes it sound like\npg_combinebackup is going to do the WAL recovery for you, which it\nisn't. Maybe:\n\nTo restore incremental backups the tool <xref\nlinkend=\"app-pgcombinebackup\"/> is used, which combines incremental\nbackups with a base backup. Afterwards, recovery can use <glossterm\nlinkend=\"glossary-wal\">WAL</glossterm> to bring the <glossterm\nlinkend=\"glossary-db-cluster\">database cluster</glossterm> to a\nconsistent state.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 17 Jan 2024 15:10:13 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Wed, 17 Jan 2024 at 21:10, Robert Haas <[email protected]> wrote:\n>\n> On Wed, Jan 17, 2024 at 1:42 PM Matthias van de Meent\n> <[email protected]> wrote:\n> > Sure, added in attached.\n>\n> I think this mostly looks good now but I just realized that I think\n> this needs rephrasing:\n>\n> + To restore incremental backups the tool <xref\n> linkend=\"app-pgcombinebackup\"/>\n> + is used, which combines incremental backups with a base backup and\n> + <glossterm linkend=\"glossary-wal\">WAL</glossterm> to restore a\n> + <glossterm linkend=\"glossary-db-cluster\">database cluster</glossterm> to\n> + a consistent state.\n>\n> The way this is worded, at least to me, it makes it sound like\n> pg_combinebackup is going to do the WAL recovery for you, which it\n> isn't. Maybe:\n>\n> To restore incremental backups the tool <xref\n> linkend=\"app-pgcombinebackup\"/> is used, which combines incremental\n> backups with a base backup. 
Afterwards, recovery can use <glossterm\n> linkend=\"glossary-wal\">WAL</glossterm> to bring the <glossterm\n> linkend=\"glossary-db-cluster\">database cluster</glossterm> to a\n> consistent state.\n\nSure, that's fine with me.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 18 Jan 2024 10:49:48 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Thu, Jan 18, 2024 at 4:50 AM Matthias van de Meent\n<[email protected]> wrote:\n> Sure, that's fine with me.\n\nOK, committed that way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Jan 2024 09:40:56 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "I'm seeing some recent buildfarm failures for pg_walsummary:\n\n\thttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2024-01-14%2006%3A21%3A58\n\thttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=idiacanthus&dt=2024-01-17%2021%3A10%3A36\n\thttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-01-20%2018%3A58%3A49\n\thttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=taipan&dt=2024-01-23%2002%3A46%3A57\n\thttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-01-23%2020%3A23%3A36\n\nThe signature looks nearly identical in each:\n\n\t# Failed test 'WAL summary file exists'\n\t# at t/002_blocks.pl line 79.\n\n\t# Failed test 'stdout shows block 0 modified'\n\t# at t/002_blocks.pl line 85.\n\t# ''\n\t# doesn't match '(?^m:FORK main: block 0$)'\n\nI haven't been able to reproduce the issue on my machine, and I haven't\nfigured out precisely what is happening yet, but I wanted to make sure\nthere is awareness.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 24 Jan 2024 11:08:46 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Wed, Jan 24, 2024 at 12:08 PM Nathan Bossart\n<[email protected]> wrote:\n> I'm seeing some recent buildfarm failures for pg_walsummary:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2024-01-14%2006%3A21%3A58\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=idiacanthus&dt=2024-01-17%2021%3A10%3A36\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-01-20%2018%3A58%3A49\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=taipan&dt=2024-01-23%2002%3A46%3A57\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-01-23%2020%3A23%3A36\n>\n> The signature looks nearly identical in each:\n>\n> # Failed test 'WAL summary file exists'\n> # at t/002_blocks.pl line 79.\n>\n> # Failed test 'stdout shows block 0 modified'\n> # at t/002_blocks.pl line 85.\n> # ''\n> # doesn't match '(?^m:FORK main: block 0$)'\n>\n> I haven't been able to reproduce the issue on my machine, and I haven't\n> figured out precisely what is happening yet, but I wanted to make sure\n> there is awareness.\n\nThis is weird. There's a little more detail in the log file,\nregress_log_002_blocks, e.g. 
from the first failure you linked:\n\n[11:18:20.683](96.787s) # before insert, summarized TLI 1 through 0/14E09D0\n[11:18:21.188](0.505s) # after insert, summarized TLI 1 through 0/14E0D08\n[11:18:21.326](0.138s) # examining summary for TLI 1 from 0/14E0D08 to 0/155BAF0\n# 1\n...\n[11:18:21.349](0.000s) # got: 'pg_walsummary: error: could\nnot open file \"/home/nm/farm/gcc64/HEAD/pgsql.build/src/bin/pg_walsummary/tmp_check/t_002_blocks_node1_data/pgdata/pg_wal/summaries/0000000100000000014E0D0800000000155BAF0\n# 1.summary\": No such file or directory'\n\nThe \"examining summary\" line is generated based on the output of\npg_available_wal_summaries(). The way that works is that the server\ncalls readdir(), disassembles the filename into a TLI and two LSNs,\nand returns the result. Then, a fraction of a second later, the test\nscript reassembles those components into a filename and finds the file\nmissing. If the logic to translate between filenames and TLIs & LSNs\nwere incorrect, the test would fail consistently. So the only\nexplanation that seems to fit the facts is the file disappearing out\nfrom under us. But that really shouldn't happen. We do have code to\nremove such files in MaybeRemoveOldWalSummaries(), but it's only\nsupposed to be nuking files more than 10 days old.\n\nSo I don't really have a theory here as to what could be happening. :-(\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Jan 2024 12:46:16 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Wed, Jan 24, 2024 at 12:46:16PM -0500, Robert Haas wrote:\n> The \"examining summary\" line is generated based on the output of\n> pg_available_wal_summaries(). The way that works is that the server\n> calls readdir(), disassembles the filename into a TLI and two LSNs,\n> and returns the result. Then, a fraction of a second later, the test\n> script reassembles those components into a filename and finds the file\n> missing. If the logic to translate between filenames and TLIs & LSNs\n> were incorrect, the test would fail consistently. So the only\n> explanation that seems to fit the facts is the file disappearing out\n> from under us. But that really shouldn't happen. We do have code to\n> remove such files in MaybeRemoveOldWalSummaries(), but it's only\n> supposed to be nuking files more than 10 days old.\n> \n> So I don't really have a theory here as to what could be happening. 
:-(\n\nThere might be an overflow risk in the cutoff time calculation, but I doubt\nthat's the root cause of these failures:\n\n\t/*\n\t * Files should only be removed if the last modification time precedes the\n\t * cutoff time we compute here.\n\t */\n\tcutoff_time = time(NULL) - 60 * wal_summary_keep_time;\n\nOtherwise, I think we'll probably need to add some additional logging to\nfigure out what is happening...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 24 Jan 2024 12:05:15 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Wed, Jan 24, 2024 at 1:05 PM Nathan Bossart <[email protected]> wrote:\n> There might be an overflow risk in the cutoff time calculation, but I doubt\n> that's the root cause of these failures:\n>\n> /*\n> * Files should only be removed if the last modification time precedes the\n> * cutoff time we compute here.\n> */\n> cutoff_time = time(NULL) - 60 * wal_summary_keep_time;\n>\n> Otherwise, I think we'll probably need to add some additional logging to\n> figure out what is happening...\n\nWhere, though? I suppose we could:\n\n1. Change the server code so that it logs each WAL summary file\nremoved at a log level high enough to show up in the test logs.\n\n2. Change the TAP test so that it prints out readdir(WAL summary\ndirectory) at various points in the test.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Jan 2024 14:08:08 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Wed, Jan 24, 2024 at 02:08:08PM -0500, Robert Haas wrote:\n> On Wed, Jan 24, 2024 at 1:05 PM Nathan Bossart <[email protected]> wrote:\n>> Otherwise, I think we'll probably need to add some additional logging to\n>> figure out what is happening...\n> \n> Where, though? I suppose we could:\n> \n> 1. Change the server code so that it logs each WAL summary file\n> removed at a log level high enough to show up in the test logs.\n> \n> 2. Change the TAP test so that it prints out readdir(WAL summary\n> directory) at various points in the test.\n\nThat seems like a reasonable starting point. Even if it doesn't help\ndetermine the root cause, it should at least help rule out concurrent\nsummary removal.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 24 Jan 2024 13:39:22 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Wed, Jan 24, 2024 at 2:39 PM Nathan Bossart <[email protected]> wrote:\n> That seems like a reasonable starting point. Even if it doesn't help\n> determine the root cause, it should at least help rule out concurrent\n> summary removal.\n\nHere is a patch for that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 25 Jan 2024 10:06:41 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Thu, Jan 25, 2024 at 10:06:41AM -0500, Robert Haas wrote:\n> On Wed, Jan 24, 2024 at 2:39 PM Nathan Bossart <[email protected]> wrote:\n>> That seems like a reasonable starting point. 
Even if it doesn't help\n>> determine the root cause, it should at least help rule out concurrent\n>> summary removal.\n> \n> Here is a patch for that.\n\nLGTM. The only thing I might add is the cutoff_time in that LOG.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 25 Jan 2024 10:08:50 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" },
{ "msg_contents": "On Thu, Jan 25, 2024 at 11:08 AM Nathan Bossart\n<[email protected]> wrote:\n> On Thu, Jan 25, 2024 at 10:06:41AM -0500, Robert Haas wrote:\n> > On Wed, Jan 24, 2024 at 2:39 PM Nathan Bossart <[email protected]> wrote:\n> >> That seems like a reasonable starting point. Even if it doesn't help\n> >> determine the root cause, it should at least help rule out concurrent\n> >> summary removal.\n> >\n> > Here is a patch for that.\n>\n> LGTM. The only thing I might add is the cutoff_time in that LOG.\n\nHere is v2 with that addition.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 26 Jan 2024 11:04:37 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" },
{ "msg_contents": "On Fri, Jan 26, 2024 at 11:04:37AM -0500, Robert Haas wrote:\n> Here is v2 with that addition.\n\nLooks reasonable.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 26 Jan 2024 11:39:04 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" },
{ "msg_contents": "On Fri, Jan 26, 2024 at 12:39 PM Nathan Bossart\n<[email protected]> wrote:\n> On Fri, Jan 26, 2024 at 11:04:37AM -0500, Robert Haas wrote:\n> > Here is v2 with that addition.\n>\n> Looks reasonable.\n\nThanks for the report & review. I have committed that version.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Jan 2024 13:37:41 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" },
{ "msg_contents": "Hello Robert,\n\n26.01.2024 21:37, Robert Haas wrote:\n> On Fri, Jan 26, 2024 at 12:39 PM Nathan Bossart\n> <[email protected]> wrote:\n>> On Fri, Jan 26, 2024 at 11:04:37AM -0500, Robert Haas wrote:\n>>> Here is v2 with that addition.\n>> Looks reasonable.\n> Thanks for the report & review. I have committed that version.\n\nWhile trying to reproduce the 002_blocks test failure, I've encountered\nanother anomaly (or two):\nmake -s check -C src/bin/pg_walsummary/ PROVE_TESTS=\"t/002*\" PROVE_FLAGS=\"--timer\"\n# +++ tap check in src/bin/pg_walsummary +++\n[05:40:38] t/002_blocks.pl .. # poll_query_until timed out executing this query:\n# SELECT EXISTS (\n#     SELECT * from pg_available_wal_summaries()\n#     WHERE tli = 0 AND end_lsn > '0/0'\n# )\n#\n# expecting this output:\n# t\n# last actual query output:\n# f\n# with stderr:\n[05:40:38] t/002_blocks.pl .. ok   266739 ms ( 0.00 usr  0.01 sys + 17.51 cusr 26.79 csys = 44.31 CPU)\n[05:45:05]\nAll tests successful.\nFiles=1, Tests=3, 267 wallclock secs ( 0.02 usr  0.02 sys + 17.51 cusr 26.79 csys = 44.34 CPU)\nResult: PASS\n\nIt looks like the test may call pg_get_wal_summarizer_state() when\nWalSummarizerCtl->initialized is false yet, i. e. before the first\nGetOldestUnsummarizedLSN() call.\nI could reproduce the issue easily (within 10 test runs) with\npg_usleep(100000L);\nadded inside WalSummarizerMain() just below:\nsigprocmask(SIG_SETMASK, &UnBlockSig, NULL);\n\nBut the fact that the test passes regardless of the timeout, make me\nwonder, whether any test should fail when such timeout occurs?\n\nBest regards,\nAlexander", "msg_date": "Sat, 27 Jan 2024 10:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" },
{ "msg_contents": "24.01.2024 20:46, Robert Haas wrote:\n> This is weird. There's a little more detail in the log file,\n> regress_log_002_blocks, e.g. 
The way that works is that the server\n> calls readdir(), disassembles the filename into a TLI and two LSNs,\n> and returns the result.\n\nI'm discouraged by \"\\n1\" in the file name and in the\n\"examining summary...\" message.\nregress_log_002_blocks from the following successful test run on the same\nsungazer node contains:\n[15:21:58.924](0.106s) # examining summary for TLI 1 from 0/155BAE0 to 0/155E750\n[15:21:58.925](0.001s) ok 1 - WAL summary file exists\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 27 Jan 2024 11:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Sat, Jan 27, 2024 at 11:00:01AM +0300, Alexander Lakhin wrote:\n> 24.01.2024 20:46, Robert Haas wrote:\n>> This is weird. There's a little more detail in the log file,\n>> regress_log_002_blocks, e.g. from the first failure you linked:\n>> \n>> [11:18:20.683](96.787s) # before insert, summarized TLI 1 through 0/14E09D0\n>> [11:18:21.188](0.505s) # after insert, summarized TLI 1 through 0/14E0D08\n>> [11:18:21.326](0.138s) # examining summary for TLI 1 from 0/14E0D08 to 0/155BAF0\n>> # 1\n>> ...\n>> [11:18:21.349](0.000s) # got: 'pg_walsummary: error: could\n>> not open file \"/home/nm/farm/gcc64/HEAD/pgsql.build/src/bin/pg_walsummary/tmp_check/t_002_blocks_node1_data/pgdata/pg_wal/summaries/0000000100000000014E0D0800000000155BAF0\n>> # 1.summary\": No such file or directory'\n>> \n>> The \"examining summary\" line is generated based on the output of\n>> pg_available_wal_summaries(). The way that works is that the server\n>> calls readdir(), disassembles the filename into a TLI and two LSNs,\n>> and returns the result.\n> \n> I'm discouraged by \"\\n1\" in the file name and in the\n> \"examining summary...\" message.\n> regress_log_002_blocks from the following successful test run on the same\n> sungazer node contains:\n> [15:21:58.924](0.106s) # examining summary for TLI 1 from 0/155BAE0 to 0/155E750\n> [15:21:58.925](0.001s) ok 1 - WAL summary file exists\n\nAh, I think this query:\n\n\tSELECT tli, start_lsn, end_lsn from pg_available_wal_summaries()\n\t\tWHERE tli = $summarized_tli AND end_lsn > '$summarized_lsn'\n\nis returning more than one row in some cases. I attached a quick sketch of\nan easy way to reproduce the issue as well as one way to fix it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 27 Jan 2024 10:31:09 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Sat, Jan 27, 2024 at 10:31:09AM -0600, Nathan Bossart wrote:\n> On Sat, Jan 27, 2024 at 11:00:01AM +0300, Alexander Lakhin wrote:\n>> I'm discouraged by \"\\n1\" in the file name and in the\n>> \"examining summary...\" message.\n>> regress_log_002_blocks from the following successful test run on the same\n>> sungazer node contains:\n>> [15:21:58.924](0.106s) # examining summary for TLI 1 from 0/155BAE0 to 0/155E750\n>> [15:21:58.925](0.001s) ok 1 - WAL summary file exists\n> \n> Ah, I think this query:\n> \n> \tSELECT tli, start_lsn, end_lsn from pg_available_wal_summaries()\n> \t\tWHERE tli = $summarized_tli AND end_lsn > '$summarized_lsn'\n> \n> is returning more than one row in some cases. 
I attached a quick sketch of\n> an easy way to reproduce the issue as well as one way to fix it.\n\nThe buildfarm just caught a failure with the new logging in place:\n\n\thttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-01-29%2018%3A09%3A10\n\nI'm not totally sure my \"fix\" is correct, but I think this does confirm the\ntheory.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 29 Jan 2024 12:21:30 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Mon, Jan 29, 2024 at 1:21 PM Nathan Bossart <[email protected]> wrote:\n> > Ah, I think this query:\n> >\n> > SELECT tli, start_lsn, end_lsn from pg_available_wal_summaries()\n> > WHERE tli = $summarized_tli AND end_lsn > '$summarized_lsn'\n> >\n> > is returning more than one row in some cases. I attached a quick sketch of\n> > an easy way to reproduce the issue as well as one way to fix it.\n>\n> The buildfarm just caught a failure with the new logging in place:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-01-29%2018%3A09%3A10\n>\n> I'm not totally sure my \"fix\" is correct, but I think this does confirm the\n> theory.\n\nAh. The possibilities of ending up with TWO new WAL summaries never\noccurred to me. Things that never occurred to the developer are a\nleading cause of bugs, and so here.\n\nI'm wondering if what we need to do is run pg_walsummary on both\nsummary files in that case. If we just pick one or the other, how do\nwe know which one to pick?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jan 2024 15:18:50 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Mon, Jan 29, 2024 at 03:18:50PM -0500, Robert Haas wrote:\n> I'm wondering if what we need to do is run pg_walsummary on both\n> summary files in that case. If we just pick one or the other, how do\n> we know which one to pick?\n\nEven if we do that, isn't it possible that none of the summaries will\ninclude the change? Presently, we get the latest summarized LSN, make a\nchange, and then wait for the next summary file with a greater LSN than\nwhat we saw before the change. But AFAICT there's no guarantee that means\nthe change has been summarized yet, although the chances of that happening\nin a test are probably pretty small.\n\nCould we get the LSN before and after making the change and then inspect\nall summaries that include that LSN range?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 29 Jan 2024 15:13:21 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Mon, Jan 29, 2024 at 4:13 PM Nathan Bossart <[email protected]> wrote:\n> On Mon, Jan 29, 2024 at 03:18:50PM -0500, Robert Haas wrote:\n> > I'm wondering if what we need to do is run pg_walsummary on both\n> > summary files in that case. If we just pick one or the other, how do\n> > we know which one to pick?\n>\n> Even if we do that, isn't it possible that none of the summaries will\n> include the change? Presently, we get the latest summarized LSN, make a\n> change, and then wait for the next summary file with a greater LSN than\n> what we saw before the change. 
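(IIUC that wait boils down to polling something like\n\n\tSELECT EXISTS (SELECT * FROM pg_available_wal_summaries()\n\t\tWHERE tli = $summarized_tli AND end_lsn > '$summarized_lsn')\n\nuntil it returns true.)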
But AFAICT there's no guarantee that means\n> the change has been summarized yet, although the chances of that happening\n> in a test are probably pretty small.\n>\n> Could we get the LSN before and after making the change and then inspect\n> all summaries that include that LSN range?\n\nThe trick here is that each WAL summary file covers one checkpoint\ncycle. The intent of the test is to load data into the table,\ncheckpoint, see what summaries exist, then update a row, checkpoint\nagain, and see what summaries now exist. We expect one new summary\nbecause there's been one new checkpoint. When I was thinking about\nthis yesterday, I was imagining that we were somehow getting an extra\ncheckpoint in some cases. But it looks like it's actually an\noff-by-one situation. In\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2024-01-29%2018%3A09%3A10\nthe new files that show up between \"after insert\" and \"after new\nsummary\" are:\n\n00000001000000000152FAE000000000015AAAC8.summary (LSN distance ~500k)\n00000001000000000152F7A8000000000152FAE0.summary (LSN distance 824 bytes)\n\nThe checkpoint after the inserts says:\n\nLOG: checkpoint complete: wrote 14 buffers (10.9%); 0 WAL file(s)\nadded, 0 removed, 0 recycled; write=0.956 s, sync=0.929 s, total=3.059\ns; sync files=39, longest=0.373 s, average=0.024 s; distance=491 kB,\nestimate=491 kB; lsn=0/15AAB20, redo lsn=0/15AAAC8\n\nAnd the checkpoint after the single-row update says:\n\nLOG: checkpoint complete: wrote 4 buffers (3.1%); 0 WAL file(s)\nadded, 0 removed, 0 recycled; write=0.648 s, sync=0.355 s, total=2.798\ns; sync files=3, longest=0.348 s, average=0.119 s; distance=11 kB,\nestimate=443 kB; lsn=0/15AD770, redo lsn=0/15AD718\n\nSo both of the new WAL summary files that are appearing here are from\ncheckpoints that happened before the single-row update. The larger\nfile is the one covering the 400 inserts, and the smaller one is the\ncheckpoint before that. Which means that the \"Wait for a new summary\nto show up.\" code isn't actually waiting long enough, and then the\nwhole thing goes haywire. The problem is, I think, that this code\nnaively thinks it can just wait for summarized_lsn and everything will\nbe fine ... but that assumes we were caught up when we first measured\nthe summarized_lsn, and that need not be so, because it takes some\nshort but non-zero amount of time for the summarizer to catch up with\nthe WAL generated during initdb.\n\nI think the solution here is to find a better way to wait for the\ninserts to be summarized, one that actually does wait for that to\nhappen.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Jan 2024 10:51:26 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Tue, Jan 30, 2024 at 10:51 AM Robert Haas <[email protected]> wrote:\n> I think the solution here is to find a better way to wait for the\n> inserts to be summarized, one that actually does wait for that to\n> happen.\n\nHere's a patch for that. 
I now think\na7097ca630a25dd2248229f21ebce4968d85d10a was actually misguided, and\nserved only to mask some of the failures caused by waiting for the WAL\nsummary file.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 30 Jan 2024 11:52:54 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Tue, Jan 30, 2024 at 11:52 AM Robert Haas <[email protected]> wrote:\n> Here's a patch for that. I now think\n> a7097ca630a25dd2248229f21ebce4968d85d10a was actually misguided, and\n> served only to mask some of the failures caused by waiting for the WAL\n> summary file.\n\nCommitted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 31 Jan 2024 10:33:02 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Wed, Jan 24, 2024 at 12:05:15PM -0600, Nathan Bossart wrote:\n> There might be an overflow risk in the cutoff time calculation, but I doubt\n> that's the root cause of these failures:\n> \n> \t/*\n> \t * Files should only be removed if the last modification time precedes the\n> \t * cutoff time we compute here.\n> \t */\n> \tcutoff_time = time(NULL) - 60 * wal_summary_keep_time;\n\nI've attached a short patch for fixing this overflow risk. Specifically,\nit limits wal_summary_keep_time to INT_MAX / SECS_PER_MINUTE, just like\nlog_rotation_age.\n\nI considering checking for overflow when we subtract the keep-time from the\nresult of time(2), but AFAICT there's only a problem if time_t is unsigned,\nwhich Wikipedia leads me to believe is unusual [0], so I figured we might\nbe able to just wait this one out until 2038.\n\n> Otherwise, I think we'll probably need to add some additional logging to\n> figure out what is happening...\n\nSeparately, I suppose it's probably time to revert the temporary debugging\ncode adding by commit 5ddf997. I can craft a patch for that, too.\n\n[0] https://en.wikipedia.org/wiki/Unix_time#Representing_the_number\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 14 Mar 2024 16:00:10 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Thu, Mar 14, 2024 at 04:00:10PM -0500, Nathan Bossart wrote:\n> Separately, I suppose it's probably time to revert the temporary debugging\n> code adding by commit 5ddf997. 
I can craft a patch for that, too.\n\nAs promised...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 14 Mar 2024 20:52:55 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Thu, Mar 14, 2024 at 08:52:55PM -0500, Nathan Bossart wrote:\n> Subject: [PATCH v1 1/2] Revert \"Temporary patch to help debug pg_walsummary\n> test failures.\"\n\n> Subject: [PATCH v1 2/2] Fix possible overflow in MaybeRemoveOldWalSummaries().\n\nAssuming there are no objections or feedback, I plan to commit these two\npatches within the next couple of days.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 19 Mar 2024 13:15:02 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Tue, Mar 19, 2024 at 01:15:02PM -0500, Nathan Bossart wrote:\n> Assuming there are no objections or feedback, I plan to commit these two\n> patches within the next couple of days.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 20 Mar 2024 13:35:27 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Wed, Mar 20, 2024 at 2:35 PM Nathan Bossart <[email protected]> wrote:\n> On Tue, Mar 19, 2024 at 01:15:02PM -0500, Nathan Bossart wrote:\n> > Assuming there are no objections or feedback, I plan to commit these two\n> > patches within the next couple of days.\n>\n> Committed.\n\nThanks. Sorry you had to clean up after me. :-(\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 20 Mar 2024 14:53:01 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for incremental backup" }, { "msg_contents": "On Wed, Mar 20, 2024 at 02:53:01PM -0400, Robert Haas wrote:\n> On Wed, Mar 20, 2024 at 2:35 PM Nathan Bossart <[email protected]> wrote:\n>> Committed.\n> \n> Thanks. Sorry you had to clean up after me. :-(\n\nNo worries.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 20 Mar 2024 14:24:00 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for incremental backup" } ]
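As a concrete illustration of the alternative Nathan floats above — capturing the LSNs on either side of the data change and then inspecting every summary that overlaps that range, rather than expecting exactly one new summary — a test query might look like the sketch below. The function and column names (pg_available_wal_summaries(), tli, start_lsn, end_lsn) are the ones already used in the thread; the LSN literals are placeholders for values the TAP test would capture at run time, and this is only illustrative, not the fix that was actually committed (that one waits for summarization to catch up instead).

-- Sketch only: the two LSN literals stand in for values captured before and
-- after the single-row update; listing every overlapping summary avoids the
-- off-by-one problem of assuming exactly one new summary file appears.
SELECT tli, start_lsn, end_lsn
  FROM pg_available_wal_summaries()
 WHERE tli = 1
   AND end_lsn > '0/155BAE0'      -- LSN observed before the change (placeholder)
   AND start_lsn <= '0/155E750'   -- LSN observed after the change (placeholder)
 ORDER BY end_lsn;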
[ { "msg_contents": "We have some (generated) SQL that uses grouping sets to give us the\nsame data grouped in multiple ways (with sets of groups configurable\nby the user), with the ordering of the rows the same as the grouping\nset. This generally works fine, except for when one of the grouping\nsets contains part of another grouping set joined against a subquery\n(at least, I think that's the trigger).\n\nMinimal example here:\n\nSELECT seq, CONCAT('n', seq) AS n INTO TEMP TABLE test1 FROM\ngenerate_series(1,5) AS seq;\nSELECT seq, CONCAT('x', 6-seq) AS x INTO TEMP TABLE test2 FROM\ngenerate_series(1,5) AS seq;\n\nSELECT\n GROUPING(test1.n) AS gp_n,\n GROUPING(concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq)))\nAS gp_conc,\n test1.n,\n CONCAT(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq)) FROM test1\nGROUP BY\nGROUPING SETS(\n (test1.n),\n (concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq)))\n)\nORDER BY\n CASE WHEN GROUPING(test1.n)=0 THEN test1.n ELSE NULL END NULLS FIRST,\n CASE WHEN GROUPING(concat(test1.n, (SELECT x FROM test2 WHERE\nseq=test1.seq)))=0 THEN concat(test1.n, (SELECT x FROM test2 WHERE\nseq=test1.seq)) ELSE NULL END NULLS FIRST;\n gp_n | gp_conc | n | concat\n------+---------+----+--------\n 1 | 0 | | n5x1\n 1 | 0 | | n4x2\n 1 | 0 | | n3x3\n 1 | 0 | | n2x4\n 1 | 0 | | n1x5\n 0 | 1 | n1 |\n 0 | 1 | n2 |\n 0 | 1 | n3 |\n 0 | 1 | n4 |\n 0 | 1 | n5 |\n\n\nAm I missing some reason why the first set isn't sorted as I'd hoped?\nIs the subquery value in the ORDER BY not the same as the value in the\nmain query? That seems... frustrating. I'd like to be able to say\n\"order by column (n)\" but I don't think I can?\n\nOn Centos7, with the latest pg12 from the pg repo:\nPostgreSQL 12.16 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n20150623 (Red Hat 4.8.5-44), 64-bit\n\nThanks\n\nGeoff\n\n\n", "msg_date": "Fri, 5 Jan 2024 17:38:29 +0000", "msg_from": "Geoff Winkless <[email protected]>", "msg_from_op": true, "msg_subject": "weird GROUPING SETS and ORDER BY behaviour" }, { "msg_contents": "Hi,\n\n\nZhang Mingli\nwww.hashdata.xyz\nOn Jan 6, 2024 at 01:38 +0800, Geoff Winkless <[email protected]>, wrote:\n>\n> Am I missing some reason why the first set isn't sorted as I'd hoped?\n\nWoo, it’s a complex order by, I try to understand your example.\nAnd I think the order is right, what’s your expected order result?\n\n```\nORDER BY\n CASE WHEN GROUPING(test1.n)=0 THEN test1.n ELSE NULL END NULLS FIRST,\n CASE WHEN GROUPING(concat(test1.n, (SELECT x FROM test2 WHERE\nseq=test1.seq)))=0 THEN concat(test1.n, (SELECT x FROM test2 WHERE\nseq=test1.seq)) ELSE NULL END NULLS FIRST;\n```\nYou want to Order by a, b where a is: CASE WHEN GROUPING(test1.n)=0 THEN test1.n ELSE NULL END NULLS FIRST.\nGROUPING(test1.n)=0 means that your are within grouping set test1.n and the value is test1.n, so results of another grouping\nset b is NULL, and you specific  NULL FIRST.\n\nSo your will first get the results of grouping set b while of course, column gp_n GROUPING(test1.n) is 1.\nThe result is very right.\n\ngp_n | gp_conc | n | concat\n------+---------+------+--------\n 1 | 0 | NULL | n5x1\n 1 | 0 | NULL | n4x2\n 1 | 0 | NULL | n3x3\n 1 | 0 | NULL | n2x4\n 1 | 0 | NULL | n1x5\n 0 | 1 | n1 | NULL\n 0 | 1 | n2 | NULL\n 0 | 1 | n3 | NULL\n 0 | 1 | n4 | NULL\n 0 | 1 | n5 | NULL\n(10 rows)\n\nNB: the Grouping bit is set to 1 when this column is not included.\n\nhttps://www.postgresql.org/docs/current/functions-aggregate.html\nGROUPING ( group_by_expression(s) ) → 
integer\nReturns a bit mask indicating which GROUP BY expressions are not included in the current grouping set. Bits are assigned with the rightmost argument corresponding to the least-significant bit; each bit is 0 if the corresponding expression is included in the grouping criteria of the grouping set generating the current result row, and 1 if it is not included\n\nI guess you misunderstand it?\n\n\nAnd your GROUPING target entry seems misleading, I modify it to:\n\nSELECT GROUPING(test1.n, (concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))))::bit(2),\n\ntest1.n, CONCAT(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))\nFROM test1\n…skip\n\n\nTo show the grouping condition:\n\ngrouping | n | concat\n----------+------+--------\n 10 | NULL | n5x1\n 10 | NULL | n4x2\n 10 | NULL | n3x3\n 10 | NULL | n2x4\n 10 | NULL | n1x5\n 01 | n1 | NULL\n 01 | n2 | NULL\n 01 | n3 | NULL\n 01 | n4 | NULL\n 01 | n5 | NULL\n(10 rows)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi, \n\n\n\n\nZhang Mingli\nwww.hashdata.xyz\n\n\n\nOn Jan 6, 2024 at 01:38 +0800, Geoff Winkless <[email protected]>, wrote:\n\nAm I missing some reason why the first set isn't sorted as I'd hoped?\n\nWoo, it’s a complex order by, I try to understand your example.\nAnd I think the order is right, what’s your expected order result?\n\n```\nORDER BY\n CASE WHEN GROUPING(test1.n)=0 THEN test1.n ELSE NULL END NULLS FIRST,\n CASE WHEN GROUPING(concat(test1.n, (SELECT x FROM test2 WHERE\nseq=test1.seq)))=0 THEN concat(test1.n, (SELECT x FROM test2 WHERE\nseq=test1.seq)) ELSE NULL END NULLS FIRST;\n```\nYou want to Order by a, b where a is: CASE WHEN GROUPING(test1.n)=0 THEN test1.n ELSE NULL END NULLS FIRST.\nGROUPING(test1.n)=0 means that your are within grouping set test1.n and the value is test1.n, so results of another grouping\nset b is NULL, and you specific  NULL FIRST. \n\nSo your will first get the results of grouping set b while of course, column gp_n GROUPING(test1.n) is 1.\nThe result is very right.\n\ngp_n | gp_conc | n | concat\n------+---------+------+--------\n 1 | 0 | NULL | n5x1\n 1 | 0 | NULL | n4x2\n 1 | 0 | NULL | n3x3\n 1 | 0 | NULL | n2x4\n 1 | 0 | NULL | n1x5\n 0 | 1 | n1 | NULL\n 0 | 1 | n2 | NULL\n 0 | 1 | n3 | NULL\n 0 | 1 | n4 | NULL\n 0 | 1 | n5 | NULL\n(10 rows)\n\nNB: the Grouping bit is set to 1 when this column is not included.\n\nhttps://www.postgresql.org/docs/current/functions-aggregate.html\nGROUPING ( group_by_expression(s) ) → integer\nReturns a bit mask indicating which GROUP BY expressions are not included in the current grouping set. 
Bits are assigned with the rightmost argument corresponding to the least-significant bit; each bit is 0 if the corresponding expression is included in the grouping criteria of the grouping set generating the current result row, and 1 if it is not included\n\nI guess you misunderstand it?\n\n\nAnd your GROUPING target entry seems misleading, I modify it to:\n\nSELECT GROUPING(test1.n, (concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))))::bit(2), \n\ntest1.n, CONCAT(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq)) \nFROM test1\n…skip\n\n\nTo show the grouping condition:\n\ngrouping | n | concat\n----------+------+--------\n 10 | NULL | n5x1\n 10 | NULL | n4x2\n 10 | NULL | n3x3\n 10 | NULL | n2x4\n 10 | NULL | n1x5\n 01 | n1 | NULL\n 01 | n2 | NULL\n 01 | n3 | NULL\n 01 | n4 | NULL\n 01 | n5 | NULL\n(10 rows)", "msg_date": "Sat, 6 Jan 2024 02:34:39 +0800", "msg_from": "Zhang Mingli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: weird GROUPING SETS and ORDER BY behaviour" }, { "msg_contents": "On Fri, 5 Jan 2024 at 18:34, Zhang Mingli <[email protected]> wrote:\n>\n> On Jan 6, 2024 at 01:38 +0800, Geoff Winkless <[email protected]>, wrote:\n>\n>\n> Am I missing some reason why the first set isn't sorted as I'd hoped?\n>\n>\n> Woo, it’s a complex order by, I try to understand your example.\n> And I think the order is right, what’s your expected order result?\n\nI was hoping to see\n\ngp_n | gp_conc | n | concat\n------+---------+------+--------\n 1 | 0 | NULL | n1x5\n 1 | 0 | NULL | n2x4\n 1 | 0 | NULL | n3x3\n 1 | 0 | NULL | n4x2\n 1 | 0 | NULL | n5x1\n 0 | 1 | n1 | NULL\n 0 | 1 | n2 | NULL\n 0 | 1 | n3 | NULL\n 0 | 1 | n4 | NULL\n 0 | 1 | n5 | NULL\n\nbecause when gp_conc is 0, it should be ordering by the concat() value.\n\n> https://www.postgresql.org/docs/current/functions-aggregate.html\n> GROUPING ( group_by_expression(s) ) → integer\n> Returns a bit mask indicating which GROUP BY expressions are not included in the current grouping set. Bits are assigned with the rightmost argument corresponding to the least-significant bit; each bit is 0 if the corresponding expression is included in the grouping criteria of the grouping set generating the current result row, and 1 if it is not included\n>\n> I guess you misunderstand it?\n\nI don't think I did. 
I pass GROUPING(something) and if the current set\nis being grouped by (something) then the return value will be 0.\n\n> And your GROUPING target entry seems misleading, I modify it to:\n>\n> SELECT GROUPING(test1.n, (concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))))::bit(2),\n>\n> test1.n, CONCAT(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))\n> FROM test1\n> …skip\n>\n>\n> To show the grouping condition:\n>\n> grouping | n | concat\n> ----------+------+--------\n> 10 | NULL | n5x1\n> 10 | NULL | n4x2\n> 10 | NULL | n3x3\n> 10 | NULL | n2x4\n> 10 | NULL | n1x5\n> 01 | n1 | NULL\n> 01 | n2 | NULL\n> 01 | n3 | NULL\n> 01 | n4 | NULL\n> 01 | n5 | NULL\n> (10 rows)\n\n\nWith respect, I've no idea why you think that's any clearer.\n\nGeoff\n\n\n", "msg_date": "Sat, 6 Jan 2024 15:38:31 +0000", "msg_from": "Geoff Winkless <[email protected]>", "msg_from_op": true, "msg_subject": "Re: weird GROUPING SETS and ORDER BY behaviour" }, { "msg_contents": "On Sat, Jan 6, 2024 at 8:38 AM Geoff Winkless <[email protected]> wrote:\n\n> On Fri, 5 Jan 2024 at 18:34, Zhang Mingli <[email protected]> wrote:\n> >\n> > On Jan 6, 2024 at 01:38 +0800, Geoff Winkless <[email protected]>,\n> wrote:\n> >\n> >\n> > Am I missing some reason why the first set isn't sorted as I'd hoped?\n> >\n> >\n> > Woo, it’s a complex order by, I try to understand your example.\n> > And I think the order is right, what’s your expected order result?\n>\n> I was hoping to see\n>\n> gp_n | gp_conc | n | concat\n> ------+---------+------+--------\n> 1 | 0 | NULL | n1x5\n> 1 | 0 | NULL | n2x4\n> 1 | 0 | NULL | n3x3\n> 1 | 0 | NULL | n4x2\n> 1 | 0 | NULL | n5x1\n> 0 | 1 | n1 | NULL\n> 0 | 1 | n2 | NULL\n> 0 | 1 | n3 | NULL\n> 0 | 1 | n4 | NULL\n> 0 | 1 | n5 | NULL\n>\n> because when gp_conc is 0, it should be ordering by the concat() value.\n>\n>\nSomething does seem off here with the interaction between grouping sets and\norder by. I'm inclined to believe that using grouping in the order by\nsimply is an unsupported concept we fail to prohibit. The discussion\naround union all equivalency and grouping happening well before order by\nlead me to this conclusion.\n\nYou can get the desired result with a much less convoluted order by clause\n- so long as you understand where your nulls are coming from - with:\n\nhttps://dbfiddle.uk/Uk22nPIZ\n\nORDER BY\n n nulls first , x nulls first\n\nWhere x is the assigned alias for the concatenation expression column.\n\nDavid J.\n\nOn Sat, Jan 6, 2024 at 8:38 AM Geoff Winkless <[email protected]> wrote:On Fri, 5 Jan 2024 at 18:34, Zhang Mingli <[email protected]> wrote:\n>\n> On Jan 6, 2024 at 01:38 +0800, Geoff Winkless <[email protected]>, wrote:\n>\n>\n> Am I missing some reason why the first set isn't sorted as I'd hoped?\n>\n>\n> Woo, it’s a complex order by, I try to understand your example.\n> And I think the order is right, what’s your expected order result?\n\nI was hoping to see\n\ngp_n | gp_conc | n | concat\n------+---------+------+--------\n 1 | 0 | NULL | n1x5\n 1 | 0 | NULL | n2x4\n 1 | 0 | NULL | n3x3\n 1 | 0 | NULL | n4x2\n 1 | 0 | NULL | n5x1\n 0 | 1 | n1 | NULL\n 0 | 1 | n2 | NULL\n 0 | 1 | n3 | NULL\n 0 | 1 | n4 | NULL\n 0 | 1 | n5 | NULL\n\nbecause when gp_conc is 0, it should be ordering by the concat() value.Something does seem off here with the interaction between grouping sets and order by.  I'm inclined to believe that using grouping in the order by simply is an unsupported concept we fail to prohibit.  
The discussion around union all equivalency and grouping happening well before order by lead me to this conclusion.You can get the desired result with a much less convoluted order by clause - so long as you understand where your nulls are coming from - with:https://dbfiddle.uk/Uk22nPIZORDER BY  n nulls first , x nulls firstWhere x is the assigned alias for the concatenation expression column.David J.", "msg_date": "Sat, 6 Jan 2024 09:22:04 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: weird GROUPING SETS and ORDER BY behaviour" }, { "msg_contents": "Hi,\n\n\nZhang Mingli\nwww.hashdata.xyz\nOn Jan 6, 2024 at 23:38 +0800, Geoff Winkless <[email protected]>, wrote:\n>\n> I was hoping to see\n>\n> gp_n | gp_conc | n | concat\n> ------+---------+------+--------\n> 1 | 0 | NULL | n1x5\n> 1 | 0 | NULL | n2x4\n> 1 | 0 | NULL | n3x3\n> 1 | 0 | NULL | n4x2\n> 1 | 0 | NULL | n5x1\n> 0 | 1 | n1 | NULL\n> 0 | 1 | n2 | NULL\n> 0 | 1 | n3 | NULL\n> 0 | 1 | n4 | NULL\n> 0 | 1 | n5 | NULL\n>\n> because when gp_conc is 0, it should be ordering by the concat() value.\nHi, I misunderstand and thought you want to see the rows of gp_n = 0 first.\nSo you’re not satisfied with the second key of Order By.\nI simply the SQL to show that the difference exists:\n\nSELECT GROUPING(test1.n) AS gp_n,\nGROUPING(concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))) AS gp_conc,\n test1.n,\n CONCAT(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))\n FROM test1\nGROUP BY GROUPING SETS( (test1.n), (concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))) )\nHAVING n is NULL\nORDER BY concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq)) NULLS FIRST;\n gp_n | gp_conc | n | concat\n------+---------+------+--------\n 1 | 0 | NULL | n1x5\n 1 | 0 | NULL | n2x4\n 1 | 0 | NULL | n3x3\n 1 | 0 | NULL | n4x2\n 1 | 0 | NULL | n5x1\n(5 rows)\n\nThis is what you want, right?\n\nAnd if there is a CASE WHEN, the order changed:\n\nSELECT GROUPING(test1.n) AS gp_n,\nGROUPING(concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))) AS gp_conc,\n test1.n,\n CONCAT(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))\n FROM test1\nGROUP BY GROUPING SETS( (test1.n), (concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))) )\nHAVING n is NULL\nORDER BY CASE WHEN true THEN concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq)) END NULLS FIRST;\n gp_n | gp_conc | n | concat\n------+---------+------+--------\n 1 | 0 | NULL | n5x1\n 1 | 0 | NULL | n4x2\n 1 | 0 | NULL | n3x3\n 1 | 0 | NULL | n2x4\n 1 | 0 | NULL | n1x5\n(5 rows)\n\nI haven’t dinged into this and it seems sth related with CASE WHEN.\nA case when true will change the order.\n\n\n\n\n\n\n\n\n\n\n\nHi, \n\n\n\n\nZhang Mingli\nwww.hashdata.xyz\n\n\n\nOn Jan 6, 2024 at 23:38 +0800, Geoff Winkless <[email protected]>, wrote:\n\nI was hoping to see\n\ngp_n | gp_conc | n | concat\n------+---------+------+--------\n1 | 0 | NULL | n1x5\n1 | 0 | NULL | n2x4\n1 | 0 | NULL | n3x3\n1 | 0 | NULL | n4x2\n1 | 0 | NULL | n5x1\n0 | 1 | n1 | NULL\n0 | 1 | n2 | NULL\n0 | 1 | n3 | NULL\n0 | 1 | n4 | NULL\n0 | 1 | n5 | NULL\n\nbecause when gp_conc is 0, it should be ordering by the concat() value.\nHi, I misunderstand and thought you want to see the rows of gp_n = 0 first.\nSo you’re not satisfied with the second key of Order By.\nI simply the SQL to show that the difference exists:\n\nSELECT GROUPING(test1.n) AS gp_n, \nGROUPING(concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))) AS gp_conc,\n test1.n,\n 
CONCAT(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))\n FROM test1 \nGROUP BY GROUPING SETS( (test1.n), (concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))) ) \nHAVING n is NULL \nORDER BY concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq)) NULLS FIRST;\n gp_n | gp_conc | n | concat\n------+---------+------+--------\n 1 | 0 | NULL | n1x5\n 1 | 0 | NULL | n2x4\n 1 | 0 | NULL | n3x3\n 1 | 0 | NULL | n4x2\n 1 | 0 | NULL | n5x1\n(5 rows)\n\nThis is what you want, right?\n\nAnd if there is a CASE WHEN, the order changed:\n\nSELECT GROUPING(test1.n) AS gp_n, \nGROUPING(concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))) AS gp_conc,\n test1.n,\n CONCAT(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))\n FROM test1 \nGROUP BY GROUPING SETS( (test1.n), (concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))) ) \nHAVING n is NULL\nORDER BY CASE WHEN true THEN concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq)) END NULLS FIRST;\n gp_n | gp_conc | n | concat\n------+---------+------+--------\n 1 | 0 | NULL | n5x1\n 1 | 0 | NULL | n4x2\n 1 | 0 | NULL | n3x3\n 1 | 0 | NULL | n2x4\n 1 | 0 | NULL | n1x5\n(5 rows)\n\nI haven’t dinged into this and it seems sth related with CASE WHEN.\nA case when true will change the order.", "msg_date": "Sun, 7 Jan 2024 00:48:17 +0800", "msg_from": "Zhang Mingli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: weird GROUPING SETS and ORDER BY behaviour" }, { "msg_contents": "On Sat, 6 Jan 2024 at 16:22, David G. Johnston\n<[email protected]> wrote:\n> On Sat, Jan 6, 2024 at 8:38 AM Geoff Winkless <[email protected]> wrote:\n>> because when gp_conc is 0, it should be ordering by the concat() value.\n>\n> Something does seem off here with the interaction between grouping sets and order by.\n> I'm inclined to believe that using grouping in the order by simply is an unsupported\n> concept we fail to prohibit.\n\nThat's disappointing.\n\n> You can get the desired result with a much less convoluted order by clause -\n> so long as you understand where your nulls are coming from - with:\n> ORDER BY\n> n nulls first , x nulls first\n\nAhh, well, yes, that's fine in this instance which (as you may\nremember) was a minimal example of the behaviour, but wouldn't be\nuseful in the real-world situation, where we can have many\npotentially-conflicting grouping sets, each set needing to be ordered\nconsistently internally.\n\nGeoff\n\n\n", "msg_date": "Sat, 6 Jan 2024 16:57:27 +0000", "msg_from": "Geoff Winkless <[email protected]>", "msg_from_op": true, "msg_subject": "Re: weird GROUPING SETS and ORDER BY behaviour" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> Something does seem off here with the interaction between grouping sets and\n> order by.\n\nYeah. I think Geoff is correct to identify the use of subqueries in\nthe grouping sets as the triggering factor. 
We can get some insight\nby explicitly printing the ordering values:\n\nSELECT\n GROUPING(test1.n) AS gp_n,\n GROUPING(concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq)))\nAS gp_conc,\n test1.n,\n CONCAT(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq)),\n CASE WHEN GROUPING(test1.n)=0 THEN test1.n ELSE NULL END as o1,\n CASE WHEN GROUPING(concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq)))=0 THEN concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq)) ELSE NULL END as o2\nFROM test1\nGROUP BY\nGROUPING SETS(\n (test1.n),\n (concat(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq)))\n)\nORDER BY o1 NULLS FIRST, o2 NULLS FIRST;\n\nwhich produces\n\n gp_n | gp_conc | n | concat | o1 | o2 \n------+---------+----+--------+----+----\n 1 | 0 | | n5x1 | | x1\n 1 | 0 | | n4x2 | | x2\n 1 | 0 | | n3x3 | | x3\n 1 | 0 | | n2x4 | | x4\n 1 | 0 | | n1x5 | | x5\n 0 | 1 | n1 | | n1 | \n 0 | 1 | n2 | | n2 | \n 0 | 1 | n3 | | n3 | \n 0 | 1 | n4 | | n4 | \n 0 | 1 | n5 | | n5 | \n(10 rows)\n\nThose columns appear correctly sorted, so it's not the sort that\nis misbehaving. But how come the values don't match the \"concat\"\ncolumn where they should? EXPLAIN VERBOSE gives a further clue:\n\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=53622.76..53623.76 rows=400 width=136)\n Output: (GROUPING(test1.n)), (GROUPING(concat(test1.n, (SubPlan 1)))), test1.n, (concat(test1.n, (SubPlan 2))), (CASE WHEN (GROUPING(test1.n) = 0) THEN test1.n ELSE NULL::text END), (CASE WHEN (GROUPING(concat(test1.n, (SubPlan 3))) = 0) THEN concat(test1.n, (SubPlan 4)) ELSE NULL::text END)\n Sort Key: (CASE WHEN (GROUPING(test1.n) = 0) THEN test1.n ELSE NULL::text END) NULLS FIRST, (CASE WHEN (GROUPING(concat(test1.n, (SubPlan 3))) = 0) THEN concat(test1.n, (SubPlan 4)) ELSE NULL::text END) NULLS FIRST\n -> HashAggregate (cost=32890.30..53605.48 rows=400 width=136)\n Output: GROUPING(test1.n), GROUPING(concat(test1.n, (SubPlan 1))), test1.n, (concat(test1.n, (SubPlan 2))), CASE WHEN (GROUPING(test1.n) = 0) THEN test1.n ELSE NULL::text END, CASE WHEN (GROUPING(concat(test1.n, (SubPlan 3))) = 0) THEN concat(test1.n, (SubPlan 4)) ELSE NULL::text END\n Hash Key: test1.n\n Hash Key: concat(test1.n, (SubPlan 2))\n -> Seq Scan on public.test1 (cost=0.00..32887.12 rows=1270 width=68)\n Output: test1.n, concat(test1.n, (SubPlan 2)), test1.seq\n SubPlan 2\n -> Seq Scan on public.test2 (cost=0.00..25.88 rows=6 width=32)\n Output: test2.x\n Filter: (test2.seq = test1.seq)\n SubPlan 4\n -> Seq Scan on public.test2 test2_1 (cost=0.00..25.88 rows=6 width=32)\n Output: test2_1.x\n Filter: (test2_1.seq = test1.seq)\n(17 rows)\n\nWe have ended up with four distinct SubPlans (two of which seem to\nhave gotten dropped because they are inside GROUPING functions,\nwhich never really evaluate their arguments). What I think happened\nhere is that the parser let the concat() expressions in the targetlist\nand ORDER BY through because they were syntactically identical to\nGROUPING SET expresions --- but later on, the planner expanded each\nof the sub-selects to a distinct SubPlan, and that meant that those\nsubexpressions were no longer identical, and so most of them didn't\nget converted to references to the grouping key column output by the\nHashAggregate node. 
Because they didn't get converted, they fail to\ndo the right thing in rows where they should go to NULL because\nthey're from the wrong grouping set. The \"concat\" targetlist element\ndid get converted, so it behaves correctly, and the GROUPING functions\ndon't actually care because they have a different method for\ndetermining what they should output. But you get the wrong answer for\nthe concat() inside the \"o2\" expression: it gets evaluated afresh\nusing the nulled value of test1.n.\n\nI think this particular symptom might be new, but we've definitely\nseen related trouble reports before. I'm inclined to think that the\nright fix will require making the parser actually replace such\nexpressions with Vars referencing a notional grouping output relation,\nso that there's not multiple instances of the sub-query in the parser\noutput in the first place. That's a fairly big job and nobody's\ntackled it yet.\n\nIn the meantime, what I'd suggest as a workaround is to put those\nsubexpressions into a sub-select with an optimization fence (you\ncould use OFFSET 0 or a materialized CTE), so that the grouping\nsets list in the outer query just has simple Vars as elements.\n\n[Digression: the SQL spec *requires* grouping set elements to be\nsimple Vars, and I begin to see why when contemplating examples like\nthis. It's a little weird that \"concat(test1.n, ...)\" can evaluate\nwith a non-null value of test1.n in a row where test1.n alone would\nevaluate as null. However, we've dug this hole for ourselves and now\nwe have to deal with the consequences.]\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Jan 2024 14:49:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: weird GROUPING SETS and ORDER BY behaviour" }, { "msg_contents": "On Sat, 6 Jan 2024, 19:49 Tom Lane, <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > Something does seem off here with the interaction between grouping sets\n> and\n> > order by.\n>\n> Yeah. I think Geoff is correct to identify the use of subqueries in\n> the grouping sets as the triggering factor.\n\n[snip]\n\n> I think this particular symptom might be new, but we've definitely\n> seen related trouble reports before. I'm inclined to think that the\n> right fix will require making the parser actually replace such\n> expressions with Vars referencing a notional grouping output relation,\n> so that there's not multiple instances of the sub-query in the parser\n> output in the first place.\n\n\nWell yes. I assumed that since it's required that a group expression is in\nthe query itself that the grouping values were taken from the result set, I\nhave to admit to some surprise that they're calculated twice (three times?).\n\nThat's a fairly big job and nobody's\n> tackled it yet.\n\n\nFor what it's worth, as a user if we could reference a column alias in the\nGROUP and ORDER sections, rather than having to respecify the expression\neach time, that would be a far more friendly solution. Not sure it makes\nthe work any less difficult though.\n\nIn the meantime, what I'd suggest as a workaround is to put those\n> subexpressions into a sub-select with an optimization fence (you\n> could use OFFSET 0 or a materialized CTE), so that the grouping\n> sets list in the outer query just has simple Vars as elements.\n>\n\nNot possible in our case, sadly - at least not without a complete redesign\nof our SQL-generating code. 
It would be (much) easier to add a sort to the\noutput stage, tbh, and stop lazily relying on the output being sorted for\nus; I guess that's the route we'll have to take.\n\nThanks all for taking the time to look at it.\n\nGeoff\n\nOn Sat, 6 Jan 2024, 19:49 Tom Lane, <[email protected]> wrote:\"David G. Johnston\" <[email protected]> writes:\n> Something does seem off here with the interaction between grouping sets and\n> order by.\n\nYeah.  I think Geoff is correct to identify the use of subqueries in\nthe grouping sets as the triggering factor.  [snip]\nI think this particular symptom might be new, but we've definitely\nseen related trouble reports before.  I'm inclined to think that the\nright fix will require making the parser actually replace such\nexpressions with Vars referencing a notional grouping output relation,\nso that there's not multiple instances of the sub-query in the parser\noutput in the first place.  Well yes. I assumed that since it's required that a group expression is in the query itself that the grouping values were taken from the result set, I have to admit to some surprise that they're calculated twice (three times?).That's a fairly big job and nobody's\ntackled it yet.For what it's worth, as a user if we could reference a column alias in the GROUP and ORDER sections, rather than having to respecify the expression each time, that would be a far more friendly solution. Not sure it makes the work any less difficult though.\nIn the meantime, what I'd suggest as a workaround is to put those\nsubexpressions into a sub-select with an optimization fence (you\ncould use OFFSET 0 or a materialized CTE), so that the grouping\nsets list in the outer query just has simple Vars as elements.Not possible in our case, sadly - at least not without a complete redesign of our SQL-generating code. It would be (much) easier to add a sort to the output stage, tbh, and stop lazily relying on the output being sorted for us; I guess that's the route we'll have to take.Thanks all for taking the time to look at it.Geoff", "msg_date": "Sat, 6 Jan 2024 23:27:40 +0000", "msg_from": "Geoff Winkless <[email protected]>", "msg_from_op": true, "msg_subject": "Re: weird GROUPING SETS and ORDER BY behaviour" }, { "msg_contents": "On Sat, 6 Jan 2024 at 23:27, Geoff Winkless <[email protected]> wrote:\n\n> Well yes. 
I assumed that since it's required that a group expression is in the query itself that\n> the grouping values were taken from the result set, I have to admit to some surprise that\n> they're calculated twice (three times?).\n\nSeems there was a reason why I thought that: per the documentation:\n\n\"The arguments to the GROUPING function are not actually evaluated,\nbut they must exactly match expressions given in the GROUP BY clause\nof the associated query level.\"\n\nhttps://www.postgresql.org/docs/16/functions-aggregate.html#FUNCTIONS-GROUPING-TABLE\n\nMildly interesting: you can pass column positions to GROUP BY and\nORDER BY but if you try to pass a position to GROUPING() (I wondered\nif that would help the engine somehow) it fails:\n\nSELECT\n test1.n,\n CONCAT(test1.n, (SELECT x FROM test2 WHERE seq=test1.seq))\nFROM test1\nGROUP BY\nGROUPING SETS(\n 1,\n 2\n)\nORDER BY\n CASE WHEN GROUPING(1)=0 THEN 1 ELSE NULL END NULLS FIRST,\n CASE WHEN GROUPING(2)=0 THEN 2 ELSE NULL END NULLS FIRST;\n\nERROR: arguments to GROUPING must be grouping expressions of the\nassociated query level\n\nGeoff\n\n\n", "msg_date": "Mon, 8 Jan 2024 10:23:51 +0000", "msg_from": "Geoff Winkless <[email protected]>", "msg_from_op": true, "msg_subject": "Re: weird GROUPING SETS and ORDER BY behaviour" }, { "msg_contents": "On Mon, 8 Jan 2024 at 10:23, Geoff Winkless <[email protected]> wrote:\n\n> Seems there was a reason why I thought that: per the documentation:\n>\n> \"The arguments to the GROUPING function are not actually evaluated,\n> but they must exactly match expressions given in the GROUP BY clause\n> of the associated query level.\"\n>\n> https://www.postgresql.org/docs/16/functions-aggregate.html#FUNCTIONS-GROUPING-TABLE\n\nTo throw a spanner in the works, it looks like it's not the test\nitself that's failing: it's putting the ORDERing in a CASE at all that\nfails.\n\n... ORDER BY\n CASE WHEN GROUPING(test1.n) THEN 1 ELSE NULL END NULLS FIRST, CASE\nWHEN true THEN 2 ELSE 2 END;\n n | concat\n----+--------\n n1 |\n n2 |\n n3 |\n n4 |\n n5 |\n | n3x3\n | n5x1\n | n2x4\n | n1x5\n | n4x2\n\nbut without the CASE works fine:\n\n... ORDER BY\n CASE WHEN GROUPING(test1.n) THEN 1 ELSE NULL END NULLS FIRST, 2;\n n | concat\n----+--------\n n4 |\n n2 |\n n3 |\n n5 |\n n1 |\n | n1x5\n | n2x4\n | n3x3\n | n4x2\n | n5x1\n\nWhat's even more of a head-scratcher is why fixing this this then\nbreaks the _first_ group's ORDERing.\n\nIt _looks_ like removing the CASE altogether and ordering by the\nGROUPING value for all the grouping sets first:\n\nORDER BY\n GROUPING(test1.n,CONCAT(test1.n, (SELECT x FROM test2 WHERE\nseq=test1.seq))), 1, 2;\n\nactually works. I'm trying to figure out if that scales up or if it's\njust dumb luck that it works for my example.\n\nGeoff\n\n\n", "msg_date": "Mon, 8 Jan 2024 11:12:43 +0000", "msg_from": "Geoff Winkless <[email protected]>", "msg_from_op": true, "msg_subject": "Re: weird GROUPING SETS and ORDER BY behaviour" }, { "msg_contents": "On Mon, 8 Jan 2024 at 11:12, Geoff Winkless <[email protected]> wrote:\n> What's even more of a head-scratcher is why fixing this this then\n> breaks the _first_ group's ORDERing.\n\nIgnore that. 
Finger slippage - looking back I realised I forgot the\n\"=0\" test after the GROUPING() call.\n\nIt looks like I'm going to go with\n\nORDER BY GROUPING(test1.n), test1.n, GROUPING(CONCAT(....)), CONCAT(...)\n\nbecause it's easier to build the query sequentially that way than\nputting all the GROUPING tests into a single ORDER, and it does seem\nto work OK.\n\nGeoff\n\n\n", "msg_date": "Mon, 8 Jan 2024 11:53:47 +0000", "msg_from": "Geoff Winkless <[email protected]>", "msg_from_op": true, "msg_subject": "Re: weird GROUPING SETS and ORDER BY behaviour" }, { "msg_contents": "On Monday, January 8, 2024, Geoff Winkless <[email protected]> wrote\n>\n>\n> Mildly interesting: you can pass column positions to GROUP BY and\n> ORDER BY but if you try to pass a position to GROUPING() (I wondered\n> if that would help the engine somehow) it fails:\n>\n\nThe symbol 1 is ambigious - it can be the number or a column reference. In\na compound expression it is always the number, not the column reference.\n\nDavid J.\n\n", "msg_date": "Mon, 8 Jan 2024 07:01:40 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: weird GROUPING SETS and ORDER BY behaviour" } ]
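For reference, the two workarounds that come out of this thread can be written against the test1/test2 tables from the opening message. Both are sketches rather than quotations: the conc alias and the CTE name pre are invented here, only the first form was actually reported as working by Geoff, and the second simply follows Tom's suggestion of putting the correlated subquery behind an optimization fence so the grouping-set elements are plain column references.

-- Workaround Geoff settled on: order by the GROUPING() flag for each set,
-- then by the expression itself, without wrapping anything in CASE.
SELECT test1.n,
       concat(test1.n, (SELECT x FROM test2 WHERE seq = test1.seq)) AS conc
FROM test1
GROUP BY GROUPING SETS (
    (test1.n),
    (concat(test1.n, (SELECT x FROM test2 WHERE seq = test1.seq)))
)
ORDER BY GROUPING(test1.n), test1.n,
         GROUPING(concat(test1.n, (SELECT x FROM test2 WHERE seq = test1.seq))),
         concat(test1.n, (SELECT x FROM test2 WHERE seq = test1.seq));

-- Workaround along the lines Tom suggests: evaluate the correlated subquery
-- once behind a materialized CTE, so the grouping sets contain simple columns
-- and the planner cannot expand the subquery into multiple SubPlans.
WITH pre AS MATERIALIZED (
    SELECT n,
           concat(n, (SELECT x FROM test2 WHERE seq = test1.seq)) AS conc
    FROM test1
)
SELECT n, conc
FROM pre
GROUP BY GROUPING SETS ((n), (conc))
ORDER BY GROUPING(n), n, GROUPING(conc), conc;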
[ { "msg_contents": "hi.\nMaybe this is a small printout err_position bug.\n\ncreate table atacc2 ( test int, a int, b int) ;\nsuccess tests:\nalter table atacc2 add CONSTRAINT x PRIMARY KEY (id, b );\nalter table atacc2 add CONSTRAINT x PRIMARY KEY (id, b a);\nalter table atacc2 add CONSTRAINT x PRIMARY KEYa (id, b);\n\ntests have problem:\nalter table atacc2 add constraints x unique (test, a, b);\nERROR: syntax error at or near \"(\"\nLINE 1: alter table atacc2 add constraints x unique (test, a, b);\n\n ^\nADD either following with the optional keyword \"COLUMN\" or\n\"CONSTRAINT\" as the doc.\nso I should expect the '^' point at \"constraints\"?\n\n\n", "msg_date": "Mon, 8 Jan 2024 13:59:54 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "alter table add x wrong error position" }, { "msg_contents": "On Sunday, January 7, 2024, jian he <[email protected]> wrote:\n\n> hi.\n> Maybe this is a small printout err_position bug.\n>\n> create table atacc2 ( test int, a int, b int) ;\n> success tests:\n> alter table atacc2 add CONSTRAINT x PRIMARY KEY (id, b );\n> alter table atacc2 add CONSTRAINT x PRIMARY KEY (id, b a);\n> alter table atacc2 add CONSTRAINT x PRIMARY KEYa (id, b);\n>\n> tests have problem:\n> alter table atacc2 add constraints x unique (test, a, b);\n> ERROR: syntax error at or near \"(\"\n> LINE 1: alter table atacc2 add constraints x unique (test, a, b);\n>\n> ^\n> ADD either following with the optional keyword \"COLUMN\" or\n> \"CONSTRAINT\" as the doc.\n> so I should expect the '^' point at \"constraints\"?\n>\n\nIt’s finding “… add column_name data_type column_constraint” then dies at\nthe parenthesis. So indeed the care t should be pointing where it probably\nis, at the parenthesis that the error is referring to.\n\nDavid J.\n\nOn Sunday, January 7, 2024, jian he <[email protected]> wrote:hi.\nMaybe this is a small printout err_position bug.\n\ncreate table atacc2 ( test int, a int, b int) ;\nsuccess tests:\nalter table atacc2 add CONSTRAINT x PRIMARY KEY (id, b );\nalter table atacc2 add CONSTRAINT x PRIMARY KEY (id, b a);\nalter table atacc2 add CONSTRAINT x PRIMARY KEYa (id, b);\n\ntests have problem:\nalter table atacc2 add constraints x unique (test, a, b);\nERROR:  syntax error at or near \"(\"\nLINE 1: alter table atacc2 add constraints x unique (test, a, b);\n\n          ^\nADD either following with the optional keyword \"COLUMN\" or\n\"CONSTRAINT\"  as the doc.\nso I should expect the '^' point at \"constraints\"?\nIt’s finding “… add column_name data_type column_constraint” then dies at the parenthesis.  So indeed the care t should be pointing where it probably is, at the parenthesis that the error is referring to.David J.", "msg_date": "Sun, 7 Jan 2024 23:17:38 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: alter table add x wrong error position" }, { "msg_contents": "On 2024-Jan-08, jian he wrote:\n\n> hi.\n> Maybe this is a small printout err_position bug.\n> \n> create table atacc2 ( test int, a int, b int) ;\n> success tests:\n> alter table atacc2 add CONSTRAINT x PRIMARY KEY (id, b );\n> alter table atacc2 add CONSTRAINT x PRIMARY KEY (id, b a);\n> alter table atacc2 add CONSTRAINT x PRIMARY KEYa (id, b);\n> \n> tests have problem:\n> alter table atacc2 add constraints x unique (test, a, b);\n> ERROR: syntax error at or near \"(\"\n> LINE 1: alter table atacc2 add constraints x unique (test, a, b);\n> \n> ^\n> ADD either following with the optional keyword \"COLUMN\" or\n> \"CONSTRAINT\" as the doc.\n> so I should expect the '^' point at \"constraints\"?\n\nHere you're saying to add a column called constraints, of\ntype x; then UNIQUE is parsed by columnDef as ColQualList, which goes to\nthe ColConstraintElem production starting with the UNIQUE keyword: \n\n\t\t\t| UNIQUE opt_unique_null_treatment opt_definition OptConsTableSpace\n\nso the next thing could be opt_unique_null_treatment or opt_definition\nor OptConsTableSpace or going back to ColQualList, but none of those\nstart with a '(' parens. So the ( doesn't have a match and you get the\nsyntax error.\n\nIf you don't misspell CONSTRAINT as \"constraints\", there's no issue.\n\nI don't see any way to improve this. You can't forbid misspellings of\nthe keyword CONSTRAINT, because they can be column names.\n\nalter table atacc2 add cnstrnt x unique (test, a, b);\nERROR: error de sintaxis en o cerca de «(»\nLÍNEA 1: alter table atacc2 add cnstrnt x unique (test, a, b);\n ^\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 12 Jan 2024 10:58:21 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: alter table add x wrong error position" } ]
[ { "msg_contents": "Hi,\n\nHere's a quick status report after the first week:\nStatus summary:\n Needs review: 238.\n Waiting on Author: 44.\n Ready for Committer: 27.\n Committed: 36.\n Moved to next CF: 1.\n Withdrawn: 2.\n Returned with Feedback: 3.\n Rejected: 1.\nTotal: 352.\n\nHere is a list of \"Needs review\" entries for which there has not been\nmuch communication on the thread and needs help in proceeding further.\nHackers please pick one of these and review/share your suggestions,\nthat will be helpful in taking it forward:\nFixes for non-atomic read of read of control file on ext4 + ntfs | Thomas Munro\nRecovering from detoast-related catcache invalidations | Tom Lane\nDump-restore loosing 'attnotnull' bit for DEFERRABLE PRIMARY KEY\ncolumn(s) | Amul sul\npgbench: allow to cancel queries during benchmark | Yugo Nagata\nvacuumdb/clusterdb/reindexdb: allow specifying objects to process in\nall databases | Nathan Bossart\nAdd additional extended protocol commands to psql: \\parse and \\bindx |\nAnthonin Bonnefoy\nFunction to log backtrace of postgres processes | Vignesh C/Bharath Rupireddy\nCheck consistency of GUC defaults between .sample.conf and\npg_settings.boot_val | Nathan Bossart\narchive modules loose ends | Nathan Bossart\nmake pg_ctl more friendly | Crisp Lee\nlocked reads for atomics | Nathan Bossart\ncommon signal handler protection | Nathan Bossart\nReuse child_relids in try_partitionwise_join | Ashutosh Bapat\n\nHere is a list of \"Ready for Committer\" entries for which there has\nnot been much communication on the thread and needs help in proceeding\nfurther. If any of the committers has some time to spare, please help\nus on these:\nSkip hidden files in serverside utilities | Daniel Gustafsson\nUnlinking Parallel Hash Join inner batch files sooner | Thomas Munro\npg_basebackup: mention that spread checkpoints are the default | Michael Banck\nEvaluate arguments of correlated SubPlans in the referencing ExprState\n| Andres Freund\nCross-database SERIALIZABLE safe snapshots | Thomas Munro\nAdd support function for range containment operators | Kim Johan Andersson\nRefactor pipe_read_line as a more generic interface for reading\narbitrary strings off a pipe | Daniel Gustafsson\nUse atomic ops for unlogged LSN | John Morris\nfunctions to compute size of schemas/AMs (and maybe \\dn++ and \\dA++) |\nJustin Pryzby\nImprove Boolean Predicate JSON Path Docs | David Wheeler\nAdd PQsendPipelineSync() to libpq | Anton Kirilov\nTrigger violates foreign key constraint | Laurenz Albe\n\nIf you have submitted a patch and it's in \"Waiting for author\" state,\nplease get the patch to \"Needs review\" state soon if you can, as\nthat's where people are most likely to be looking for things to\nreview.\n\nI have pinged many threads that are in \"Ready for Committer\", \"Needs\nreview\" state and don't apply, compile warning-free, or pass\ncheck-world. 
I have also updated the status of the patches which were\nnot in the correct state.\n\nI will also be pinging the patch owners to review a few patches who\nhave submitted one or more patches but have not picked any of the\npatches for review.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 8 Jan 2024 11:52:14 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Commitfest 2024-01 first week update" }, { "msg_contents": "On Mon, 8 Jan 2024 at 07:22, vignesh C <[email protected]> wrote:\n> Here is a list of \"Needs review\" entries for which there has not been\n> much communication on the thread and needs help in proceeding further.\n\nThank you for creating these lists. It's definitely helpful to see\nwhat to focus my reviewing effort on. I noticed they are missing one\nof my commitfest entries though:\n\nAdd non-blocking version of PQcancel | Jelte Fennema-Nio\n\nimho this patch is pretty much finished, but it has only received\nEnglish spelling feedback in the last 8 months (I addressed the\nfeedback).\n\n\n", "msg_date": "Mon, 8 Jan 2024 18:19:53 +0100", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2024-01 first week update" }, { "msg_contents": "On Mon, 8 Jan 2024 at 22:50, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> On Mon, 8 Jan 2024 at 07:22, vignesh C <[email protected]> wrote:\n> > Here is a list of \"Needs review\" entries for which there has not been\n> > much communication on the thread and needs help in proceeding further.\n>\n> Thank you for creating these lists. It's definitely helpful to see\n> what to focus my reviewing effort on. I noticed they are missing one\n> of my commitfest entries though:\n\nThis is just one list for this week, I will be focussing on a\ndifferent set of patches for review in the next week.\n\n> Add non-blocking version of PQcancel | Jelte Fennema-Nio\n>\n> imho this patch is pretty much finished, but it has only received\n> English spelling feedback in the last 8 months (I addressed the\n> feedback).\n\nThanks, One of us will have a look at this patch.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 9 Jan 2024 10:54:19 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Commitfest 2024-01 first week update" }, { "msg_contents": "Hi,\n\nI think we need to be more aggressive about marking things returned\nwith feedback when they don't get updated. If a patch is waiting for\nreviews for a long time, well, that's one thing. Maybe we eventually\nclose it due to lack of interest in reviewing it, but that should be\ndone cautiously, as it will understandably piss people off. But I\nregularly find patches in the CommitFest which have been waiting on\nauthor for multiple commitfests and are just repeatedly moved forward.\nThat's crazy to me. It makes the CommitFest application fill up with\njunk that obscures what actually needs to be dealt with.\n\n...Robert\n\n\n", "msg_date": "Tue, 9 Jan 2024 17:18:36 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2024-01 first week update" }, { "msg_contents": "On Wed, 10 Jan 2024 at 03:48, Robert Haas <[email protected]> wrote:\n>\n> Hi,\n>\n> I think we need to be more aggressive about marking things returned\n> with feedback when they don't get updated. If a patch is waiting for\n> reviews for a long time, well, that's one thing. 
Maybe we eventually\n> close it due to lack of interest in reviewing it, but that should be\n> done cautiously, as it will understandably piss people off. But I\n> regularly find patches in the CommitFest which have been waiting on\n> author for multiple commitfests and are just repeatedly moved forward.\n> That's crazy to me. It makes the CommitFest application fill up with\n> junk that obscures what actually needs to be dealt with.\n\nI also noticed that many patches get carried forward to the next\ncommitfest. I will start highlighting these kinds of patches to the\nauthor, request them to act upon the patches and return the patch if\nnothing is done in this commitfest. I'm also working on the needs\nreview patches, I have already withdrawn one of my patches yesterday\nat [1] which got no response for a long time. I also got one on my\ncolleagues's patch to be withdrawn by requesting him internally at\n[2]. I will be posting these kinds of patches in my weekly update. It\nwill be good if one of the senior folks reply whether to take it\nforward or not, that will help me in taking a decision quickly on what\nto do with these patches.\n\nOne kind suggestion for all the patches which have been inactive for a\nlong time is for the author to re-assess the patch once again and act\non the entry voluntarily which will help the committers and reviewers\nin spending time on the real patch which requires focus.\n\n[1] - https://www.postgresql.org/message-id/CALDaNm1AQZYgT0tALRrkvpP1Q%2B8%2Be7vkGCUjQ-jim1C0q3e%3DzA%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAHut+PtBX9S146Zq1CQUQQJ3n-P3ZSV4w9AHfC-LDsX5T5uT9w@mail.gmail.com\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 10 Jan 2024 09:17:42 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Commitfest 2024-01 first week update" }, { "msg_contents": "> On 9 Jan 2024, at 23:18, Robert Haas <[email protected]> wrote:\n\n> I think we need to be more aggressive about marking things returned\n> with feedback when they don't get updated.\n\nI very much agree. Having marked quite a lot of patches as RwF when being CFM\nI can attest that it gets very little off-list pushback or angry emails. While\nit does happen, the overwhelming majority of responses are understanding and\npositive, so no CFM should be worried about \"being the bad guy\".\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 10 Jan 2024 09:37:17 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2024-01 first week update" }, { "msg_contents": "On 2024-Jan-10, Daniel Gustafsson wrote:\n\n> > On 9 Jan 2024, at 23:18, Robert Haas <[email protected]> wrote:\n> \n> > I think we need to be more aggressive about marking things returned\n> > with feedback when they don't get updated.\n> \n> I very much agree. Having marked quite a lot of patches as RwF when being CFM\n> I can attest that it gets very little off-list pushback or angry emails. While\n> it does happen, the overwhelming majority of responses are understanding and\n> positive, so no CFM should be worried about \"being the bad guy\".\n\nI like this idea very much -- return patches when the author does not\nrespond AFTER receiving feedback or the patch rotting.\n\nHowever, this time around I saw that a bunch of patches were returned or\nthreatened to be returned JUST BECAUSE nobody had replied to the thread,\nwith a justification like \"you need to generate more interest in your\npatch\". 
This is a TERRIBLE idea, and there's one reason why creating a\nnew commitfest entry in the following commitfest is no good: \n\nAt the FOSDEM developer meeting, we do a run of CF patch triage, where\nwe check the topmost patches in order of number-of-commitfests. If you\nreturn an old patch and a new CF entry is created, this number is reset,\nand we could quite possibly fail to detect some very old patch because\nof this. At times, the attention a patch gets during the CF triage is\nsufficient to get the patch moving forward after long inactivity, so\nthis is not academic. Case in point: [1].\n\nSo by all means let's return patches that rot or fail to get updated per\nfeedback. But DO NOT return patches because of inactivity.\n\n[1] https://postgr.es/m/[email protected]\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Las cosas son buenas o malas segun las hace nuestra opinión\" (Lisias)\n\n\n", "msg_date": "Sun, 4 Feb 2024 10:02:12 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2024-01 first week update" }, { "msg_contents": "On Sun, 4 Feb 2024 at 14:32, Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Jan-10, Daniel Gustafsson wrote:\n>\n> > > On 9 Jan 2024, at 23:18, Robert Haas <[email protected]> wrote:\n> >\n> > > I think we need to be more aggressive about marking things returned\n> > > with feedback when they don't get updated.\n> >\n> > I very much agree. Having marked quite a lot of patches as RwF when being CFM\n> > I can attest that it gets very little off-list pushback or angry emails. While\n> > it does happen, the overwhelming majority of responses are understanding and\n> > positive, so no CFM should be worried about \"being the bad guy\".\n>\n> I like this idea very much -- return patches when the author does not\n> respond AFTER receiving feedback or the patch rotting.\n>\n> However, this time around I saw that a bunch of patches were returned or\n> threatened to be returned JUST BECAUSE nobody had replied to the thread,\n> with a justification like \"you need to generate more interest in your\n> patch\". This is a TERRIBLE idea, and there's one reason why creating a\n> new commitfest entry in the following commitfest is no good:\n\nI have seen that most of the threads are being discussed and being\npromptly updated. But very few of the entries become stale and just\nmove from one commitfest to another commitfest without anything being\ndone. 
For these kinds of entries, we were just trying to see if the\nauthor or anybody is really interested or not in pursuing it.\n\nWe should do something about these kinds of entries, there were few\nsuggestions like tagging under a new category or so, can we add a new\nstatus to park these entries something like \"Waiting for direction\".\nThe threads which have no discussion for 6 months or so can be flagged\nto this new status and these can be discussed in one of the developer\nmeetings or so and conclude on these items.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 4 Feb 2024 20:21:15 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Commitfest 2024-01 first week update" }, { "msg_contents": "On Sun, Feb 4, 2024 at 6:51 AM vignesh C <[email protected]> wrote:\n> We should do something about these kinds of entries, there were few\n> suggestions like tagging under a new category or so, can we add a new\n> status to park these entries something like \"Waiting for direction\".\n> The threads which have no discussion for 6 months or so can be flagged\n> to this new status and these can be discussed in one of the developer\n> meetings or so and conclude on these items.\n\nSee also [1], with a patch suggestion. IIRC, there was also related\ndiscussion on making the \"resurrection\" process single-step so that\nyou don't lose history, per Alvaro's concern.\n\n--Jacob\n\n[1] https://postgr.es/m/flat/CAAWbhmjM6hn_3sjVBUyqVKysWVAtsBOkaNt01QF3AMOCunuKMg%40mail.gmail.com\n\n\n", "msg_date": "Mon, 5 Feb 2024 10:47:52 -0800", "msg_from": "Jacob Champion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2024-01 first week update" }, { "msg_contents": "On 2024-Feb-04, vignesh C wrote:\n\n> We should do something about these kinds of entries, there were few\n> suggestions like tagging under a new category or so, can we add a new\n> status to park these entries something like \"Waiting for direction\".\n> The threads which have no discussion for 6 months or so can be flagged\n> to this new status and these can be discussed in one of the developer\n> meetings or so and conclude on these items.\n\nMaybe a new status is appropriate ... I would suggest \"Stalled\". Such a\npatch still applies and has no pending feedback, but nobody seems\ninterested. Making a patch no longer stalled means there's discussion\nthat leads to:\n\n1. further development? Perhaps the author just needed more direction.\n2. a decision that it's not a feature we want, or maybe not in this\n form. Then we close it as rejected.\n3. a reviewer/committer finding time to provide additional feedback.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"The problem with the future is that it keeps turning into the present\"\n(Hobbes)\n\n\n", "msg_date": "Wed, 7 Feb 2024 13:37:29 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2024-01 first week update" }, { "msg_contents": "> On 7 Feb 2024, at 13:37, Alvaro Herrera <[email protected]> wrote:\n\n> Maybe a new status is appropriate ... I would suggest \"Stalled\". Such a\n> patch still applies and has no pending feedback, but nobody seems\n> interested.\n\nSince the CF app knows when the last email in the thread was, the state of the\npatch entry and the number of CF's which is has been present in; maybe we can\nextend the app to highlight these patches in a way which doesn't add more\nmanual intervention? 
It might yield a few false positives, but so will setting\nit manually.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 7 Feb 2024 13:48:16 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2024-01 first week update" }, { "msg_contents": "On 2024-Feb-07, Daniel Gustafsson wrote:\n\n> Since the CF app knows when the last email in the thread was, the\n> state of the patch entry and the number of CF's which is has been\n> present in; maybe we can extend the app to highlight these patches in\n> a way which doesn't add more manual intervention? It might yield a\n> few false positives, but so will setting it manually.\n\nHmm, but suppose the author is posting new versions now and again\nbecause of apply conflicts; and the CFM is pinging them about that, but\nnot providing any actionable feedback. Such a thread would be active\nenough, but the patch is still stalled.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No hay hombre que no aspire a la plenitud, es decir,\nla suma de experiencias de que un hombre es capaz\"\n\n\n", "msg_date": "Wed, 7 Feb 2024 14:15:30 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2024-01 first week update" }, { "msg_contents": "On Wed, Feb 7, 2024 at 6:07 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Feb-04, vignesh C wrote:\n>\n> > We should do something about these kinds of entries, there were few\n> > suggestions like tagging under a new category or so, can we add a new\n> > status to park these entries something like \"Waiting for direction\".\n> > The threads which have no discussion for 6 months or so can be flagged\n> > to this new status and these can be discussed in one of the developer\n> > meetings or so and conclude on these items.\n>\n> Maybe a new status is appropriate ... I would suggest \"Stalled\". Such a\n> patch still applies and has no pending feedback, but nobody seems\n> interested. Making a patch no longer stalled means there's discussion\n> that leads to:\n>\n> 1. further development? Perhaps the author just needed more direction.\n> 2. a decision that it's not a feature we want, or maybe not in this\n> form. Then we close it as rejected.\n> 3. a reviewer/committer finding time to provide additional feedback.\n>\n\n+1. This suggestion sounds reasonable. I think we probably need to\ndecide the time when the patch's status should be changed to\n\"Stalled\".\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 7 Feb 2024 19:21:53 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2024-01 first week update" }, { "msg_contents": "> On 7 Feb 2024, at 14:15, Alvaro Herrera <[email protected]> wrote:\n> \n> On 2024-Feb-07, Daniel Gustafsson wrote:\n> \n>> Since the CF app knows when the last email in the thread was, the\n>> state of the patch entry and the number of CF's which is has been\n>> present in; maybe we can extend the app to highlight these patches in\n>> a way which doesn't add more manual intervention? It might yield a\n>> few false positives, but so will setting it manually.\n> \n> Hmm, but suppose the author is posting new versions now and again\n> because of apply conflicts; and the CFM is pinging them about that, but\n> not providing any actionable feedback. 
Such a thread would be active\n> enough, but the patch is still stalled.\n\nIf the patch author is actively rebasing the patch and thus generating traffic\non the thread I'm not sure I would call it stalled - there might not be enough\n(or any) reviews but the activity alone might make someone interested. Either\nway, I'd personally prefer such false positives if it means reducing the manual\nworkload for the CFM.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 7 Feb 2024 15:14:40 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Commitfest 2024-01 first week update" } ]
[ { "msg_contents": "The pg_amcheck reports a skip message if the layout of the index does \nnot match expectations. That message includes the bytes that were \nexpected and the ones that were found. But the found ones are arbitrary \nbytes, which can have funny effects on the terminal when they are \nprinted. To avoid that, escape non-word characters before printing.", "msg_date": "Mon, 8 Jan 2024 08:27:57 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Escape output of pg_amcheck test" }, { "msg_contents": "Hi,\n\n> The pg_amcheck reports a skip message if the layout of the index does\n> not match expectations. That message includes the bytes that were\n> expected and the ones that were found. But the found ones are arbitrary\n> bytes, which can have funny effects on the terminal when they are\n> printed. To avoid that, escape non-word characters before printing.\n\nLGTM.\n\nI didn't get the part about the /r modifier at first, but \"man perlre\" helped:\n\n\"\"\"\nr - perform non-destructive substitution and return the new value\n\"\"\"\n\nThe /a modifier requires Perl >= 5.14, which is fine [1].\n\n[1]: https://www.postgresql.org/docs/current/install-requirements.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 8 Jan 2024 15:45:06 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Escape output of pg_amcheck test" }, { "msg_contents": "\n\nOn 1/7/24 23:27, Peter Eisentraut wrote:\n> The pg_amcheck reports a skip message if the layout of the index does \n> not match expectations.  That message includes the bytes that were \n> expected and the ones that were found.  But the found ones are arbitrary \n> bytes, which can have funny effects on the terminal when they are \n> printed.  To avoid that, escape non-word characters before printing.\n\n> +\t\t\t# escape non-word characters to avoid confusing the terminal\n> +\t\t\t$b =~ s{(\\W)}{ sprintf '\\x%02x', ord($1) }aegr);\n\nThe /r modifier defeats the purpose of the patch, at least for my perl \nversion, perl 5, version 28, subversion 1 (v5.28.1). With just the /aeg \nmodifier, it works fine.\n\n-- \nMark Dilger\n\n\n", "msg_date": "Mon, 8 Jan 2024 05:41:02 -0800", "msg_from": "Mark Dilger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Escape output of pg_amcheck test" }, { "msg_contents": "\n\n> On Jan 8, 2024, at 5:41 AM, Mark Dilger <[email protected]> wrote:\n> \n> The /r modifier defeats the purpose of the patch, at least for my perl version, perl 5, version 28, subversion 1 (v5.28.1). With just the /aeg modifier, it works fine.\n\nNevermind. I might be wrong about that. I didn't have a test case handy that would generate index corruption which would result in characters of the problematic class, and so I quickly wrote some (wrong) instrumentation to try to test your patch.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 8 Jan 2024 05:52:26 -0800", "msg_from": "Mark Dilger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Escape output of pg_amcheck test" }, { "msg_contents": "Hi,\n\n> [...] 
so I quickly wrote some (wrong) instrumentation to try to test your patch.\n\nYep, it confused me too at first.\n\nSince the encoding happens right before exit() call, maybe it's worth\nchanging $b in-place in order to make the code slightly more readable\nfor most of us :)\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 8 Jan 2024 17:04:19 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Escape output of pg_amcheck test" }, { "msg_contents": "On 08.01.24 15:04, Aleksander Alekseev wrote:\n>> [...] so I quickly wrote some (wrong) instrumentation to try to test your patch.\n> \n> Yep, it confused me too at first.\n> \n> Since the encoding happens right before exit() call, maybe it's worth\n> changing $b in-place in order to make the code slightly more readable\n> for most of us :)\n\nMy patch originally had the old-style\n\nmy $b_escaped = $b;\n$b_escaped =~ s/.../;\n\n... sprintf(..., $b_escaped);\n\nbut then I learned about the newish /r modifier and thought it was \ncooler. :)\n\n\n", "msg_date": "Mon, 8 Jan 2024 16:06:01 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Escape output of pg_amcheck test" }, { "msg_contents": "On 08.01.24 16:06, Peter Eisentraut wrote:\n> On 08.01.24 15:04, Aleksander Alekseev wrote:\n>>> [...] so I quickly wrote some (wrong) instrumentation to try to test \n>>> your patch.\n>>\n>> Yep, it confused me too at first.\n>>\n>> Since the encoding happens right before exit() call, maybe it's worth\n>> changing $b in-place in order to make the code slightly more readable\n>> for most of us :)\n> \n> My patch originally had the old-style\n> \n> my $b_escaped = $b;\n> $b_escaped =~ s/.../;\n> \n> ... sprintf(..., $b_escaped);\n> \n> but then I learned about the newish /r modifier and thought it was \n> cooler. :)\n\ncommitted\n\n\n\n", "msg_date": "Sun, 14 Jan 2024 07:32:05 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Escape output of pg_amcheck test" } ]
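For readers following the modifier discussion above, a small self-contained Perl sketch (variable names are invented, not taken from the test) contrasting an in-place substitution with the non-destructive /r form used in the committed patch:

```perl
use strict;
use warnings;

my $found = "ab\x07cd";    # pretend these are raw bytes read from an index page

# classic style: copy first, then modify the copy in place
my $escaped = $found;
$escaped =~ s{(\W)}{ sprintf '\x%02x', ord($1) }aeg;

# /r style: returns the modified string and leaves $found untouched
my $also_escaped = $found =~ s{(\W)}{ sprintf '\x%02x', ord($1) }aegr;

print "$escaped\n$also_escaped\n";    # both lines print ab\x07cd
```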
[ { "msg_contents": "­If I want to change the name of my database schema, I call \n\talter schema my_schema rename to other_schema\nHowever, there is a problem with functions that call other functions in the same schema. These functions have a search_path \n\talter function my_schema.function1 set search_path to my_schema\nIf the name of the schema is changed with \"alter schema...\", the search path of the functions is not changed, so I still have to call the following after renaming the schema: \n\talter function other_schema.function1 set search_path to other_schema\nThis is worse than it seems at first glance, because I need to know which functions have a search_path. If my list of these functions is incomplete and I therefore do not change the search_path for all functions, there will be an error in the schema after renaming the schema.\n\nI am sure that in the vast majority of cases where a function has a search_path, this search_path specifies the schema in which the function is located, i.e. the function \n\tmy_schema.function1 \nhas search_path \n\tmy_schema\nIt would therefore be great if you could implement a \"magic variable\" called __function_schema__, which can be set as the search_path of a function and which is not evaluated from the outset, but is transferred unchanged to the metadata of the function:\n\tMetadata of function1:\n\t...\n\tsearch_path: __function_schema__\n\t...\nEach time the function is executed, the variable value is determined. Therefore, the search_path is always correct: as long as the function is in the schema my_schema, the search_path __function_schema__ is evaluated to my_schema when the function is executed, and as soon as the function is in the schema other_schema after the schema has been renamed, the search_path __function_schema__ is evaluated to other_schema when the function is executed.\nOf course, the implementation could cache the value of __function_schema__ for each function and only change it when the schema of the function changes.\n\nWilma\nPS Even though I wrote that I would like to have a \"magic variable\" called __function_schema__, I would of course also be very happy with a name other than __function_schema__.\n________________________________________________________\nYour E-Mail. Your Cloud. Your Office. eclipso Mail Europe. https://www.eclipso.de\n\n\n\n\n", "msg_date": "Mon, 08 Jan 2024 09:05:47 +0100", "msg_from": "\"Wilma Wantren\" <[email protected]>", "msg_from_op": true, "msg_subject": "Changing a schema's name with function1 calling function2" } ]
[ { "msg_contents": "After reading the logic of removing useless join, I think the comment of\nthis might need to be changed: \"Currently, join_is_removable only succeeds\nif sjinfo's right hand is a single baserel. \" could be changed to\n\"Currently, join_is_removable only succeeds if sjinfo's min_righthand is a\nsingle baserel. \". Because the useless join in the query \"select t1.* from\nt1 left join (t2 left join t3 on t3.a=t2.b) on t2.a=t1.a;\" would also be\neliminated. That is, the query will be converted to \"select t1.* from t1;\"", "msg_date": "Mon, 8 Jan 2024 17:11:44 +0800", "msg_from": "ywgrit <[email protected]>", "msg_from_op": true, "msg_subject": "Change comments of removing useless joins." } ]
[ { "msg_contents": "I've been thinking about INSERT performance and noticed that copyfrom.c\n(COPY FROM) performs ~4 unnecessary pointer-deferences per record in the\ncase when there's no indexes and no AFTER ROW INSERT triggers (i.e. when\nyou just want to load data really fast!).\n\nI moved the for-loop inside the per-batch if-checks and got a little\nspeedup. Obviously, this only matters for CPU-bound INSERTs with very\nnarrow tables - if there's other overhead (including parsing), this gain\ndisappears into the noise. I'm not a regular contributor, apologies in\nadvance if I got something wrong, and no worries if this is too small to\nbother. My patch below passes \"make check\". I'll of course post other wins\nas I find them, but this one seemed easy.\n\nMy reference test comes from a conversation on HN (\nhttps://news.ycombinator.com/item?id=38864213 ) loading 100M tiny records\nfrom COPY TO ... BINARY on a GCP c2d-standard-8:\nhttps://gcloud-compute.com/c2-standard-8.html (8 vCPU, 32GB, network SSD).\n\ntime sh -c \"echo \\\"drop table if exists tbl; create unlogged table tbl(city\nint2, temp int2);copy tbl FROM '/home/asah/citydata.bin' binary;\\\" |\n./pg/bin/postgres --single -D tmp -p 9999 postgres\";\n\nresults from 3 runs:\nreal 0m26.488s, user 0m14.745s, sys 0m3.299s\nreal 0m28.978s, user 0m14.010s, sys 0m3.288s\nreal 0m28.920s, user 0m14.028s, sys 0m3.201s\n==>\nreal 0m24.483s, user 0m13.280s, sys 0m3.305s\nreal 0m28.668s, user 0m13.095s, sys 0m3.501s\nreal 0m28.306s, user 0m13.032s, sys 0m3.505s\n\n\nOn my mac m1 air,\n\nreal 0m11.922s, user 0m10.220s, sys 0m1.302s\nreal 0m12.761s, user 0m10.137s, sys 0m1.401s\nreal 0m12.734s, user 0m10.146s, sys 0m1.376s\n==>\nreal 0m12.173s, user 0m9.785s, sys 0m1.221s\nreal 0m12.462s, user 0m9.691s, sys 0m1.393s\nreal 0m12.266s, user 0m9.719s, sys 0m1.390s\n\n\npatch: (passes \"make check\" - feel free to drop/replace my comments of\ncourse)\n\ndiff --git a/src/backend/commands/copyfrom.c\nb/src/backend/commands/copyfrom.c\nindex 37836a769c..d3783678e0 100644\n--- a/src/backend/commands/copyfrom.c\n+++ b/src/backend/commands/copyfrom.c\n@@ -421,13 +421,14 @@ CopyMultiInsertBufferFlush(CopyMultiInsertInfo\n*miinfo,\n buffer->bistate);\n MemoryContextSwitchTo(oldcontext);\n\n- for (i = 0; i < nused; i++)\n- {\n /*\n * If there are any indexes, update them for all the\ninserted\n * tuples, and run AFTER ROW INSERT triggers.\n */\n if (resultRelInfo->ri_NumIndices > 0)\n+ {\n+ /* expensive inner loop hidden by if-check */\n+ for (i = 0; i < nused; i++)\n {\n List *recheckIndexes;\n\n@@ -441,6 +442,7 @@ CopyMultiInsertBufferFlush(CopyMultiInsertInfo *miinfo,\n\n cstate->transition_capture);\n list_free(recheckIndexes);\n }\n+ }\n\n /*\n * There's no indexes, but see if we need to run AFTER ROW\nINSERT\n@@ -449,15 +451,18 @@ CopyMultiInsertBufferFlush(CopyMultiInsertInfo\n*miinfo,\n else if (resultRelInfo->ri_TrigDesc != NULL &&\n\n (resultRelInfo->ri_TrigDesc->trig_insert_after_row ||\n\nresultRelInfo->ri_TrigDesc->trig_insert_new_table))\n+ {\n+ /* expensive inner loop hidden by if-check */\n+ for (i = 0; i < nused; i++)\n {\n cstate->cur_lineno = buffer->linenos[i];\n ExecARInsertTriggers(estate, resultRelInfo,\n\n slots[i], NIL,\n\n cstate->transition_capture);\n- }\n\n ExecClearTuple(slots[i]);\n }\n+ }\n\n /* Update the row counter and progress of the COPY command\n*/\n *processed += nused;\n\nhope this helps,\nadam\n\nI've been thinking about INSERT performance and noticed that copyfrom.c (COPY FROM) performs ~4 unnecessary 
pointer-deferences per record in the case when there's no indexes and no AFTER ROW INSERT triggers (i.e. when you just want to load data really fast!).", "msg_date": "Mon, 8 Jan 2024 09:54:51 -0500", "msg_from": "Adam S <[email protected]>", "msg_from_op": true, "msg_subject": "INSERT performance: less CPU when no indexes or triggers" } ]
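The reference test above reads /home/asah/citydata.bin but does not show how that file was produced. One plausible way to generate a comparable input (the row values here are invented, and writing to a server-side path needs superuser or pg_write_server_files rights) is:

```sql
CREATE UNLOGGED TABLE citydata (city int2, temp int2);

INSERT INTO citydata
SELECT (i % 30000)::int2, (i % 120 - 40)::int2
FROM generate_series(1, 100000000) AS i;   -- 100M narrow rows

COPY citydata TO '/home/asah/citydata.bin' (FORMAT binary);
```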
[ { "msg_contents": "We had a complaint (see [1], but it's not the first IIRC) about how\npsql doesn't behave very nicely if one ends \\sf or allied commands\nwith a semicolon:\n\nregression=# \\sf sin(float8);\nERROR: expected a right parenthesis\n\nThis is a bit of a usability gotcha, since many other backslash\ncommands are forgiving about trailing semicolons. I looked at\nthe code and found that it's actually trying to ignore semicolons,\nby passing semicolon = true to psql_scan_slash_option. But that\nfails to work because it's also passing type = OT_WHOLE_LINE,\nand the whole-line code path ignores the semicolon flag. Probably\nthat's just because nobody needed to use that combination back in\nthe day. There's another user of OT_WHOLE_LINE, exec_command_help,\nwhich also wants this behavior and has written its own stripping\ncode to get it. That seems pretty silly, so here's a quick finger\nexercise to move that logic into psql_scan_slash_option.\n\nIs this enough of a bug to deserve back-patching? I'm not sure.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/CAEs%3D6D%3DnwX2wm0hjkaw6C_LnqR%2BNFtnnzbSzeZq-xcfi_ooKSw%40mail.gmail.com", "msg_date": "Mon, 08 Jan 2024 15:48:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Make psql ignore trailing semicolons in \\sf, \\ef, etc" }, { "msg_contents": "On Mon Jan 8, 2024 at 2:48 PM CST, Tom Lane wrote:\n> We had a complaint (see [1], but it's not the first IIRC) about how\n> psql doesn't behave very nicely if one ends \\sf or allied commands\n> with a semicolon:\n>\n> regression=# \\sf sin(float8);\n> ERROR: expected a right parenthesis\n>\n> This is a bit of a usability gotcha, since many other backslash\n> commands are forgiving about trailing semicolons. I looked at\n> the code and found that it's actually trying to ignore semicolons,\n> by passing semicolon = true to psql_scan_slash_option. But that\n> fails to work because it's also passing type = OT_WHOLE_LINE,\n> and the whole-line code path ignores the semicolon flag. Probably\n> that's just because nobody needed to use that combination back in\n> the day. There's another user of OT_WHOLE_LINE, exec_command_help,\n> which also wants this behavior and has written its own stripping\n> code to get it. That seems pretty silly, so here's a quick finger\n> exercise to move that logic into psql_scan_slash_option.\n>\n> Is this enough of a bug to deserve back-patching? I'm not sure.\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/CAEs%3D6D%3DnwX2wm0hjkaw6C_LnqR%2BNFtnnzbSzeZq-xcfi_ooKSw%40mail.gmail.com\n\n> + /*\n> + * In whole-line mode, we interpret semicolon = true as stripping\n> + * trailing whitespace as well as semi-colons; this gives the\n> + * nearest equivalent to what semicolon = true does in normal\n> + * mode. 
Note there's no concept of quoting in this mode.\n> + */\n> + if (semicolon)\n> + {\n> + while (mybuf.len > 0 &&\n> + (mybuf.data[mybuf.len - 1] == ';' ||\n> + (isascii((unsigned char) mybuf.data[mybuf.len - 1]) &&\n> + isspace((unsigned char) mybuf.data[mybuf.len - 1]))))\n> + {\n> + mybuf.data[--mybuf.len] = '\\0';\n> + }\n> + }\n\nSeems like if there was going to be any sort of casting, it would be to \nan int, which is what the man page says for these two function, though \nisascii(3) explicitly mentions \"unsigned char.\"\n\nSmall English nit-pick: I would drop the hyphen between semi and colons.\n\nAs for backpatching, seems useful in the sense that people can write the \nsame script for all supported version of Postgres using the relaxed \nsyntax.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 08 Jan 2024 17:31:37 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make psql ignore trailing semicolons in \\sf, \\ef, etc" }, { "msg_contents": "\"Tristan Partin\" <[email protected]> writes:\n> On Mon Jan 8, 2024 at 2:48 PM CST, Tom Lane wrote:\n>> + (isascii((unsigned char) mybuf.data[mybuf.len - 1]) &&\n>> + isspace((unsigned char) mybuf.data[mybuf.len - 1]))))\n\n> Seems like if there was going to be any sort of casting, it would be to \n> an int, which is what the man page says for these two function, though \n> isascii(3) explicitly mentions \"unsigned char.\"\n\nCasting to unsigned char is our standard pattern for using these\nfunctions. If \"char\" is signed (which is the only case in which\nthis changes anything) then casting to int would imply sign-extension\nof the char's high-order bit, which is exactly what must not happen\nin order to produce a legal value to be passed to these functions.\nPOSIX says:\n\n The c argument is an int, the value of which the application shall\n ensure is a character representable as an unsigned char or equal\n to the value of the macro EOF. If the argument has any other\n value, the behavior is undefined.\n\nIf we cast to unsigned char, then the subsequent implicit cast to int\nwill do zero-extension which is what we need.\n\n> Small English nit-pick: I would drop the hyphen between semi and colons.\n\nMe too, except that it's spelled like that in nearby comments.\nShall I change them all?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Jan 2024 19:08:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make psql ignore trailing semicolons in \\sf, \\ef, etc" }, { "msg_contents": "On Mon, 2024-01-08 at 15:48 -0500, Tom Lane wrote:\n> Is this enough of a bug to deserve back-patching? I'm not sure.\n\nI like the patch, but I wouldn't back-patch it. 
I'd call the current\nbehavior a slight inconsistency rather than an outright bug, and I think\nthat we should be conservative with back-patching.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 09 Jan 2024 08:27:59 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make psql ignore trailing semicolons in \\sf, \\ef, etc" }, { "msg_contents": "On Mon Jan 8, 2024 at 6:08 PM CST, Tom Lane wrote:\n> \"Tristan Partin\" <[email protected]> writes:\n> > On Mon Jan 8, 2024 at 2:48 PM CST, Tom Lane wrote:\n> >> + (isascii((unsigned char) mybuf.data[mybuf.len - 1]) &&\n> >> + isspace((unsigned char) mybuf.data[mybuf.len - 1]))))\n>\n> > Seems like if there was going to be any sort of casting, it would be to \n> > an int, which is what the man page says for these two function, though \n> > isascii(3) explicitly mentions \"unsigned char.\"\n>\n> Casting to unsigned char is our standard pattern for using these\n> functions. If \"char\" is signed (which is the only case in which\n> this changes anything) then casting to int would imply sign-extension\n> of the char's high-order bit, which is exactly what must not happen\n> in order to produce a legal value to be passed to these functions.\n> POSIX says:\n>\n> The c argument is an int, the value of which the application shall\n> ensure is a character representable as an unsigned char or equal\n> to the value of the macro EOF. If the argument has any other\n> value, the behavior is undefined.\n>\n> If we cast to unsigned char, then the subsequent implicit cast to int\n> will do zero-extension which is what we need.\n\nThanks for the explanation.\n\n> > Small English nit-pick: I would drop the hyphen between semi and colons.\n>\n> Me too, except that it's spelled like that in nearby comments.\n> Shall I change them all?\n\nI'll leave it up to you. Patch looks good as-is.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 09 Jan 2024 10:27:54 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make psql ignore trailing semicolons in \\sf, \\ef, etc" }, { "msg_contents": "Laurenz Albe <[email protected]> writes:\n> On Mon, 2024-01-08 at 15:48 -0500, Tom Lane wrote:\n>> Is this enough of a bug to deserve back-patching? I'm not sure.\n\n> I like the patch, but I wouldn't back-patch it. I'd call the current\n> behavior a slight inconsistency rather than an outright bug, and I think\n> that we should be conservative with back-patching.\n\nNobody spoke in favor of back-patching, so committed to HEAD only.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Jan 2024 14:21:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Make psql ignore trailing semicolons in \\sf, \\ef, etc" } ]
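A minimal C illustration of the casting rule explained in the thread (the helper name is made up, not psql code):

```c
#include <ctype.h>
#include <stdbool.h>

/*
 * With a signed "char", a byte such as 0xE9 has a negative value; passing
 * it straight to isspace() would sign-extend it into an int outside the
 * range the <ctype.h> functions are defined for.  Casting to unsigned char
 * first zero-extends it into a legal argument.
 */
static bool
is_trailing_junk(char c)
{
    return c == ';' || isspace((unsigned char) c);
}
```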
[ { "msg_contents": "In 30e7c175b81, support for pre-9.2 servers was removed from pg_dump. \nBut I found that a lot of dead code was left for supporting dumping \ntriggers from those old versions, presumably because that code was not \nbehind straightforward versioned \"if\" branches. This patch removes the \nrest of the unneeded code.", "msg_date": "Tue, 9 Jan 2024 08:38:30 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump: Remove obsolete trigger support" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> In 30e7c175b81, support for pre-9.2 servers was removed from pg_dump. \n> But I found that a lot of dead code was left for supporting dumping \n> triggers from those old versions, presumably because that code was not \n> behind straightforward versioned \"if\" branches. This patch removes the \n> rest of the unneeded code.\n\nHm, you're right, we can depend on pg_get_triggerdef in all cases now.\nHowever, the patch looks a little incomplete: you did not remove\nfetching of all of the now-unneeded values from the SQL queries.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jan 2024 10:27:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Remove obsolete trigger support" }, { "msg_contents": "On 09.01.24 16:27, Tom Lane wrote:\n> Peter Eisentraut <[email protected]> writes:\n>> In 30e7c175b81, support for pre-9.2 servers was removed from pg_dump.\n>> But I found that a lot of dead code was left for supporting dumping\n>> triggers from those old versions, presumably because that code was not\n>> behind straightforward versioned \"if\" branches. This patch removes the\n>> rest of the unneeded code.\n> \n> Hm, you're right, we can depend on pg_get_triggerdef in all cases now.\n> However, the patch looks a little incomplete: you did not remove\n> fetching of all of the now-unneeded values from the SQL queries.\n\nI think all the remaining SQL queries only select the fields that are \nneeded. The now-unneeded values were only selected by queries that are \nbeing deleted. If I missed something, an example would help me.\n\n\n\n", "msg_date": "Tue, 9 Jan 2024 17:29:22 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump: Remove obsolete trigger support" }, { "msg_contents": "I wrote:\n> However, the patch looks a little incomplete: you did not remove\n> fetching of all of the now-unneeded values from the SQL queries.\n\nOh, scratch that, I now see that we already did that query\noptimization.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jan 2024 11:38:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump: Remove obsolete trigger support" } ]
[ { "msg_contents": "Dear all,\r\n\r\nI recently used benchmarksql to evaluate the performance of postgresql. I achieved nearly 20% improvement \r\nwith NUM_XLOGINSERT_LOCKS changed from 8 to 16 under some cases of high concurrency. I wonder whether \r\nit is feasible to make NUM_XLOGINSERT_LOCKS a configuration parameter, so that users can get easier to optimize \r\ntheir postgresql performance through this setting.\r\n\r\nThanks,\r\nQingsong", "msg_date": "Wed, 10 Jan 2024 09:37:40 +0800", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Make NUM_XLOGINSERT_LOCKS configurable" }, { "msg_contents": "<[email protected]> writes:\n> I recently used benchmarksql to evaluate the performance of postgresql. I achieved nearly 20% improvement \n> with NUM_XLOGINSERT_LOCKS changed from 8 to 16 under some cases of high concurrency. I wonder whether \n> it is feasible to make NUM_XLOGINSERT_LOCKS a configuration parameter, so that users can get easier to optimize \n> their postgresql performance through this setting.\n\nMaking it an actual GUC would carry nontrivial costs, not least that\nthere are hot code paths that do \"foo % NUM_XLOGINSERT_LOCKS\" which\nwould go from a mask operation to a full integer divide. We are\nunlikely to consider that on the basis of an unsupported assertion\nthat there's a performance gain under unspecified conditions.\n\nEven with data to justify a change, I think it'd make a lot more sense\nto just raise the constant value.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jan 2024 21:38:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make NUM_XLOGINSERT_LOCKS configurable" }, { "msg_contents": "On Tue, Jan 09, 2024 at 09:38:17PM -0500, Tom Lane wrote:\n> Making it an actual GUC would carry nontrivial costs, not least that\n> there are hot code paths that do \"foo % NUM_XLOGINSERT_LOCKS\" which\n> would go from a mask operation to a full integer divide. We are\n> unlikely to consider that on the basis of an unsupported assertion\n> that there's a performance gain under unspecified conditions.\n> \n> Even with data to justify a change, I think it'd make a lot more sense\n> to just raise the constant value.\n\nThis suggestion has showed up more than once in the past, and WAL\ninsertion is a path that can become so hot under some workloads that\nchanging it to a GUC would not be wise from the point of view of\nperformance. Redesigning all that to not require a set of LWLocks\ninto something more scalable would lead to better result, whatever\nthis design may be.\n--\nMichael", "msg_date": "Wed, 10 Jan 2024 13:08:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make NUM_XLOGINSERT_LOCKS configurable" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> This suggestion has showed up more than once in the past, and WAL\n> insertion is a path that can become so hot under some workloads that\n> changing it to a GUC would not be wise from the point of view of\n> performance. Redesigning all that to not require a set of LWLocks\n> into something more scalable would lead to better result, whatever\n> this design may be.\n\nMaybe. I bet just bumping up the constant by 2X or 4X or so would get\nmost of the win for far less work; it's not like adding a few more\nLWLocks is expensive. 
But we need some evidence about what to set it to.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Jan 2024 23:30:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make NUM_XLOGINSERT_LOCKS configurable" }, { "msg_contents": "On Wed, Jan 10, 2024 at 10:00 AM Tom Lane <[email protected]> wrote:\n>\n> Michael Paquier <[email protected]> writes:\n> > This suggestion has showed up more than once in the past, and WAL\n> > insertion is a path that can become so hot under some workloads that\n> > changing it to a GUC would not be wise from the point of view of\n> > performance. Redesigning all that to not require a set of LWLocks\n> > into something more scalable would lead to better result, whatever\n> > this design may be.\n>\n> Maybe. I bet just bumping up the constant by 2X or 4X or so would get\n> most of the win for far less work; it's not like adding a few more\n> LWLocks is expensive. But we need some evidence about what to set it to.\n\nI previously made an attempt to improve WAL insertion performance with\nvarying NUM_XLOGINSERT_LOCKS. IIRC, we will lose what we get by\nincreasing insertion locks (reduction in WAL insertion lock\nacquisition time) to the CPU overhead of flushing the WAL in\nWaitXLogInsertionsToFinish as referred to by the following comment.\nUnfortunately, I've lost the test results, I'll run them up again and\ncome back.\n\n/*\n * Number of WAL insertion locks to use. A higher value allows more insertions\n * to happen concurrently, but adds some CPU overhead to flushing the WAL,\n * which needs to iterate all the locks.\n */\n#define NUM_XLOGINSERT_LOCKS 8\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 Jan 2024 11:39:11 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make NUM_XLOGINSERT_LOCKS configurable" }, { "msg_contents": "Bharath Rupireddy <[email protected]> writes:\n> On Wed, Jan 10, 2024 at 10:00 AM Tom Lane <[email protected]> wrote:\n>> Maybe. I bet just bumping up the constant by 2X or 4X or so would get\n>> most of the win for far less work; it's not like adding a few more\n>> LWLocks is expensive. But we need some evidence about what to set it to.\n\n> I previously made an attempt to improve WAL insertion performance with\n> varying NUM_XLOGINSERT_LOCKS. IIRC, we will lose what we get by\n> increasing insertion locks (reduction in WAL insertion lock\n> acquisition time) to the CPU overhead of flushing the WAL in\n> WaitXLogInsertionsToFinish as referred to by the following comment.\n\nVery interesting --- this is at variance with what the OP said, so\nwe definitely need details about the test conditions in both cases.\n\n> Unfortunately, I've lost the test results, I'll run them up again and\n> come back.\n\nPlease.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Jan 2024 01:13:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make NUM_XLOGINSERT_LOCKS configurable" }, { "msg_contents": "On Wed, Jan 10, 2024 at 11:43 AM Tom Lane <[email protected]> wrote:\n>\n> Bharath Rupireddy <[email protected]> writes:\n> > On Wed, Jan 10, 2024 at 10:00 AM Tom Lane <[email protected]> wrote:\n> >> Maybe. I bet just bumping up the constant by 2X or 4X or so would get\n> >> most of the win for far less work; it's not like adding a few more\n> >> LWLocks is expensive. 
But we need some evidence about what to set it to.\n>\n> > I previously made an attempt to improve WAL insertion performance with\n> > varying NUM_XLOGINSERT_LOCKS. IIRC, we will lose what we get by\n> > increasing insertion locks (reduction in WAL insertion lock\n> > acquisition time) to the CPU overhead of flushing the WAL in\n> > WaitXLogInsertionsToFinish as referred to by the following comment.\n>\n> Very interesting --- this is at variance with what the OP said, so\n> we definitely need details about the test conditions in both cases.\n>\n> > Unfortunately, I've lost the test results, I'll run them up again and\n> > come back.\n>\n> Please.\n\nOkay, I'm back with some testing.\n\nTest case:\n./pgbench --initialize --scale=100 --username=ubuntu postgres\n./pgbench --progress=10 --client=64 --time=300 --builtin=tpcb-like\n--username=ubuntu postgres\n\nSetup:\n./configure --prefix=$PWD/inst/ CFLAGS=\"-ggdb3 -O3\" > install.log &&\nmake -j 8 install > install.log 2>&1 &\n\nshared_buffers = '8GB'\nmax_wal_size = '32GB'\ntrack_wal_io_timing = on\n\nStats measured:\nI've used the attached patch to measure WAL Insert Lock Acquire Time\n(wal_insert_lock_acquire_time) and WAL Wait for In-progress Inserts\nto Finish Time (wal_wait_for_insert_to_finish_time).\n\nResults with varying NUM_XLOGINSERT_LOCKS (note that we can't allow it\nbe more than MAX_SIMUL_LWLOCKS):\nLocks TPS WAL Insert Lock Acquire Time in Milliseconds WAL\nWait for In-progress Inserts to Finish Time in Milliseconds\n8 18669 12532 8775\n16 18076 10641 13491\n32 18034 6635 13997\n64 17582 3937 14718\n128 17782 4563 20145\n\nAlso, check the attached graph. Clearly there's an increase in the\ntime spent in waiting for in-progress insertions to finish in\nWaitXLogInsertionsToFinish from 8.7 seconds to 20 seconds. Whereas,\nthe time spent to acquire WAL insertion locks decreased from 12.5\nseconds to 4.5 seconds. Overall, this hasn't resulted any improvement\nin TPS, in fact observed slight reduction.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 12 Jan 2024 12:02:39 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make NUM_XLOGINSERT_LOCKS configurable" }, { "msg_contents": "On 1/12/24 12:32 AM, Bharath Rupireddy wrote:\n> Test case:\n> ./pgbench --initialize --scale=100 --username=ubuntu postgres\n> ./pgbench --progress=10 --client=64 --time=300 --builtin=tpcb-like\n> --username=ubuntu postgres\n> \n> Setup:\n> ./configure --prefix=$PWD/inst/ CFLAGS=\"-ggdb3 -O3\" > install.log &&\n> make -j 8 install > install.log 2>&1 &\n> \n> shared_buffers = '8GB'\n> max_wal_size = '32GB'\n> track_wal_io_timing = on\n> \n> Stats measured:\n> I've used the attached patch to measure WAL Insert Lock Acquire Time\n> (wal_insert_lock_acquire_time) and WAL Wait for In-progress Inserts\n> to Finish Time (wal_wait_for_insert_to_finish_time).\n\nUnfortunately this leaves the question of how frequently is \nWaitXLogInsertionsToFinish() being called and by whom. 
One possibility \nhere is that wal_buffers is too small so backends are constantly having \nto write WAL data to free up buffers.\n-- \nJim Nasby, Data Architect, Austin TX\n\n\n\n", "msg_date": "Fri, 12 Jan 2024 16:09:19 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make NUM_XLOGINSERT_LOCKS configurable" }, { "msg_contents": "On Fri, Jan 12, 2024 at 7:33 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Jan 10, 2024 at 11:43 AM Tom Lane <[email protected]> wrote:\n> >\n> > Bharath Rupireddy <[email protected]> writes:\n> > > On Wed, Jan 10, 2024 at 10:00 AM Tom Lane <[email protected]> wrote:\n> > >> Maybe. I bet just bumping up the constant by 2X or 4X or so would get\n> > >> most of the win for far less work; it's not like adding a few more\n> > >> LWLocks is expensive. But we need some evidence about what to set it to.\n> >\n> > > I previously made an attempt to improve WAL insertion performance with\n> > > varying NUM_XLOGINSERT_LOCKS. IIRC, we will lose what we get by\n> > > increasing insertion locks (reduction in WAL insertion lock\n> > > acquisition time) to the CPU overhead of flushing the WAL in\n> > > WaitXLogInsertionsToFinish as referred to by the following comment.\n> >\n> > Very interesting --- this is at variance with what the OP said, so\n> > we definitely need details about the test conditions in both cases.\n> >\n> > > Unfortunately, I've lost the test results, I'll run them up again and\n> > > come back.\n> >\n> > Please.\n>\n> Okay, I'm back with some testing\n\n[..]\n\n> Results with varying NUM_XLOGINSERT_LOCKS (note that we can't allow it\n> be more than MAX_SIMUL_LWLOCKS):\n> Locks TPS WAL Insert Lock Acquire Time in Milliseconds WAL\n> Wait for In-progress Inserts to Finish Time in Milliseconds\n> 8 18669 12532 8775\n> 16 18076 10641 13491\n> 32 18034 6635 13997\n> 64 17582 3937 14718\n> 128 17782 4563 20145\n>\n> Also, check the attached graph. Clearly there's an increase in the\n> time spent in waiting for in-progress insertions to finish in\n> WaitXLogInsertionsToFinish from 8.7 seconds to 20 seconds. Whereas,\n> the time spent to acquire WAL insertion locks decreased from 12.5\n> seconds to 4.5 seconds. Overall, this hasn't resulted any improvement\n> in TPS, in fact observed slight reduction.\n\nHi, I've hastily tested using Bharath's patches too as I was thinking\nit would be a fast win due to contention, however it seems that (at\nleast on fast NVMEs?) 
increasing NUM_XLOGINSERT_LOCKS doesn't seem to\nhelp.\n\nWith pgbench -P 5 -c 32 -j 32 -T 30 and\n- 64vCPU Lsv2 (AMD EPYC), on single NVMe device (with ext4) that can\ndo 100k RW IOPS@8kB (with fio/libaio, 4jobs)\n- shared_buffers = '8GB', max_wal_size = '32GB', track_wal_io_timing = on\n- maxed out wal_buffers = '256MB'\n\ntpcb-like with synchronous_commit=off\n TPS wal_insert_lock_acquire_time wal_wait_for_insert_to_finish_time\n8 30393 24087 128\n32 31205 968 93\n\ntpcb-like with synchronous_commit=on\n TPS wal_insert_lock_acquire_time wal_wait_for_insert_to_finish_time\n8 12031 8472 10722\n32 11957 1188 12563\n\ntpcb-like with synchronous_commit=on and pgbench -c 64 -j 64\n TPS wal_insert_lock_acquire_time wal_wait_for_insert_to_finish_time\n8 25010 90620 68318\n32 25976 18569 85319\n// same, Bharath said , it shifted from insert_lock to\nwaiting_for_insert to finish\n\ninsertonly (largeinserts) with synchronous_commit=off (still -c 32 -j 32)\n TPS wal_insert_lock_acquire_time wal_wait_for_insert_to_finish_time\n8 367 19142 83\n32 393 875 68\n\ninsertonly (largeinserts) with synchronous_commit=on (still -c 32 -j 32)\n TPS wal_insert_lock_acquire_time wal_wait_for_insert_to_finish_time\n8 329 15950 125\n32 310 2177 316\n\ninsertonly was := {\n create sequence s1;\n create table t (id bigint, t text) partition by hash (id);\n create table t_h0 partition of t FOR VALUES WITH (modulus 8, remainder 0);\n create table t_h1 partition of t FOR VALUES WITH (modulus 8, remainder 1);\n create table t_h2 partition of t FOR VALUES WITH (modulus 8, remainder 2);\n create table t_h3 partition of t FOR VALUES WITH (modulus 8, remainder 3);\n create table t_h4 partition of t FOR VALUES WITH (modulus 8, remainder 4);\n create table t_h5 partition of t FOR VALUES WITH (modulus 8, remainder 5);\n create table t_h6 partition of t FOR VALUES WITH (modulus 8, remainder 6);\n create table t_h7 partition of t FOR VALUES WITH (modulus 8, remainder 7);\n\n and runtime pgb:\n insert into t select nextval('s1'), repeat('A', 1000) from\ngenerate_series(1, 1000);\n}\n\nit was truncated every time, DB was checkpointed, of course it was on master.\nWithout more details from Qingsong it is going to be hard to explain\nthe boost he witnessed.\n\n-J.\n\n\n", "msg_date": "Mon, 15 Jan 2024 11:54:07 +0100", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Make NUM_XLOGINSERT_LOCKS configurable" } ]
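A small C sketch of the code-generation point raised earlier in the thread (names are illustrative only, not the actual xlog.c code):

```c
#define NUM_XLOGINSERT_LOCKS 8              /* compile-time power of two */

static int num_xloginsert_locks_guc = 8;    /* imagine this were a run-time GUC */

unsigned int
pick_lock_constant(unsigned int n)
{
    /* divisor is a known power of two: compiles to a cheap "n & 7" mask */
    return n % NUM_XLOGINSERT_LOCKS;
}

unsigned int
pick_lock_guc(unsigned int n)
{
    /* divisor known only at run time: a real integer division on a hot path */
    return n % num_xloginsert_locks_guc;
}
```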
[ { "msg_contents": "Hi all,\n\nI notice that the CREATE TYPE syntax can specify subtype_diff function\nCREATE TYPE name AS RANGE (\n SUBTYPE = subtype\n [ , SUBTYPE_OPCLASS = subtype_operator_class ]\n [ , COLLATION = collation ]\n [ , CANONICAL = canonical_function ]\n [ , SUBTYPE_DIFF = subtype_diff_function ] <————— here\n [ , MULTIRANGE_TYPE_NAME = multirange_type_name ]\n)\nAnd a example is\n```sql\nCREATE TYPE float8_range AS RANGE (subtype = float8, subtype_diff = float8mi);\n```\n\nI notice that float8mi is a C function, and I find the call_subtype_diff() in source code that it seems only can call C function.\n\nI want to know\n\n1. Can the subtype_diff function in CREATE TYPE be sql or plpgsql function?\n2. How to call subtype_diff function? I know it related with GiST index, I need a example on how to trigger subtype_diff function.\n\nWhat’s more, I want to learn how Postgres kernel call subtype_diff function (in which source file or function), that will help me a lot.\n \n\nThank you all!\nHi all,I notice that the CREATE TYPE syntax can specify subtype_diff functionCREATE TYPE name AS RANGE (\n SUBTYPE = subtype\n [ , SUBTYPE_OPCLASS = subtype_operator_class ]\n [ , COLLATION = collation ]\n [ , CANONICAL = canonical_function ]\n [ , SUBTYPE_DIFF = subtype_diff_function ] <————— here\n [ , MULTIRANGE_TYPE_NAME = multirange_type_name ]\n)And a example is```sqlCREATE TYPE float8_range AS RANGE (subtype = float8, subtype_diff = float8mi);```I notice that float8mi is a C function, and I find the call_subtype_diff() in source code that it seems only can call C function.I want to know1. Can the subtype_diff function in CREATE TYPE be sql or plpgsql function?2. How to call subtype_diff function? I know it related with GiST index, I need a example on how to trigger subtype_diff function.What’s more,  I want to learn how Postgres kernel call subtype_diff function (in which source file or function), that will help me a lot. Thank you all!", "msg_date": "Wed, 10 Jan 2024 10:49:20 +0800", "msg_from": "ddme <[email protected]>", "msg_from_op": true, "msg_subject": "Is the subtype_diff function in CREATE TYPE only can be C function?" }, { "msg_contents": "On Wed, Jan 10, 2024 at 1:49 PM ddme <[email protected]> wrote:\n>\n> Hi all,\n>\n> I notice that the CREATE TYPE syntax can specify subtype_diff function\n>\n> CREATE TYPE name AS RANGE (\n> SUBTYPE = subtype\n> [ , SUBTYPE_OPCLASS = subtype_operator_class ]\n> [ , COLLATION = collation ]\n> [ , CANONICAL = canonical_function ]\n> [ , SUBTYPE_DIFF = subtype_diff_function ] <————— here\n> [ , MULTIRANGE_TYPE_NAME = multirange_type_name ]\n> )\n>\n> And a example is\n> ```sql\n>\n> CREATE TYPE float8_range AS RANGE (subtype = float8, subtype_diff = float8mi);\n>\n> ```\n>\n> I notice that float8mi is a C function, and I find the call_subtype_diff() in source code that it seems only can call C function.\n\ncall_subtype_diff() invokes FunctionCall2Coll() which in turn invokes\nthe function handler for non-C functions. See\nfmgr_info_cxt_security() for example. So subtype_diff can be a SQL\ncallable function written in any supported language.\n\n>\n> I want to know\n>\n> 1. Can the subtype_diff function in CREATE TYPE be sql or plpgsql function?\n\nI think so.\n\n> 2. How to call subtype_diff function? I know it related with GiST index, I need a example on how to trigger subtype_diff function.\n\nI am not familiar with GiST code enough to answer that question. But\nlooking at the places where call_subtype_diff() is called, esp. 
the\ncomments there might give you hints.\nOR somebody more familiar with GiST code will give you a direct answer.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 10 Jan 2024 15:34:00 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is the subtype_diff function in CREATE TYPE only can be C\n function?" }, { "msg_contents": "> 2024年1月10日 18:04,Ashutosh Bapat <[email protected]> 写道:\n> \n> On Wed, Jan 10, 2024 at 1:49 PM ddme <[email protected] <mailto:[email protected]>> wrote:\n>> \n>> Hi all,\n>> \n>> I notice that the CREATE TYPE syntax can specify subtype_diff function\n>> \n>> CREATE TYPE name AS RANGE (\n>> SUBTYPE = subtype\n>> [ , SUBTYPE_OPCLASS = subtype_operator_class ]\n>> [ , COLLATION = collation ]\n>> [ , CANONICAL = canonical_function ]\n>> [ , SUBTYPE_DIFF = subtype_diff_function ] <————— here\n>> [ , MULTIRANGE_TYPE_NAME = multirange_type_name ]\n>> )\n>> \n>> And a example is\n>> ```sql\n>> \n>> CREATE TYPE float8_range AS RANGE (subtype = float8, subtype_diff = float8mi);\n>> \n>> ```\n>> \n>> I notice that float8mi is a C function, and I find the call_subtype_diff() in source code that it seems only can call C function.\n> \n> call_subtype_diff() invokes FunctionCall2Coll() which in turn invokes\n> the function handler for non-C functions. See\n> fmgr_info_cxt_security() for example. So subtype_diff can be a SQL\n> callable function written in any supported language.\n> \n>> \n>> I want to know\n>> \n>> 1. Can the subtype_diff function in CREATE TYPE be sql or plpgsql function?\n> \n> I think so.\n> \n>> 2. How to call subtype_diff function? I know it related with GiST index, I need a example on how to trigger subtype_diff function.\n> \n> I am not familiar with GiST code enough to answer that question. But\n> looking at the places where call_subtype_diff() is called, esp. the\n> comments there might give you hints.\n> OR somebody more familiar with GiST code will give you a direct answer.\n> \n> -- \n> Best Wishes,\n> Ashutosh Bapat\n\nThank you!\n\n\nI know that range_gist_picksplit call call_subtype_diff() but I find not call path for range_gist_picksplit.\nI have try to trigger GiST index like `CREATE INDEX … USING GIST` and using select with filter to trigger index. 
With the help of EXPLAIN, I get that the gist index have been triggered but subtype_diff function have not\n\n\n```sql\ncreate function float4mi(a float8, b float8) RETURNS float8 LANGUAGE SQL … …\n\ncreate type float8range as range (subtype=float8, subtype_diff=float4mi);\ncreate table float8range_test(f8r float8range);\ninsert into float8range_test values('[1.111,2.344]'::float8range), ('[1.111, 4.567]'::float8range);\ncreate index my_index on float8range_test using gist(f8r);\nSET enable_seqscan = off;\nselect * from float8range_test ORDER BY f8r;\n```\n\nIs there need more setup SQL like `CREATE OPERATOR CLASS … USING gist` to trigger?", "msg_date": "Thu, 11 Jan 2024 10:53:46 +0800", "msg_from": "ddme <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is the subtype_diff function in CREATE TYPE only can be C\n function?" } ]
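A self-contained sketch of one way to see a SQL-language subtype_diff being used (all names here are invented). Following the range_gist_picksplit pointer above, that code only runs once index pages actually have to be split, so an index over just two rows is normally not enough; building the index over many rows is:

```sql
-- subtype_diff can be an ordinary SQL-language function
CREATE FUNCTION f8_diff(a float8, b float8) RETURNS float8
    LANGUAGE sql IMMUTABLE
    AS 'SELECT a - b';

CREATE TYPE f8range AS RANGE (subtype = float8, subtype_diff = f8_diff);

CREATE TABLE f8range_big (r f8range);
INSERT INTO f8range_big
SELECT f8range(g, g + random())
FROM generate_series(1, 100000) AS g;

-- building the index over enough rows forces page splits, which is where
-- the subtype_diff-based split logic gets exercised
CREATE INDEX ON f8range_big USING gist (r);
```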
[ { "msg_contents": "Hi,\n\nI've been observing a failure in t/038_save_logical_slots_shutdown.pl\nof late on my developer system:\n\nt/038_save_logical_slots_shutdown.pl .. 1/?\n# Failed test 'Check that the slot's confirmed_flush LSN is the same\nas the latest_checkpoint location'\n# at t/038_save_logical_slots_shutdown.pl line 35.\n# Looks like you failed 1 test of 2.\nt/038_save_logical_slots_shutdown.pl .. Dubious, test returned 1\n(wstat 256, 0x100)\nFailed 1/2 subtests\n\nI did a quick analysis of the failure and commit\nhttps://github.com/postgres/postgres/commit/e0b2eed047df9045664da6f724cb42c10f8b12f0\nthat introduced this test. I think the issue is that the slot's\nconfirmed_flush LSN (0/1508000) and shutdown checkpoint LSN\n(0/1508018) are not the same:\n\ntmp_check/log/038_save_logical_slots_shutdown_pub.log:\n\n2024-01-10 07:55:44.539 UTC [57621] sub LOG: starting logical\ndecoding for slot \"sub\"\n2024-01-10 07:55:44.539 UTC [57621] sub DETAIL: Streaming\ntransactions committing after 0/1508000, reading WAL from 0/1507FC8.\n2024-01-10 07:55:44.539 UTC [57621] sub STATEMENT: START_REPLICATION\nSLOT \"sub\" LOGICAL 0/0 (proto_version '4', origin 'any',\npublication_names '\"pub\"')\n\nubuntu:~/postgres$ pg17/bin/pg_controldata -D\nsrc/test/recovery/tmp_check/t_038_save_logical_slots_shutdown_pub_data/pgdata/\nDatabase cluster state: in production\npg_control last modified: Wed Jan 10 07:55:44 2024\nLatest checkpoint location: 0/1508018\nLatest checkpoint's REDO location: 0/1508018\n\nBut the tests added by t/038_save_logical_slots_shutdown.pl expects\nboth LSNs to be same:\n\nsub compare_confirmed_flush\n{\n # Is it same as the value read from log?\n ok( $latest_checkpoint eq $confirmed_flush_from_log,\n \"Check that the slot's confirmed_flush LSN is the same as the\nlatest_checkpoint location\"\n );\n\nI suspect that it's quite not right to expect the slot's\nconfirmed_flush and latest checkpoint location to be same in the test.\nThis is because the shutdown checkpoint gets an LSN that's greater\nthan the slot's confirmed_flush LSN - see the shutdown checkpoint\nrecord getting inserted into WAL after the slot is marked dirty in\nCheckPointReplicationSlots().\n\nWith this analysis in mind, I think the tests need to do something\nlike the following:\n\ndiff --git a/src/test/recovery/t/038_save_logical_slots_shutdown.pl\nb/src/test/recovery/t/038_save_logical_slots_shut\ndown.pl\nindex 5a4f5dc1d4..d49e6014fc 100644\n--- a/src/test/recovery/t/038_save_logical_slots_shutdown.pl\n+++ b/src/test/recovery/t/038_save_logical_slots_shutdown.pl\n@@ -32,7 +32,7 @@ sub compare_confirmed_flush\n unless defined($latest_checkpoint);\n\n # Is it same as the value read from log?\n- ok( $latest_checkpoint eq $confirmed_flush_from_log,\n+ ok( $latest_checkpoint ge $confirmed_flush_from_log,\n \"Check that the slot's confirmed_flush LSN is the same\nas the latest_checkpoint location\"\n );\n\nThoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 Jan 2024 14:08:18 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "On Wed, 10 Jan 2024 at 14:08, Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Hi,\n>\n> I've been observing a failure in t/038_save_logical_slots_shutdown.pl\n> of late on my developer system:\n>\n> t/038_save_logical_slots_shutdown.pl .. 
1/?\n> # Failed test 'Check that the slot's confirmed_flush LSN is the same\n> as the latest_checkpoint location'\n> # at t/038_save_logical_slots_shutdown.pl line 35.\n> # Looks like you failed 1 test of 2.\n> t/038_save_logical_slots_shutdown.pl .. Dubious, test returned 1\n> (wstat 256, 0x100)\n> Failed 1/2 subtests\n>\n> I did a quick analysis of the failure and commit\n> https://github.com/postgres/postgres/commit/e0b2eed047df9045664da6f724cb42c10f8b12f0\n> that introduced this test. I think the issue is that the slot's\n> confirmed_flush LSN (0/1508000) and shutdown checkpoint LSN\n> (0/1508018) are not the same:\n>\n> tmp_check/log/038_save_logical_slots_shutdown_pub.log:\n>\n> 2024-01-10 07:55:44.539 UTC [57621] sub LOG: starting logical\n> decoding for slot \"sub\"\n> 2024-01-10 07:55:44.539 UTC [57621] sub DETAIL: Streaming\n> transactions committing after 0/1508000, reading WAL from 0/1507FC8.\n> 2024-01-10 07:55:44.539 UTC [57621] sub STATEMENT: START_REPLICATION\n> SLOT \"sub\" LOGICAL 0/0 (proto_version '4', origin 'any',\n> publication_names '\"pub\"')\n>\n> ubuntu:~/postgres$ pg17/bin/pg_controldata -D\n> src/test/recovery/tmp_check/t_038_save_logical_slots_shutdown_pub_data/pgdata/\n> Database cluster state: in production\n> pg_control last modified: Wed Jan 10 07:55:44 2024\n> Latest checkpoint location: 0/1508018\n> Latest checkpoint's REDO location: 0/1508018\n>\n> But the tests added by t/038_save_logical_slots_shutdown.pl expects\n> both LSNs to be same:\n>\n> sub compare_confirmed_flush\n> {\n> # Is it same as the value read from log?\n> ok( $latest_checkpoint eq $confirmed_flush_from_log,\n> \"Check that the slot's confirmed_flush LSN is the same as the\n> latest_checkpoint location\"\n> );\n>\n> I suspect that it's quite not right to expect the slot's\n> confirmed_flush and latest checkpoint location to be same in the test.\n> This is because the shutdown checkpoint gets an LSN that's greater\n> than the slot's confirmed_flush LSN - see the shutdown checkpoint\n> record getting inserted into WAL after the slot is marked dirty in\n> CheckPointReplicationSlots().\n>\n> With this analysis in mind, I think the tests need to do something\n> like the following:\n>\n> diff --git a/src/test/recovery/t/038_save_logical_slots_shutdown.pl\n> b/src/test/recovery/t/038_save_logical_slots_shut\n> down.pl\n> index 5a4f5dc1d4..d49e6014fc 100644\n> --- a/src/test/recovery/t/038_save_logical_slots_shutdown.pl\n> +++ b/src/test/recovery/t/038_save_logical_slots_shutdown.pl\n> @@ -32,7 +32,7 @@ sub compare_confirmed_flush\n> unless defined($latest_checkpoint);\n>\n> # Is it same as the value read from log?\n> - ok( $latest_checkpoint eq $confirmed_flush_from_log,\n> + ok( $latest_checkpoint ge $confirmed_flush_from_log,\n> \"Check that the slot's confirmed_flush LSN is the same\n> as the latest_checkpoint location\"\n> );\n>\n> Thoughts?\n\nI got the log files from Bharath offline. 
Thanks Bharath for sharing\nthe log files offline.\nThe WAL record sequence is exactly the same in the failing test and\ntests which are passing.\nOne observation in our case the confirmed flush lsn points exactly to\nshutdown checkpoint, but in the failing test the lsn pointed is\ninvalid, pg_waldump says that address is invalid and skips about 24\nbytes and then sees a valid record\n\nPassing case confirm flush lsn(0/150D158) from my machine:\npg_waldump 000000010000000000000001 -s 0/150D158\nrmgr: XLOG len (rec/tot): 114/ 114, tx: 0, lsn:\n0/0150D158, prev 0/0150D120, desc: CHECKPOINT_SHUTDOWN redo 0/150D158;\ntli 1; prev tli 1; fpw true; xid 0:739; oid 16388; multi 1; offset 0;\noldest xid 728 in DB 1; oldest multi 1 in DB 1; oldest/newest commit\ntimestamp xid: 0/0; oldest running xid 0; shutdown\n\nFailing case confirm flush lsn( 0/1508000) from failing tests log file:\npg_waldump 000000010000000000000001 -s 0/1508000\npg_waldump: first record is after 0/1508000, at 0/1508018, skipping\nover 24 bytes\nrmgr: XLOG len (rec/tot): 114/ 114, tx: 0, lsn:\n0/01508018, prev 0/01507FC8, desc: CHECKPOINT_SHUTDOWN redo 0/1508018;\ntli 1; prev tli 1; fpw true; xid 0:739; oid 16388; multi 1; offset 0;\noldest xid 728 in DB 1; oldest multi 1 in DB 1; oldest/newest commit\ntimestamp xid: 0/0; oldest running xid 0; shutdown\n\nI'm still not sure why in this case, it is not exactly pointing to a\nvalid WAL record, it has to skip 24 bytes to find the valid checkpoint\nshutdown record.\nI will investigate this further and share the analysis.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 10 Jan 2024 18:37:29 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "On Wed, Jan 10, 2024 at 6:37 PM vignesh C <[email protected]> wrote:\n>\n> I got the log files from Bharath offline. 
Thanks Bharath for sharing\n> the log files offline.\n> The WAL record sequence is exactly the same in the failing test and\n> tests which are passing.\n> One observation in our case the confirmed flush lsn points exactly to\n> shutdown checkpoint, but in the failing test the lsn pointed is\n> invalid, pg_waldump says that address is invalid and skips about 24\n> bytes and then sees a valid record\n>\n> Passing case confirm flush lsn(0/150D158) from my machine:\n> pg_waldump 000000010000000000000001 -s 0/150D158\n> rmgr: XLOG len (rec/tot): 114/ 114, tx: 0, lsn:\n> 0/0150D158, prev 0/0150D120, desc: CHECKPOINT_SHUTDOWN redo 0/150D158;\n> tli 1; prev tli 1; fpw true; xid 0:739; oid 16388; multi 1; offset 0;\n> oldest xid 728 in DB 1; oldest multi 1 in DB 1; oldest/newest commit\n> timestamp xid: 0/0; oldest running xid 0; shutdown\n>\n> Failing case confirm flush lsn( 0/1508000) from failing tests log file:\n> pg_waldump 000000010000000000000001 -s 0/1508000\n> pg_waldump: first record is after 0/1508000, at 0/1508018, skipping\n> over 24 bytes\n> rmgr: XLOG len (rec/tot): 114/ 114, tx: 0, lsn:\n> 0/01508018, prev 0/01507FC8, desc: CHECKPOINT_SHUTDOWN redo 0/1508018;\n> tli 1; prev tli 1; fpw true; xid 0:739; oid 16388; multi 1; offset 0;\n> oldest xid 728 in DB 1; oldest multi 1 in DB 1; oldest/newest commit\n> timestamp xid: 0/0; oldest running xid 0; shutdown\n>\n> I'm still not sure why in this case, it is not exactly pointing to a\n> valid WAL record, it has to skip 24 bytes to find the valid checkpoint\n> shutdown record.\n>\n\nCan we see the previous record (as pointed out by prev in the WAL\nrecord) in both cases? Also, you can see few prior records in both\ncases.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 11 Jan 2024 11:01:52 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "On Wed, Jan 10, 2024 at 2:08 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> I've been observing a failure in t/038_save_logical_slots_shutdown.pl\n> of late on my developer system:\n>\n> t/038_save_logical_slots_shutdown.pl .. 1/?\n> # Failed test 'Check that the slot's confirmed_flush LSN is the same\n> as the latest_checkpoint location'\n> # at t/038_save_logical_slots_shutdown.pl line 35.\n> # Looks like you failed 1 test of 2.\n> t/038_save_logical_slots_shutdown.pl .. Dubious, test returned 1\n> (wstat 256, 0x100)\n> Failed 1/2 subtests\n>\n> I did a quick analysis of the failure and commit\n> https://github.com/postgres/postgres/commit/e0b2eed047df9045664da6f724cb42c10f8b12f0\n> that introduced this test. 
I think the issue is that the slot's\n> confirmed_flush LSN (0/1508000) and shutdown checkpoint LSN\n> (0/1508018) are not the same:\n>\n> tmp_check/log/038_save_logical_slots_shutdown_pub.log:\n>\n> 2024-01-10 07:55:44.539 UTC [57621] sub LOG: starting logical\n> decoding for slot \"sub\"\n> 2024-01-10 07:55:44.539 UTC [57621] sub DETAIL: Streaming\n> transactions committing after 0/1508000, reading WAL from 0/1507FC8.\n> 2024-01-10 07:55:44.539 UTC [57621] sub STATEMENT: START_REPLICATION\n> SLOT \"sub\" LOGICAL 0/0 (proto_version '4', origin 'any',\n> publication_names '\"pub\"')\n>\n> ubuntu:~/postgres$ pg17/bin/pg_controldata -D\n> src/test/recovery/tmp_check/t_038_save_logical_slots_shutdown_pub_data/pgdata/\n> Database cluster state: in production\n> pg_control last modified: Wed Jan 10 07:55:44 2024\n> Latest checkpoint location: 0/1508018\n> Latest checkpoint's REDO location: 0/1508018\n>\n> But the tests added by t/038_save_logical_slots_shutdown.pl expects\n> both LSNs to be same:\n>\n> sub compare_confirmed_flush\n> {\n> # Is it same as the value read from log?\n> ok( $latest_checkpoint eq $confirmed_flush_from_log,\n> \"Check that the slot's confirmed_flush LSN is the same as the\n> latest_checkpoint location\"\n> );\n>\n> I suspect that it's quite not right to expect the slot's\n> confirmed_flush and latest checkpoint location to be same in the test.\n>\n\nAs per my understanding, the reason we expect them to be the same is\nbecause we ensure that during shutdown, the walsender sends all the\nWAL just before shutdown_checkpoint and the confirm_flush also points\nto the end of WAL record before shutodwn_checkpoint. So, the next\nstarting location should be of shutdown_checkpoint record which should\nideally be the same. Do you see this failure consistently?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 11 Jan 2024 11:22:29 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "On Wed, 10 Jan 2024 at 18:37, vignesh C <[email protected]> wrote:\n>\n> On Wed, 10 Jan 2024 at 14:08, Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > I've been observing a failure in t/038_save_logical_slots_shutdown.pl\n> > of late on my developer system:\n> >\n> > t/038_save_logical_slots_shutdown.pl .. 1/?\n> > # Failed test 'Check that the slot's confirmed_flush LSN is the same\n> > as the latest_checkpoint location'\n> > # at t/038_save_logical_slots_shutdown.pl line 35.\n> > # Looks like you failed 1 test of 2.\n> > t/038_save_logical_slots_shutdown.pl .. Dubious, test returned 1\n> > (wstat 256, 0x100)\n> > Failed 1/2 subtests\n> >\n> > I did a quick analysis of the failure and commit\n> > https://github.com/postgres/postgres/commit/e0b2eed047df9045664da6f724cb42c10f8b12f0\n> > that introduced this test. 
I think the issue is that the slot's\n> > confirmed_flush LSN (0/1508000) and shutdown checkpoint LSN\n> > (0/1508018) are not the same:\n> >\n> > tmp_check/log/038_save_logical_slots_shutdown_pub.log:\n> >\n> > 2024-01-10 07:55:44.539 UTC [57621] sub LOG: starting logical\n> > decoding for slot \"sub\"\n> > 2024-01-10 07:55:44.539 UTC [57621] sub DETAIL: Streaming\n> > transactions committing after 0/1508000, reading WAL from 0/1507FC8.\n> > 2024-01-10 07:55:44.539 UTC [57621] sub STATEMENT: START_REPLICATION\n> > SLOT \"sub\" LOGICAL 0/0 (proto_version '4', origin 'any',\n> > publication_names '\"pub\"')\n> >\n> > ubuntu:~/postgres$ pg17/bin/pg_controldata -D\n> > src/test/recovery/tmp_check/t_038_save_logical_slots_shutdown_pub_data/pgdata/\n> > Database cluster state: in production\n> > pg_control last modified: Wed Jan 10 07:55:44 2024\n> > Latest checkpoint location: 0/1508018\n> > Latest checkpoint's REDO location: 0/1508018\n> >\n> > But the tests added by t/038_save_logical_slots_shutdown.pl expects\n> > both LSNs to be same:\n> >\n> > sub compare_confirmed_flush\n> > {\n> > # Is it same as the value read from log?\n> > ok( $latest_checkpoint eq $confirmed_flush_from_log,\n> > \"Check that the slot's confirmed_flush LSN is the same as the\n> > latest_checkpoint location\"\n> > );\n> >\n> > I suspect that it's quite not right to expect the slot's\n> > confirmed_flush and latest checkpoint location to be same in the test.\n> > This is because the shutdown checkpoint gets an LSN that's greater\n> > than the slot's confirmed_flush LSN - see the shutdown checkpoint\n> > record getting inserted into WAL after the slot is marked dirty in\n> > CheckPointReplicationSlots().\n> >\n> > With this analysis in mind, I think the tests need to do something\n> > like the following:\n> >\n> > diff --git a/src/test/recovery/t/038_save_logical_slots_shutdown.pl\n> > b/src/test/recovery/t/038_save_logical_slots_shut\n> > down.pl\n> > index 5a4f5dc1d4..d49e6014fc 100644\n> > --- a/src/test/recovery/t/038_save_logical_slots_shutdown.pl\n> > +++ b/src/test/recovery/t/038_save_logical_slots_shutdown.pl\n> > @@ -32,7 +32,7 @@ sub compare_confirmed_flush\n> > unless defined($latest_checkpoint);\n> >\n> > # Is it same as the value read from log?\n> > - ok( $latest_checkpoint eq $confirmed_flush_from_log,\n> > + ok( $latest_checkpoint ge $confirmed_flush_from_log,\n> > \"Check that the slot's confirmed_flush LSN is the same\n> > as the latest_checkpoint location\"\n> > );\n> >\n> > Thoughts?\n>\n> I got the log files from Bharath offline. 
Thanks Bharath for sharing\n> the log files offline.\n> The WAL record sequence is exactly the same in the failing test and\n> tests which are passing.\n> One observation in our case the confirmed flush lsn points exactly to\n> shutdown checkpoint, but in the failing test the lsn pointed is\n> invalid, pg_waldump says that address is invalid and skips about 24\n> bytes and then sees a valid record\n>\n> Passing case confirm flush lsn(0/150D158) from my machine:\n> pg_waldump 000000010000000000000001 -s 0/150D158\n> rmgr: XLOG len (rec/tot): 114/ 114, tx: 0, lsn:\n> 0/0150D158, prev 0/0150D120, desc: CHECKPOINT_SHUTDOWN redo 0/150D158;\n> tli 1; prev tli 1; fpw true; xid 0:739; oid 16388; multi 1; offset 0;\n> oldest xid 728 in DB 1; oldest multi 1 in DB 1; oldest/newest commit\n> timestamp xid: 0/0; oldest running xid 0; shutdown\n>\n> Failing case confirm flush lsn( 0/1508000) from failing tests log file:\n> pg_waldump 000000010000000000000001 -s 0/1508000\n> pg_waldump: first record is after 0/1508000, at 0/1508018, skipping\n> over 24 bytes\n> rmgr: XLOG len (rec/tot): 114/ 114, tx: 0, lsn:\n> 0/01508018, prev 0/01507FC8, desc: CHECKPOINT_SHUTDOWN redo 0/1508018;\n> tli 1; prev tli 1; fpw true; xid 0:739; oid 16388; multi 1; offset 0;\n> oldest xid 728 in DB 1; oldest multi 1 in DB 1; oldest/newest commit\n> timestamp xid: 0/0; oldest running xid 0; shutdown\n>\n> I'm still not sure why in this case, it is not exactly pointing to a\n> valid WAL record, it has to skip 24 bytes to find the valid checkpoint\n> shutdown record.\n> I will investigate this further and share the analysis.\n\nOn further analysis, it was found that in the failing test,\nCHECKPOINT_SHUTDOWN was started in a new page, so there was the WAL\npage header present just before the CHECKPOINT_SHUTDOWN which was\ncausing the failure. We could alternatively reproduce the issue by\nswitching the WAL file before restarting the server like in the\nattached test change patch.\nThere are a couple of ways to fix this issue a) one by switching the\nWAL before the insertion of records so that the CHECKPOINT_SHUTDOWN\ndoes not get inserted in a new page as in the attached test_fix.patch\nb) by using pg_walinspect to check that the next WAL record is\nCHECKPOINT_SHUTDOWN. I have to try this approach.\n\nThanks to Bharath and Kuroda-san for offline discussions and helping\nin getting to the root cause.\n\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Thu, 11 Jan 2024 16:35:03 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "On Thu, Jan 11, 2024 at 4:35 PM vignesh C <[email protected]> wrote:\n>\n> On further analysis, it was found that in the failing test,\n> CHECKPOINT_SHUTDOWN was started in a new page, so there was the WAL\n> page header present just before the CHECKPOINT_SHUTDOWN which was\n> causing the failure. We could alternatively reproduce the issue by\n> switching the WAL file before restarting the server like in the\n> attached test change patch.\n> There are a couple of ways to fix this issue a) one by switching the\n> WAL before the insertion of records so that the CHECKPOINT_SHUTDOWN\n> does not get inserted in a new page as in the attached test_fix.patch\n> b) by using pg_walinspect to check that the next WAL record is\n> CHECKPOINT_SHUTDOWN. 
I have to try this approach.\n>\n> Thanks to Bharath and Kuroda-san for offline discussions and helping\n> in getting to the root cause.\n\nIIUC, the problem the commit e0b2eed tries to solve is to ensure there\nare no left-over decodable WAL records between confirmed_flush LSN and\na shutdown checkpoint, which is what it is expected from the\nt/038_save_logical_slots_shutdown.pl. How about we have a PG function\nreturning true if there are any decodable WAL records between the\ngiven start_lsn and end_lsn? Usage of this new function will make the\ntests more concrete and stable. This function doesn't have to be\nsomething really new, we can just turn\nbinary_upgrade_logical_slot_has_caught_up to a general, non-binary PG\nfunction; this idea has come up before\nhttps://www.postgresql.org/message-id/CAA4eK1KZXaBgVOAdV8ZfG6AdDbKYFVz7teDa7GORgQ3RVYS93g%40mail.gmail.com.\nIf okay, I can offer to write a patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 11 Jan 2024 22:03:29 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "On Thu, Jan 11, 2024 at 10:03 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Jan 11, 2024 at 4:35 PM vignesh C <[email protected]> wrote:\n> >\n> > On further analysis, it was found that in the failing test,\n> > CHECKPOINT_SHUTDOWN was started in a new page, so there was the WAL\n> > page header present just before the CHECKPOINT_SHUTDOWN which was\n> > causing the failure. We could alternatively reproduce the issue by\n> > switching the WAL file before restarting the server like in the\n> > attached test change patch.\n> > There are a couple of ways to fix this issue a) one by switching the\n> > WAL before the insertion of records so that the CHECKPOINT_SHUTDOWN\n> > does not get inserted in a new page as in the attached test_fix.patch\n> > b) by using pg_walinspect to check that the next WAL record is\n> > CHECKPOINT_SHUTDOWN. I have to try this approach.\n> >\n> > Thanks to Bharath and Kuroda-san for offline discussions and helping\n> > in getting to the root cause.\n>\n> IIUC, the problem the commit e0b2eed tries to solve is to ensure there\n> are no left-over decodable WAL records between confirmed_flush LSN and\n> a shutdown checkpoint, which is what it is expected from the\n> t/038_save_logical_slots_shutdown.pl. How about we have a PG function\n> returning true if there are any decodable WAL records between the\n> given start_lsn and end_lsn?\n>\n\nBut, we already test this in 003_logical_slot during a successful\nupgrade. Having an explicit test to do the same thing has some merits\nbut not sure if it is worth it. The current test tries to ensure that\nduring shutdown after we shutdown walsender and ensures that it sends\nall the wal records and receipts an ack for the same, there is no\nother WAL except shutdown_checkpoint. 
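Expressed in SQL rather than through pg_controldata, the invariant that compare_confirmed_flush() checks looks roughly like the following — a sketch only, run against the publisher after the restart, using the slot name from this test (the TAP test itself reads the checkpoint location from pg_controldata output instead):

```sql
-- Does the slot's confirmed_flush LSN line up exactly with the latest
-- (shutdown) checkpoint written before the restart?
SELECT s.slot_name,
       s.confirmed_flush_lsn,
       c.checkpoint_lsn,
       s.confirmed_flush_lsn = c.checkpoint_lsn AS matches_exactly
FROM pg_replication_slots AS s,
     pg_control_checkpoint() AS c
WHERE s.slot_name = 'sub';
```

In the failing run quoted above the two values differ by exactly 24 bytes, i.e. the WAL page header that pg_waldump reports skipping.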
Vignesh's suggestion (a) makes\nthe test robust enough that it shouldn't show spurious failures like\nthe current one reported by you, so shall we try to proceed with that?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 12 Jan 2024 09:27:54 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "On Fri, Jan 12, 2024 at 9:28 AM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jan 11, 2024 at 10:03 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Thu, Jan 11, 2024 at 4:35 PM vignesh C <[email protected]> wrote:\n> > >\n> > > On further analysis, it was found that in the failing test,\n> > > CHECKPOINT_SHUTDOWN was started in a new page, so there was the WAL\n> > > page header present just before the CHECKPOINT_SHUTDOWN which was\n> > > causing the failure. We could alternatively reproduce the issue by\n> > > switching the WAL file before restarting the server like in the\n> > > attached test change patch.\n> > > There are a couple of ways to fix this issue a) one by switching the\n> > > WAL before the insertion of records so that the CHECKPOINT_SHUTDOWN\n> > > does not get inserted in a new page as in the attached test_fix.patch\n> > > b) by using pg_walinspect to check that the next WAL record is\n> > > CHECKPOINT_SHUTDOWN. I have to try this approach.\n> > >\n> > > Thanks to Bharath and Kuroda-san for offline discussions and helping\n> > > in getting to the root cause.\n> >\n> > IIUC, the problem the commit e0b2eed tries to solve is to ensure there\n> > are no left-over decodable WAL records between confirmed_flush LSN and\n> > a shutdown checkpoint, which is what it is expected from the\n> > t/038_save_logical_slots_shutdown.pl. How about we have a PG function\n> > returning true if there are any decodable WAL records between the\n> > given start_lsn and end_lsn?\n> >\n>\n> But, we already test this in 003_logical_slot during a successful\n> upgrade. Having an explicit test to do the same thing has some merits\n> but not sure if it is worth it.\n\nIf the code added by commit e0b2eed is covered by the new upgrade\ntest, why not remove 038_save_logical_slots_shutdown.pl altogether?\n\n> The current test tries to ensure that\n> during shutdown after we shutdown walsender and ensures that it sends\n> all the wal records and receipts an ack for the same, there is no\n> other WAL except shutdown_checkpoint. Vignesh's suggestion (a) makes\n> the test robust enough that it shouldn't show spurious failures like\n> the current one reported by you, so shall we try to proceed with that?\n\nDo you mean something like [1]? It ensures the test passes unless any\nwrites are added (in future) before the publisher restarts in the test\nwhich can again make the tests flaky. How do we ensure no one adds\nanything in before the publisher restart\n038_save_logical_slots_shutdown.pl? 
A note before the restart perhaps?\nI might be okay with a simple solution like [1] with a note before the\nrestart instead of other complicated ones.\n\n[1]\ndiff --git a/src/test/recovery/t/038_save_logical_slots_shutdown.pl\nb/src/test/recovery/t/038_save_logical_slots_shutdown.pl\nindex 5a4f5dc1d4..493fdbce2f 100644\n--- a/src/test/recovery/t/038_save_logical_slots_shutdown.pl\n+++ b/src/test/recovery/t/038_save_logical_slots_shutdown.pl\n@@ -60,6 +60,14 @@ $node_subscriber->start;\n $node_publisher->safe_psql('postgres', \"CREATE TABLE test_tbl (id int)\");\n $node_subscriber->safe_psql('postgres', \"CREATE TABLE test_tbl (id int)\");\n\n+# On some machines, it was detected that the shutdown checkpoint WAL record\n+# that gets generated as part of the publisher restart below falls exactly in\n+# the new page in the WAL file. Due to this, the latest checkpoint location and\n+# confirmed flush check in compare_confirmed_flush() was failing. Hence, we\n+# advance WAL by 1 segment before generating some data so that the shutdown\n+# checkpoint doesn't fall exactly in the new WAL file page.\n+$node_publisher->advance_wal(1);\n+\n # Insert some data\n $node_publisher->safe_psql('postgres',\n \"INSERT INTO test_tbl VALUES (generate_series(1, 5));\");\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 12 Jan 2024 15:35:57 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "On Fri, Jan 12, 2024 at 3:36 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Fri, Jan 12, 2024 at 9:28 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Jan 11, 2024 at 10:03 PM Bharath Rupireddy\n> > <[email protected]> wrote:\n> > >\n> > > On Thu, Jan 11, 2024 at 4:35 PM vignesh C <[email protected]> wrote:\n> > > >\n> > > > On further analysis, it was found that in the failing test,\n> > > > CHECKPOINT_SHUTDOWN was started in a new page, so there was the WAL\n> > > > page header present just before the CHECKPOINT_SHUTDOWN which was\n> > > > causing the failure. We could alternatively reproduce the issue by\n> > > > switching the WAL file before restarting the server like in the\n> > > > attached test change patch.\n> > > > There are a couple of ways to fix this issue a) one by switching the\n> > > > WAL before the insertion of records so that the CHECKPOINT_SHUTDOWN\n> > > > does not get inserted in a new page as in the attached test_fix.patch\n> > > > b) by using pg_walinspect to check that the next WAL record is\n> > > > CHECKPOINT_SHUTDOWN. I have to try this approach.\n> > > >\n> > > > Thanks to Bharath and Kuroda-san for offline discussions and helping\n> > > > in getting to the root cause.\n> > >\n> > > IIUC, the problem the commit e0b2eed tries to solve is to ensure there\n> > > are no left-over decodable WAL records between confirmed_flush LSN and\n> > > a shutdown checkpoint, which is what it is expected from the\n> > > t/038_save_logical_slots_shutdown.pl. How about we have a PG function\n> > > returning true if there are any decodable WAL records between the\n> > > given start_lsn and end_lsn?\n> > >\n> >\n> > But, we already test this in 003_logical_slot during a successful\n> > upgrade. 
Having an explicit test to do the same thing has some merits\n> > but not sure if it is worth it.\n>\n> If the code added by commit e0b2eed is covered by the new upgrade\n> test, why not remove 038_save_logical_slots_shutdown.pl altogether?\n>\n\nThis is a more strict check because it is possible that even if the\nlatest confirmed_flush location is not persisted there is no\nmeaningful decodable WAL between whatever the last confirmed_flush\nlocation saved on disk and the shutdown_checkpoint record.\nKuroda-San/Vignesh, do you have any suggestion on this one?\n\n> > The current test tries to ensure that\n> > during shutdown after we shutdown walsender and ensures that it sends\n> > all the wal records and receipts an ack for the same, there is no\n> > other WAL except shutdown_checkpoint. Vignesh's suggestion (a) makes\n> > the test robust enough that it shouldn't show spurious failures like\n> > the current one reported by you, so shall we try to proceed with that?\n>\n> Do you mean something like [1]? It ensures the test passes unless any\n> writes are added (in future) before the publisher restarts in the test\n> which can again make the tests flaky. How do we ensure no one adds\n> anything in before the publisher restart\n> 038_save_logical_slots_shutdown.pl? A note before the restart perhaps?\n>\n\nI am fine with adding the note.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 13 Jan 2024 16:42:55 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "On Sat, Jan 13, 2024 at 4:43 PM Amit Kapila <[email protected]> wrote:\n>\n> > > The current test tries to ensure that\n> > > during shutdown after we shutdown walsender and ensures that it sends\n> > > all the wal records and receipts an ack for the same, there is no\n> > > other WAL except shutdown_checkpoint. Vignesh's suggestion (a) makes\n> > > the test robust enough that it shouldn't show spurious failures like\n> > > the current one reported by you, so shall we try to proceed with that?\n> >\n> > Do you mean something like [1]? It ensures the test passes unless any\n> > writes are added (in future) before the publisher restarts in the test\n> > which can again make the tests flaky. How do we ensure no one adds\n> > anything in before the publisher restart\n> > 038_save_logical_slots_shutdown.pl? A note before the restart perhaps?\n> >\n>\n> I am fine with adding the note.\n\nOkay. Please see the attached patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 14 Jan 2024 20:32:40 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "Dear Amit, Bharath,\r\n\r\n> This is a more strict check because it is possible that even if the\r\n> latest confirmed_flush location is not persisted there is no\r\n> meaningful decodable WAL between whatever the last confirmed_flush\r\n> location saved on disk and the shutdown_checkpoint record.\r\n> Kuroda-San/Vignesh, do you have any suggestion on this one?\r\n\r\nI think it should be as testcase explicitly. There are two reasons:\r\n \r\n* e0b2eed is a commit for backend codes, so it should be tested by src/test/*\r\n files. Each src/bin/XXX/*.pl files should test only their executable.\r\n* Assuming that the feature would be broken. 
In this case 003_logical_slots.pl\r\n would fail, but we do not have a way to recognize on the build farm.\r\n 038_save_logical_slots_shutdown.pl helps to distinguish the case.\r\n \r\nBased on that, I think it is OK to add advance_wal() and comments, like Bharath's patch.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Tue, 16 Jan 2024 06:43:04 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "On Tue, Jan 16, 2024 at 12:13 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Amit, Bharath,\n>\n> > This is a more strict check because it is possible that even if the\n> > latest confirmed_flush location is not persisted there is no\n> > meaningful decodable WAL between whatever the last confirmed_flush\n> > location saved on disk and the shutdown_checkpoint record.\n> > Kuroda-San/Vignesh, do you have any suggestion on this one?\n>\n> I think it should be as testcase explicitly. There are two reasons:\n>\n> * e0b2eed is a commit for backend codes, so it should be tested by src/test/*\n> files. Each src/bin/XXX/*.pl files should test only their executable.\n> * Assuming that the feature would be broken. In this case 003_logical_slots.pl\n> would fail, but we do not have a way to recognize on the build farm.\n> 038_save_logical_slots_shutdown.pl helps to distinguish the case.\n\n+1 to keep 038_save_logical_slots_shutdown.pl as-is.\n\n> Based on that, I think it is OK to add advance_wal() and comments, like Bharath's patch.\n\nThanks. I'll wait a while and then add it to CF to not lose it in the wild.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 25 Jan 2024 16:27:20 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "On Thu, Jan 25, 2024 at 4:27 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Thanks. I'll wait a while and then add it to CF to not lose it in the wild.\n>\n\nFeel free to add it to CF. However, I do plan to look at it in the\nnext few days.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 25 Jan 2024 17:07:01 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "On Thu, Jan 25, 2024 at 5:07 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jan 25, 2024 at 4:27 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > Thanks. I'll wait a while and then add it to CF to not lose it in the wild.\n> >\n>\n> Feel free to add it to CF. However, I do plan to look at it in the\n> next few days.\n\nThanks. CF entry is here https://commitfest.postgresql.org/47/4796/.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 26 Jan 2024 20:32:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "On Thu, Jan 25, 2024 at 5:07 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jan 25, 2024 at 4:27 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > Thanks. 
I'll wait a while and then add it to CF to not lose it in the wild.\n> >\n>\n> Feel free to add it to CF. However, I do plan to look at it in the\n> next few days.\n>\n\nThe patch looks mostly good to me. I have changed the comments and\ncommit message in the attached. I am planning to push this tomorrow\nunless you or others have any comments on it.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 12 Mar 2024 18:05:15 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "On Tue, Mar 12, 2024 at 6:05 PM Amit Kapila <[email protected]> wrote:\n>\n> The patch looks mostly good to me. I have changed the comments and\n> commit message in the attached. I am planning to push this tomorrow\n> unless you or others have any comments on it.\n\nLGTM.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 12 Mar 2024 20:45:50 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A failure in t/038_save_logical_slots_shutdown.pl" }, { "msg_contents": "On Tue, Mar 12, 2024 at 8:46 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Tue, Mar 12, 2024 at 6:05 PM Amit Kapila <[email protected]> wrote:\n> >\n> > The patch looks mostly good to me. I have changed the comments and\n> > commit message in the attached. I am planning to push this tomorrow\n> > unless you or others have any comments on it.\n>\n> LGTM.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 13 Mar 2024 10:37:44 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A failure in t/038_save_logical_slots_shutdown.pl" } ]
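The more general check floated earlier in this thread — whether any decodable WAL at all exists between two LSNs — can be prototyped from plain SQL with the pg_walinspect extension before being promoted to a dedicated backend function. A rough sketch, assuming the extension is available and ignoring the corner cases where the two bounds coincide or nothing lies between them:

```sql
CREATE EXTENSION IF NOT EXISTS pg_walinspect;

-- List every WAL record between the slot's confirmed_flush LSN and the
-- current end of WAL; for the scenario discussed here, nothing in this
-- window should be decodable into changes for the subscriber.
SELECT w.start_lsn, w.resource_manager, w.description
FROM pg_replication_slots AS s,
     pg_get_wal_records_info(s.confirmed_flush_lsn,
                             pg_current_wal_insert_lsn()) AS w
WHERE s.slot_name = 'sub'
ORDER BY w.start_lsn;
```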
[ { "msg_contents": "When building current head on debian bullseye I get this compile warning:\n\nIn file included from ../src/backend/commands/dbcommands.c:20:\n../src/backend/commands/dbcommands.c: In function ‘createdb’:\n../src/include/postgres.h:104:9: warning: ‘src_hasloginevt’ may be\nused uninitialized in this function [-Wmaybe-uninitialized]\n 104 | return (Datum) (X ? 1 : 0);\n | ^~~~~~~~~~~~~~~~~~~\n../src/backend/commands/dbcommands.c:683:8: note: ‘src_hasloginevt’\nwas declared here\n 683 | bool src_hasloginevt;\n | ^~~~~~~~~~~~~~~\n\n\nI only get this when building with meson, not when building with\nautotools. AFAICT, I have the same config:\n\n./configure --enable-debug --enable-depend --with-python\n--enable-cassert --with-openssl --enable-tap-tests --with-icu\n\nvs\n\nmeson setup build -Ddebug=true -Dpython=true -Dcassert=true\n-Dssl=openssl -Dtap-test=true -Dicu=enabled -Dnls=disabled\n\n\nin both cases the compiler is:\ngcc (Debian 10.2.1-6) 10.2.1 20210110\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 10 Jan 2024 11:33:39 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Compile warnings in dbcommands.c building with meson" }, { "msg_contents": "Hi,\n\n> When building current head on debian bullseye I get this compile warning:\n>\n> In file included from ../src/backend/commands/dbcommands.c:20:\n> ../src/backend/commands/dbcommands.c: In function ‘createdb’:\n> ../src/include/postgres.h:104:9: warning: ‘src_hasloginevt’ may be\n> used uninitialized in this function [-Wmaybe-uninitialized]\n> 104 | return (Datum) (X ? 1 : 0);\n> | ^~~~~~~~~~~~~~~~~~~\n> ../src/backend/commands/dbcommands.c:683:8: note: ‘src_hasloginevt’\n> was declared here\n> 683 | bool src_hasloginevt;\n> | ^~~~~~~~~~~~~~~\n>\n>\n> I only get this when building with meson, not when building with\n> autotools. AFAICT, I have the same config:\n>\n> ./configure --enable-debug --enable-depend --with-python\n> --enable-cassert --with-openssl --enable-tap-tests --with-icu\n>\n> vs\n>\n> meson setup build -Ddebug=true -Dpython=true -Dcassert=true\n> -Dssl=openssl -Dtap-test=true -Dicu=enabled -Dnls=disabled\n>\n>\n> in both cases the compiler is:\n> gcc (Debian 10.2.1-6) 10.2.1 20210110\n\nSeems to me that the compiler is not smart enough to process:\n\n```\n if (!get_db_info(dbtemplate, ShareLock,\n &src_dboid, &src_owner, &src_encoding,\n &src_istemplate, &src_allowconn, &src_hasloginevt,\n &src_frozenxid, &src_minmxid, &src_deftablespace,\n &src_collate, &src_ctype, &src_iculocale,\n&src_icurules, &src_locprovider,\n &src_collversion))\n ereport(ERROR, ...\n```\n\nShould we just silence the warning like this - see attachment? 
I don't\nthink createdb() is called that often to worry about slight\nperformance change, if there is any.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 10 Jan 2024 15:15:57 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compile warnings in dbcommands.c building with meson" }, { "msg_contents": "On Wed, Jan 10, 2024 at 1:16 PM Aleksander Alekseev\n<[email protected]> wrote:\n>\n> Hi,\n>\n> > When building current head on debian bullseye I get this compile warning:\n> >\n> > In file included from ../src/backend/commands/dbcommands.c:20:\n> > ../src/backend/commands/dbcommands.c: In function ‘createdb’:\n> > ../src/include/postgres.h:104:9: warning: ‘src_hasloginevt’ may be\n> > used uninitialized in this function [-Wmaybe-uninitialized]\n> > 104 | return (Datum) (X ? 1 : 0);\n> > | ^~~~~~~~~~~~~~~~~~~\n> > ../src/backend/commands/dbcommands.c:683:8: note: ‘src_hasloginevt’\n> > was declared here\n> > 683 | bool src_hasloginevt;\n> > | ^~~~~~~~~~~~~~~\n> >\n> >\n> > I only get this when building with meson, not when building with\n> > autotools. AFAICT, I have the same config:\n> >\n> > ./configure --enable-debug --enable-depend --with-python\n> > --enable-cassert --with-openssl --enable-tap-tests --with-icu\n> >\n> > vs\n> >\n> > meson setup build -Ddebug=true -Dpython=true -Dcassert=true\n> > -Dssl=openssl -Dtap-test=true -Dicu=enabled -Dnls=disabled\n> >\n> >\n> > in both cases the compiler is:\n> > gcc (Debian 10.2.1-6) 10.2.1 20210110\n>\n> Seems to me that the compiler is not smart enough to process:\n>\n> ```\n> if (!get_db_info(dbtemplate, ShareLock,\n> &src_dboid, &src_owner, &src_encoding,\n> &src_istemplate, &src_allowconn, &src_hasloginevt,\n> &src_frozenxid, &src_minmxid, &src_deftablespace,\n> &src_collate, &src_ctype, &src_iculocale,\n> &src_icurules, &src_locprovider,\n> &src_collversion))\n> ereport(ERROR, ...\n> ```\n>\n> Should we just silence the warning like this - see attachment? I don't\n> think createdb() is called that often to worry about slight\n> performance change, if there is any.\n\nCertainly looks that way, but I'm curious as to why nobody else has seen this..\n\nThat said, it appears to be gone in current master. Even though\nnothing changed in that file. Must've been some transient effect,\nthat somehow didn't get blown away by doing a clean....\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Thu, 11 Jan 2024 18:00:42 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Compile warnings in dbcommands.c building with meson" }, { "msg_contents": "On Fri, Jan 12, 2024 at 1:05 AM Magnus Hagander <[email protected]> wrote:\n>\n> On Wed, Jan 10, 2024 at 1:16 PM Aleksander Alekseev\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > > When building current head on debian bullseye I get this compile warning:\n> > >\n> > > In file included from ../src/backend/commands/dbcommands.c:20:\n> > > ../src/backend/commands/dbcommands.c: In function ‘createdb’:\n> > > ../src/include/postgres.h:104:9: warning: ‘src_hasloginevt’ may be\n> > > used uninitialized in this function [-Wmaybe-uninitialized]\n> > > 104 | return (Datum) (X ? 
1 : 0);\n> > > | ^~~~~~~~~~~~~~~~~~~\n> > > ../src/backend/commands/dbcommands.c:683:8: note: ‘src_hasloginevt’\n> > > was declared here\n> > > 683 | bool src_hasloginevt;\n> > > | ^~~~~~~~~~~~~~~\n> > >\n> > >\n> > > I only get this when building with meson, not when building with\n> > > autotools. AFAICT, I have the same config:\n> > >\n> > > ./configure --enable-debug --enable-depend --with-python\n> > > --enable-cassert --with-openssl --enable-tap-tests --with-icu\n> > >\n> > > vs\n> > >\n> > > meson setup build -Ddebug=true -Dpython=true -Dcassert=true\n> > > -Dssl=openssl -Dtap-test=true -Dicu=enabled -Dnls=disabled\n> > >\n> > >\n> > > in both cases the compiler is:\n> > > gcc (Debian 10.2.1-6) 10.2.1 20210110\n> >\n> > Seems to me that the compiler is not smart enough to process:\n> >\n> > ```\n> > if (!get_db_info(dbtemplate, ShareLock,\n> > &src_dboid, &src_owner, &src_encoding,\n> > &src_istemplate, &src_allowconn, &src_hasloginevt,\n> > &src_frozenxid, &src_minmxid, &src_deftablespace,\n> > &src_collate, &src_ctype, &src_iculocale,\n> > &src_icurules, &src_locprovider,\n> > &src_collversion))\n> > ereport(ERROR, ...\n> > ```\n> >\n> > Should we just silence the warning like this - see attachment? I don't\n> > think createdb() is called that often to worry about slight\n> > performance change, if there is any.\n>\n> Certainly looks that way, but I'm curious as to why nobody else has seen this..\n>\n\nI saw it sometimes, sometimes not.\nNow I think the reason is:\nit will appear when you do `-Dbuildtype=release`.\n\nbut it will not occur when I do:\n`-Dbuildtype=debug`\n\nmy current meson version is 1.3.1, my ninja version is 1.10.1.\n\n\n", "msg_date": "Fri, 12 Jan 2024 13:19:34 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compile warnings in dbcommands.c building with meson" }, { "msg_contents": "On 2024-Jan-12, jian he wrote:\n\n> I saw it sometimes, sometimes not.\n> Now I think the reason is:\n> it will appear when you do `-Dbuildtype=release`.\n> \n> but it will not occur when I do:\n> `-Dbuildtype=debug`\n> \n> my current meson version is 1.3.1, my ninja version is 1.10.1.\n\nHmm, but why doesn't it happen for other arguments of get_db_info that\nhave pretty much identical code, say src_istemplate?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"But static content is just dynamic content that isn't moving!\"\n http://smylers.hates-software.com/2007/08/15/fe244d0c.html\n\n\n", "msg_date": "Fri, 12 Jan 2024 13:03:23 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compile warnings in dbcommands.c building with meson" }, { "msg_contents": "On Fri, Jan 12, 2024 at 8:03 PM Alvaro Herrera <[email protected]> wrote:\n>\n> On 2024-Jan-12, jian he wrote:\n>\n> > I saw it sometimes, sometimes not.\n> > Now I think the reason is:\n> > it will appear when you do `-Dbuildtype=release`.\n> >\n> > but it will not occur when I do:\n> > `-Dbuildtype=debug`\n> >\n> > my current meson version is 1.3.1, my ninja version is 1.10.1.\n>\n> Hmm, but why doesn't it happen for other arguments of get_db_info that\n> have pretty much identical code, say src_istemplate?\n>\n\ngit at commit 6780b79d5c580586ae6feb37b9c8b8bf33367886 (HEAD ->\nmaster, origin/master, origin/HEAD)\nthe minimum setup that will generate the warning:\n\nmeson setup --reconfigure ${BUILD} \\\n-Dprefix=${PG_PREFIX} \\\n-Dpgport=5462 \\\n-Dbuildtype=release\n\ngcc (Ubuntu 11.4.0-1ubuntu1~22.04) 
11.4.0\nCopyright (C) 2021 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n\n\n", "msg_date": "Fri, 12 Jan 2024 21:27:48 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compile warnings in dbcommands.c building with meson" }, { "msg_contents": "Hi. one more feedback.\n\nI tested the original repo setup, but it does not generate a warning\non my local setup.\nmeson setup --reconfigure ${BUILD} \\\n-Dprefix=${PG_PREFIX} \\\n-Dpgport=5463 \\\n-Dplpython=enabled \\\n-Dcassert=true \\\n-Dtap_tests=enabled \\\n-Dicu=enabled \\\n-Ddebug=true \\\n-Dnls=disabled\n\n it generate warning, when I add -Dbuildtype=release :\n\nmeson setup --reconfigure ${BUILD} \\\n-Dprefix=${PG_PREFIX} \\\n-Dpgport=5463 \\\n-Dplpython=enabled \\\n-Dcassert=true \\\n-Dtap_tests=enabled \\\n-Dicu=enabled \\\n-Dbuildtype=release \\\n-Ddebug=true \\\n-Dnls=disabled\n\nAfter applying the patch, the warning disappeared.\nso it fixed the problem.\n\n\n", "msg_date": "Mon, 15 Jan 2024 13:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compile warnings in dbcommands.c building with meson" } ]
[ { "msg_contents": "The meson build doesn't tell you what tool is missing when trying to\nbuild the docs (and you don't have it in the path.. sigh), it just\ntells you that something is missing. Attached is a small patch that at\nleast lists what's expected -- I'm not sure if this is the way to go,\nor if we should somehow list the individual tools that are failing\nhere?\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/", "msg_date": "Wed, 10 Jan 2024 12:28:57 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Slightly improved meson error for docs tools" }, { "msg_contents": "Hi,\n\n> least lists what's expected -- I'm not sure if this is the way to go,\n> or if we should somehow list the individual tools that are failing\n> here?\n\nPersonally I think the patch is OK.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 10 Jan 2024 15:05:26 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slightly improved meson error for docs tools" }, { "msg_contents": "On Wed, Jan 10, 2024 at 1:05 PM Aleksander Alekseev\n<[email protected]> wrote:\n>\n> Hi,\n>\n> > least lists what's expected -- I'm not sure if this is the way to go,\n> > or if we should somehow list the individual tools that are failing\n> > here?\n>\n> Personally I think the patch is OK.\n\nThanks. I've pushed this one for now, we can always adjust further\nlater if needed.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Thu, 11 Jan 2024 14:56:20 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slightly improved meson error for docs tools" } ]
[ { "msg_contents": "The attached patch adds a column \"authuser\" to pg_stat_activity which\ncontains the username of the externally authenticated user, being the\nsame value as the SYSTEM_USER keyword returns in a backend.\n\nThis overlaps with for example the values in pg_stat_gss, but it will\ninclude values for authentication methods that don't have their own\nview such as peer/ident. gss/ssl info will of course still be shown,\nit is just in more than one place.\n\nI was originally thinking this column should be \"sysuser\" to map to\nthe keyword, but since we already have \"usesysid\" as a column name in\npg_stat_activity I figured that could be confusing since it actually\nmeans something completely different. But happy to change that back if\npeople think that's better.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/", "msg_date": "Wed, 10 Jan 2024 12:46:34 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\nThanks for the patch.\n\n> The attached patch adds a column \"authuser\" to pg_stat_activity which\n> contains the username of the externally authenticated user, being the\n> same value as the SYSTEM_USER keyword returns in a backend.\n\nI believe what was meant is \"authname\", not \"authuser\".\n\n> This overlaps with for example the values in pg_stat_gss, but it will\n> include values for authentication methods that don't have their own\n> view such as peer/ident. gss/ssl info will of course still be shown,\n> it is just in more than one place.\n>\n> I was originally thinking this column should be \"sysuser\" to map to\n> the keyword, but since we already have \"usesysid\" as a column name in\n> pg_stat_activity I figured that could be confusing since it actually\n> means something completely different. But happy to change that back if\n> people think that's better.\n\nThis part of the documentation is wrong:\n\n```\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>authname</structfield> <type>name</type>\n+ </para>\n```\n\nActually the type is `text`:\n\n```\n=# \\d pg_stat_activity ;\n View \"pg_catalog.pg_stat_activity\"\n Column | Type | Collation | Nullable | Default\n------------------+--------------------------+-----------+----------+---------\n datid | oid | | |\n datname | name | | |\n pid | integer | | |\n leader_pid | integer | | |\n usesysid | oid | | |\n usename | name | | |\n authname | text | | |\n```\n\nIt hurts my sense of beauty that usename and authname are of different\ntypes. But if I'm the only one, maybe we can close our eyes on this.\nAlso I suspect that placing usename and authname in a close proximity\ncan be somewhat confusing. Perhaps adding authname as the last column\nof the view will solve both nitpicks?\n\n```\n+ /* Information about the authenticated user */\n+ char st_authuser[NAMEDATALEN];\n```\n\nWell, here it's called \"authuser\" and it looks like the intention was\nto use `name` datatype... 
I suggest using \"authname\" everywhere for\nconsistency.\n\nSince the patch affects pg_proc.dat I believe the commit message\nshould remind bumping the catalog version.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 10 Jan 2024 15:44:25 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On Wed, Jan 10, 2024 at 1:44 PM Aleksander Alekseev\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Thanks for the patch.\n>\n> > The attached patch adds a column \"authuser\" to pg_stat_activity which\n> > contains the username of the externally authenticated user, being the\n> > same value as the SYSTEM_USER keyword returns in a backend.\n>\n> I believe what was meant is \"authname\", not \"authuser\".\n>\n> > This overlaps with for example the values in pg_stat_gss, but it will\n> > include values for authentication methods that don't have their own\n> > view such as peer/ident. gss/ssl info will of course still be shown,\n> > it is just in more than one place.\n> >\n> > I was originally thinking this column should be \"sysuser\" to map to\n> > the keyword, but since we already have \"usesysid\" as a column name in\n> > pg_stat_activity I figured that could be confusing since it actually\n> > means something completely different. But happy to change that back if\n> > people think that's better.\n>\n> This part of the documentation is wrong:\n>\n> ```\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>authname</structfield> <type>name</type>\n> + </para>\n> ```\n>\n> Actually the type is `text`:\n>\n> ```\n> =# \\d pg_stat_activity ;\n> View \"pg_catalog.pg_stat_activity\"\n> Column | Type | Collation | Nullable | Default\n> ------------------+--------------------------+-----------+----------+---------\n> datid | oid | | |\n> datname | name | | |\n> pid | integer | | |\n> leader_pid | integer | | |\n> usesysid | oid | | |\n> usename | name | | |\n> authname | text | | |\n> ```\n>\n> It hurts my sense of beauty that usename and authname are of different\n> types. But if I'm the only one, maybe we can close our eyes on this.\n> Also I suspect that placing usename and authname in a close proximity\n> can be somewhat confusing. Perhaps adding authname as the last column\n> of the view will solve both nitpicks?\n\nBut it should probably actually be name, given that's the underlying\ndatatype. I kept changing it around and ended up half way in\nbetween...\n\n\n> ```\n> + /* Information about the authenticated user */\n> + char st_authuser[NAMEDATALEN];\n> ```\n>\n> Well, here it's called \"authuser\" and it looks like the intention was\n> to use `name` datatype... I suggest using \"authname\" everywhere for\n> consistency.\n\nYeah, I flipped back and forth a few times and clearly got stuck in\nthe middle. 
They should absolutely be the same everywhere - whatever\nname is used it should be consistent.\n\n\n> Since the patch affects pg_proc.dat I believe the commit message\n> should remind bumping the catalog version.\n\nYes.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/", "msg_date": "Wed, 10 Jan 2024 14:08:03 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n\n> On Wed, Jan 10, 2024 at 1:44 PM Aleksander Alekseev\n> <[email protected]> wrote:\n>>\n>> It hurts my sense of beauty that usename and authname are of different\n>> types. But if I'm the only one, maybe we can close our eyes on this.\n>> Also I suspect that placing usename and authname in a close proximity\n>> can be somewhat confusing. Perhaps adding authname as the last column\n>> of the view will solve both nitpicks?\n>\n> But it should probably actually be name, given that's the underlying\n> datatype. I kept changing it around and ended up half way in\n> between...\n\nhttps://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-INFO-SESSION-TABLE\n(and pg_typeof(system_user)) says it's text. Which makes sense, since\nit can easily be longer than 63 bytes, e.g. in the case of a TLS client\ncertificate DN.\n\n- ilmari\n\n\n", "msg_date": "Wed, 10 Jan 2024 13:27:50 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On Wed, Jan 10, 2024 at 2:27 PM Dagfinn Ilmari Mannsåker\n<[email protected]> wrote:\n>\n> Magnus Hagander <[email protected]> writes:\n>\n> > On Wed, Jan 10, 2024 at 1:44 PM Aleksander Alekseev\n> > <[email protected]> wrote:\n> >>\n> >> It hurts my sense of beauty that usename and authname are of different\n> >> types. But if I'm the only one, maybe we can close our eyes on this.\n> >> Also I suspect that placing usename and authname in a close proximity\n> >> can be somewhat confusing. Perhaps adding authname as the last column\n> >> of the view will solve both nitpicks?\n> >\n> > But it should probably actually be name, given that's the underlying\n> > datatype. I kept changing it around and ended up half way in\n> > between...\n>\n> https://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-INFO-SESSION-TABLE\n> (and pg_typeof(system_user)) says it's text. Which makes sense, since\n> it can easily be longer than 63 bytes, e.g. in the case of a TLS client\n> certificate DN.\n\nWe already truncate all those to NAMEDATALEN in pg_stat_ssl for\nexample. so I think the truncation part of those should be OK. 
We'll\ntruncate \"a little bit more\" since we also have the 'cert:', but not\nsignificantly so I think.\n\nbut yeah, conceptually it should probably be text because name is\nsupposedly a *postgres identifier*, which this is not.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 10 Jan 2024 14:41:05 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn Wed, Jan 10, 2024 at 02:08:03PM +0100, Magnus Hagander wrote:\n> On Wed, Jan 10, 2024 at 1:44 PM Aleksander Alekseev\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > Thanks for the patch.\n\n+1\n\n> > > This overlaps with for example the values in pg_stat_gss, but it will\n> > > include values for authentication methods that don't have their own\n> > > view such as peer/ident. gss/ssl info will of course still be shown,\n> > > it is just in more than one place.\n\nYeah, I think that's a good idea.\n\n> > It hurts my sense of beauty that usename and authname are of different\n> > types. But if I'm the only one, maybe we can close our eyes on this.\n> > Also I suspect that placing usename and authname in a close proximity\n> > can be somewhat confusing. Perhaps adding authname as the last column\n> > of the view will solve both nitpicks?\n> \n> But it should probably actually be name, given that's the underlying\n> datatype. I kept changing it around and ended up half way in\n> between...\n> \n> \n> > ```\n> > + /* Information about the authenticated user */\n> > + char st_authuser[NAMEDATALEN];\n> > ```\n> >\n> > Well, here it's called \"authuser\" and it looks like the intention was\n> > to use `name` datatype... I suggest using \"authname\" everywhere for\n> > consistency.\n\nI think it depends what we want the new field to reflect. If it is the exact\nsame thing as the SYSTEM_USER then I think it has to be text (as the SYSTEM_USER\nis made of \"auth_method:identity\"). Now if we want it to be \"only\" the identity\npart of it, then the `name` datatype would be fine. I'd vote for the exact same\nthing as the SYSTEM_USER (means auth_method:identity).\n\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>authname</structfield> <type>name</type>\n> + </para>\n> + <para>\n> + The authentication method and identity (if any) that the user\n> + used to log in. It contains the same value as\n> + <xref linkend=\"system-user\" /> returns in the backend.\n> + </para></entry>\n> + </row>\n\nI'm fine with auth_method:identity.\n\n> + S.authname,\n\nWhat about using system_user as the field name? 
(because if we keep\nauth_method:identity it's not really the authname anyway).\n\nAlso, what about adding a test in say 003_peer.pl to check that the value matches\nthe SYSTEM_USER one?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 Jan 2024 13:56:13 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On Wed, Jan 10, 2024 at 2:56 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Wed, Jan 10, 2024 at 02:08:03PM +0100, Magnus Hagander wrote:\n> > On Wed, Jan 10, 2024 at 1:44 PM Aleksander Alekseev\n> > <[email protected]> wrote:\n> > >\n> > > Hi,\n> > >\n> > > Thanks for the patch.\n>\n> +1\n>\n> > > > This overlaps with for example the values in pg_stat_gss, but it will\n> > > > include values for authentication methods that don't have their own\n> > > > view such as peer/ident. gss/ssl info will of course still be shown,\n> > > > it is just in more than one place.\n>\n> Yeah, I think that's a good idea.\n>\n> > > It hurts my sense of beauty that usename and authname are of different\n> > > types. But if I'm the only one, maybe we can close our eyes on this.\n> > > Also I suspect that placing usename and authname in a close proximity\n> > > can be somewhat confusing. Perhaps adding authname as the last column\n> > > of the view will solve both nitpicks?\n> >\n> > But it should probably actually be name, given that's the underlying\n> > datatype. I kept changing it around and ended up half way in\n> > between...\n> >\n> >\n> > > ```\n> > > + /* Information about the authenticated user */\n> > > + char st_authuser[NAMEDATALEN];\n> > > ```\n> > >\n> > > Well, here it's called \"authuser\" and it looks like the intention was\n> > > to use `name` datatype... I suggest using \"authname\" everywhere for\n> > > consistency.\n>\n> I think it depends what we want the new field to reflect. If it is the exact\n> same thing as the SYSTEM_USER then I think it has to be text (as the SYSTEM_USER\n> is made of \"auth_method:identity\"). Now if we want it to be \"only\" the identity\n> part of it, then the `name` datatype would be fine. I'd vote for the exact same\n> thing as the SYSTEM_USER (means auth_method:identity).\n\nI definitely think it should be the same. If it's not exactly the\nsame, then it should be *two* columns, one with auth method and one\nwith the name.\n\nAnd thinking more about it maybe that's cleaner, because that makes it\neasier to do things like filter based on auth method?\n\n\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>authname</structfield> <type>name</type>\n> > + </para>\n> > + <para>\n> > + The authentication method and identity (if any) that the user\n> > + used to log in. It contains the same value as\n> > + <xref linkend=\"system-user\" /> returns in the backend.\n> > + </para></entry>\n> > + </row>\n>\n> I'm fine with auth_method:identity.\n>\n> > + S.authname,\n>\n> What about using system_user as the field name? 
(because if we keep\n> auth_method:identity it's not really the authname anyway).\n\nI was worried system_user or sysuser would both be confusing with the\nfact that we have usesysid -- which would reference a *different*\nsys...\n\n\n> Also, what about adding a test in say 003_peer.pl to check that the value matches\n> the SYSTEM_USER one?\n\nYeah, that's a good idea I think. I quickly looked over for tests and\ncouldn't really find any for pg_stat_activity, btu we can definitely\npiggyback them in one or more of the auth tests.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 10 Jan 2024 14:59:42 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn Wed, Jan 10, 2024 at 02:59:42PM +0100, Magnus Hagander wrote:\n> On Wed, Jan 10, 2024 at 2:56 PM Bertrand Drouvot\n> I definitely think it should be the same. If it's not exactly the\n> same, then it should be *two* columns, one with auth method and one\n> with the name.\n> \n> And thinking more about it maybe that's cleaner, because that makes it\n> easier to do things like filter based on auth method?\n\nYeah, that's sounds even better.\n\n> \n> > > + <row>\n> > > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > > + <structfield>authname</structfield> <type>name</type>\n> > > + </para>\n> > > + <para>\n> > > + The authentication method and identity (if any) that the user\n> > > + used to log in. It contains the same value as\n> > > + <xref linkend=\"system-user\" /> returns in the backend.\n> > > + </para></entry>\n> > > + </row>\n> >\n> > I'm fine with auth_method:identity.\n> >\n> > > + S.authname,\n> >\n> > What about using system_user as the field name? (because if we keep\n> > auth_method:identity it's not really the authname anyway).\n> \n> I was worried system_user or sysuser would both be confusing with the\n> fact that we have usesysid -- which would reference a *different*\n> sys...\n\nIf we go the 2 fields way, then what about auth_identity and auth_method then?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 10 Jan 2024 14:12:28 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On 1/10/24 08:59, Magnus Hagander wrote:\n> On Wed, Jan 10, 2024 at 2:56 PM Bertrand Drouvot\n>> I think it depends what we want the new field to reflect. If it is the exact\n>> same thing as the SYSTEM_USER then I think it has to be text (as the SYSTEM_USER\n>> is made of \"auth_method:identity\"). Now if we want it to be \"only\" the identity\n>> part of it, then the `name` datatype would be fine. I'd vote for the exact same\n>> thing as the SYSTEM_USER (means auth_method:identity).\n> \n> I definitely think it should be the same. 
If it's not exactly the\n> same, then it should be *two* columns, one with auth method and one\n> with the name.\n> \n> And thinking more about it maybe that's cleaner, because that makes it\n> easier to do things like filter based on auth method?\n\n+1 for the overall feature and +1 for two columns\n\n>> > + <row>\n>> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n>> > + <structfield>authname</structfield> <type>name</type>\n>> > + </para>\n>> > + <para>\n>> > + The authentication method and identity (if any) that the user\n>> > + used to log in. It contains the same value as\n>> > + <xref linkend=\"system-user\" /> returns in the backend.\n>> > + </para></entry>\n>> > + </row>\n>>\n>> I'm fine with auth_method:identity.\n>>\n>> > + S.authname,\n>>\n>> What about using system_user as the field name? (because if we keep\n>> auth_method:identity it's not really the authname anyway).\n> \n> I was worried system_user or sysuser would both be confusing with the\n> fact that we have usesysid -- which would reference a *different*\n> sys...\n\n\nI think if it is exactly \"system_user\" it would be pretty clearly a \nmatch for SYSTEM_USER\n\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Wed, 10 Jan 2024 09:17:13 -0500", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On Wed, Jan 10, 2024 at 3:12 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Wed, Jan 10, 2024 at 02:59:42PM +0100, Magnus Hagander wrote:\n> > On Wed, Jan 10, 2024 at 2:56 PM Bertrand Drouvot\n> > I definitely think it should be the same. If it's not exactly the\n> > same, then it should be *two* columns, one with auth method and one\n> > with the name.\n> >\n> > And thinking more about it maybe that's cleaner, because that makes it\n> > easier to do things like filter based on auth method?\n>\n> Yeah, that's sounds even better.\n>\n> >\n> > > > + <row>\n> > > > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > > > + <structfield>authname</structfield> <type>name</type>\n> > > > + </para>\n> > > > + <para>\n> > > > + The authentication method and identity (if any) that the user\n> > > > + used to log in. It contains the same value as\n> > > > + <xref linkend=\"system-user\" /> returns in the backend.\n> > > > + </para></entry>\n> > > > + </row>\n> > >\n> > > I'm fine with auth_method:identity.\n> > >\n> > > > + S.authname,\n> > >\n> > > What about using system_user as the field name? 
(because if we keep\n> > > auth_method:identity it's not really the authname anyway).\n> >\n> > I was worried system_user or sysuser would both be confusing with the\n> > fact that we have usesysid -- which would reference a *different*\n> > sys...\n>\n> If we go the 2 fields way, then what about auth_identity and auth_method then?\n\n\nHere is an updated patch based on this idea.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/", "msg_date": "Thu, 11 Jan 2024 14:24:58 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn Thu, Jan 11, 2024 at 02:24:58PM +0100, Magnus Hagander wrote:\n> On Wed, Jan 10, 2024 at 3:12 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > If we go the 2 fields way, then what about auth_identity and auth_method then?\n> \n> \n> Here is an updated patch based on this idea.\n\nThanks!\n\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>auth_method</structfield> <type>text</type>\n+ </para>\n+ <para>\n+ The authentication method used for authenticating the connection, or\n+ NULL for background processes.\n+ </para></entry>\n\nI'm wondering if it would make sense to populate it for parallel workers too.\nI think it's doable thanks to d951052, but I'm not sure it's worth it (one could\njoin based on the leader_pid though). OTOH that would be consistent with\nhow the SYSTEM_USER behaves with parallel workers (it's populated).\n\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>auth_identity</structfield> <type>text</type>\n+ </para>\n+ <para>\n+ The identity (if any) that the user presented during the authentication\n+ cycle before they were assigned a database role. 
Contains the same\n+ value as <xref linkend=\"system-user\" />\n\nSame remark regarding the parallel workers case +:\n\n- Would it be better to use the `name` datatype for auth_identity?\n- what about \"Contains the same value as the identity part in <xref linkend=\"system-user\" />\"?\n\n+ /*\n+ * Trust doesn't set_authn_id(), but we still need to store the\n+ * auth_method\n+ */\n+ MyClientConnectionInfo.auth_method = uaTrust;\n\n+1, I think it is useful here to provide \"trust\" and not a NULL value in the\ncontext of this patch.\n\n+# pg_stat_activity shold contain trust and empty string for trust auth\n\ntypo: s/shold/should/\n\n+# Users with md5 auth should show both auth method and name in pg_stat_activity\n\nwhat about \"show both auth method and identity\"?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 11 Jan 2024 16:55:21 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On Thu, Jan 11, 2024 at 5:55 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Thu, Jan 11, 2024 at 02:24:58PM +0100, Magnus Hagander wrote:\n> > On Wed, Jan 10, 2024 at 3:12 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > If we go the 2 fields way, then what about auth_identity and auth_method then?\n> >\n> >\n> > Here is an updated patch based on this idea.\n>\n> Thanks!\n>\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>auth_method</structfield> <type>text</type>\n> + </para>\n> + <para>\n> + The authentication method used for authenticating the connection, or\n> + NULL for background processes.\n> + </para></entry>\n>\n> I'm wondering if it would make sense to populate it for parallel workers too.\n> I think it's doable thanks to d951052, but I'm not sure it's worth it (one could\n> join based on the leader_pid though). OTOH that would be consistent with\n> how the SYSTEM_USER behaves with parallel workers (it's populated).\n\nI guess one could conceptually argue that \"authentication happens int\nhe leader\". But we do populate it with the other user records, and\nit'd be weird if this one was excluded.\n\nThe tricky thing is that pgstat_bestart() is called long before we\ndeserialize the data. But from what I can tell it should be safe to\nchange it per the attached? That should be AFAICT an extremely short\nwindow of time longer before we report it, not enough to matter.\n\n\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>auth_identity</structfield> <type>text</type>\n> + </para>\n> + <para>\n> + The identity (if any) that the user presented during the authentication\n> + cycle before they were assigned a database role. Contains the same\n> + value as <xref linkend=\"system-user\" />\n>\n> Same remark regarding the parallel workers case +:\n>\n> - Would it be better to use the `name` datatype for auth_identity?\n\nI've been going back and forth. And I think my conclusion is that it's\nnot a postgres identifier, so it shouldn't be. 
See the earlier\ndiscussion, and for example that that's what we do for cert names when\nSSL is used.\n\n> - what about \"Contains the same value as the identity part in <xref linkend=\"system-user\" />\"?\n>\n> + /*\n> + * Trust doesn't set_authn_id(), but we still need to store the\n> + * auth_method\n> + */\n> + MyClientConnectionInfo.auth_method = uaTrust;\n>\n> +1, I think it is useful here to provide \"trust\" and not a NULL value in the\n> context of this patch.\n\nYeah, that's probably \"independently correct\", but actually useful here.\n\n\n> +# pg_stat_activity shold contain trust and empty string for trust auth\n>\n> typo: s/shold/should/\n\nOps.\n\n\n> +# Users with md5 auth should show both auth method and name in pg_stat_activity\n>\n> what about \"show both auth method and identity\"?\n\nGood spot, yeah, I changed it over to identity everywhere else so it\nshould be here as well.\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/", "msg_date": "Fri, 12 Jan 2024 17:16:53 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 12, 2024 at 05:16:53PM +0100, Magnus Hagander wrote:\n> On Thu, Jan 11, 2024 at 5:55 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > I'm wondering if it would make sense to populate it for parallel workers too.\n> > I think it's doable thanks to d951052, but I'm not sure it's worth it (one could\n> > join based on the leader_pid though). OTOH that would be consistent with\n> > how the SYSTEM_USER behaves with parallel workers (it's populated).\n> \n> I guess one could conceptually argue that \"authentication happens int\n> he leader\". But we do populate it with the other user records, and\n> it'd be weird if this one was excluded.\n> \n> The tricky thing is that pgstat_bestart() is called long before we\n> deserialize the data. But from what I can tell it should be safe to\n> change it per the attached? That should be AFAICT an extremely short\n> window of time longer before we report it, not enough to matter.\n\nThanks! Yeah, that seems reasonable to me. Also, I think we should remove the\n\"MyProcPort\" test here then (looking at v3):\n\n+ if (MyProcPort && MyClientConnectionInfo.authn_id)\n+ strlcpy(lbeentry.st_auth_identity, MyClientConnectionInfo.authn_id, NAMEDATALEN);\n+ else\n+ MemSet(&lbeentry.st_auth_identity, 0, sizeof(lbeentry.st_auth_identity));\n\nto get the st_auth_identity propagated to the parallel workers.\n\n> >\n> > Same remark regarding the parallel workers case +:\n> >\n> > - Would it be better to use the `name` datatype for auth_identity?\n> \n> I've been going back and forth. And I think my conclusion is that it's\n> not a postgres identifier, so it shouldn't be. 
See the earlier\n> discussion, and for example that that's what we do for cert names when\n> SSL is used.\n\nYeah, Okay let's keep text then.\n\n> \n> > - what about \"Contains the same value as the identity part in <xref linkend=\"system-user\" />\"?\n\nNot sure, but looks like you missed this comment?\n\n> >\n> > + /*\n> > + * Trust doesn't set_authn_id(), but we still need to store the\n> > + * auth_method\n> > + */\n> > + MyClientConnectionInfo.auth_method = uaTrust;\n> >\n> > +1, I think it is useful here to provide \"trust\" and not a NULL value in the\n> > context of this patch.\n> \n> Yeah, that's probably \"independently correct\", but actually useful here.\n\n+1\n\n> > +# Users with md5 auth should show both auth method and name in pg_stat_activity\n> >\n> > what about \"show both auth method and identity\"?\n> \n> Good spot, yeah, I changed it over to identity everywhere else so it\n> should be here as well.\n\nDid you forget to share the new revision (aka v4)? I can only see the\n\"reorder_parallel_worker_bestart.patch\" attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 15 Jan 2024 10:17:34 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On Mon, Jan 15, 2024 at 11:17 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Fri, Jan 12, 2024 at 05:16:53PM +0100, Magnus Hagander wrote:\n> > On Thu, Jan 11, 2024 at 5:55 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > I'm wondering if it would make sense to populate it for parallel workers too.\n> > > I think it's doable thanks to d951052, but I'm not sure it's worth it (one could\n> > > join based on the leader_pid though). OTOH that would be consistent with\n> > > how the SYSTEM_USER behaves with parallel workers (it's populated).\n> >\n> > I guess one could conceptually argue that \"authentication happens int\n> > he leader\". But we do populate it with the other user records, and\n> > it'd be weird if this one was excluded.\n> >\n> > The tricky thing is that pgstat_bestart() is called long before we\n> > deserialize the data. But from what I can tell it should be safe to\n> > change it per the attached? That should be AFAICT an extremely short\n> > window of time longer before we report it, not enough to matter.\n>\n> Thanks! Yeah, that seems reasonable to me. Also, I think we should remove the\n> \"MyProcPort\" test here then (looking at v3):\n>\n> + if (MyProcPort && MyClientConnectionInfo.authn_id)\n> + strlcpy(lbeentry.st_auth_identity, MyClientConnectionInfo.authn_id, NAMEDATALEN);\n> + else\n> + MemSet(&lbeentry.st_auth_identity, 0, sizeof(lbeentry.st_auth_identity));\n>\n> to get the st_auth_identity propagated to the parallel workers.\n\nYup, I had done that in v4 which as you noted further down, I forgot to post.\n\n\n> > > - what about \"Contains the same value as the identity part in <xref linkend=\"system-user\" />\"?\n>\n> Not sure, but looks like you missed this comment?\n\nI did. Agree with your comment, and updated now.\n\n\n> > > +# Users with md5 auth should show both auth method and name in pg_stat_activity\n> > >\n> > > what about \"show both auth method and identity\"?\n> >\n> > Good spot, yeah, I changed it over to identity everywhere else so it\n> > should be here as well.\n>\n> Did you forget to share the new revision (aka v4)? 
I can only see the\n> \"reorder_parallel_worker_bestart.patch\" attached.\n\nI did. Here it is, and also including that suggested docs fix as well\nas a rebase on current master.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/", "msg_date": "Thu, 18 Jan 2024 16:01:33 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn Thu, Jan 18, 2024 at 04:01:33PM +0100, Magnus Hagander wrote:\n> On Mon, Jan 15, 2024 at 11:17 AM Bertrand Drouvot\n> > Did you forget to share the new revision (aka v4)? I can only see the\n> > \"reorder_parallel_worker_bestart.patch\" attached.\n> \n> I did. Here it is, and also including that suggested docs fix as well\n> as a rebase on current master.\n> \n\nThanks!\n\nJust a few comments:\n\n1 ===\n\n+ The authentication method used for authenticating the connection, or\n+ NULL for background processes.\n\nWhat about? \"NULL for background processes (except for parallel workers which\ninherit this information from their leader process)\"\n\n2 ===\n\n+ cycle before they were assigned a database role. Contains the same\n+ value as the identity part in <xref linkend=\"system-user\" />, or NULL\n+ for background processes.\n\nSame comment about parallel workers.\n\n3 ===\n\n+# pg_stat_activity should contain trust and empty string for trust auth\n+$res = $node->safe_psql(\n+ 'postgres',\n+ \"SELECT auth_method, auth_identity='' FROM pg_stat_activity WHERE pid=pg_backend_pid()\",\n+ connstr => \"user=scram_role\");\n+is($res, 'trust|t', 'Users with trust authentication should show correctly in pg_stat_activity');\n+\n+# pg_stat_activity should contain NULL for auth of background processes\n+# (test is a bit out of place here, but this is where the other pg_stat_activity.auth* tests are)\n+$res = $node->safe_psql(\n+ 'postgres',\n+ \"SELECT auth_method IS NULL, auth_identity IS NULL FROM pg_stat_activity WHERE backend_type='checkpointer'\",\n+);\n+is($res, 't|t', 'Background processes should show NULL for auth in pg_stat_activity');\n\nWhat do you think about testing the parallel workers cases too? (I'm not 100%\nsure it's worth the extra complexity though).\n\nApart from those 3, it looks good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 19 Jan 2024 06:20:00 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\n> > Did you forget to share the new revision (aka v4)? I can only see the\n> > \"reorder_parallel_worker_bestart.patch\" attached.\n>\n> I did. Here it is, and also including that suggested docs fix as well\n> as a rebase on current master.\n\n```\n+ lbeentry.st_auth_method = MyClientConnectionInfo.auth_method;\n+ if (MyClientConnectionInfo.authn_id)\n+ strlcpy(lbeentry.st_auth_identity,\nMyClientConnectionInfo.authn_id, NAMEDATALEN);\n+ else\n+ MemSet(&lbeentry.st_auth_identity, 0,\nsizeof(lbeentry.st_auth_identity));\n```\n\nI believe using sizeof(lbeentry.st_auth_identity) instead of\nNAMEDATALEN is generally considered a better practice.\n\nCalling MemSet for a CString seems to be an overkill. 
I suggest\nsetting lbeentry.st_auth_identity[0] to zero.\n\nExcept for these nitpicks v4 LGTM.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 19 Jan 2024 14:33:08 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn Thu, Jan 18, 2024 at 11:01 PM Magnus Hagander <[email protected]> wrote:\n>\n> I did. Here it is, and also including that suggested docs fix as well\n> as a rebase on current master.\n\n+ if (MyClientConnectionInfo.authn_id)\n+ strlcpy(lbeentry.st_auth_identity,\nMyClientConnectionInfo.authn_id, NAMEDATALEN);\n+ else\n+ MemSet(&lbeentry.st_auth_identity, 0,\nsizeof(lbeentry.st_auth_identity));\n\nShould we use pg_mbcliplen() here? I don't think there's any strong\nguarantee that no multibyte character can be used. I also agree with\nthe nearby comment about MemSet being overkill.\n\n+ value as the identity part in <xref linkend=\"system-user\" />, or NULL\nI was looking at\nhttps://www.postgresql.org/docs/current/auth-username-maps.html and\nnoticed that this page is switching between system-user and\nsystem-username. Should we clean that up while at it?\n\n\n", "msg_date": "Fri, 19 Jan 2024 20:43:05 +0800", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On Fri, Jan 19, 2024 at 7:20 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Thu, Jan 18, 2024 at 04:01:33PM +0100, Magnus Hagander wrote:\n> > On Mon, Jan 15, 2024 at 11:17 AM Bertrand Drouvot\n> > > Did you forget to share the new revision (aka v4)? I can only see the\n> > > \"reorder_parallel_worker_bestart.patch\" attached.\n> >\n> > I did. Here it is, and also including that suggested docs fix as well\n> > as a rebase on current master.\n> >\n>\n> Thanks!\n>\n> Just a few comments:\n>\n> 1 ===\n>\n> + The authentication method used for authenticating the connection, or\n> + NULL for background processes.\n>\n> What about? \"NULL for background processes (except for parallel workers which\n> inherit this information from their leader process)\"\n\nUgh. That doesn't read very well at all to me. Maybe just \"NULL for\nbackground processes without a user\"?\n\n\n> 2 ===\n>\n> + cycle before they were assigned a database role. Contains the same\n> + value as the identity part in <xref linkend=\"system-user\" />, or NULL\n> + for background processes.\n>\n> Same comment about parallel workers.\n>\n> 3 ===\n>\n> +# pg_stat_activity should contain trust and empty string for trust auth\n> +$res = $node->safe_psql(\n> + 'postgres',\n> + \"SELECT auth_method, auth_identity='' FROM pg_stat_activity WHERE pid=pg_backend_pid()\",\n> + connstr => \"user=scram_role\");\n> +is($res, 'trust|t', 'Users with trust authentication should show correctly in pg_stat_activity');\n> +\n> +# pg_stat_activity should contain NULL for auth of background processes\n> +# (test is a bit out of place here, but this is where the other pg_stat_activity.auth* tests are)\n> +$res = $node->safe_psql(\n> + 'postgres',\n> + \"SELECT auth_method IS NULL, auth_identity IS NULL FROM pg_stat_activity WHERE backend_type='checkpointer'\",\n> +);\n> +is($res, 't|t', 'Background processes should show NULL for auth in pg_stat_activity');\n>\n> What do you think about testing the parallel workers cases too? 
(I'm not 100%\n> sure it's worth the extra complexity though).\n\nI'm leaning towards not worth that.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Fri, 16 Feb 2024 20:17:41 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On Fri, Jan 19, 2024 at 12:33 PM Aleksander Alekseev\n<[email protected]> wrote:\n>\n> Hi,\n>\n> > > Did you forget to share the new revision (aka v4)? I can only see the\n> > > \"reorder_parallel_worker_bestart.patch\" attached.\n> >\n> > I did. Here it is, and also including that suggested docs fix as well\n> > as a rebase on current master.\n>\n> ```\n> + lbeentry.st_auth_method = MyClientConnectionInfo.auth_method;\n> + if (MyClientConnectionInfo.authn_id)\n> + strlcpy(lbeentry.st_auth_identity,\n> MyClientConnectionInfo.authn_id, NAMEDATALEN);\n> + else\n> + MemSet(&lbeentry.st_auth_identity, 0,\n> sizeof(lbeentry.st_auth_identity));\n> ```\n>\n> I believe using sizeof(lbeentry.st_auth_identity) instead of\n> NAMEDATALEN is generally considered a better practice.\n\nWe use the NAMEDATALEN method in the rest of the function, so I did it\nthe same way for consistency. I think if we want to change that, we\nshould change the whole function at once to keep it consistent.\n\n\n> Calling MemSet for a CString seems to be an overkill. I suggest\n> setting lbeentry.st_auth_identity[0] to zero.\n\nFair enough. Will make that change.\n\n//Magnus\n\n\n", "msg_date": "Fri, 16 Feb 2024 20:23:22 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On Fri, Jan 19, 2024 at 1:43 PM Julien Rouhaud <[email protected]> wrote:\n>\n> Hi,\n>\n> On Thu, Jan 18, 2024 at 11:01 PM Magnus Hagander <[email protected]> wrote:\n> >\n> > I did. Here it is, and also including that suggested docs fix as well\n> > as a rebase on current master.\n>\n> + if (MyClientConnectionInfo.authn_id)\n> + strlcpy(lbeentry.st_auth_identity,\n> MyClientConnectionInfo.authn_id, NAMEDATALEN);\n> + else\n> + MemSet(&lbeentry.st_auth_identity, 0,\n> sizeof(lbeentry.st_auth_identity));\n>\n> Should we use pg_mbcliplen() here? I don't think there's any strong\n> guarantee that no multibyte character can be used. I also agree with\n> the nearby comment about MemSet being overkill.\n\nHm. Good question. I don't think there is such a guarantee, no. So\nsomething like attached v5?\n\nAlso, wouldn't that problem already exist a few lines down for the SSL parts?\n\n> + value as the identity part in <xref linkend=\"system-user\" />, or NULL\n> I was looking at\n> https://www.postgresql.org/docs/current/auth-username-maps.html and\n> noticed that this page is switching between system-user and\n> system-username. 
Should we clean that up while at it?\n\nSeems like something we should clean up yes, but not as part of this patch.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/", "msg_date": "Fri, 16 Feb 2024 20:39:26 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 2024-01-10 12:46:34 +0100, Magnus Hagander wrote:\n> The attached patch adds a column \"authuser\" to pg_stat_activity which\n> contains the username of the externally authenticated user, being the\n> same value as the SYSTEM_USER keyword returns in a backend.\n\nI continue to think that it's a bad idea to make pg_stat_activity ever wider\nwith columns that do not actually describe properties that change across the\ncourse of a session. Yes, there's the argument that that ship has sailed, but\nI don't think that's a good reason to continue ever further down that road.\n\nIt's not just a usability issue, it also makes it more expensive to query\npg_stat_activity. This is of course more pronounced with textual columns than\nwith integer ones.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 16 Feb 2024 11:41:55 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 2024-01-12 17:16:53 +0100, Magnus Hagander wrote:\n> On Thu, Jan 11, 2024 at 5:55 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> > On Thu, Jan 11, 2024 at 02:24:58PM +0100, Magnus Hagander wrote:\n> > > On Wed, Jan 10, 2024 at 3:12 PM Bertrand Drouvot\n> > > <[email protected]> wrote:\n> > > >\n> > > > If we go the 2 fields way, then what about auth_identity and auth_method then?\n> > >\n> > >\n> > > Here is an updated patch based on this idea.\n> >\n> > Thanks!\n> >\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>auth_method</structfield> <type>text</type>\n> > + </para>\n> > + <para>\n> > + The authentication method used for authenticating the connection, or\n> > + NULL for background processes.\n> > + </para></entry>\n> >\n> > I'm wondering if it would make sense to populate it for parallel workers too.\n> > I think it's doable thanks to d951052, but I'm not sure it's worth it (one could\n> > join based on the leader_pid though). OTOH that would be consistent with\n> > how the SYSTEM_USER behaves with parallel workers (it's populated).\n> \n> I guess one could conceptually argue that \"authentication happens int\n> he leader\". But we do populate it with the other user records, and\n> it'd be weird if this one was excluded.\n> \n> The tricky thing is that pgstat_bestart() is called long before we\n> deserialize the data. But from what I can tell it should be safe to\n> change it per the attached? That should be AFAICT an extremely short\n> window of time longer before we report it, not enough to matter.\n\nI don't like that one bit. The whole subsystem initialization dance already is\nquite complicated, particularly for pgstat, we shouldn't make it more\ncomplicated. 
Especially not when the initialization is moved quite a bit away\nfrom all the other calls.\n\nBesides just that, I also don't think delaying visibility of the worker in\npg_stat_activity until parallel worker initialization has completed is good,\nthat's not all cheap work.\n\n\nMaybe I am missing something, but why aren't we just getting the value from\nthe leader's entry, instead of copying it?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 16 Feb 2024 11:55:39 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On Fri, Feb 16, 2024 at 8:41 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2024-01-10 12:46:34 +0100, Magnus Hagander wrote:\n> > The attached patch adds a column \"authuser\" to pg_stat_activity which\n> > contains the username of the externally authenticated user, being the\n> > same value as the SYSTEM_USER keyword returns in a backend.\n>\n> I continue to think that it's a bad idea to make pg_stat_activity ever wider\n> with columns that do not actually describe properties that change across the\n> course of a session. Yes, there's the argument that that ship has sailed, but\n> I don't think that's a good reason to continue ever further down that road.\n>\n> It's not just a usability issue, it also makes it more expensive to query\n> pg_stat_activity. This is of course more pronounced with textual columns than\n> with integer ones.\n\nThat's a fair point, but I do think that has in most ways already sailed, yes.\n\nI mean, we could split it into more than one view. But adding a new\nview for every new thing we want to show is also not very good from\neither a usability or performance perspective. So where would we put\nit?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Fri, 16 Feb 2024 20:57:59 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 2024-02-16 20:57:59 +0100, Magnus Hagander wrote:\n> On Fri, Feb 16, 2024 at 8:41 PM Andres Freund <[email protected]> wrote:\n> > On 2024-01-10 12:46:34 +0100, Magnus Hagander wrote:\n> > > The attached patch adds a column \"authuser\" to pg_stat_activity which\n> > > contains the username of the externally authenticated user, being the\n> > > same value as the SYSTEM_USER keyword returns in a backend.\n> >\n> > I continue to think that it's a bad idea to make pg_stat_activity ever wider\n> > with columns that do not actually describe properties that change across the\n> > course of a session. Yes, there's the argument that that ship has sailed, but\n> > I don't think that's a good reason to continue ever further down that road.\n> >\n> > It's not just a usability issue, it also makes it more expensive to query\n> > pg_stat_activity. This is of course more pronounced with textual columns than\n> > with integer ones.\n> \n> That's a fair point, but I do think that has in most ways already sailed, yes.\n> \n> I mean, we could split it into more than one view. But adding a new\n> view for every new thing we want to show is also not very good from\n> either a usability or performance perspective. So where would we put\n> it?\n\nI think we should group new properties that don't change over the course of a\nsession ([1]) in a new view (e.g. pg_stat_session). 
I don't think we need one\nview per property, but I do think it makes sense to split information that\nchanges very frequently (like most pg_stat_activity contents) from information\nthat doesn't (like auth_method, auth_identity).\n\nGreetings,\n\nAndres Freund\n\n[1]\n\nAdditionally I think something like pg_stat_session could also contain\nper-session cumulative counters like the session's contribution to\npg_stat_database.{idle_in_transaction_time,active_time}\n\n\n", "msg_date": "Fri, 16 Feb 2024 12:20:01 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n> I mean, we could split it into more than one view. But adding a new\n> view for every new thing we want to show is also not very good from\n> either a usability or performance perspective. So where would we put\n> it?\n\nIt'd have to be a new view with a row per session, showing static\n(or at least mostly static?) properties of the session.\n\nCould we move some existing fields of pg_stat_activity into such a\nview? In any case, there'd have to be a key column to use to join\nit to pg_stat_activity.\n\nI'm not sure that this is worth the trouble TBH. If it can be shown\nthat pulling a few fields out of pg_stat_activity actually does make\nfor a useful speedup, then maybe OK ... but Andres hasn't provided\nany evidence that there's a measurable issue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Feb 2024 15:22:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On Fri, Feb 16, 2024 at 9:20 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2024-02-16 20:57:59 +0100, Magnus Hagander wrote:\n> > On Fri, Feb 16, 2024 at 8:41 PM Andres Freund <[email protected]> wrote:\n> > > On 2024-01-10 12:46:34 +0100, Magnus Hagander wrote:\n> > > > The attached patch adds a column \"authuser\" to pg_stat_activity which\n> > > > contains the username of the externally authenticated user, being the\n> > > > same value as the SYSTEM_USER keyword returns in a backend.\n> > >\n> > > I continue to think that it's a bad idea to make pg_stat_activity ever wider\n> > > with columns that do not actually describe properties that change across the\n> > > course of a session. Yes, there's the argument that that ship has sailed, but\n> > > I don't think that's a good reason to continue ever further down that road.\n> > >\n> > > It's not just a usability issue, it also makes it more expensive to query\n> > > pg_stat_activity. This is of course more pronounced with textual columns than\n> > > with integer ones.\n> >\n> > That's a fair point, but I do think that has in most ways already sailed, yes.\n> >\n> > I mean, we could split it into more than one view. But adding a new\n> > view for every new thing we want to show is also not very good from\n> > either a usability or performance perspective. So where would we put\n> > it?\n>\n> I think we should group new properties that don't change over the course of a\n> session ([1]) in a new view (e.g. pg_stat_session). I don't think we need one\n> view per property, but I do think it makes sense to split information that\n> changes very frequently (like most pg_stat_activity contents) from information\n> that doesn't (like auth_method, auth_identity).\n\nThat would make sense in many ways, but ends up with \"other level of\nannoyances\". E.g. 
the database name and oid don't change, but would we\nwant to move those out of pg_stat_activity? Same for username? Don't\nwe just end up in a grayzone about what belongs where?\n\nAlso - were you envisioning just another view, or actually replacing\nthe pg_stat_get_activity() part? As in where do you think the cost\ncomes?\n\n(And as to Toms question about key column - the pid column can surely\nbe that? We already do that for pg_stat_ssl and pg_stat_gssapi, that\nare both driven from pg_stat_get_activity() but shows a different set\nof columns.\n\n\n> Additionally I think something like pg_stat_session could also contain\n> per-session cumulative counters like the session's contribution to\n> pg_stat_database.{idle_in_transaction_time,active_time}\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Fri, 16 Feb 2024 21:31:47 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On Fri, Feb 16, 2024 at 8:55 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2024-01-12 17:16:53 +0100, Magnus Hagander wrote:\n> > On Thu, Jan 11, 2024 at 5:55 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > > On Thu, Jan 11, 2024 at 02:24:58PM +0100, Magnus Hagander wrote:\n> > > > On Wed, Jan 10, 2024 at 3:12 PM Bertrand Drouvot\n> > > > <[email protected]> wrote:\n> > > > >\n> > > > > If we go the 2 fields way, then what about auth_identity and auth_method then?\n> > > >\n> > > >\n> > > > Here is an updated patch based on this idea.\n> > >\n> > > Thanks!\n> > >\n> > > + <row>\n> > > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > > + <structfield>auth_method</structfield> <type>text</type>\n> > > + </para>\n> > > + <para>\n> > > + The authentication method used for authenticating the connection, or\n> > > + NULL for background processes.\n> > > + </para></entry>\n> > >\n> > > I'm wondering if it would make sense to populate it for parallel workers too.\n> > > I think it's doable thanks to d951052, but I'm not sure it's worth it (one could\n> > > join based on the leader_pid though). OTOH that would be consistent with\n> > > how the SYSTEM_USER behaves with parallel workers (it's populated).\n> >\n> > I guess one could conceptually argue that \"authentication happens int\n> > he leader\". But we do populate it with the other user records, and\n> > it'd be weird if this one was excluded.\n> >\n> > The tricky thing is that pgstat_bestart() is called long before we\n> > deserialize the data. But from what I can tell it should be safe to\n> > change it per the attached? That should be AFAICT an extremely short\n> > window of time longer before we report it, not enough to matter.\n>\n> I don't like that one bit. The whole subsystem initialization dance already is\n> quite complicated, particularly for pgstat, we shouldn't make it more\n> complicated. 
Especially not when the initialization is moved quite a bit away\n> from all the other calls.\n>\n> Besides just that, I also don't think delaying visibility of the worker in\n> pg_stat_activity until parallel worker initialization has completed is good,\n> that's not all cheap work.\n>\n>\n> Maybe I am missing something, but why aren't we just getting the value from\n> the leader's entry, instead of copying it?\n\nThe answer to that would be \"because I didn't think of it\" :)\n\nWere you thinking of something like the attached?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/", "msg_date": "Fri, 16 Feb 2024 21:41:41 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 2024-02-16 15:22:16 -0500, Tom Lane wrote:\n> Magnus Hagander <[email protected]> writes:\n> > I mean, we could split it into more than one view. But adding a new\n> > view for every new thing we want to show is also not very good from\n> > either a usability or performance perspective. So where would we put\n> > it?\n>\n> It'd have to be a new view with a row per session, showing static\n> (or at least mostly static?) properties of the session.\n\nYep.\n\n\n> Could we move some existing fields of pg_stat_activity into such a\n> view?\n\nI'd suspect that at least some of\n - leader_pid\n - datid\n - datname\n - usesysid\n - usename\n - backend_start\n - client_addr\n - client_hostname\n - client_port\n - backend_type\n\ncould be moved. Whether's worth breaking existing queries, I don't quite know.\n\nOne option would be to not return (some) of them from pg_stat_get_activity(),\nbut add them to the view in a way that the planner can elide the reference.\n\n\n> I'm not sure that this is worth the trouble TBH. If it can be shown\n> that pulling a few fields out of pg_stat_activity actually does make\n> for a useful speedup, then maybe OK ... but Andres hasn't provided\n> any evidence that there's a measurable issue.\n\nIf I thought that the two columns proposed here were all that we wanted to\nadd, I'd not be worried. But there have been quite a few other fields\nproposed, e.g. 
tracking idle/active time on a per-connection granularity.\n\nWe even already have a patch to add pg_stat_session\nhttps://commitfest.postgresql.org/47/3405/\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 16 Feb 2024 12:45:17 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn 2024-02-16 21:41:41 +0100, Magnus Hagander wrote:\n> > Maybe I am missing something, but why aren't we just getting the value from\n> > the leader's entry, instead of copying it?\n>\n> The answer to that would be \"because I didn't think of it\" :)\n\n:)\n\n\n> Were you thinking of something like the attached?\n\n> @@ -435,6 +438,22 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n> \t\t\t\t{\n> \t\t\t\t\tvalues[29] = Int32GetDatum(leader->pid);\n> \t\t\t\t\tnulls[29] = false;\n> +\n> +\t\t\t\t\t/*\n> +\t\t\t\t\t * The authenticated user in a parallel worker is the same as the one in\n> +\t\t\t\t\t * the leader, so look it up there.\n> +\t\t\t\t\t */\n> +\t\t\t\t\tif (leader->backendId)\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\tLocalPgBackendStatus *leaderstat = pgstat_get_local_beentry_by_backend_id(leader->backendId);\n> +\n> +\t\t\t\t\t\tif (leaderstat->backendStatus.st_auth_method != uaReject && leaderstat->backendStatus.st_auth_method != uaImplicitReject)\n> +\t\t\t\t\t\t{\n> +\t\t\t\t\t\t\tnulls[31] = nulls[32] = false;\n> +\t\t\t\t\t\t\tvalues[31] = CStringGetTextDatum(hba_authname(leaderstat->backendStatus.st_auth_method));\n> +\t\t\t\t\t\t\tvalues[32] = CStringGetTextDatum(leaderstat->backendStatus.st_auth_identity);\n> +\t\t\t\t\t\t}\n> +\t\t\t\t\t}\n\nMostly, yes.\n\nI only skimmed the patch, but it sure looks to me that we could end up with\nnone of the branches setting 31,32, so I think you'd have to make sure to\nhandle that case.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 16 Feb 2024 12:51:29 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On Fri, Feb 16, 2024 at 9:51 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2024-02-16 21:41:41 +0100, Magnus Hagander wrote:\n> > > Maybe I am missing something, but why aren't we just getting the value from\n> > > the leader's entry, instead of copying it?\n> >\n> > The answer to that would be \"because I didn't think of it\" :)\n>\n> :)\n>\n>\n> > Were you thinking of something like the attached?\n>\n> > @@ -435,6 +438,22 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n> > {\n> > values[29] = Int32GetDatum(leader->pid);\n> > nulls[29] = false;\n> > +\n> > + /*\n> > + * The authenticated user in a parallel worker is the same as the one in\n> > + * the leader, so look it up there.\n> > + */\n> > + if (leader->backendId)\n> > + {\n> > + LocalPgBackendStatus *leaderstat = pgstat_get_local_beentry_by_backend_id(leader->backendId);\n> > +\n> > + if (leaderstat->backendStatus.st_auth_method != uaReject && leaderstat->backendStatus.st_auth_method != uaImplicitReject)\n> > + {\n> > + nulls[31] = nulls[32] = false;\n> > + values[31] = CStringGetTextDatum(hba_authname(leaderstat->backendStatus.st_auth_method));\n> > + values[32] = CStringGetTextDatum(leaderstat->backendStatus.st_auth_identity);\n> > + }\n> > + }\n>\n> Mostly, yes.\n>\n> I only skimmed the patch, but it sure looks to me that we could end up with\n> none of the branches setting 31,32, so I think you'd have to make sure to\n> handle that case.\n\nThat case sets nulls[] 
for both of them to true I believe? And when\nthat is set I don't believe we need to set the values themselves.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Fri, 16 Feb 2024 21:56:25 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On 2024-02-16 21:56:25 +0100, Magnus Hagander wrote:\n> On Fri, Feb 16, 2024 at 9:51 PM Andres Freund <[email protected]> wrote:\n> > I only skimmed the patch, but it sure looks to me that we could end up with\n> > none of the branches setting 31,32, so I think you'd have to make sure to\n> > handle that case.\n> \n> That case sets nulls[] for both of them to true I believe? And when\n> that is set I don't believe we need to set the values themselves.\n\nSeems I skimmed too quickly :) - you're right.\n\n\n", "msg_date": "Fri, 16 Feb 2024 13:02:20 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn Fri, Feb 16, 2024 at 08:17:41PM +0100, Magnus Hagander wrote:\n> On Fri, Jan 19, 2024 at 7:20 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On Thu, Jan 18, 2024 at 04:01:33PM +0100, Magnus Hagander wrote:\n> > > On Mon, Jan 15, 2024 at 11:17 AM Bertrand Drouvot\n> > > > Did you forget to share the new revision (aka v4)? I can only see the\n> > > > \"reorder_parallel_worker_bestart.patch\" attached.\n> > >\n> > > I did. Here it is, and also including that suggested docs fix as well\n> > > as a rebase on current master.\n> > >\n> >\n> > Thanks!\n> >\n> > Just a few comments:\n> >\n> > 1 ===\n> >\n> > + The authentication method used for authenticating the connection, or\n> > + NULL for background processes.\n> >\n> > What about? \"NULL for background processes (except for parallel workers which\n> > inherit this information from their leader process)\"\n> \n> Ugh. That doesn't read very well at all to me. Maybe just \"NULL for\n> background processes without a user\"?\n\nNot sure, as I think it could be NULL for background processes that provided\na user in BackgroundWorkerInitializeConnection() too.\n\n> > 2 ===\n> >\n> > + cycle before they were assigned a database role. Contains the same\n> > + value as the identity part in <xref linkend=\"system-user\" />, or NULL\n> > + for background processes.\n> >\n> > Same comment about parallel workers.\n> >\n> > 3 ===\n> >\n> > +# pg_stat_activity should contain trust and empty string for trust auth\n> > +$res = $node->safe_psql(\n> > + 'postgres',\n> > + \"SELECT auth_method, auth_identity='' FROM pg_stat_activity WHERE pid=pg_backend_pid()\",\n> > + connstr => \"user=scram_role\");\n> > +is($res, 'trust|t', 'Users with trust authentication should show correctly in pg_stat_activity');\n> > +\n> > +# pg_stat_activity should contain NULL for auth of background processes\n> > +# (test is a bit out of place here, but this is where the other pg_stat_activity.auth* tests are)\n> > +$res = $node->safe_psql(\n> > + 'postgres',\n> > + \"SELECT auth_method IS NULL, auth_identity IS NULL FROM pg_stat_activity WHERE backend_type='checkpointer'\",\n> > +);\n> > +is($res, 't|t', 'Background processes should show NULL for auth in pg_stat_activity');\n> >\n> > What do you think about testing the parallel workers cases too? 
(I'm not 100%\n> > sure it's worth the extra complexity though).\n> \n> I'm leaning towards not worth that.\n\nOkay, I'm fine with that too.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Feb 2024 08:25:14 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn Fri, Feb 16, 2024 at 08:39:26PM +0100, Magnus Hagander wrote:\n> On Fri, Jan 19, 2024 at 1:43 PM Julien Rouhaud <[email protected]> wrote:\n> > + value as the identity part in <xref linkend=\"system-user\" />, or NULL\n> > I was looking at\n> > https://www.postgresql.org/docs/current/auth-username-maps.html and\n> > noticed that this page is switching between system-user and\n> > system-username. Should we clean that up while at it?\n> \n> Seems like something we should clean up yes, but not as part of this patch.\n\nAgree, done in [1].\n\n[1]: https://www.postgresql.org/message-id/ZdMWux1HpIebkEmd%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Feb 2024 08:55:20 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "Hi,\n\nOn Fri, Feb 16, 2024 at 09:41:41PM +0100, Magnus Hagander wrote:\n> On Fri, Feb 16, 2024 at 8:55 PM Andres Freund <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On 2024-01-12 17:16:53 +0100, Magnus Hagander wrote:\n> > > On Thu, Jan 11, 2024 at 5:55 PM Bertrand Drouvot\n> > > <[email protected]> wrote:\n> > > > On Thu, Jan 11, 2024 at 02:24:58PM +0100, Magnus Hagander wrote:\n> > > > > On Wed, Jan 10, 2024 at 3:12 PM Bertrand Drouvot\n> > > > > <[email protected]> wrote:\n> > > > > >\n> > > > > > If we go the 2 fields way, then what about auth_identity and auth_method then?\n> > > > >\n> > > > >\n> > > > > Here is an updated patch based on this idea.\n> > > >\n> > > > Thanks!\n> > > >\n> > > > + <row>\n> > > > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > > > + <structfield>auth_method</structfield> <type>text</type>\n> > > > + </para>\n> > > > + <para>\n> > > > + The authentication method used for authenticating the connection, or\n> > > > + NULL for background processes.\n> > > > + </para></entry>\n> > > >\n> > > > I'm wondering if it would make sense to populate it for parallel workers too.\n> > > > I think it's doable thanks to d951052, but I'm not sure it's worth it (one could\n> > > > join based on the leader_pid though). OTOH that would be consistent with\n> > > > how the SYSTEM_USER behaves with parallel workers (it's populated).\n> > >\n> > > I guess one could conceptually argue that \"authentication happens int\n> > > he leader\". But we do populate it with the other user records, and\n> > > it'd be weird if this one was excluded.\n> > >\n> > > The tricky thing is that pgstat_bestart() is called long before we\n> > > deserialize the data. But from what I can tell it should be safe to\n> > > change it per the attached? That should be AFAICT an extremely short\n> > > window of time longer before we report it, not enough to matter.\n> >\n> > I don't like that one bit. 
The whole subsystem initialization dance already is\n> > quite complicated, particularly for pgstat, we shouldn't make it more\n> > complicated. Especially not when the initialization is moved quite a bit away\n> > from all the other calls.\n> >\n> > Besides just that, I also don't think delaying visibility of the worker in\n> > pg_stat_activity until parallel worker initialization has completed is good,\n> > that's not all cheap work.\n> >\n> >\n> > Maybe I am missing something, but why aren't we just getting the value from\n> > the leader's entry, instead of copying it?\n\nGood point!\n\n> The answer to that would be \"because I didn't think of it\" :)\n\nI'm in the same boat ;-) \n\n> Were you thinking of something like the attached?\n\nDoing it that way looks good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Feb 2024 09:12:03 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On Fri, Feb 16, 2024 at 9:31 PM Magnus Hagander <[email protected]> wrote:\n>\n> On Fri, Feb 16, 2024 at 9:20 PM Andres Freund <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On 2024-02-16 20:57:59 +0100, Magnus Hagander wrote:\n> > > On Fri, Feb 16, 2024 at 8:41 PM Andres Freund <[email protected]> wrote:\n> > > > On 2024-01-10 12:46:34 +0100, Magnus Hagander wrote:\n> > > > > The attached patch adds a column \"authuser\" to pg_stat_activity which\n> > > > > contains the username of the externally authenticated user, being the\n> > > > > same value as the SYSTEM_USER keyword returns in a backend.\n> > > >\n> > > > I continue to think that it's a bad idea to make pg_stat_activity ever wider\n> > > > with columns that do not actually describe properties that change across the\n> > > > course of a session. Yes, there's the argument that that ship has sailed, but\n> > > > I don't think that's a good reason to continue ever further down that road.\n> > > >\n> > > > It's not just a usability issue, it also makes it more expensive to query\n> > > > pg_stat_activity. This is of course more pronounced with textual columns than\n> > > > with integer ones.\n> > >\n> > > That's a fair point, but I do think that has in most ways already sailed, yes.\n> > >\n> > > I mean, we could split it into more than one view. But adding a new\n> > > view for every new thing we want to show is also not very good from\n> > > either a usability or performance perspective. So where would we put\n> > > it?\n> >\n> > I think we should group new properties that don't change over the course of a\n> > session ([1]) in a new view (e.g. pg_stat_session). I don't think we need one\n> > view per property, but I do think it makes sense to split information that\n> > changes very frequently (like most pg_stat_activity contents) from information\n> > that doesn't (like auth_method, auth_identity).\n>\n> That would make sense in many ways, but ends up with \"other level of\n> annoyances\". E.g. the database name and oid don't change, but would we\n> want to move those out of pg_stat_activity? Same for username? Don't\n> we just end up in a grayzone about what belongs where?\n>\n> Also - were you envisioning just another view, or actually replacing\n> the pg_stat_get_activity() part? 
As in where do you think the cost\n> comes?\n\nAndres -- did you spot this question in the middle or did it get lost\nin the flurry of others? :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 20 Feb 2024 22:21:30 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System username in pg_stat_activity" }, { "msg_contents": "On Fri, Feb 16, 2024 at 9:45 PM Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> On 2024-02-16 15:22:16 -0500, Tom Lane wrote:\n> > Magnus Hagander <[email protected]> writes:\n> > > I mean, we could split it into more than one view. But adding a new\n> > > view for every new thing we want to show is also not very good from\n> > > either a usability or performance perspective. So where would we put\n> > > it?\n> >\n> > It'd have to be a new view with a row per session, showing static\n> > (or at least mostly static?) properties of the session.\n>\n> Yep.\n>\n>\n> > Could we move some existing fields of pg_stat_activity into such a\n> > view?\n>\n> I'd suspect that at least some of\n> - leader_pid\n> - datid\n> - datname\n> - usesysid\n> - usename\n> - backend_start\n> - client_addr\n> - client_hostname\n> - client_port\n> - backend_type\n>\n> could be moved. Whether's worth breaking existing queries, I don't quite know.\n\nI think that's the big question. I think if we move all of those we\nwill break every single monitoring tool out there for postgres...\nThat's a pretty hefty price.\n\n\n> One option would be to not return (some) of them from pg_stat_get_activity(),\n> but add them to the view in a way that the planner can elide the reference.\n\nWithout having any numbers, I would think that the join to pg_authid\nfor exapmle is likely more costly than returning all the other fields.\nBut that one does get eliminated as long as one doesn't query that\ncolumn. But if we make more things \"joined in from the view\", isn't\nthat likely to just make it more expensive in most cases?\n\n\n> > I'm not sure that this is worth the trouble TBH. If it can be shown\n> > that pulling a few fields out of pg_stat_activity actually does make\n> > for a useful speedup, then maybe OK ... but Andres hasn't provided\n> > any evidence that there's a measurable issue.\n>\n> If I thought that the two columns proposed here were all that we wanted to\n> add, I'd not be worried. But there have been quite a few other fields\n> proposed, e.g. tracking idle/active time on a per-connection granularity.\n>\n> We even already have a patch to add pg_stat_session\n> https://commitfest.postgresql.org/47/3405/\n\nIn a way, that's yet another different type of values though -- it\ncontains accumulated stats. So we really have 3 types -- \"info\" that's\nnot really stats (username, etc), \"current state\" (query, wait events,\nstate) and \"accumulated stats\" (counters since start).If we don't want\nto combine them all, we should perhaps not combine any and actually\nhave 3 views?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 20 Feb 2024 22:32:53 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": true, "msg_subject": "Re: System username in pg_stat_activity" } ]
[ { "msg_contents": "Hello all,\n\n*TL; DR*\nThere has been a discussion about GUCifying the MAX_WAL_SEND constant in\nwalsender.c in here <https://commitfest.postgresql.org/13/958/> nearly 7\nyears ago, but resulting in nothing in the end. Today, I found out the\nconfigurability of this parameter can be very helpful. So, I want to submit\na patch for it, but I also want to know your comments before then.\n\nWhat is MAX_WAL_SEND?\nIt's the maximum size of WAL records that walsender reads from the disk and\nthen sends to the standby servers. Its current value is hardcoded as\nXLOG_BLCKSZ * 16, which is 128KiB (8KiB * 16) by default.\n\nWhy do I think it can be beneficial to GUCify it?\nWe use Postgres in K8s along with RBD disks. Today, I found out that\nbecause we use remote disks, it's better to read bigger chunks of data from\nthe disk in one operation. In our setup (which I assume is not a silly\nsetup), we had a query that took 4.5 seconds to execute on the primary\nserver and ~21 seconds to send the WAL records to our synchronous standby\nserver. After recompiling Postgres and setting MAX_WAL_SEND to 16MiB, that\n21 seconds decreased to just 1.5 seconds, which is a 92.9% improvement.\n\nThank you for your comments in advance\nMajid\n\nHello all,TL; DRThere has been a discussion about GUCifying the MAX_WAL_SEND constant in walsender.c in here nearly 7 years ago, but resulting in nothing in the end. Today, I found out the configurability of this parameter can be very helpful. So, I want to submit a patch for it, but I also want to know your comments before then.What is MAX_WAL_SEND?It's the maximum size of WAL records that walsender reads from the disk and then sends to the standby servers. Its current value is hardcoded as XLOG_BLCKSZ * 16, which is 128KiB (8KiB * 16) by default.Why do I think it can be beneficial to GUCify it?We use Postgres in K8s along with RBD disks. Today, I found out that because we use remote disks, it's better to read bigger chunks of data from the disk in one operation. In our setup (which I assume is not a silly setup), we had a query that took 4.5 seconds to execute on the primary server and ~21 seconds to send the WAL records to our synchronous standby server. After recompiling Postgres and setting MAX_WAL_SEND to 16MiB, that 21 seconds decreased to just 1.5 seconds, which is a 92.9% improvement.Thank you for your comments in advanceMajid", "msg_date": "Thu, 11 Jan 2024 01:15:48 +0330", "msg_from": "Majid Garoosi <[email protected]>", "msg_from_op": true, "msg_subject": "GUCifying MAX_WAL_SEND" }, { "msg_contents": "On Thu, Jan 11, 2024 at 01:15:48AM +0330, Majid Garoosi wrote:\n> Why do I think it can be beneficial to GUCify it?\n> We use Postgres in K8s along with RBD disks. Today, I found out that\n> because we use remote disks, it's better to read bigger chunks of data from\n> the disk in one operation. In our setup (which I assume is not a silly\n> setup), we had a query that took 4.5 seconds to execute on the primary\n> server and ~21 seconds to send the WAL records to our synchronous standby\n> server. After recompiling Postgres and setting MAX_WAL_SEND to 16MiB, that\n> 21 seconds decreased to just 1.5 seconds, which is a 92.9% improvement.\n\nI did not follow the older thread, but MAX_SEND_SIZE does not stress\nme as a performance bottleneck in walsender.c when we use it for\ncalculations with the end LSN, so I can get behind the argument of\nGUC-ifying it when reading WAL data in batches, even if it can be easy\nto get it wrong. 
The hardcoded value comes back to 40f908bdcdc7 back\nin 2010, and hardware has evolved a lot since that point.\n\nSo I'd +1 your suggestion. A RBD setup sounds exotic to me for a\ndatabase, though :)\n--\nMichael", "msg_date": "Thu, 11 Jan 2024 07:55:29 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUCifying MAX_WAL_SEND" } ]
[ { "msg_contents": "Hi,\n\nReplication slots in postgres will prevent removal of required\nresources when there is no connection using them (inactive). This\nconsumes storage because neither required WAL nor required rows from\nthe user tables/system catalogs can be removed by VACUUM as long as\nthey are required by a replication slot. In extreme cases this could\ncause the transaction ID wraparound.\n\nCurrently postgres has the ability to invalidate inactive replication\nslots based on the amount of WAL (set via max_slot_wal_keep_size GUC)\nthat will be needed for the slots in case they become active. However,\nthe wraparound issue isn't effectively covered by\nmax_slot_wal_keep_size - one can't tell postgres to invalidate a\nreplication slot if it is blocking VACUUM. Also, it is often tricky to\nchoose a default value for max_slot_wal_keep_size, because the amount\nof WAL that gets generated and allocated storage for the database can\nvary.\n\nTherefore, it is often easy for developers to do the following:\na) set an XID age (age of slot's xmin or catalog_xmin) of say 1 or 1.5\nbillion, after which the slots get invalidated.\nb) set a timeout of say 1 or 2 or 3 days, after which the inactive\nslots get invalidated.\n\nTo implement (a), postgres needs a new GUC called max_slot_xid_age.\nThe checkpointer then invalidates all the slots whose xmin (the oldest\ntransaction that this slot needs the database to retain) or\ncatalog_xmin (the oldest transaction affecting the system catalogs\nthat this slot needs the database to retain) has reached the age\nspecified by this setting.\n\nTo implement (b), first postgres needs to track the replication slot\nmetrics like the time at which the slot became inactive (inactive_at\ntimestamptz) and the total number of times the slot became inactive in\nits lifetime (inactive_count numeric) in ReplicationSlotPersistentData\nstructure. And, then it needs a new timeout GUC called\ninactive_replication_slot_timeout. Whenever a slot becomes inactive,\nthe current timestamp and inactive count are stored in\nReplicationSlotPersistentData structure and persisted to disk. The\ncheckpointer then invalidates all the slots that are lying inactive\nfor about inactive_replication_slot_timeout duration starting from\ninactive_at.\n\nIn addition to implementing (b), these two new metrics enable\ndevelopers to improve their monitoring tools as the metrics are\nexposed via pg_replication_slots system view. For instance, one can\nbuild a monitoring tool that signals when replication slots are lying\ninactive for a day or so using inactive_at metric, and/or when a\nreplication slot is becoming inactive too frequently using inactive_at\nmetric.\n\nI’m attaching the v1 patch set as described below:\n0001 - Tracks invalidation_reason in pg_replication_slots. 
This is\nneeded because slots now have multiple reasons for slot invalidation.\n0002 - Tracks inactive replication slot information inactive_at and\ninactive_timeout.\n0003 - Adds inactive_timeout based replication slot invalidation.\n0004 - Adds XID based replication slot invalidation.\n\nThoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 11 Jan 2024 10:48:13 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Jan 11, 2024 at 10:48 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Replication slots in postgres will prevent removal of required\n> resources when there is no connection using them (inactive). This\n> consumes storage because neither required WAL nor required rows from\n> the user tables/system catalogs can be removed by VACUUM as long as\n> they are required by a replication slot. In extreme cases this could\n> cause the transaction ID wraparound.\n>\n> Currently postgres has the ability to invalidate inactive replication\n> slots based on the amount of WAL (set via max_slot_wal_keep_size GUC)\n> that will be needed for the slots in case they become active. However,\n> the wraparound issue isn't effectively covered by\n> max_slot_wal_keep_size - one can't tell postgres to invalidate a\n> replication slot if it is blocking VACUUM. Also, it is often tricky to\n> choose a default value for max_slot_wal_keep_size, because the amount\n> of WAL that gets generated and allocated storage for the database can\n> vary.\n>\n> Therefore, it is often easy for developers to do the following:\n> a) set an XID age (age of slot's xmin or catalog_xmin) of say 1 or 1.5\n> billion, after which the slots get invalidated.\n> b) set a timeout of say 1 or 2 or 3 days, after which the inactive\n> slots get invalidated.\n>\n> To implement (a), postgres needs a new GUC called max_slot_xid_age.\n> The checkpointer then invalidates all the slots whose xmin (the oldest\n> transaction that this slot needs the database to retain) or\n> catalog_xmin (the oldest transaction affecting the system catalogs\n> that this slot needs the database to retain) has reached the age\n> specified by this setting.\n>\n> To implement (b), first postgres needs to track the replication slot\n> metrics like the time at which the slot became inactive (inactive_at\n> timestamptz) and the total number of times the slot became inactive in\n> its lifetime (inactive_count numeric) in ReplicationSlotPersistentData\n> structure. And, then it needs a new timeout GUC called\n> inactive_replication_slot_timeout. Whenever a slot becomes inactive,\n> the current timestamp and inactive count are stored in\n> ReplicationSlotPersistentData structure and persisted to disk. The\n> checkpointer then invalidates all the slots that are lying inactive\n> for about inactive_replication_slot_timeout duration starting from\n> inactive_at.\n>\n> In addition to implementing (b), these two new metrics enable\n> developers to improve their monitoring tools as the metrics are\n> exposed via pg_replication_slots system view. 
For instance, one can\n> build a monitoring tool that signals when replication slots are lying\n> inactive for a day or so using inactive_at metric, and/or when a\n> replication slot is becoming inactive too frequently using inactive_at\n> metric.\n>\n> I’m attaching the v1 patch set as described below:\n> 0001 - Tracks invalidation_reason in pg_replication_slots. This is\n> needed because slots now have multiple reasons for slot invalidation.\n> 0002 - Tracks inactive replication slot information inactive_at and\n> inactive_timeout.\n> 0003 - Adds inactive_timeout based replication slot invalidation.\n> 0004 - Adds XID based replication slot invalidation.\n>\n> Thoughts?\n\nNeeded a rebase due to c393308b. Please find the attached v2 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 27 Jan 2024 01:18:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sat, Jan 27, 2024 at 1:18 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Jan 11, 2024 at 10:48 AM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > Replication slots in postgres will prevent removal of required\n> > resources when there is no connection using them (inactive). This\n> > consumes storage because neither required WAL nor required rows from\n> > the user tables/system catalogs can be removed by VACUUM as long as\n> > they are required by a replication slot. In extreme cases this could\n> > cause the transaction ID wraparound.\n> >\n> > Currently postgres has the ability to invalidate inactive replication\n> > slots based on the amount of WAL (set via max_slot_wal_keep_size GUC)\n> > that will be needed for the slots in case they become active. However,\n> > the wraparound issue isn't effectively covered by\n> > max_slot_wal_keep_size - one can't tell postgres to invalidate a\n> > replication slot if it is blocking VACUUM. Also, it is often tricky to\n> > choose a default value for max_slot_wal_keep_size, because the amount\n> > of WAL that gets generated and allocated storage for the database can\n> > vary.\n> >\n> > Therefore, it is often easy for developers to do the following:\n> > a) set an XID age (age of slot's xmin or catalog_xmin) of say 1 or 1.5\n> > billion, after which the slots get invalidated.\n> > b) set a timeout of say 1 or 2 or 3 days, after which the inactive\n> > slots get invalidated.\n> >\n> > To implement (a), postgres needs a new GUC called max_slot_xid_age.\n> > The checkpointer then invalidates all the slots whose xmin (the oldest\n> > transaction that this slot needs the database to retain) or\n> > catalog_xmin (the oldest transaction affecting the system catalogs\n> > that this slot needs the database to retain) has reached the age\n> > specified by this setting.\n> >\n> > To implement (b), first postgres needs to track the replication slot\n> > metrics like the time at which the slot became inactive (inactive_at\n> > timestamptz) and the total number of times the slot became inactive in\n> > its lifetime (inactive_count numeric) in ReplicationSlotPersistentData\n> > structure. And, then it needs a new timeout GUC called\n> > inactive_replication_slot_timeout. 
Whenever a slot becomes inactive,\n> > the current timestamp and inactive count are stored in\n> > ReplicationSlotPersistentData structure and persisted to disk. The\n> > checkpointer then invalidates all the slots that are lying inactive\n> > for about inactive_replication_slot_timeout duration starting from\n> > inactive_at.\n> >\n> > In addition to implementing (b), these two new metrics enable\n> > developers to improve their monitoring tools as the metrics are\n> > exposed via pg_replication_slots system view. For instance, one can\n> > build a monitoring tool that signals when replication slots are lying\n> > inactive for a day or so using inactive_at metric, and/or when a\n> > replication slot is becoming inactive too frequently using inactive_at\n> > metric.\n> >\n> > I’m attaching the v1 patch set as described below:\n> > 0001 - Tracks invalidation_reason in pg_replication_slots. This is\n> > needed because slots now have multiple reasons for slot invalidation.\n> > 0002 - Tracks inactive replication slot information inactive_at and\n> > inactive_timeout.\n> > 0003 - Adds inactive_timeout based replication slot invalidation.\n> > 0004 - Adds XID based replication slot invalidation.\n> >\n> > Thoughts?\n>\n> Needed a rebase due to c393308b. Please find the attached v2 patch set.\n\nNeeded a rebase due to commit 776621a (conflict in\nsrc/test/recovery/meson.build for new TAP test file added). Please\nfind the attached v3 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 31 Jan 2024 18:35:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Thu, Jan 11, 2024 at 10:48:13AM +0530, Bharath Rupireddy wrote:\n> Hi,\n> \n> Therefore, it is often easy for developers to do the following:\n> a) set an XID age (age of slot's xmin or catalog_xmin) of say 1 or 1.5\n> billion, after which the slots get invalidated.\n> b) set a timeout of say 1 or 2 or 3 days, after which the inactive\n> slots get invalidated.\n> \n> To implement (a), postgres needs a new GUC called max_slot_xid_age.\n> The checkpointer then invalidates all the slots whose xmin (the oldest\n> transaction that this slot needs the database to retain) or\n> catalog_xmin (the oldest transaction affecting the system catalogs\n> that this slot needs the database to retain) has reached the age\n> specified by this setting.\n> \n> To implement (b), first postgres needs to track the replication slot\n> metrics like the time at which the slot became inactive (inactive_at\n> timestamptz) and the total number of times the slot became inactive in\n> its lifetime (inactive_count numeric) in ReplicationSlotPersistentData\n> structure. And, then it needs a new timeout GUC called\n> inactive_replication_slot_timeout. Whenever a slot becomes inactive,\n> the current timestamp and inactive count are stored in\n> ReplicationSlotPersistentData structure and persisted to disk. The\n> checkpointer then invalidates all the slots that are lying inactive\n> for about inactive_replication_slot_timeout duration starting from\n> inactive_at.\n> \n> In addition to implementing (b), these two new metrics enable\n> developers to improve their monitoring tools as the metrics are\n> exposed via pg_replication_slots system view. 
For instance, one can\n> build a monitoring tool that signals when replication slots are lying\n> inactive for a day or so using inactive_at metric, and/or when a\n> replication slot is becoming inactive too frequently using inactive_at\n> metric.\n\nThanks for the patch and +1 for the idea, I think adding those new\n\"invalidation reasons\" make sense.\n\n> \n> I’m attaching the v1 patch set as described below:\n> 0001 - Tracks invalidation_reason in pg_replication_slots. This is\n> needed because slots now have multiple reasons for slot invalidation.\n> 0002 - Tracks inactive replication slot information inactive_at and\n> inactive_timeout.\n> 0003 - Adds inactive_timeout based replication slot invalidation.\n> 0004 - Adds XID based replication slot invalidation.\n>\n\nI think it's better to have the XID one being discussed/implemented before the\ninactive_timeout one: what about changing the 0002, 0003 and 0004 ordering?\n\n0004 -> 0002\n0002 -> 0003\n0003 -> 0004\n\nAs far 0001:\n\n\"\nThis commit renames conflict_reason to\ninvalidation_reason, and adds the support to show invalidation\nreasons for both physical and logical slots.\n\"\n\nI'm not sure I like the fact that \"invalidations\" and \"conflicts\" are merged\ninto a single field. I'd vote to keep conflict_reason as it is and add a new\ninvalidation_reason (and put \"conflict\" as value when it is the case). The reason\nis that I think they are 2 different concepts (could be linked though) and that\nit would be easier to check for conflicts (means conflict_reason is not NULL).\n\nRegards,\n \n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 5 Feb 2024 09:45:50 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Jan 11, 2024 at 10:48 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Replication slots in postgres will prevent removal of required\n> resources when there is no connection using them (inactive). This\n> consumes storage because neither required WAL nor required rows from\n> the user tables/system catalogs can be removed by VACUUM as long as\n> they are required by a replication slot. In extreme cases this could\n> cause the transaction ID wraparound.\n>\n> Currently postgres has the ability to invalidate inactive replication\n> slots based on the amount of WAL (set via max_slot_wal_keep_size GUC)\n> that will be needed for the slots in case they become active. However,\n> the wraparound issue isn't effectively covered by\n> max_slot_wal_keep_size - one can't tell postgres to invalidate a\n> replication slot if it is blocking VACUUM. 
Also, it is often tricky to\n> choose a default value for max_slot_wal_keep_size, because the amount\n> of WAL that gets generated and allocated storage for the database can\n> vary.\n>\n> Therefore, it is often easy for developers to do the following:\n> a) set an XID age (age of slot's xmin or catalog_xmin) of say 1 or 1.5\n> billion, after which the slots get invalidated.\n> b) set a timeout of say 1 or 2 or 3 days, after which the inactive\n> slots get invalidated.\n>\n> To implement (a), postgres needs a new GUC called max_slot_xid_age.\n> The checkpointer then invalidates all the slots whose xmin (the oldest\n> transaction that this slot needs the database to retain) or\n> catalog_xmin (the oldest transaction affecting the system catalogs\n> that this slot needs the database to retain) has reached the age\n> specified by this setting.\n>\n> To implement (b), first postgres needs to track the replication slot\n> metrics like the time at which the slot became inactive (inactive_at\n> timestamptz) and the total number of times the slot became inactive in\n> its lifetime (inactive_count numeric) in ReplicationSlotPersistentData\n> structure. And, then it needs a new timeout GUC called\n> inactive_replication_slot_timeout. Whenever a slot becomes inactive,\n> the current timestamp and inactive count are stored in\n> ReplicationSlotPersistentData structure and persisted to disk. The\n> checkpointer then invalidates all the slots that are lying inactive\n> for about inactive_replication_slot_timeout duration starting from\n> inactive_at.\n>\n> In addition to implementing (b), these two new metrics enable\n> developers to improve their monitoring tools as the metrics are\n> exposed via pg_replication_slots system view. For instance, one can\n> build a monitoring tool that signals when replication slots are lying\n> inactive for a day or so using inactive_at metric, and/or when a\n> replication slot is becoming inactive too frequently using inactive_at\n> metric.\n>\n> I’m attaching the v1 patch set as described below:\n> 0001 - Tracks invalidation_reason in pg_replication_slots. This is\n> needed because slots now have multiple reasons for slot invalidation.\n> 0002 - Tracks inactive replication slot information inactive_at and\n> inactive_timeout.\n> 0003 - Adds inactive_timeout based replication slot invalidation.\n> 0004 - Adds XID based replication slot invalidation.\n>\n> Thoughts?\n>\n+1 for the idea, here are some comments on 0002, I will review other\npatches soon and respond.\n\n1.\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>inactive_at</structfield> <type>timestamptz</type>\n+ </para>\n+ <para>\n+ The time at which the slot became inactive.\n+ <literal>NULL</literal> if the slot is currently actively being\n+ used.\n+ </para></entry>\n+ </row>\n\nMaybe we can change the field name to 'last_inactive_at'? or maybe the\ncomment can explain timestampt at which slot was last inactivated.\nI think since we are already maintaining the inactive_count so better\nto explicitly say this is the last invaliding time.\n\n2.\n+ /*\n+ * XXX: Can inactive_count of type uint64 ever overflow? It takes\n+ * about a half-billion years for inactive_count to overflow even\n+ * if slot becomes inactive for every 1 millisecond. So, using\n+ * pg_add_u64_overflow might be an overkill.\n+ */\n\nCorrect we don't need to use pg_add_u64_overflow for this counter.\n\n3.\n\n+\n+ /* Convert to numeric. 
*/\n+ snprintf(buf, sizeof buf, UINT64_FORMAT, slot_contents.data.inactive_count);\n+ values[i++] = DirectFunctionCall3(numeric_in,\n+ CStringGetDatum(buf),\n+ ObjectIdGetDatum(0),\n+ Int32GetDatum(-1));\n\nWhat is the purpose of doing this? I mean inactive_count is 8 byte\ninteger and you can define function outparameter as 'int8' which is 8\nbyte integer. Then you don't need to convert int to string and then\nto numeric?\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 6 Feb 2024 14:16:19 +0530", "msg_from": "Dilip Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Feb 6, 2024 at 2:16 PM Dilip Kumar <[email protected]> wrote:\n>\n> > Thoughts?\n> >\n> +1 for the idea, here are some comments on 0002, I will review other\n> patches soon and respond.\n\nThanks for looking at it.\n\n> + <structfield>inactive_at</structfield> <type>timestamptz</type>\n>\n> Maybe we can change the field name to 'last_inactive_at'? or maybe the\n> comment can explain timestampt at which slot was last inactivated.\n> I think since we are already maintaining the inactive_count so better\n> to explicitly say this is the last invaliding time.\n\nlast_inactive_at looks better, so will use that in the next version of\nthe patch.\n\n> 2.\n> + /*\n> + * XXX: Can inactive_count of type uint64 ever overflow? It takes\n> + * about a half-billion years for inactive_count to overflow even\n> + * if slot becomes inactive for every 1 millisecond. So, using\n> + * pg_add_u64_overflow might be an overkill.\n> + */\n>\n> Correct we don't need to use pg_add_u64_overflow for this counter.\n\nWill remove this comment in the next version of the patch.\n\n> + /* Convert to numeric. */\n> + snprintf(buf, sizeof buf, UINT64_FORMAT, slot_contents.data.inactive_count);\n> + values[i++] = DirectFunctionCall3(numeric_in,\n> + CStringGetDatum(buf),\n> + ObjectIdGetDatum(0),\n> + Int32GetDatum(-1));\n>\n> What is the purpose of doing this? I mean inactive_count is 8 byte\n> integer and you can define function outparameter as 'int8' which is 8\n> byte integer. Then you don't need to convert int to string and then\n> to numeric?\n\nNope, it's of type uint64, so reporting it as numeric is a way\ntypically used elsewhere - see code around /* Convert to numeric. */.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 6 Feb 2024 23:02:33 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Feb 5, 2024 at 3:15 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Thanks for the patch and +1 for the idea, I think adding those new\n> \"invalidation reasons\" make sense.\n\nThanks for looking at it.\n\n> I think it's better to have the XID one being discussed/implemented before the\n> inactive_timeout one: what about changing the 0002, 0003 and 0004 ordering?\n>\n> 0004 -> 0002\n> 0002 -> 0003\n> 0003 -> 0004\n\nDone that way.\n\n> As far 0001:\n>\n> \"\n> This commit renames conflict_reason to\n> invalidation_reason, and adds the support to show invalidation\n> reasons for both physical and logical slots.\n> \"\n>\n> I'm not sure I like the fact that \"invalidations\" and \"conflicts\" are merged\n> into a single field. 
I'd vote to keep conflict_reason as it is and add a new\n> invalidation_reason (and put \"conflict\" as value when it is the case). The reason\n> is that I think they are 2 different concepts (could be linked though) and that\n> it would be easier to check for conflicts (means conflict_reason is not NULL).\n\nSo, do you want conflict_reason for only logical slots, and a separate\ncolumn for invalidation_reason for both logical and physical slots? Is\nthere any strong reason to have two properties \"conflict\" and\n\"invalidated\" for slots? They both are the same internally, so why\nconfuse the users?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 7 Feb 2024 00:22:07 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Feb 07, 2024 at 12:22:07AM +0530, Bharath Rupireddy wrote:\n> On Mon, Feb 5, 2024 at 3:15 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > I'm not sure I like the fact that \"invalidations\" and \"conflicts\" are merged\n> > into a single field. I'd vote to keep conflict_reason as it is and add a new\n> > invalidation_reason (and put \"conflict\" as value when it is the case). The reason\n> > is that I think they are 2 different concepts (could be linked though) and that\n> > it would be easier to check for conflicts (means conflict_reason is not NULL).\n> \n> So, do you want conflict_reason for only logical slots, and a separate\n> column for invalidation_reason for both logical and physical slots?\n\nYes, with \"conflict\" as value in case of conflicts (and one would need to refer\nto the conflict_reason reason to see the reason).\n\n> Is there any strong reason to have two properties \"conflict\" and\n> \"invalidated\" for slots?\n\nI think \"conflict\" is an important topic and does contain several reasons. The\nslot \"first\" conflict and then leads to slot \"invalidation\". \n\n> They both are the same internally, so why\n> confuse the users?\n\nI don't think that would confuse the users, I do think that would be easier to\ncheck for conflicting slots.\n\nI did not look closely at the code, just played a bit with the patch and was able\nto produce something like:\n\npostgres=# select slot_name,slot_type,active,active_pid,wal_status,invalidation_reason from pg_replication_slots;\n slot_name | slot_type | active | active_pid | wal_status | invalidation_reason\n-------------+-----------+--------+------------+------------+---------------------\n rep1 | physical | f | | reserved |\n master_slot | physical | t | 1482441 | unreserved | wal_removed\n(2 rows)\n\ndoes that make sense to have an \"active/working\" slot \"ivalidated\"?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 9 Feb 2024 07:42:53 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Feb 9, 2024 at 1:12 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> I think \"conflict\" is an important topic and does contain several reasons. 
The\n> slot \"first\" conflict and then leads to slot \"invalidation\".\n>\n> > They both are the same internally, so why\n> > confuse the users?\n>\n> I don't think that would confuse the users, I do think that would be easier to\n> check for conflicting slots.\n\nI've added a separate column for invalidation reasons for now. I'll\nsee how others think on this as the time goes by.\n\n> I did not look closely at the code, just played a bit with the patch and was able\n> to produce something like:\n>\n> postgres=# select slot_name,slot_type,active,active_pid,wal_status,invalidation_reason from pg_replication_slots;\n> slot_name | slot_type | active | active_pid | wal_status | invalidation_reason\n> -------------+-----------+--------+------------+------------+---------------------\n> rep1 | physical | f | | reserved |\n> master_slot | physical | t | 1482441 | unreserved | wal_removed\n> (2 rows)\n>\n> does that make sense to have an \"active/working\" slot \"ivalidated\"?\n\nThanks. Can you please provide the steps to generate this error? Are\nyou setting max_slot_wal_keep_size on primary to generate\n\"wal_removed\"?\n\nAttached v5 patch set after rebasing and addressing review comments.\nPlease review it further.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 20 Feb 2024 12:05:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Feb 20, 2024 at 12:05 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n>> [...] and was able to produce something like:\n> >\n> > postgres=# select slot_name,slot_type,active,active_pid,wal_status,invalidation_reason from pg_replication_slots;\n> > slot_name | slot_type | active | active_pid | wal_status | invalidation_reason\n> > -------------+-----------+--------+------------+------------+---------------------\n> > rep1 | physical | f | | reserved |\n> > master_slot | physical | t | 1482441 | unreserved | wal_removed\n> > (2 rows)\n> >\n> > does that make sense to have an \"active/working\" slot \"ivalidated\"?\n>\n> Thanks. Can you please provide the steps to generate this error? Are\n> you setting max_slot_wal_keep_size on primary to generate\n> \"wal_removed\"?\n\nI'm able to reproduce [1] the state [2] where the slot got invalidated\nfirst, then its wal_status became unreserved, but still the slot is\nserving after the standby comes up online after it catches up with the\nprimary getting the WAL files from the archive. There's a good reason\nfor this state -\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/replication/slotfuncs.c;h=d2fa5e669a32f19989b0d987d3c7329851a1272e;hb=ff9e1e764fcce9a34467d614611a34d4d2a91b50#l351.\nThis intermittent state can only happen for physical slots, not for\nlogical slots because logical subscribers can't get the missing\nchanges from the WAL stored in the archive.\n\nAnd, the fact looks to be that an invalidated slot can never become\nnormal but still can serve a standby if the standby is able to catch\nup by fetching required WAL (this is the WAL the slot couldn't keep\nfor the standby) from elsewhere (archive via restore_command).\n\nAs far as the 0001 patch is concerned, it reports the\ninvalidation_reason as long as slot_contents.data.invalidated !=\nRS_INVAL_NONE. 
I think this is okay.\n\nThoughts?\n\n[1]\n./initdb -D db17\necho \"max_wal_size = 128MB\nmax_slot_wal_keep_size = 64MB\narchive_mode = on\narchive_command='cp %p\n/home/ubuntu/postgres/pg17/bin/archived_wal/%f'\" | tee -a\ndb17/postgresql.conf\n\n./pg_ctl -D db17 -l logfile17 start\n\n./psql -d postgres -p 5432 -c \"SELECT\npg_create_physical_replication_slot('sb_repl_slot', true, false);\"\n\nrm -rf sbdata logfilesbdata\n./pg_basebackup -D sbdata\n\necho \"port=5433\nprimary_conninfo='host=localhost port=5432 dbname=postgres user=ubuntu'\nprimary_slot_name='sb_repl_slot'\nrestore_command='cp /home/ubuntu/postgres/pg17/bin/archived_wal/%f\n%p'\" | tee -a sbdata/postgresql.conf\n\ntouch sbdata/standby.signal\n\n./pg_ctl -D sbdata -l logfilesbdata start\n./psql -d postgres -p 5433 -c \"SELECT pg_is_in_recovery();\"\n\n./pg_ctl -D sbdata -l logfilesbdata stop\n\n./psql -d postgres -p 5432 -c \"SELECT pg_logical_emit_message(true,\n'mymessage', repeat('aaaa', 10000000));\"\n./psql -d postgres -p 5432 -c \"CHECKPOINT;\"\n./pg_ctl -D sbdata -l logfilesbdata start\n./psql -d postgres -p 5432 -xc \"SELECT * FROM pg_replication_slots;\"\n\n[2]\npostgres=# SELECT * FROM pg_replication_slots;\n-[ RECORD 1 ]-------+-------------\nslot_name | sb_repl_slot\nplugin |\nslot_type | physical\ndatoid |\ndatabase |\ntemporary | f\nactive | t\nactive_pid | 710667\nxmin |\ncatalog_xmin |\nrestart_lsn | 0/115D21A0\nconfirmed_flush_lsn |\nwal_status | unreserved\nsafe_wal_size | 77782624\ntwo_phase | f\nconflict_reason |\nfailover | f\nsynced | f\ninvalidation_reason | wal_removed\nlast_inactive_at |\ninactive_count | 1\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 21 Feb 2024 10:55:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Feb 21, 2024 at 10:55:00AM +0530, Bharath Rupireddy wrote:\n> On Tue, Feb 20, 2024 at 12:05 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> >> [...] and was able to produce something like:\n> > >\n> > > postgres=# select slot_name,slot_type,active,active_pid,wal_status,invalidation_reason from pg_replication_slots;\n> > > slot_name | slot_type | active | active_pid | wal_status | invalidation_reason\n> > > -------------+-----------+--------+------------+------------+---------------------\n> > > rep1 | physical | f | | reserved |\n> > > master_slot | physical | t | 1482441 | unreserved | wal_removed\n> > > (2 rows)\n> > >\n> > > does that make sense to have an \"active/working\" slot \"ivalidated\"?\n> >\n> > Thanks. Can you please provide the steps to generate this error? Are\n> > you setting max_slot_wal_keep_size on primary to generate\n> > \"wal_removed\"?\n> \n> I'm able to reproduce [1] the state [2] where the slot got invalidated\n> first, then its wal_status became unreserved, but still the slot is\n> serving after the standby comes up online after it catches up with the\n> primary getting the WAL files from the archive. 
There's a good reason\n> for this state -\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/replication/slotfuncs.c;h=d2fa5e669a32f19989b0d987d3c7329851a1272e;hb=ff9e1e764fcce9a34467d614611a34d4d2a91b50#l351.\n> This intermittent state can only happen for physical slots, not for\n> logical slots because logical subscribers can't get the missing\n> changes from the WAL stored in the archive.\n> \n> And, the fact looks to be that an invalidated slot can never become\n> normal but still can serve a standby if the standby is able to catch\n> up by fetching required WAL (this is the WAL the slot couldn't keep\n> for the standby) from elsewhere (archive via restore_command).\n> \n> As far as the 0001 patch is concerned, it reports the\n> invalidation_reason as long as slot_contents.data.invalidated !=\n> RS_INVAL_NONE. I think this is okay.\n> \n> Thoughts?\n\nYeah, looking at the code I agree that looks ok. OTOH, that looks confusing,\nmaybe we should add a few words about it in the doc?\n\nLooking at v5-0001:\n\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>invalidation_reason</structfield> <type>text</type>\n+ </para>\n+ <para>\n\nMy initial thought was to put \"conflict\" value in this new field in case of\nconflict (not to mention the conflict reason in it). With the current proposal\ninvalidation_reason could report the same as conflict_reason, which sounds weird\nto me.\n\nDoes that make sense to you to use \"conflict\" as value in \"invalidation_reason\"\nwhen the slot has \"conflict_reason\" not NULL?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 21 Feb 2024 12:25:25 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Feb 21, 2024 at 5:55 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> > As far as the 0001 patch is concerned, it reports the\n> > invalidation_reason as long as slot_contents.data.invalidated !=\n> > RS_INVAL_NONE. I think this is okay.\n> >\n> > Thoughts?\n>\n> Yeah, looking at the code I agree that looks ok. OTOH, that looks confusing,\n> maybe we should add a few words about it in the doc?\n\nI'll think about it.\n\n> Looking at v5-0001:\n>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>invalidation_reason</structfield> <type>text</type>\n> + </para>\n> + <para>\n>\n> My initial thought was to put \"conflict\" value in this new field in case of\n> conflict (not to mention the conflict reason in it). With the current proposal\n> invalidation_reason could report the same as conflict_reason, which sounds weird\n> to me.\n>\n> Does that make sense to you to use \"conflict\" as value in \"invalidation_reason\"\n> when the slot has \"conflict_reason\" not NULL?\n\nI'm thinking the other way around - how about we revert\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=007693f2a3ac2ac19affcb03ad43cdb36ccff5b5,\nthat is, put in place \"conflict\" as a boolean and introduce\ninvalidation_reason the text form. So, for logical slots, whenever the\n\"conflict\" column is true, the reason is found in invaldiation_reason\ncolumn? How does it sound? 
Again the debate might be \"conflict\" vs\n\"invalidation\", but that looks clean IMHO.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 21 Feb 2024 20:10:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Feb 21, 2024 at 08:10:00PM +0530, Bharath Rupireddy wrote:\n> On Wed, Feb 21, 2024 at 5:55 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> > My initial thought was to put \"conflict\" value in this new field in case of\n> > conflict (not to mention the conflict reason in it). With the current proposal\n> > invalidation_reason could report the same as conflict_reason, which sounds weird\n> > to me.\n> >\n> > Does that make sense to you to use \"conflict\" as value in \"invalidation_reason\"\n> > when the slot has \"conflict_reason\" not NULL?\n> \n> I'm thinking the other way around - how about we revert\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=007693f2a3ac2ac19affcb03ad43cdb36ccff5b5,\n> that is, put in place \"conflict\" as a boolean and introduce\n> invalidation_reason the text form. So, for logical slots, whenever the\n> \"conflict\" column is true, the reason is found in invaldiation_reason\n> column? How does it sound?\n\nYeah, I think that looks fine too. We would need more change (like take care of\nddd5f4f54a for example).\n\nCC'ing Amit, Hou-San and Shveta to get their point of view (as the ones behind\n007693f2a3 and ddd5f4f54a).\n\nRegarding,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 22 Feb 2024 08:14:57 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Feb 22, 2024 at 1:44 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> > > Does that make sense to you to use \"conflict\" as value in \"invalidation_reason\"\n> > > when the slot has \"conflict_reason\" not NULL?\n> >\n> > I'm thinking the other way around - how about we revert\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=007693f2a3ac2ac19affcb03ad43cdb36ccff5b5,\n> > that is, put in place \"conflict\" as a boolean and introduce\n> > invalidation_reason the text form. So, for logical slots, whenever the\n> > \"conflict\" column is true, the reason is found in invaldiation_reason\n> > column? How does it sound?\n>\n> Yeah, I think that looks fine too. We would need more change (like take care of\n> ddd5f4f54a for example).\n>\n> CC'ing Amit, Hou-San and Shveta to get their point of view (as the ones behind\n> 007693f2a3 and ddd5f4f54a).\n\nYeah, let's wait for what others think about it.\n\nFWIW, I've had to rebase the patches due to 943f7ae1c. 
Please see the\nattached v6 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 1 Mar 2024 20:02:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Feb 21, 2024 at 08:10:00PM +0530, Bharath Rupireddy wrote:\n> I'm thinking the other way around - how about we revert\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=007693f2a3ac2ac19affcb03ad43cdb36ccff5b5,\n> that is, put in place \"conflict\" as a boolean and introduce\n> invalidation_reason the text form. So, for logical slots, whenever the\n> \"conflict\" column is true, the reason is found in invaldiation_reason\n> column? How does it sound? Again the debate might be \"conflict\" vs\n> \"invalidation\", but that looks clean IMHO.\n\nWould you ever see \"conflict\" as false and \"invalidation_reason\" as\nnon-null for a logical slot?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 1 Mar 2024 16:11:08 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sat, Mar 2, 2024 at 3:41 AM Nathan Bossart <[email protected]> wrote:\n>\n> > [....] how about we revert\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=007693f2a3ac2ac19affcb03ad43cdb36ccff5b5,\n>\n> Would you ever see \"conflict\" as false and \"invalidation_reason\" as\n> non-null for a logical slot?\n\nNo. Because both conflict and invalidation_reason are decided based on\nthe invalidation reason i.e. value of slot_contents.data.invalidated.\nIOW, a logical slot that reports conflict as true must have been\ninvalidated.\n\nDo you have any thoughts on reverting 007693f and introducing\ninvalidation_reason?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 3 Mar 2024 23:40:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sun, Mar 03, 2024 at 11:40:00PM +0530, Bharath Rupireddy wrote:\n> On Sat, Mar 2, 2024 at 3:41 AM Nathan Bossart <[email protected]> wrote:\n>> Would you ever see \"conflict\" as false and \"invalidation_reason\" as\n>> non-null for a logical slot?\n> \n> No. Because both conflict and invalidation_reason are decided based on\n> the invalidation reason i.e. value of slot_contents.data.invalidated.\n> IOW, a logical slot that reports conflict as true must have been\n> invalidated.\n> \n> Do you have any thoughts on reverting 007693f and introducing\n> invalidation_reason?\n\nUnless I am misinterpreting some details, ISTM we could rename this column\nto invalidation_reason and use it for both logical and physical slots. I'm\nnot seeing a strong need for another column. 
Perhaps I am missing\nsomething...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 3 Mar 2024 15:44:34 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sun, Mar 03, 2024 at 03:44:34PM -0600, Nathan Bossart wrote:\n> On Sun, Mar 03, 2024 at 11:40:00PM +0530, Bharath Rupireddy wrote:\n>> Do you have any thoughts on reverting 007693f and introducing\n>> invalidation_reason?\n> \n> Unless I am misinterpreting some details, ISTM we could rename this column\n> to invalidation_reason and use it for both logical and physical slots. I'm\n> not seeing a strong need for another column. Perhaps I am missing\n> something...\n\nAnd also, please don't be hasty in taking a decision that would\ninvolve a revert of 007693f without informing the committer of this \ncommit about that. I am adding Amit Kapila in CC of this thread for\nawareness.\n--\nMichael", "msg_date": "Mon, 4 Mar 2024 10:32:00 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Sun, Mar 03, 2024 at 03:44:34PM -0600, Nathan Bossart wrote:\n> On Sun, Mar 03, 2024 at 11:40:00PM +0530, Bharath Rupireddy wrote:\n> > On Sat, Mar 2, 2024 at 3:41 AM Nathan Bossart <[email protected]> wrote:\n> >> Would you ever see \"conflict\" as false and \"invalidation_reason\" as\n> >> non-null for a logical slot?\n> > \n> > No. Because both conflict and invalidation_reason are decided based on\n> > the invalidation reason i.e. value of slot_contents.data.invalidated.\n> > IOW, a logical slot that reports conflict as true must have been\n> > invalidated.\n> > \n> > Do you have any thoughts on reverting 007693f and introducing\n> > invalidation_reason?\n> \n> Unless I am misinterpreting some details, ISTM we could rename this column\n> to invalidation_reason and use it for both logical and physical slots. I'm\n> not seeing a strong need for another column.\n\nYeah having two columns was more for convenience purpose. Without the \"conflict\"\none, a slot conflicting with recovery would be \"a logical slot having a non NULL\ninvalidation_reason\".\n\nI'm also fine with one column if most of you prefer that way.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 4 Mar 2024 08:41:01 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 4, 2024 at 2:11 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Sun, Mar 03, 2024 at 03:44:34PM -0600, Nathan Bossart wrote:\n> > On Sun, Mar 03, 2024 at 11:40:00PM +0530, Bharath Rupireddy wrote:\n> > > On Sat, Mar 2, 2024 at 3:41 AM Nathan Bossart <[email protected]> wrote:\n> > >> Would you ever see \"conflict\" as false and \"invalidation_reason\" as\n> > >> non-null for a logical slot?\n> > >\n> > > No. Because both conflict and invalidation_reason are decided based on\n> > > the invalidation reason i.e. 
value of slot_contents.data.invalidated.\n> > > IOW, a logical slot that reports conflict as true must have been\n> > > invalidated.\n> > >\n> > > Do you have any thoughts on reverting 007693f and introducing\n> > > invalidation_reason?\n> >\n> > Unless I am misinterpreting some details, ISTM we could rename this column\n> > to invalidation_reason and use it for both logical and physical slots. I'm\n> > not seeing a strong need for another column.\n>\n> Yeah having two columns was more for convenience purpose. Without the \"conflict\"\n> one, a slot conflicting with recovery would be \"a logical slot having a non NULL\n> invalidation_reason\".\n>\n> I'm also fine with one column if most of you prefer that way.\n\nWhile we debate on the above, please find the attached v7 patch set\nafter rebasing.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 6 Mar 2024 00:50:38 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 06, 2024 at 12:50:38AM +0530, Bharath Rupireddy wrote:\n> On Mon, Mar 4, 2024 at 2:11 PM Bertrand Drouvot\n> <[email protected]> wrote:\n>> On Sun, Mar 03, 2024 at 03:44:34PM -0600, Nathan Bossart wrote:\n>> > Unless I am misinterpreting some details, ISTM we could rename this column\n>> > to invalidation_reason and use it for both logical and physical slots. I'm\n>> > not seeing a strong need for another column.\n>>\n>> Yeah having two columns was more for convenience purpose. Without the \"conflict\"\n>> one, a slot conflicting with recovery would be \"a logical slot having a non NULL\n>> invalidation_reason\".\n>>\n>> I'm also fine with one column if most of you prefer that way.\n> \n> While we debate on the above, please find the attached v7 patch set\n> after rebasing.\n\nIt looks like Bertrand is okay with reusing the same column for both\nlogical and physical slots, which IIUC is what you initially proposed in v1\nof the patch set.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 5 Mar 2024 13:44:43 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 05, 2024 at 01:44:43PM -0600, Nathan Bossart wrote:\n> On Wed, Mar 06, 2024 at 12:50:38AM +0530, Bharath Rupireddy wrote:\n> > On Mon, Mar 4, 2024 at 2:11 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> >> On Sun, Mar 03, 2024 at 03:44:34PM -0600, Nathan Bossart wrote:\n> >> > Unless I am misinterpreting some details, ISTM we could rename this column\n> >> > to invalidation_reason and use it for both logical and physical slots. I'm\n> >> > not seeing a strong need for another column.\n> >>\n> >> Yeah having two columns was more for convenience purpose. 
Without the \"conflict\"\n> >> one, a slot conflicting with recovery would be \"a logical slot having a non NULL\n> >> invalidation_reason\".\n> >>\n> >> I'm also fine with one column if most of you prefer that way.\n> > \n> > While we debate on the above, please find the attached v7 patch set\n> > after rebasing.\n> \n> It looks like Bertrand is okay with reusing the same column for both\n> logical and physical slots\n\nYeah, I'm okay with one column.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 6 Mar 2024 09:12:15 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 6, 2024 at 2:42 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, Mar 05, 2024 at 01:44:43PM -0600, Nathan Bossart wrote:\n> > On Wed, Mar 06, 2024 at 12:50:38AM +0530, Bharath Rupireddy wrote:\n> > > On Mon, Mar 4, 2024 at 2:11 PM Bertrand Drouvot\n> > > <[email protected]> wrote:\n> > >> On Sun, Mar 03, 2024 at 03:44:34PM -0600, Nathan Bossart wrote:\n> > >> > Unless I am misinterpreting some details, ISTM we could rename this column\n> > >> > to invalidation_reason and use it for both logical and physical slots. I'm\n> > >> > not seeing a strong need for another column.\n> > >>\n> > >> Yeah having two columns was more for convenience purpose. Without the \"conflict\"\n> > >> one, a slot conflicting with recovery would be \"a logical slot having a non NULL\n> > >> invalidation_reason\".\n> > >>\n> > >> I'm also fine with one column if most of you prefer that way.\n> > >\n> > > While we debate on the above, please find the attached v7 patch set\n> > > after rebasing.\n> >\n> > It looks like Bertrand is okay with reusing the same column for both\n> > logical and physical slots\n>\n> Yeah, I'm okay with one column.\n\nThanks. v8-0001 is how it looks. Please see the v8 patch set with this change.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 6 Mar 2024 14:46:57 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 06, 2024 at 02:46:57PM +0530, Bharath Rupireddy wrote:\n> On Wed, Mar 6, 2024 at 2:42 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> > Yeah, I'm okay with one column.\n> \n> Thanks. v8-0001 is how it looks. Please see the v8 patch set with this change.\n\nThanks!\n\nA few comments:\n\n1 ===\n\n+ The reason for the slot's invalidation. <literal>NULL</literal> if the\n+ slot is currently actively being used.\n\ns/currently actively being used/not invalidated/ ? (I mean it could be valid\nand not being used).\n\n2 ===\n\n+ the slot is marked as invalidated. 
In case of logical slots, it\n+ represents the reason for the logical slot's conflict with recovery.\n\ns/the reason for the logical slot's conflict with recovery./the recovery conflict reason./ ?\n\n3 ===\n\n@@ -667,13 +667,13 @@ get_old_cluster_logical_slot_infos(DbInfo *dbinfo, bool live_check)\n * removed.\n */\n res = executeQueryOrDie(conn, \"SELECT slot_name, plugin, two_phase, failover, \"\n- \"%s as caught_up, conflict_reason IS NOT NULL as invalid \"\n+ \"%s as caught_up, invalidation_reason IS NOT NULL as invalid \"\n \"FROM pg_catalog.pg_replication_slots \"\n \"WHERE slot_type = 'logical' AND \"\n \"database = current_database() AND \"\n \"temporary IS FALSE;\",\n live_check ? \"FALSE\" :\n- \"(CASE WHEN conflict_reason IS NOT NULL THEN FALSE \"\n+ \"(CASE WHEN invalidation_reason IS NOT NULL THEN FALSE \"\n\nYeah that's fine because there is logical slot filtering here.\n\n4 ===\n\n-GetSlotInvalidationCause(const char *conflict_reason)\n+GetSlotInvalidationCause(const char *invalidation_reason)\n\nShould we change the comment \"Maps a conflict reason\" above this function?\n\n5 ===\n\n-# Check conflict_reason is NULL for physical slot\n+# Check invalidation_reason is NULL for physical slot\n $res = $node_primary->safe_psql(\n 'postgres', qq[\n- SELECT conflict_reason is null FROM pg_replication_slots where slot_name = '$primary_slotname';]\n+ SELECT invalidation_reason is null FROM pg_replication_slots where slot_name = '$primary_slotname';]\n );\n\n\nI don't think this test is needed anymore: it does not make that much sense since\nit's done after the primary database initialization and startup.\n\n6 ===\n\n@@ -680,7 +680,7 @@ ok( $node_standby->poll_query_until(\n is( $node_standby->safe_psql(\n 'postgres',\n q[select bool_or(conflicting) from\n- (select conflict_reason is not NULL as conflicting\n+ (select invalidation_reason is not NULL as conflicting\n from pg_replication_slots WHERE slot_type = 'logical')]),\n 'f',\n 'Logical slots are reported as non conflicting');\n\nWhat about?\n\n\"\n# Verify slots are reported as valid in pg_replication_slots\nis( $node_standby->safe_psql(\n 'postgres',\n q[select bool_or(invalidated) from\n (select invalidation_reason is not NULL as invalidated\n from pg_replication_slots WHERE slot_type = 'logical')]),\n 'f',\n 'Logical slots are reported as valid');\n\"\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 6 Mar 2024 10:26:32 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 4, 2024 at 3:14 AM Nathan Bossart <[email protected]> wrote:\n>\n> On Sun, Mar 03, 2024 at 11:40:00PM +0530, Bharath Rupireddy wrote:\n> > On Sat, Mar 2, 2024 at 3:41 AM Nathan Bossart <[email protected]> wrote:\n> >> Would you ever see \"conflict\" as false and \"invalidation_reason\" as\n> >> non-null for a logical slot?\n> >\n> > No. Because both conflict and invalidation_reason are decided based on\n> > the invalidation reason i.e. 
value of slot_contents.data.invalidated.\n> > IOW, a logical slot that reports conflict as true must have been\n> > invalidated.\n> >\n> > Do you have any thoughts on reverting 007693f and introducing\n> > invalidation_reason?\n>\n> Unless I am misinterpreting some details, ISTM we could rename this column\n> to invalidation_reason and use it for both logical and physical slots. I'm\n> not seeing a strong need for another column. Perhaps I am missing\n> something...\n>\n\nIIUC, the current conflict_reason is primarily used to determine\nlogical slots on standby that got invalidated due to recovery time\nconflict. On the primary, it will also show logical slots that got\ninvalidated due to the corresponding WAL got removed. Is that\nunderstanding correct? If so, we are already sort of overloading this\ncolumn. However, now adding more invalidation reasons that won't\nhappen during recovery conflict handling will change entirely the\npurpose (as per the name we use) of this variable. I think\ninvalidation_reason could depict this column correctly but OTOH I\nguess it would lose its original meaning/purpose.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 6 Mar 2024 16:28:27 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 6, 2024 at 2:47 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n>\n> Thanks. v8-0001 is how it looks. Please see the v8 patch set with this change.\n>\n\n@@ -1629,6 +1634,20 @@\nInvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,\n }\n }\n break;\n+ case RS_INVAL_INACTIVE_TIMEOUT:\n+ if (s->data.last_inactive_at > 0)\n+ {\n+ TimestampTz now;\n+\n+ Assert(s->data.persistency == RS_PERSISTENT);\n+ Assert(s->active_pid == 0);\n+\n+ now = GetCurrentTimestamp();\n+ if (TimestampDifferenceExceeds(s->data.last_inactive_at, now,\n+ inactive_replication_slot_timeout * 1000))\n\nYou might want to consider its interaction with sync slots on standby.\nSay, there is no activity on slots in terms of processing the changes\nfor slots. Now, we won't perform sync of such slots on standby showing\nthem inactive as per your new criteria where as same slots could still\nbe valid on primary as the walsender is still active. This may be more\nof a theoretical point as in running system there will probably be\nsome activity but I think this needs some thougths.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 6 Mar 2024 16:49:04 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 6, 2024 at 4:28 PM Amit Kapila <[email protected]> wrote:\n>\n> IIUC, the current conflict_reason is primarily used to determine\n> logical slots on standby that got invalidated due to recovery time\n> conflict. On the primary, it will also show logical slots that got\n> invalidated due to the corresponding WAL got removed. Is that\n> understanding correct?\n\nThat's right.\n\n> If so, we are already sort of overloading this\n> column. However, now adding more invalidation reasons that won't\n> happen during recovery conflict handling will change entirely the\n> purpose (as per the name we use) of this variable. I think\n> invalidation_reason could depict this column correctly but OTOH I\n> guess it would lose its original meaning/purpose.\n\nHm. 
I get the concern. Are you okay with having inavlidation_reason\nseparately for both logical and physical slots? In such a case,\nlogical slots that got invalidated on the standby will have duplicate\ninfo in conflict_reason and invalidation_reason, is this fine?\n\nAnother idea is to make 'conflict_reason text' as a 'conflicting\nboolean' again (revert 007693f2a3), and have 'invalidation_reason\ntext' for both logical and physical slots. So, whenever 'conflicting'\nis true, one can look at invalidation_reason for the reason for\nconflict. How does this sound?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 8 Mar 2024 20:08:19 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 6, 2024 at 4:49 PM Amit Kapila <[email protected]> wrote:\n>\n> You might want to consider its interaction with sync slots on standby.\n> Say, there is no activity on slots in terms of processing the changes\n> for slots. Now, we won't perform sync of such slots on standby showing\n> them inactive as per your new criteria where as same slots could still\n> be valid on primary as the walsender is still active. This may be more\n> of a theoretical point as in running system there will probably be\n> some activity but I think this needs some thougths.\n\nI believe the xmin and catalog_xmin of the sync slots on the standby\nkeep advancing depending on the slots on the primary, no? If yes, the\nXID age based invalidation shouldn't be a problem.\n\nI believe there are no walsenders started for the sync slots on the\nstandbys, right? If yes, the inactive timeout based invalidation also\nshouldn't be a problem. Because, the inactive timeouts for a slot are\ntracked only for walsenders because they are the ones that typically\nhold replication slots for longer durations and for real replication\nuse. We did a similar thing in a recent commit [1].\n\nIs my understanding right? Do you still see any problems with it?\n\n[1]\ncommit 7c3fb505b14e86581b6a052075a294c78c91b123\nAuthor: Amit Kapila <[email protected]>\nDate: Tue Nov 21 07:59:53 2023 +0530\n\n Log messages for replication slot acquisition and release.\n.........\n Note that these messages are emitted only for walsenders but not for\n backends. This is because walsenders are the ones that typically hold\n replication slots for longer durations, unlike backends which hold them\n for executing replication related functions.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 8 Mar 2024 22:42:20 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 8, 2024 at 8:08 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Mar 6, 2024 at 4:28 PM Amit Kapila <[email protected]> wrote:\n> >\n> > IIUC, the current conflict_reason is primarily used to determine\n> > logical slots on standby that got invalidated due to recovery time\n> > conflict. On the primary, it will also show logical slots that got\n> > invalidated due to the corresponding WAL got removed. 
Is that\n> > understanding correct?\n>\n> That's right.\n>\n> > If so, we are already sort of overloading this\n> > column. However, now adding more invalidation reasons that won't\n> > happen during recovery conflict handling will change entirely the\n> > purpose (as per the name we use) of this variable. I think\n> > invalidation_reason could depict this column correctly but OTOH I\n> > guess it would lose its original meaning/purpose.\n>\n> Hm. I get the concern. Are you okay with having inavlidation_reason\n> separately for both logical and physical slots? In such a case,\n> logical slots that got invalidated on the standby will have duplicate\n> info in conflict_reason and invalidation_reason, is this fine?\n>\n\nIf we have duplicate information in two columns that could be\nconfusing for users. BTW, isn't the recovery conflict occur only\nbecause of rows_removed and wal_level_insufficient reasons? The\nwal_removed or the new reasons you are proposing can't happen because\nof recovery conflict. Am, I missing something here?\n\n> Another idea is to make 'conflict_reason text' as a 'conflicting\n> boolean' again (revert 007693f2a3), and have 'invalidation_reason\n> text' for both logical and physical slots. So, whenever 'conflicting'\n> is true, one can look at invalidation_reason for the reason for\n> conflict. How does this sound?\n>\n\nSo, does this mean that conflicting will only be true for some of the\nreasons (say wal_level_insufficient, rows_removed, wal_removed) and\nlogical slots but not for others? I think that will also not eliminate\nthe duplicate information as user could have deduced that from single\ncolumn\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 11 Mar 2024 11:25:57 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 8, 2024 at 10:42 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Mar 6, 2024 at 4:49 PM Amit Kapila <[email protected]> wrote:\n> >\n> > You might want to consider its interaction with sync slots on standby.\n> > Say, there is no activity on slots in terms of processing the changes\n> > for slots. Now, we won't perform sync of such slots on standby showing\n> > them inactive as per your new criteria where as same slots could still\n> > be valid on primary as the walsender is still active. This may be more\n> > of a theoretical point as in running system there will probably be\n> > some activity but I think this needs some thougths.\n>\n> I believe the xmin and catalog_xmin of the sync slots on the standby\n> keep advancing depending on the slots on the primary, no? If yes, the\n> XID age based invalidation shouldn't be a problem.\n>\n> I believe there are no walsenders started for the sync slots on the\n> standbys, right? If yes, the inactive timeout based invalidation also\n> shouldn't be a problem. Because, the inactive timeouts for a slot are\n> tracked only for walsenders because they are the ones that typically\n> hold replication slots for longer durations and for real replication\n> use. We did a similar thing in a recent commit [1].\n>\n> Is my understanding right?\n>\n\nYes, your understanding is correct. I wanted us to consider having new\nparameters like 'inactive_replication_slot_timeout' to be at\nslot-level instead of GUC. 
I think this new parameter doesn't seem to\nbe the similar as 'max_slot_wal_keep_size' which leads to truncation\nof WAL at global and then invalidates the appropriate slots. OTOH, the\n'inactive_replication_slot_timeout' doesn't appear to have a similar\nglobal effect. The other thing we should consider is what if the\ncheckpoint happens at a timeout greater than\n'inactive_replication_slot_timeout'? Shall, we consider doing it via\nsome other background process or do we think checkpointer is the best\nwe can have?\n\n>\n Do you still see any problems with it?\n>\n\nSorry, I haven't done any detailed review yet so can't say with\nconfidence whether there is any problem or not w.r.t sync slots.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 11 Mar 2024 15:44:46 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 6, 2024 at 2:47 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Thanks. v8-0001 is how it looks. Please see the v8 patch set with this change.\n>\n\nCommit message says: \"Currently postgres has the ability to invalidate\ninactive replication slots based on the amount of WAL (set via\nmax_slot_wal_keep_size GUC) that will be needed for the slots in case\nthey become active. However, choosing a default value for\nmax_slot_wal_keep_size is tricky. Because the amount of WAL a customer\ngenerates, and their allocated storage will vary greatly in\nproduction, making it difficult to pin down a one-size-fits-all value.\nIt is often easy for developers to set an XID age (age of slot's xmin\nor catalog_xmin) of say 1 or 1.5 billion, after which the slots get\ninvalidated.\"\n\nI don't see how it will be easier for the user to choose the default\nvalue of 'max_slot_xid_age' compared to 'max_slot_wal_keep_size'. But,\nI agree similar to 'max_slot_wal_keep_size', 'max_slot_xid_age' can be\nanother parameter to allow vacuum to proceed removing the rows which\notherwise it wouldn't have been as those would be required by some\nslot. Now, if this understanding is correct, we should probably make\nthis invalidation happen by (auto)vacuum after computing the age based\non this new parameter.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 11 Mar 2024 16:09:27 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 11, 2024 at 04:09:27PM +0530, Amit Kapila wrote:\n> I don't see how it will be easier for the user to choose the default\n> value of 'max_slot_xid_age' compared to 'max_slot_wal_keep_size'. 
But,\n> I agree similar to 'max_slot_wal_keep_size', 'max_slot_xid_age' can be\n> another parameter to allow vacuum to proceed removing the rows which\n> otherwise it wouldn't have been as those would be required by some\n> slot.\n\nYeah, the idea is to help prevent transaction ID wraparound, so I would\nexpect max_slot_xid_age to ordinarily be set relatively high, i.e., 1.5B+.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 11 Mar 2024 09:13:57 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Fri, Mar 08, 2024 at 10:42:20PM +0530, Bharath Rupireddy wrote:\n> On Wed, Mar 6, 2024 at 4:49 PM Amit Kapila <[email protected]> wrote:\n> >\n> > You might want to consider its interaction with sync slots on standby.\n> > Say, there is no activity on slots in terms of processing the changes\n> > for slots. Now, we won't perform sync of such slots on standby showing\n> > them inactive as per your new criteria where as same slots could still\n> > be valid on primary as the walsender is still active. This may be more\n> > of a theoretical point as in running system there will probably be\n> > some activity but I think this needs some thougths.\n> \n> I believe the xmin and catalog_xmin of the sync slots on the standby\n> keep advancing depending on the slots on the primary, no? If yes, the\n> XID age based invalidation shouldn't be a problem.\n> \n> I believe there are no walsenders started for the sync slots on the\n> standbys, right? If yes, the inactive timeout based invalidation also\n> shouldn't be a problem. Because, the inactive timeouts for a slot are\n> tracked only for walsenders because they are the ones that typically\n> hold replication slots for longer durations and for real replication\n> use. We did a similar thing in a recent commit [1].\n> \n> Is my understanding right? Do you still see any problems with it?\n\nWould that make sense to \"simply\" discard/prevent those kind of invalidations\nfor \"synced\" slot on standby? I mean, do they make sense given the fact that\nthose slots are not usable until the standby is promoted?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 12 Mar 2024 07:54:16 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 12, 2024 at 1:24 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Fri, Mar 08, 2024 at 10:42:20PM +0530, Bharath Rupireddy wrote:\n> > On Wed, Mar 6, 2024 at 4:49 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > You might want to consider its interaction with sync slots on standby.\n> > > Say, there is no activity on slots in terms of processing the changes\n> > > for slots. Now, we won't perform sync of such slots on standby showing\n> > > them inactive as per your new criteria where as same slots could still\n> > > be valid on primary as the walsender is still active. 
This may be more\n> > > of a theoretical point as in running system there will probably be\n> > > some activity but I think this needs some thougths.\n> >\n> > I believe the xmin and catalog_xmin of the sync slots on the standby\n> > keep advancing depending on the slots on the primary, no? If yes, the\n> > XID age based invalidation shouldn't be a problem.\n> >\n> > I believe there are no walsenders started for the sync slots on the\n> > standbys, right? If yes, the inactive timeout based invalidation also\n> > shouldn't be a problem. Because, the inactive timeouts for a slot are\n> > tracked only for walsenders because they are the ones that typically\n> > hold replication slots for longer durations and for real replication\n> > use. We did a similar thing in a recent commit [1].\n> >\n> > Is my understanding right? Do you still see any problems with it?\n>\n> Would that make sense to \"simply\" discard/prevent those kind of invalidations\n> for \"synced\" slot on standby? I mean, do they make sense given the fact that\n> those slots are not usable until the standby is promoted?\n>\n\nAFAIR, we don't prevent similar invalidations due to\n'max_slot_wal_keep_size' for sync slots, so why to prevent it for\nthese new parameters? This will unnecessarily create inconsistency in\nthe invalidation behavior.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 12 Mar 2024 17:51:43 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 11, 2024 at 11:26 AM Amit Kapila <[email protected]> wrote:\n>\n> > Hm. I get the concern. Are you okay with having inavlidation_reason\n> > separately for both logical and physical slots? In such a case,\n> > logical slots that got invalidated on the standby will have duplicate\n> > info in conflict_reason and invalidation_reason, is this fine?\n> >\n>\n> If we have duplicate information in two columns that could be\n> confusing for users. BTW, isn't the recovery conflict occur only\n> because of rows_removed and wal_level_insufficient reasons? The\n> wal_removed or the new reasons you are proposing can't happen because\n> of recovery conflict. Am, I missing something here?\n\nMy understanding aligns with yours that the rows_removed and\nwal_level_insufficient invalidations can occur only upon recovery\nconflict.\n\nFWIW, a test named 'synchronized slot has been invalidated' in\n040_standby_failover_slots_sync.pl inappropriately uses\nconflict_reason = 'wal_removed' logical slot on standby. As per the\nabove understanding, it's inappropriate to use conflict_reason here\nbecause wal_removed invalidation doesn't conflict with recovery.\n\n> > Another idea is to make 'conflict_reason text' as a 'conflicting\n> > boolean' again (revert 007693f2a3), and have 'invalidation_reason\n> > text' for both logical and physical slots. So, whenever 'conflicting'\n> > is true, one can look at invalidation_reason for the reason for\n> > conflict. How does this sound?\n> >\n>\n> So, does this mean that conflicting will only be true for some of the\n> reasons (say wal_level_insufficient, rows_removed, wal_removed) and\n> logical slots but not for others? 
I think that will also not eliminate\n> the duplicate information as user could have deduced that from single\n> column.\n\nSo, how about we turn conflict_reason to only report the reasons that\nactually cause conflict with recovery for logical slots, something\nlike below, and then have invalidation_cause as a generic column for\nall sorts of invalidation reasons for both logical and physical slots?\n\nReplicationSlotInvalidationCause cause = slot_contents.data.invalidated;\n\nif (slot_contents.data.database == InvalidOid ||\n cause == RS_INVAL_NONE ||\n cause != RS_INVAL_HORIZON ||\n cause != RS_INVAL_WAL_LEVEL)\n{\n nulls[i++] = true;\n}\nelse\n{\n Assert(cause == RS_INVAL_HORIZON || cause == RS_INVAL_WAL_LEVEL);\n\n values[i++] = CStringGetTextDatum(SlotInvalidationCauses[cause]);\n}\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 12 Mar 2024 20:55:37 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 12, 2024 at 05:51:43PM +0530, Amit Kapila wrote:\n> On Tue, Mar 12, 2024 at 1:24 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Fri, Mar 08, 2024 at 10:42:20PM +0530, Bharath Rupireddy wrote:\n> > > On Wed, Mar 6, 2024 at 4:49 PM Amit Kapila <[email protected]> wrote:\n> > > >\n> > > > You might want to consider its interaction with sync slots on standby.\n> > > > Say, there is no activity on slots in terms of processing the changes\n> > > > for slots. Now, we won't perform sync of such slots on standby showing\n> > > > them inactive as per your new criteria where as same slots could still\n> > > > be valid on primary as the walsender is still active. This may be more\n> > > > of a theoretical point as in running system there will probably be\n> > > > some activity but I think this needs some thougths.\n> > >\n> > > I believe the xmin and catalog_xmin of the sync slots on the standby\n> > > keep advancing depending on the slots on the primary, no? If yes, the\n> > > XID age based invalidation shouldn't be a problem.\n> > >\n> > > I believe there are no walsenders started for the sync slots on the\n> > > standbys, right? If yes, the inactive timeout based invalidation also\n> > > shouldn't be a problem. Because, the inactive timeouts for a slot are\n> > > tracked only for walsenders because they are the ones that typically\n> > > hold replication slots for longer durations and for real replication\n> > > use. We did a similar thing in a recent commit [1].\n> > >\n> > > Is my understanding right? Do you still see any problems with it?\n> >\n> > Would that make sense to \"simply\" discard/prevent those kind of invalidations\n> > for \"synced\" slot on standby? I mean, do they make sense given the fact that\n> > those slots are not usable until the standby is promoted?\n> >\n> \n> AFAIR, we don't prevent similar invalidations due to\n> 'max_slot_wal_keep_size' for sync slots,\n\nRight, we'd invalidate them on the standby should the standby sync slot restart_lsn\nexceeds the limit.\n\n> so why to prevent it for\n> these new parameters? This will unnecessarily create inconsistency in\n> the invalidation behavior.\n\nYeah, but I think wal removal has a direct impact on the slot usuability which\nis probably not the case with the new XID and Timeout ones. 
That's why I thought\nabout handling them differently (but I'm also fine if that's not the case).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 12 Mar 2024 15:40:59 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 12, 2024 at 5:51 PM Amit Kapila <[email protected]> wrote:\n>\n> > Would that make sense to \"simply\" discard/prevent those kind of invalidations\n> > for \"synced\" slot on standby? I mean, do they make sense given the fact that\n> > those slots are not usable until the standby is promoted?\n>\n> AFAIR, we don't prevent similar invalidations due to\n> 'max_slot_wal_keep_size' for sync slots, so why to prevent it for\n> these new parameters? This will unnecessarily create inconsistency in\n> the invalidation behavior.\n\nRight. +1 to keep the behaviour consistent for all invalidations.\nHowever, an assertion that inactive_timeout isn't set for synced slots\non the standby isn't a bad idea because we rely on the fact that\nwalsenders aren't started for synced slots. Again, I think it misses\nthe consistency in the invalidation behaviour.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 12 Mar 2024 21:14:40 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 12, 2024 at 9:11 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> > AFAIR, we don't prevent similar invalidations due to\n> > 'max_slot_wal_keep_size' for sync slots,\n>\n> Right, we'd invalidate them on the standby should the standby sync slot restart_lsn\n> exceeds the limit.\n\nRight. Help me understand this a bit - is the wal_removed invalidation\ngoing to conflict with recovery on the standby?\n\nPer the discussion upthread, I'm trying to understand what\ninvalidation reasons will exactly cause conflict with recovery? Is it\njust rows_removed and wal_level_insufficient invalidations? My\nunderstanding on the conflict with recovery and invalidation reason\nhas been a bit off track. Perhaps, we need to clarify these two things\nin the docs for the end users as well?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 12 Mar 2024 21:19:35 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 11, 2024 at 3:44 PM Amit Kapila <[email protected]> wrote:\n>\n> Yes, your understanding is correct. I wanted us to consider having new\n> parameters like 'inactive_replication_slot_timeout' to be at\n> slot-level instead of GUC. I think this new parameter doesn't seem to\n> be the similar as 'max_slot_wal_keep_size' which leads to truncation\n> of WAL at global and then invalidates the appropriate slots. OTOH, the\n> 'inactive_replication_slot_timeout' doesn't appear to have a similar\n> global effect.\n\nlast_inactive_at is tracked for each slot using which slots get\ninvalidated based on inactive_replication_slot_timeout. 
It's like\nmax_slot_wal_keep_size invalidating slots based on restart_lsn. In a\nway, both are similar, right?\n\n> The other thing we should consider is what if the\n> checkpoint happens at a timeout greater than\n> 'inactive_replication_slot_timeout'?\n\nIn such a case, the slots get invalidated upon the next checkpoint as\nthe (current_checkpointer_timeout - last_inactive_at) will then be\ngreater than inactive_replication_slot_timeout.\n\n> Shall, we consider doing it via\n> some other background process or do we think checkpointer is the best\n> we can have?\n\nThe same problem exists if we do it with some other background\nprocess. I think the checkpointer is best because it already\ninvalidates slots for wal_removed cause, and flushes all replication\nslots to disk. Moving this new invalidation functionality into some\nother background process such as autovacuum will not only burden that\nprocess' work but also mix up the unique functionality of that\nbackground process.\n\nHaving said above, I'm open to ideas from others as I'm not so sure if\nthere's any issue with checkpointer invalidating the slots for new\nreasons.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 12 Mar 2024 22:09:48 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 11, 2024 at 4:09 PM Amit Kapila <[email protected]> wrote:\n>\n> I don't see how it will be easier for the user to choose the default\n> value of 'max_slot_xid_age' compared to 'max_slot_wal_keep_size'. But,\n> I agree similar to 'max_slot_wal_keep_size', 'max_slot_xid_age' can be\n> another parameter to allow vacuum to proceed removing the rows which\n> otherwise it wouldn't have been as those would be required by some\n> slot. Now, if this understanding is correct, we should probably make\n> this invalidation happen by (auto)vacuum after computing the age based\n> on this new parameter.\n\nCurrently, the patch computes the XID age in the checkpointer using\nthe next XID (gets from ReadNextFullTransactionId()) and slot's xmin\nand catalog_xmin. I think the checkpointer is best because it already\ninvalidates slots for wal_removed cause, and flushes all replication\nslots to disk. Moving this new invalidation functionality into some\nother background process such as autovacuum will not only burden that\nprocess' work but also mix up the unique functionality of that\nbackground process.\n\nHaving said above, I'm open to ideas from others as I'm not so sure if\nthere's any issue with checkpointer invalidating the slots for new\nreasons.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 12 Mar 2024 22:51:49 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 12, 2024 at 8:55 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Mon, Mar 11, 2024 at 11:26 AM Amit Kapila <[email protected]> wrote:\n> >\n> > > Hm. I get the concern. Are you okay with having inavlidation_reason\n> > > separately for both logical and physical slots? 
In such a case,\n> > > logical slots that got invalidated on the standby will have duplicate\n> > > info in conflict_reason and invalidation_reason, is this fine?\n> > >\n> >\n> > If we have duplicate information in two columns that could be\n> > confusing for users. BTW, isn't the recovery conflict occur only\n> > because of rows_removed and wal_level_insufficient reasons? The\n> > wal_removed or the new reasons you are proposing can't happen because\n> > of recovery conflict. Am, I missing something here?\n>\n> My understanding aligns with yours that the rows_removed and\n> wal_level_insufficient invalidations can occur only upon recovery\n> conflict.\n>\n> FWIW, a test named 'synchronized slot has been invalidated' in\n> 040_standby_failover_slots_sync.pl inappropriately uses\n> conflict_reason = 'wal_removed' logical slot on standby. As per the\n> above understanding, it's inappropriate to use conflict_reason here\n> because wal_removed invalidation doesn't conflict with recovery.\n>\n> > > Another idea is to make 'conflict_reason text' as a 'conflicting\n> > > boolean' again (revert 007693f2a3), and have 'invalidation_reason\n> > > text' for both logical and physical slots. So, whenever 'conflicting'\n> > > is true, one can look at invalidation_reason for the reason for\n> > > conflict. How does this sound?\n> > >\n> >\n> > So, does this mean that conflicting will only be true for some of the\n> > reasons (say wal_level_insufficient, rows_removed, wal_removed) and\n> > logical slots but not for others? I think that will also not eliminate\n> > the duplicate information as user could have deduced that from single\n> > column.\n>\n> So, how about we turn conflict_reason to only report the reasons that\n> actually cause conflict with recovery for logical slots, something\n> like below, and then have invalidation_cause as a generic column for\n> all sorts of invalidation reasons for both logical and physical slots?\n>\n\nIf our above understanding is correct then coflict_reason will be a\nsubset of invalidation_reason. If so, whatever way we arrange this\ninformation, there will be some sort of duplicity unless we just have\none column 'invalidation_reason' and update the docs to interpret it\ncorrectly for conflicts.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 13 Mar 2024 09:21:32 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 12, 2024 at 9:11 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Tue, Mar 12, 2024 at 05:51:43PM +0530, Amit Kapila wrote:\n> > On Tue, Mar 12, 2024 at 1:24 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n>\n> > so why to prevent it for\n> > these new parameters? This will unnecessarily create inconsistency in\n> > the invalidation behavior.\n>\n> Yeah, but I think wal removal has a direct impact on the slot usuability which\n> is probably not the case with the new XID and Timeout ones.\n>\n\nBTW, is XID the based parameter 'max_slot_xid_age' not have similarity\nwith 'max_slot_wal_keep_size'? I think it will impact the rows we\nremoved based on xid horizons. 
Don't we need to consider it while\nvacuum computing the xid horizons in ComputeXidHorizons() similar to\nwhat we do for WAL w.r.t 'max_slot_wal_keep_size'?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 13 Mar 2024 09:38:18 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 12, 2024 at 10:10 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Mon, Mar 11, 2024 at 3:44 PM Amit Kapila <[email protected]> wrote:\n> >\n> > Yes, your understanding is correct. I wanted us to consider having new\n> > parameters like 'inactive_replication_slot_timeout' to be at\n> > slot-level instead of GUC. I think this new parameter doesn't seem to\n> > be the similar as 'max_slot_wal_keep_size' which leads to truncation\n> > of WAL at global and then invalidates the appropriate slots. OTOH, the\n> > 'inactive_replication_slot_timeout' doesn't appear to have a similar\n> > global effect.\n>\n> last_inactive_at is tracked for each slot using which slots get\n> invalidated based on inactive_replication_slot_timeout. It's like\n> max_slot_wal_keep_size invalidating slots based on restart_lsn. In a\n> way, both are similar, right?\n>\n\nThere is some similarity but 'max_slot_wal_keep_size' leads to\ntruncation of WAL which in turn leads to invalidation of slots. Here,\nI am also trying to be cautious in adding a GUC unless it is required\nor having a slot-level parameter doesn't serve the need. Having said\nthat, I see that there is an argument that we should follow the path\nof 'max_slot_wal_keep_size' GUC and there is some value to it but\nstill I think avoiding a new GUC for inactivity in the slot would\noutweigh.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 13 Mar 2024 09:54:15 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 6, 2024 at 2:47 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Mar 6, 2024 at 2:42 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On Tue, Mar 05, 2024 at 01:44:43PM -0600, Nathan Bossart wrote:\n> > > On Wed, Mar 06, 2024 at 12:50:38AM +0530, Bharath Rupireddy wrote:\n> > > > On Mon, Mar 4, 2024 at 2:11 PM Bertrand Drouvot\n> > > > <[email protected]> wrote:\n> > > >> On Sun, Mar 03, 2024 at 03:44:34PM -0600, Nathan Bossart wrote:\n> > > >> > Unless I am misinterpreting some details, ISTM we could rename this column\n> > > >> > to invalidation_reason and use it for both logical and physical slots. I'm\n> > > >> > not seeing a strong need for another column.\n> > > >>\n> > > >> Yeah having two columns was more for convenience purpose. Without the \"conflict\"\n> > > >> one, a slot conflicting with recovery would be \"a logical slot having a non NULL\n> > > >> invalidation_reason\".\n> > > >>\n> > > >> I'm also fine with one column if most of you prefer that way.\n> > > >\n> > > > While we debate on the above, please find the attached v7 patch set\n> > > > after rebasing.\n> > >\n> > > It looks like Bertrand is okay with reusing the same column for both\n> > > logical and physical slots\n> >\n> > Yeah, I'm okay with one column.\n>\n> Thanks. v8-0001 is how it looks. Please see the v8 patch set with this change.\n\nJFYI, the patch does not apply to the head. 
There is a conflict in\nmultiple files.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 13 Mar 2024 11:13:18 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "> JFYI, the patch does not apply to the head. There is a conflict in\n> multiple files.\n\nFor review purposes, I applied v8 to the March 6 code-base. I have yet\nto review in detail, please find my initial thoughts:\n\n1)\nI found that 'inactive_replication_slot_timeout' works only if there\nwas any walsender ever started for that slot . The logic is under\n'am_walsender' check. Is this intentional?\nIf I create a slot and use only pg_logical_slot_get_changes or\npg_replication_slot_advance on it, it never gets invalidated due to\ntimeout. While, when I set 'max_slot_xid_age' or say\n'max_slot_wal_keep_size' to a lower value, the said slot is\ninvalidated correctly with 'xid_aged' and 'wal_removed' reasons\nrespectively.\n\nExample:\nWith inactive_replication_slot_timeout=1min, test1_3 is the slot for\nwhich there is no walsender and only advance and get_changes SQL\nfunctions were called; test1_4 is the one for which pg_recvlogical was\nrun for a second.\n\n test1_3 | 785 | | reserved | | t\n | |\n test1_4 | 798 | | lost | inactive_timeout | t |\n2024-03-13 11:52:41.58446+05:30 |\n\nAnd when inactive_replication_slot_timeout=0 and max_slot_xid_age=10\n\n test1_3 | 785 | | lost | xid_aged | t\n | |\n test1_4 | 798 | | lost | inactive_timeout | t |\n2024-03-13 11:52:41.58446+05:30 |\n\n\n2)\nThe msg for patch 3 says:\n--------------\na) when replication slots is lying inactive for a day or so using\nlast_inactive_at metric,\nb) when a replication slot is becoming inactive too frequently using\nlast_inactive_at metric.\n--------------\n I think in b, you want to refer to inactive_count instead of last_inactive_at?\n\n3)\nI do not see invalidation_reason updated for 2 new reasons in system-views.sgml\n\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 13 Mar 2024 12:45:06 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 12, 2024 at 09:19:35PM +0530, Bharath Rupireddy wrote:\n> On Tue, Mar 12, 2024 at 9:11 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > > AFAIR, we don't prevent similar invalidations due to\n> > > 'max_slot_wal_keep_size' for sync slots,\n> >\n> > Right, we'd invalidate them on the standby should the standby sync slot restart_lsn\n> > exceeds the limit.\n> \n> Right. Help me understand this a bit - is the wal_removed invalidation\n> going to conflict with recovery on the standby?\n\nI don't think so, as it's not directly related to recovery. The slot will\nbe invalided on the standby though.\n\n> Per the discussion upthread, I'm trying to understand what\n> invalidation reasons will exactly cause conflict with recovery? Is it\n> just rows_removed and wal_level_insufficient invalidations? 
\n\nYes, that's the ones added in be87200efd.\n\nSee the error messages on a standby:\n\n== wal removal\n\npostgres=# SELECT * FROM pg_logical_slot_get_changes('lsub4_slot', NULL, NULL, 'include-xids', '0');\nERROR: can no longer get changes from replication slot \"lsub4_slot\"\nDETAIL: This slot has been invalidated because it exceeded the maximum reserved size.\n\n== wal level\n\npostgres=# select conflict_reason from pg_replication_slots where slot_name = 'lsub5_slot';;\n conflict_reason\n------------------------\n wal_level_insufficient\n(1 row)\n\npostgres=# SELECT * FROM pg_logical_slot_get_changes('lsub5_slot', NULL, NULL, 'include-xids', '0');\nERROR: can no longer get changes from replication slot \"lsub5_slot\"\nDETAIL: This slot has been invalidated because it was conflicting with recovery.\n\n== rows removal\n\npostgres=# select conflict_reason from pg_replication_slots where slot_name = 'lsub6_slot';;\n conflict_reason\n-----------------\n rows_removed\n(1 row)\n\npostgres=# SELECT * FROM pg_logical_slot_get_changes('lsub6_slot', NULL, NULL, 'include-xids', '0');\nERROR: can no longer get changes from replication slot \"lsub6_slot\"\nDETAIL: This slot has been invalidated because it was conflicting with recovery.\n\nAs you can see, only wal level and rows removal are mentioning conflict with\nrecovery.\n\nSo, are we already \"wrong\" mentioning \"wal_removed\" in conflict_reason?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 13 Mar 2024 07:21:18 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 8, 2024 at 10:42 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Mar 6, 2024 at 4:49 PM Amit Kapila <[email protected]> wrote:\n> >\n> > You might want to consider its interaction with sync slots on standby.\n> > Say, there is no activity on slots in terms of processing the changes\n> > for slots. Now, we won't perform sync of such slots on standby showing\n> > them inactive as per your new criteria where as same slots could still\n> > be valid on primary as the walsender is still active. This may be more\n> > of a theoretical point as in running system there will probably be\n> > some activity but I think this needs some thougths.\n>\n> I believe the xmin and catalog_xmin of the sync slots on the standby\n> keep advancing depending on the slots on the primary, no? If yes, the\n> XID age based invalidation shouldn't be a problem.\n\nIf the user has not enabled slot-sync worker and is relying on the SQL\nfunction pg_sync_replication_slots(), then the xmin and catalog_xmin\nof synced slots may not keep on advancing. These will be advanced only\non next run of function. But meanwhile the synced slots may be\ninvalidated due to 'xid_aged'. Then the next time, when user runs\npg_sync_replication_slots() again, the invalidated slots will be\ndropped and will be recreated by this SQL function (provided they are\nvalid on primary and are invalidated on standby alone). I am not\nstating that it is a problem, but we need to think if this is what we\nwant. Secondly, the behaviour is not same with 'inactive_timeout'\ninvalidation. Synced slots are immune to 'inactive_timeout'\ninvalidation as this invalidation happens only in walsender, while\nthese are not immune to 'xid_aged' invalidation. 
So again, needs some\nthoughts here.\n\n> I believe there are no walsenders started for the sync slots on the\n> standbys, right? If yes, the inactive timeout based invalidation also\n> shouldn't be a problem. Because, the inactive timeouts for a slot are\n> tracked only for walsenders because they are the ones that typically\n> hold replication slots for longer durations and for real replication\n> use. We did a similar thing in a recent commit [1].\n>\n> Is my understanding right? Do you still see any problems with it?\n\nI have explained the situation above for us to think over it better.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 13 Mar 2024 14:45:14 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 13, 2024 at 9:21 AM Amit Kapila <[email protected]> wrote:\n>\n> > So, how about we turn conflict_reason to only report the reasons that\n> > actually cause conflict with recovery for logical slots, something\n> > like below, and then have invalidation_cause as a generic column for\n> > all sorts of invalidation reasons for both logical and physical slots?\n>\n> If our above understanding is correct then coflict_reason will be a\n> subset of invalidation_reason. If so, whatever way we arrange this\n> information, there will be some sort of duplicity unless we just have\n> one column 'invalidation_reason' and update the docs to interpret it\n> correctly for conflicts.\n\nYes, there will be some sort of duplicity if we emit conflict_reason\nas a text field. However, I still think the better way is to turn\nconflict_reason text to conflict boolean and set it to true only on\nrows_removed and wal_level_insufficient invalidations. 
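To illustrate the proposed shape (a sketch against the columns being proposed here, not the view as it exists today), a query could then look like:\n\nSELECT slot_name, conflicting, invalidation_reason\nFROM pg_replication_slots\nWHERE conflicting;\n\n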
When conflict\nboolean is true, one (including all the tests that we've added\nrecently) can look for invalidation_reason text field for the reason.\nThis sounds reasonable to me as opposed to we just mentioning in the\ndocs that \"if invalidation_reason is rows_removed or\nwal_level_insufficient it's the reason for conflict with recovery\".\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 13 Mar 2024 21:24:17 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 13, 2024 at 12:51 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> See the error messages on a standby:\n>\n> == wal removal\n>\n> postgres=# SELECT * FROM pg_logical_slot_get_changes('lsub4_slot', NULL, NULL, 'include-xids', '0');\n> ERROR: can no longer get changes from replication slot \"lsub4_slot\"\n> DETAIL: This slot has been invalidated because it exceeded the maximum reserved size.\n>\n> == wal level\n>\n> postgres=# select conflict_reason from pg_replication_slots where slot_name = 'lsub5_slot';;\n> conflict_reason\n> ------------------------\n> wal_level_insufficient\n> (1 row)\n>\n> postgres=# SELECT * FROM pg_logical_slot_get_changes('lsub5_slot', NULL, NULL, 'include-xids', '0');\n> ERROR: can no longer get changes from replication slot \"lsub5_slot\"\n> DETAIL: This slot has been invalidated because it was conflicting with recovery.\n>\n> == rows removal\n>\n> postgres=# select conflict_reason from pg_replication_slots where slot_name = 'lsub6_slot';;\n> conflict_reason\n> -----------------\n> rows_removed\n> (1 row)\n>\n> postgres=# SELECT * FROM pg_logical_slot_get_changes('lsub6_slot', NULL, NULL, 'include-xids', '0');\n> ERROR: can no longer get changes from replication slot \"lsub6_slot\"\n> DETAIL: This slot has been invalidated because it was conflicting with recovery.\n>\n> As you can see, only wal level and rows removal are mentioning conflict with\n> recovery.\n>\n> So, are we already \"wrong\" mentioning \"wal_removed\" in conflict_reason?\n\nIt looks like yes. So, how about we fix it the way proposed here -\nhttps://www.postgresql.org/message-id/CALj2ACVd_dizYQiZwwUfsb%2BhG-fhGYo_kEDq0wn_vNwQvOrZHg%40mail.gmail.com?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 13 Mar 2024 22:06:27 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 13, 2024 at 11:13 AM shveta malik <[email protected]> wrote:\n>\n> > Thanks. v8-0001 is how it looks. Please see the v8 patch set with this change.\n>\n> JFYI, the patch does not apply to the head. There is a conflict in\n> multiple files.\n\nThanks for looking into this. I noticed that the v8 patches needed\nrebase. Before I go do anything with the patches, I'm trying to gain\nconsensus on the design. Following is the summary of design choices\nwe've discussed so far:\n1) conflict_reason vs invalidation_reason.\n2) When to compute the XID age?\n3) Where to do the invalidations? 
Is it in the checkpointer or\nautovacuum or some other process?\n4) Interaction of these new invalidations with sync slots on the standby.\n\nI hope to get on to these one after the other.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 13 Mar 2024 22:16:13 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 13, 2024 at 9:24 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Mar 13, 2024 at 9:21 AM Amit Kapila <[email protected]> wrote:\n> >\n> > > So, how about we turn conflict_reason to only report the reasons that\n> > > actually cause conflict with recovery for logical slots, something\n> > > like below, and then have invalidation_cause as a generic column for\n> > > all sorts of invalidation reasons for both logical and physical slots?\n> >\n> > If our above understanding is correct then coflict_reason will be a\n> > subset of invalidation_reason. If so, whatever way we arrange this\n> > information, there will be some sort of duplicity unless we just have\n> > one column 'invalidation_reason' and update the docs to interpret it\n> > correctly for conflicts.\n>\n> Yes, there will be some sort of duplicity if we emit conflict_reason\n> as a text field. However, I still think the better way is to turn\n> conflict_reason text to conflict boolean and set it to true only on\n> rows_removed and wal_level_insufficient invalidations. When conflict\n> boolean is true, one (including all the tests that we've added\n> recently) can look for invalidation_reason text field for the reason.\n> This sounds reasonable to me as opposed to we just mentioning in the\n> docs that \"if invalidation_reason is rows_removed or\n> wal_level_insufficient it's the reason for conflict with recovery\".\n>\n\nFair point. I think we can go either way. Bertrand, Nathan, and\nothers, do you have an opinion on this matter?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 14 Mar 2024 12:24:00 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 13, 2024 at 10:16 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Mar 13, 2024 at 11:13 AM shveta malik <[email protected]> wrote:\n> >\n> > > Thanks. v8-0001 is how it looks. Please see the v8 patch set with this change.\n> >\n> > JFYI, the patch does not apply to the head. There is a conflict in\n> > multiple files.\n>\n> Thanks for looking into this. I noticed that the v8 patches needed\n> rebase. Before I go do anything with the patches, I'm trying to gain\n> consensus on the design. Following is the summary of design choices\n> we've discussed so far:\n> 1) conflict_reason vs invalidation_reason.\n> 2) When to compute the XID age?\n>\n\nI feel we should focus on two things (a) one is to introduce a new\ncolumn invalidation_reason, and (b) let's try to first complete\ninvalidation due to timeout. 
We can look into XID stuff if time\npermits, remember, we don't have ample time left.\n\nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 14 Mar 2024 12:27:26 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 14, 2024 at 12:24 PM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Mar 13, 2024 at 9:24 PM Bharath Rupireddy\n> >\n> > Yes, there will be some sort of duplicity if we emit conflict_reason\n> > as a text field. However, I still think the better way is to turn\n> > conflict_reason text to conflict boolean and set it to true only on\n> > rows_removed and wal_level_insufficient invalidations. When conflict\n> > boolean is true, one (including all the tests that we've added\n> > recently) can look for invalidation_reason text field for the reason.\n> > This sounds reasonable to me as opposed to we just mentioning in the\n> > docs that \"if invalidation_reason is rows_removed or\n> > wal_level_insufficient it's the reason for conflict with recovery\".\n> >\n> Fair point. I think we can go either way. Bertrand, Nathan, and\n> others, do you have an opinion on this matter?\n\nWhile we wait to hear from others on this, I'm attaching the v9 patch\nset implementing the above idea (check 0001 patch). Please have a\nlook. I'll come back to the other review comments soon.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 14 Mar 2024 19:57:46 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 14, 2024 at 7:58 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Mar 14, 2024 at 12:24 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Mar 13, 2024 at 9:24 PM Bharath Rupireddy\n> > >\n> > > Yes, there will be some sort of duplicity if we emit conflict_reason\n> > > as a text field. However, I still think the better way is to turn\n> > > conflict_reason text to conflict boolean and set it to true only on\n> > > rows_removed and wal_level_insufficient invalidations. When conflict\n> > > boolean is true, one (including all the tests that we've added\n> > > recently) can look for invalidation_reason text field for the reason.\n> > > This sounds reasonable to me as opposed to we just mentioning in the\n> > > docs that \"if invalidation_reason is rows_removed or\n> > > wal_level_insufficient it's the reason for conflict with recovery\".\n\n+1 on maintaining both conflicting and invalidation_reason\n\n> > Fair point. I think we can go either way. Bertrand, Nathan, and\n> > others, do you have an opinion on this matter?\n>\n> While we wait to hear from others on this, I'm attaching the v9 patch\n> set implementing the above idea (check 0001 patch). Please have a\n> look. I'll come back to the other review comments soon.\n\nThanks for the patch. 
JFYI, patch09 does not apply to HEAD, some\nrecent commit caused the conflict.\n\nSome trivial comments on patch001 (yet to review other patches)\n\n1)\ninfo.c:\n\n- \"%s as caught_up, conflict_reason IS NOT NULL as invalid \"\n+ \"%s as caught_up, invalidation_reason IS NOT NULL as invalid \"\n\nCan we revert back to 'conflicting as invalid' since it is a query for\nlogical slots only.\n\n2)\n040_standby_failover_slots_sync.pl:\n\n- q{SELECT conflict_reason IS NULL AND synced AND NOT temporary FROM\npg_replication_slots WHERE slot_name = 'lsub1_slot';}\n+ q{SELECT invalidation_reason IS NULL AND synced AND NOT temporary\nFROM pg_replication_slots WHERE slot_name = 'lsub1_slot';}\n\nHere too, can we have 'NOT conflicting' instead of '\ninvalidation_reason IS NULL' as it is a logical slot test.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 15 Mar 2024 10:14:49 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 13, 2024 at 9:38 AM Amit Kapila <[email protected]> wrote:\n>\n> BTW, is XID the based parameter 'max_slot_xid_age' not have similarity\n> with 'max_slot_wal_keep_size'? I think it will impact the rows we\n> removed based on xid horizons. Don't we need to consider it while\n> vacuum computing the xid horizons in ComputeXidHorizons() similar to\n> what we do for WAL w.r.t 'max_slot_wal_keep_size'?\n\nI'm having a hard time understanding why we'd need something up there\nin ComputeXidHorizons(). Can you elaborate it a bit please?\n\nWhat's proposed with max_slot_xid_age is that during checkpoint we\nlook at slot's xmin and catalog_xmin, and the current system txn id.\nThen, if the XID age of (xmin, catalog_xmin) and current_xid crosses\nmax_slot_xid_age, we invalidate the slot. Let me illustrate how all\nthis works:\n\n1. Setup a primary and standby with hot_standby_feedback set to on on\nstandby. For instance, check my scripts at [1].\n\n2. Stop the standby to make the slot inactive on the primary. Check\nthe slot is holding xmin of 738.\n./pg_ctl -D sbdata -l logfilesbdata stop\n\npostgres=# SELECT * FROM pg_replication_slots;\n-[ RECORD 1 ]-------+-------------\nslot_name | sb_repl_slot\nplugin |\nslot_type | physical\ndatoid |\ndatabase |\ntemporary | f\nactive | f\nactive_pid |\nxmin | 738\ncatalog_xmin |\nrestart_lsn | 0/3000000\nconfirmed_flush_lsn |\nwal_status | reserved\nsafe_wal_size |\ntwo_phase | f\nconflict_reason |\nfailover | f\nsynced | f\n\n3. Start consuming the XIDs on the primary with the following script\nfor instance\n./psql -d postgres -p 5432\nDROP TABLE tab_int;\nCREATE TABLE tab_int (a int);\n\ndo $$\nbegin\n for i in 1..268435 loop\n -- use an exception block so that each iteration eats an XID\n begin\n insert into tab_int values (i);\n exception\n when division_by_zero then null;\n end;\n end loop;\nend$$;\n\n4. Make some dead rows in the table.\nupdate tab_int set a = a+1;\ndelete from tab_int where a%4=0;\n\npostgres=# SELECT n_dead_tup, n_tup_ins, n_tup_upd, n_tup_del FROM\npg_stat_user_tables WHERE relname = 'tab_int';\n-[ RECORD 1 ]------\nn_dead_tup | 335544\nn_tup_ins | 268435\nn_tup_upd | 268435\nn_tup_del | 67109\n\n5. Try vacuuming to delete the dead rows, observe 'tuples: 0 removed,\n536870 remain, 335544 are dead but not yet removable'. 
The dead rows\ncan't be removed because the inactive slot is holding an xmin, see\n'removable cutoff: 738, which was 268441 XIDs old when operation\nended'.\n\npostgres=# vacuum verbose tab_int;\nINFO: vacuuming \"postgres.public.tab_int\"\nINFO: finished vacuuming \"postgres.public.tab_int\": index scans: 0\npages: 0 removed, 2376 remain, 2376 scanned (100.00% of total)\ntuples: 0 removed, 536870 remain, 335544 are dead but not yet removable\nremovable cutoff: 738, which was 268441 XIDs old when operation ended\nfrozen: 0 pages from table (0.00% of total) had 0 tuples frozen\nindex scan not needed: 0 pages from table (0.00% of total) had 0 dead\nitem identifiers removed\navg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\nbuffer usage: 4759 hits, 0 misses, 0 dirtied\nWAL usage: 0 records, 0 full page images, 0 bytes\nsystem usage: CPU: user: 0.07 s, system: 0.00 s, elapsed: 0.07 s\nVACUUM\n\n6. Now, repeat the above steps but with setting max_slot_xid_age =\n200000 on the primary.\n\n7. Do a checkpoint to invalidate the slot.\npostgres=# checkpoint;\nCHECKPOINT\npostgres=# SELECT * FROM pg_replication_slots;\n-[ RECORD 1 ]-------+-------------\nslot_name | sb_repl_slot\nplugin |\nslot_type | physical\ndatoid |\ndatabase |\ntemporary | f\nactive | f\nactive_pid |\nxmin | 738\ncatalog_xmin |\nrestart_lsn | 0/3000000\nconfirmed_flush_lsn |\nwal_status | lost\nsafe_wal_size |\ntwo_phase | f\nconflicting |\nfailover | f\nsynced | f\ninvalidation_reason | xid_aged\n\n8. And, then vacuum the table, observe 'tuples: 335544 removed, 201326\nremain, 0 are dead but not yet removable'.\n\npostgres=# vacuum verbose tab_int;\nINFO: vacuuming \"postgres.public.tab_int\"\nINFO: finished vacuuming \"postgres.public.tab_int\": index scans: 0\npages: 0 removed, 2376 remain, 2376 scanned (100.00% of total)\ntuples: 335544 removed, 201326 remain, 0 are dead but not yet removable\nremovable cutoff: 269179, which was 0 XIDs old when operation ended\nnew relfrozenxid: 269179, which is 268441 XIDs ahead of previous value\nfrozen: 1189 pages from table (50.04% of total) had 201326 tuples frozen\nindex scan not needed: 0 pages from table (0.00% of total) had 0 dead\nitem identifiers removed\navg read rate: 0.000 MB/s, avg write rate: 193.100 MB/s\nbuffer usage: 4760 hits, 0 misses, 2381 dirtied\nWAL usage: 5942 records, 2378 full page images, 8343275 bytes\nsystem usage: CPU: user: 0.09 s, system: 0.00 s, elapsed: 0.09 s\nVACUUM\n\n[1]\ncd /home/ubuntu/postgres/pg17/bin\n./pg_ctl -D db17 -l logfile17 stop\nrm -rf db17 logfile17\nrm -rf /home/ubuntu/postgres/pg17/bin/archived_wal\nmkdir /home/ubuntu/postgres/pg17/bin/archived_wal\n\n./initdb -D db17\necho \"archive_mode = on\narchive_command='cp %p\n/home/ubuntu/postgres/pg17/bin/archived_wal/%f'\" | tee -a\ndb17/postgresql.conf\n\n./pg_ctl -D db17 -l logfile17 start\n./psql -d postgres -p 5432 -c \"SELECT\npg_create_physical_replication_slot('sb_repl_slot', true, false);\"\n\nrm -rf sbdata logfilesbdata\n./pg_basebackup -D sbdata\necho \"port=5433\nprimary_conninfo='host=localhost port=5432 dbname=postgres user=ubuntu'\nprimary_slot_name='sb_repl_slot'\nrestore_command='cp /home/ubuntu/postgres/pg17/bin/archived_wal/%f %p'\nhot_standby_feedback = on\" | tee -a sbdata/postgresql.conf\n\ntouch sbdata/standby.signal\n\n./pg_ctl -D sbdata -l logfilesbdata start\n./psql -d postgres -p 5433 -c \"SELECT pg_is_in_recovery();\"\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", 
"msg_date": "Fri, 15 Mar 2024 10:44:55 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 14, 2024 at 7:58 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> While we wait to hear from others on this, I'm attaching the v9 patch\n> set implementing the above idea (check 0001 patch). Please have a\n> look. I'll come back to the other review comments soon.\n>\n\npatch002:\n\n1)\nI would like to understand the purpose of 'inactive_count'? Is it only\nfor users for monitoring purposes? We are not using it anywhere\ninternally.\n\nI shutdown the instance 5 times and found that 'inactive_count' became\n5 for all the slots created on that instance. Is this intentional? I\nmean we can not really use them if the instance is down. I felt it\nshould increment the inactive_count only if during the span of\ninstance, they were actually inactive i.e. no streaming or replication\nhappening through them.\n\n\n2)\nslot.c:\n+ case RS_INVAL_XID_AGE:\n+ {\n+ if (TransactionIdIsNormal(s->data.xmin))\n+ {\n+ ..........\n+ }\n+ if (TransactionIdIsNormal(s->data.catalog_xmin))\n+ {\n+ ..........\n+ }\n+ }\n\nCan we optimize this code? It has duplicate code for processing\ns->data.catalog_xmin and s->data.xmin. Can we create a sub-function\nfor this purpose and call it twice here?\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 15 Mar 2024 12:49:07 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 15, 2024 at 10:15 AM shveta malik <[email protected]> wrote:\n>\n> > > > wal_level_insufficient it's the reason for conflict with recovery\".\n>\n> +1 on maintaining both conflicting and invalidation_reason\n\nThanks.\n\n> Thanks for the patch. JFYI, patch09 does not apply to HEAD, some\n> recent commit caused the conflict.\n\nYep, the conflict is in src/test/recovery/meson.build and is because\nof e6927270cd18d535b77cbe79c55c6584351524be.\n\n> Some trivial comments on patch001 (yet to review other patches)\n\nThanks for looking into this.\n\n> 1)\n> info.c:\n>\n> - \"%s as caught_up, conflict_reason IS NOT NULL as invalid \"\n> + \"%s as caught_up, invalidation_reason IS NOT NULL as invalid \"\n>\n> Can we revert back to 'conflicting as invalid' since it is a query for\n> logical slots only.\n\nI guess, no. There the intention is to check for invalid logical slots\nnot just for the conflicting ones. The logical slots can get\ninvalidated due to other reasons as well.\n\n> 2)\n> 040_standby_failover_slots_sync.pl:\n>\n> - q{SELECT conflict_reason IS NULL AND synced AND NOT temporary FROM\n> pg_replication_slots WHERE slot_name = 'lsub1_slot';}\n> + q{SELECT invalidation_reason IS NULL AND synced AND NOT temporary\n> FROM pg_replication_slots WHERE slot_name = 'lsub1_slot';}\n>\n> Here too, can we have 'NOT conflicting' instead of '\n> invalidation_reason IS NULL' as it is a logical slot test.\n\nI guess no. 
The tests are ensuring the slot on the standby isn't invalidated.\n\nIn general, one needs to use the 'conflicting' column from\npg_replication_slots when the intention is to look for reasons for\nconflicts, otherwise use the 'invalidation_reason' column for\ninvalidations.\n\nPlease see the attached v10 patch set after resolving the merge\nconflict and fixing an indentation warning in the TAP test file.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 15 Mar 2024 17:35:27 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 14, 2024 at 12:24:00PM +0530, Amit Kapila wrote:\n> On Wed, Mar 13, 2024 at 9:24 PM Bharath Rupireddy\n> <[email protected]> wrote:\n>> On Wed, Mar 13, 2024 at 9:21 AM Amit Kapila <[email protected]> wrote:\n>> > > So, how about we turn conflict_reason to only report the reasons that\n>> > > actually cause conflict with recovery for logical slots, something\n>> > > like below, and then have invalidation_cause as a generic column for\n>> > > all sorts of invalidation reasons for both logical and physical slots?\n>> >\n>> > If our above understanding is correct then coflict_reason will be a\n>> > subset of invalidation_reason. If so, whatever way we arrange this\n>> > information, there will be some sort of duplicity unless we just have\n>> > one column 'invalidation_reason' and update the docs to interpret it\n>> > correctly for conflicts.\n>>\n>> Yes, there will be some sort of duplicity if we emit conflict_reason\n>> as a text field. However, I still think the better way is to turn\n>> conflict_reason text to conflict boolean and set it to true only on\n>> rows_removed and wal_level_insufficient invalidations. When conflict\n>> boolean is true, one (including all the tests that we've added\n>> recently) can look for invalidation_reason text field for the reason.\n>> This sounds reasonable to me as opposed to we just mentioning in the\n>> docs that \"if invalidation_reason is rows_removed or\n>> wal_level_insufficient it's the reason for conflict with recovery\".\n> \n> Fair point. I think we can go either way. Bertrand, Nathan, and\n> others, do you have an opinion on this matter?\n\nWFM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 15 Mar 2024 09:28:31 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Thu, Mar 14, 2024 at 12:24:00PM +0530, Amit Kapila wrote:\n> On Wed, Mar 13, 2024 at 9:24 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Wed, Mar 13, 2024 at 9:21 AM Amit Kapila <[email protected]> wrote:\n> > >\n> > > > So, how about we turn conflict_reason to only report the reasons that\n> > > > actually cause conflict with recovery for logical slots, something\n> > > > like below, and then have invalidation_cause as a generic column for\n> > > > all sorts of invalidation reasons for both logical and physical slots?\n> > >\n> > > If our above understanding is correct then coflict_reason will be a\n> > > subset of invalidation_reason. 
If so, whatever way we arrange this\n> > > information, there will be some sort of duplicity unless we just have\n> > > one column 'invalidation_reason' and update the docs to interpret it\n> > > correctly for conflicts.\n> >\n> > Yes, there will be some sort of duplicity if we emit conflict_reason\n> > as a text field. However, I still think the better way is to turn\n> > conflict_reason text to conflict boolean and set it to true only on\n> > rows_removed and wal_level_insufficient invalidations. When conflict\n> > boolean is true, one (including all the tests that we've added\n> > recently) can look for invalidation_reason text field for the reason.\n> > This sounds reasonable to me as opposed to we just mentioning in the\n> > docs that \"if invalidation_reason is rows_removed or\n> > wal_level_insufficient it's the reason for conflict with recovery\".\n> >\n> \n> Fair point. I think we can go either way. Bertrand, Nathan, and\n> others, do you have an opinion on this matter?\n\nSounds like a good approach to me and one will be able to quickly identify\nif a conflict occured.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 15 Mar 2024 16:45:18 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 15, 2024 at 12:49 PM shveta malik <[email protected]> wrote:\n>\n> patch002:\n>\n> 1)\n> I would like to understand the purpose of 'inactive_count'? Is it only\n> for users for monitoring purposes? We are not using it anywhere\n> internally.\n\ninactive_count metric helps detect unstable replication slots\nconnections that have a lot of disconnections. It's not used for the\ninactive_timeout based slot invalidation mechanism.\n\n> I shutdown the instance 5 times and found that 'inactive_count' became\n> 5 for all the slots created on that instance. Is this intentional?\n\nYes, it's incremented on shutdown (and for that matter upon every slot\nrelease) for all the slots that are tied to walsenders.\n\n> I mean we can not really use them if the instance is down. I felt it\n> should increment the inactive_count only if during the span of\n> instance, they were actually inactive i.e. no streaming or replication\n> happening through them.\n\ninactive_count is persisted to disk- upon clean shutdown, so, once the\nslots become active again, one gets to see the metric and deduce some\ninfo on disconnections.\n\nHaving said that, I'm okay to hear from others on the inactive_count\nmetric being added.\n\n> 2)\n> slot.c:\n> + case RS_INVAL_XID_AGE:\n>\n> Can we optimize this code? It has duplicate code for processing\n> s->data.catalog_xmin and s->data.xmin. Can we create a sub-function\n> for this purpose and call it twice here?\n\nGood idea. Done that way.\n\n> 2)\n> The msg for patch 3 says:\n> --------------\n> a) when replication slots is lying inactive for a day or so using\n> last_inactive_at metric,\n> b) when a replication slot is becoming inactive too frequently using\n> last_inactive_at metric.\n> --------------\n> I think in b, you want to refer to inactive_count instead of last_inactive_at?\n\nRight. Changed.\n\n> 3)\n> I do not see invalidation_reason updated for 2 new reasons in system-views.sgml\n\nNice catch. 
Added them now.\n\nI've also responded to Bertrand's comments here.\n\nOn Wed, Mar 6, 2024 at 3:56 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> A few comments:\n>\n> 1 ===\n>\n> + The reason for the slot's invalidation. <literal>NULL</literal> if the\n> + slot is currently actively being used.\n>\n> s/currently actively being used/not invalidated/ ? (I mean it could be valid\n> and not being used).\n\nChanged.\n\n> 3 ===\n>\n> res = executeQueryOrDie(conn, \"SELECT slot_name, plugin, two_phase, failover, \"\n> - \"%s as caught_up, conflict_reason IS NOT NULL as invalid \"\n> + \"%s as caught_up, invalidation_reason IS NOT NULL as invalid \"\n> \"FROM pg_catalog.pg_replication_slots \"\n> - \"(CASE WHEN conflict_reason IS NOT NULL THEN FALSE \"\n> + \"(CASE WHEN invalidation_reason IS NOT NULL THEN FALSE \"\n>\n> Yeah that's fine because there is logical slot filtering here.\n\nRight. And, we really are looking for invalid slots there, so use of\ninvalidation_reason is much more correct than conflicting.\n\n> 4 ===\n>\n> -GetSlotInvalidationCause(const char *conflict_reason)\n> +GetSlotInvalidationCause(const char *invalidation_reason)\n>\n> Should we change the comment \"Maps a conflict reason\" above this function?\n\nChanged.\n\n> 5 ===\n>\n> -# Check conflict_reason is NULL for physical slot\n> +# Check invalidation_reason is NULL for physical slot\n> $res = $node_primary->safe_psql(\n> 'postgres', qq[\n> - SELECT conflict_reason is null FROM pg_replication_slots where slot_name = '$primary_slotname';]\n> + SELECT invalidation_reason is null FROM pg_replication_slots where slot_name = '$primary_slotname';]\n> );\n>\n>\n> I don't think this test is needed anymore: it does not make that much sense since\n> it's done after the primary database initialization and startup.\n\nIt is now turned into a test verifying 'conflicting boolean' is null\nfor the physical slot. Isn't that okay?\n\n> 6 ===\n>\n> 'Logical slots are reported as non conflicting');\n>\n> What about?\n>\n> \"\n> # Verify slots are reported as valid in pg_replication_slots\n> 'Logical slots are reported as valid');\n> \"\n\nChanged.\n\nPlease see the attached v11 patch set with all the above review\ncomments addressed.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 16 Mar 2024 09:29:01 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 15, 2024 at 10:45 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Mar 13, 2024 at 9:38 AM Amit Kapila <[email protected]> wrote:\n> >\n> > BTW, is XID the based parameter 'max_slot_xid_age' not have similarity\n> > with 'max_slot_wal_keep_size'? I think it will impact the rows we\n> > removed based on xid horizons. Don't we need to consider it while\n> > vacuum computing the xid horizons in ComputeXidHorizons() similar to\n> > what we do for WAL w.r.t 'max_slot_wal_keep_size'?\n>\n> I'm having a hard time understanding why we'd need something up there\n> in ComputeXidHorizons(). 
Can you elaborate it a bit please?\n>\n> What's proposed with max_slot_xid_age is that during checkpoint we\n> look at slot's xmin and catalog_xmin, and the current system txn id.\n> Then, if the XID age of (xmin, catalog_xmin) and current_xid crosses\n> max_slot_xid_age, we invalidate the slot.\n>\n\nI can see that in your patch (in function\nInvalidatePossiblyObsoleteSlot()). As per my understanding, we need\nsomething similar for slot xids in ComputeXidHorizons() as we are\ndoing WAL in KeepLogSeg(). In KeepLogSeg(), we compute the minimum LSN\nlocation required by slots and then adjust it for\n'max_slot_wal_keep_size'. On similar lines, currently in\nComputeXidHorizons(), we compute the minimum xid required by slots\n(procArray->replication_slot_xmin and\nprocArray->replication_slot_catalog_xmin) but then don't adjust it for\n'max_slot_xid_age'. I could be missing something in this but it is\nbetter to keep discussing this and try to move with another parameter\n'inactive_replication_slot_timeout' which according to me can be kept\nat slot level instead of a GUC but OTOH we need to see the arguments\non both side and then decide which makes more sense.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 16 Mar 2024 15:54:46 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sat, Mar 16, 2024 at 3:55 PM Amit Kapila <[email protected]> wrote:\n>\n> procArray->replication_slot_catalog_xmin) but then don't adjust it for\n> 'max_slot_xid_age'. I could be missing something in this but it is\n> better to keep discussing this and try to move with another parameter\n> 'inactive_replication_slot_timeout' which according to me can be kept\n> at slot level instead of a GUC but OTOH we need to see the arguments\n> on both side and then decide which makes more sense.\n\nHm. Are you suggesting inactive_timeout to be a slot level parameter\nsimilar to 'failover' property added recently by\nc393308b69d229b664391ac583b9e07418d411b6 and\n73292404370c9900a96e2bebdc7144f7010339cf? With this approach, one can\nset inactive_timeout while creating the slot either via\npg_create_physical_replication_slot() or\npg_create_logical_replication_slot() or CREATE_REPLICATION_SLOT or\nALTER_REPLICATION_SLOT command, and postgres tracks the\nlast_inactive_at for every slot based on which the slot gets\ninvalidated. If this understanding is right, I can go ahead and work\ntowards it.\n\nAlternatively, we can go the route of making GUC a list of key-value\npairs of {slot_name, inactive_timeout}, but this kind of GUC for\nsetting slot level parameters is going to be the first of its kind, so\nI'd prefer the above approach.\n\nThoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 17 Mar 2024 14:03:10 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sun, Mar 17, 2024 at 2:03 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Sat, Mar 16, 2024 at 3:55 PM Amit Kapila <[email protected]> wrote:\n> >\n> > procArray->replication_slot_catalog_xmin) but then don't adjust it for\n> > 'max_slot_xid_age'. 
I could be missing something in this but it is\n> > better to keep discussing this and try to move with another parameter\n> > 'inactive_replication_slot_timeout' which according to me can be kept\n> > at slot level instead of a GUC but OTOH we need to see the arguments\n> > on both side and then decide which makes more sense.\n>\n> Hm. Are you suggesting inactive_timeout to be a slot level parameter\n> similar to 'failover' property added recently by\n> c393308b69d229b664391ac583b9e07418d411b6 and\n> 73292404370c9900a96e2bebdc7144f7010339cf? With this approach, one can\n> set inactive_timeout while creating the slot either via\n> pg_create_physical_replication_slot() or\n> pg_create_logical_replication_slot() or CREATE_REPLICATION_SLOT or\n> ALTER_REPLICATION_SLOT command, and postgres tracks the\n> last_inactive_at for every slot based on which the slot gets\n> invalidated. If this understanding is right, I can go ahead and work\n> towards it.\n>\n\nYeah, I have something like that in mind. You can prepare the patch\nbut it would be good if others involved in this thread can also share\ntheir opinion.\n\n> Alternatively, we can go the route of making GUC a list of key-value\n> pairs of {slot_name, inactive_timeout}, but this kind of GUC for\n> setting slot level parameters is going to be the first of its kind, so\n> I'd prefer the above approach.\n>\n\nI would prefer a slot-level parameter in this case rather than a GUC.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 18 Mar 2024 08:50:56 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sat, Mar 16, 2024 at 3:55 PM Amit Kapila <[email protected]> wrote:\n>\n> > What's proposed with max_slot_xid_age is that during checkpoint we\n> > look at slot's xmin and catalog_xmin, and the current system txn id.\n> > Then, if the XID age of (xmin, catalog_xmin) and current_xid crosses\n> > max_slot_xid_age, we invalidate the slot.\n> >\n>\n> I can see that in your patch (in function\n> InvalidatePossiblyObsoleteSlot()). As per my understanding, we need\n> something similar for slot xids in ComputeXidHorizons() as we are\n> doing WAL in KeepLogSeg(). In KeepLogSeg(), we compute the minimum LSN\n> location required by slots and then adjust it for\n> 'max_slot_wal_keep_size'. On similar lines, currently in\n> ComputeXidHorizons(), we compute the minimum xid required by slots\n> (procArray->replication_slot_xmin and\n> procArray->replication_slot_catalog_xmin) but then don't adjust it for\n> 'max_slot_xid_age'. I could be missing something in this but it is\n> better to keep discussing this\n\nAfter invalidating slots because of max_slot_xid_age, the\nprocArray->replication_slot_xmin and\nprocArray->replication_slot_catalog_xmin are recomputed immediately in\nInvalidateObsoleteReplicationSlots->ReplicationSlotsComputeRequiredXmin->ProcArraySetReplicationSlotXmin.\nAnd, later the XID horizons in ComputeXidHorizons are computed before\nthe vacuum on each table via GetOldestNonRemovableTransactionId.\nAren't these enough? 
Do you want the XID horizons recomputed\nimmediately, something like the below?\n\n/* Invalidate replication slots based on xmin or catalog_xmin age */\nif (max_slot_xid_age > 0)\n{\n if (InvalidateObsoleteReplicationSlots(RS_INVAL_XID_AGE,\n 0, InvalidOid,\n InvalidTransactionId))\n {\n ComputeXidHorizonsResult horizons;\n\n /*\n * Some slots have been invalidated; update the XID horizons\n * as a side-effect.\n */\n ComputeXidHorizons(&horizons);\n }\n}\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 18 Mar 2024 09:58:42 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 18, 2024 at 9:58 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Sat, Mar 16, 2024 at 3:55 PM Amit Kapila <[email protected]> wrote:\n> >\n> > > What's proposed with max_slot_xid_age is that during checkpoint we\n> > > look at slot's xmin and catalog_xmin, and the current system txn id.\n> > > Then, if the XID age of (xmin, catalog_xmin) and current_xid crosses\n> > > max_slot_xid_age, we invalidate the slot.\n> > >\n> >\n> > I can see that in your patch (in function\n> > InvalidatePossiblyObsoleteSlot()). As per my understanding, we need\n> > something similar for slot xids in ComputeXidHorizons() as we are\n> > doing WAL in KeepLogSeg(). In KeepLogSeg(), we compute the minimum LSN\n> > location required by slots and then adjust it for\n> > 'max_slot_wal_keep_size'. On similar lines, currently in\n> > ComputeXidHorizons(), we compute the minimum xid required by slots\n> > (procArray->replication_slot_xmin and\n> > procArray->replication_slot_catalog_xmin) but then don't adjust it for\n> > 'max_slot_xid_age'. 
I could be missing something in this but it is\n> > better to keep discussing this\n>\n> After invalidating slots because of max_slot_xid_age, the\n> procArray->replication_slot_xmin and\n> procArray->replication_slot_catalog_xmin are recomputed immediately in\n> InvalidateObsoleteReplicationSlots->ReplicationSlotsComputeRequiredXmin->ProcArraySetReplicationSlotXmin.\n> And, later the XID horizons in ComputeXidHorizons are computed before\n> the vacuum on each table via GetOldestNonRemovableTransactionId.\n> Aren't these enough?\n>\n\nIIUC, this will be delayed by one cycle in the vacuum rather than\ndoing it when the slot's xmin age is crossed and it can be\ninvalidated.\n\n Do you want the XID horizons recomputed\n> immediately, something like the below?\n>\n\nI haven't thought of the exact logic but we can try to mimic the\nhandling similar to WAL.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 18 Mar 2024 10:03:53 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Sat, Mar 16, 2024 at 09:29:01AM +0530, Bharath Rupireddy wrote:\n> I've also responded to Bertrand's comments here.\n\nThanks!\n\n> \n> On Wed, Mar 6, 2024 at 3:56 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > 5 ===\n> >\n> > -# Check conflict_reason is NULL for physical slot\n> > +# Check invalidation_reason is NULL for physical slot\n> > $res = $node_primary->safe_psql(\n> > 'postgres', qq[\n> > - SELECT conflict_reason is null FROM pg_replication_slots where slot_name = '$primary_slotname';]\n> > + SELECT invalidation_reason is null FROM pg_replication_slots where slot_name = '$primary_slotname';]\n> > );\n> >\n> >\n> > I don't think this test is needed anymore: it does not make that much sense since\n> > it's done after the primary database initialization and startup.\n> \n> It is now turned into a test verifying 'conflicting boolean' is null\n> for the physical slot. Isn't that okay?\n\nYeah makes more sense now, thanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 18 Mar 2024 09:18:37 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Mon, Mar 18, 2024 at 08:50:56AM +0530, Amit Kapila wrote:\n> On Sun, Mar 17, 2024 at 2:03 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Sat, Mar 16, 2024 at 3:55 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > procArray->replication_slot_catalog_xmin) but then don't adjust it for\n> > > 'max_slot_xid_age'. I could be missing something in this but it is\n> > > better to keep discussing this and try to move with another parameter\n> > > 'inactive_replication_slot_timeout' which according to me can be kept\n> > > at slot level instead of a GUC but OTOH we need to see the arguments\n> > > on both side and then decide which makes more sense.\n> >\n> > Hm. Are you suggesting inactive_timeout to be a slot level parameter\n> > similar to 'failover' property added recently by\n> > c393308b69d229b664391ac583b9e07418d411b6 and\n> > 73292404370c9900a96e2bebdc7144f7010339cf? 
With this approach, one can\n> > set inactive_timeout while creating the slot either via\n> > pg_create_physical_replication_slot() or\n> > pg_create_logical_replication_slot() or CREATE_REPLICATION_SLOT or\n> > ALTER_REPLICATION_SLOT command, and postgres tracks the\n> > last_inactive_at for every slot based on which the slot gets\n> > invalidated. If this understanding is right, I can go ahead and work\n> > towards it.\n> >\n> \n> Yeah, I have something like that in mind. You can prepare the patch\n> but it would be good if others involved in this thread can also share\n> their opinion.\n\nI think it makes sense to put the inactive_timeout granularity at the slot\nlevel (as the activity could vary a lot say between one slot linked to a \nsubcription and one linked to some plugins). As far max_slot_xid_age I've the\nfeeling that a new GUC is good enough.\n\n> > Alternatively, we can go the route of making GUC a list of key-value\n> > pairs of {slot_name, inactive_timeout}, but this kind of GUC for\n> > setting slot level parameters is going to be the first of its kind, so\n> > I'd prefer the above approach.\n> >\n> \n> I would prefer a slot-level parameter in this case rather than a GUC.\n\nYeah, same here.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 18 Mar 2024 09:32:40 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Sat, Mar 16, 2024 at 09:29:01AM +0530, Bharath Rupireddy wrote:\n> Please see the attached v11 patch set with all the above review\n> comments addressed.\n\nThanks!\n\nLooking at 0001:\n\n1 ===\n\n+ True if this logical slot conflicted with recovery (and so is now\n+ invalidated). When this column is true, check\n\nWorth to add back the physical slot mention \"Always NULL for physical slots.\"?\n\n2 ===\n\n@@ -1023,9 +1023,10 @@ CREATE VIEW pg_replication_slots AS\n L.wal_status,\n L.safe_wal_size,\n L.two_phase,\n- L.conflict_reason,\n+ L.conflicting,\n L.failover,\n- L.synced\n+ L.synced,\n+ L.invalidation_reason\n\nWhat about making invalidation_reason close to conflict_reason?\n\n3 ===\n\n- * Maps a conflict reason for a replication slot to\n+ * Maps a invalidation reason for a replication slot to\n\ns/a invalidation/an invalidation/?\n\n4 ===\n\nWhile at it, shouldn't we also rename \"conflict\" to say \"invalidation_cause\" in\nInvalidatePossiblyObsoleteSlot()?\n\n5 ===\n\n+ * rows_removed and wal_level_insufficient are only two reasons\n\ns/are only two/are the only two/?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 18 Mar 2024 10:12:15 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Thu, Mar 14, 2024 at 12:27:26PM +0530, Amit Kapila wrote:\n> On Wed, Mar 13, 2024 at 10:16 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Wed, Mar 13, 2024 at 11:13 AM shveta malik <[email protected]> wrote:\n> > >\n> > > > Thanks. v8-0001 is how it looks. Please see the v8 patch set with this change.\n> > >\n> > > JFYI, the patch does not apply to the head. 
There is a conflict in\n> > > multiple files.\n> >\n> > Thanks for looking into this. I noticed that the v8 patches needed\n> > rebase. Before I go do anything with the patches, I'm trying to gain\n> > consensus on the design. Following is the summary of design choices\n> > we've discussed so far:\n> > 1) conflict_reason vs invalidation_reason.\n> > 2) When to compute the XID age?\n> >\n> \n> I feel we should focus on two things (a) one is to introduce a new\n> column invalidation_reason, and (b) let's try to first complete\n> invalidation due to timeout. We can look into XID stuff if time\n> permits, remember, we don't have ample time left.\n\nAgree. While it makes sense to invalidate slots for wal removal in\nCreateCheckPoint() (because this is the place where wal is removed), I 'm not\nsure this is the right place for the 2 new cases.\n\nLet's focus on the timeout one as proposed above (as probably the simplest one):\nas this one is purely related to time and activity what about to invalidate them\nwhen?:\n\n- their usage resume\n- in pg_get_replication_slots()\n\nThe idea is to invalidate the slot when one resumes activity on it or wants to\nget information about it (and among other things wants to know if the slot is\nvalid or not).\n\nThoughts?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 18 Mar 2024 14:49:51 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 18, 2024 at 8:19 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Thu, Mar 14, 2024 at 12:27:26PM +0530, Amit Kapila wrote:\n> > On Wed, Mar 13, 2024 at 10:16 PM Bharath Rupireddy\n> > <[email protected]> wrote:\n> > >\n> > > On Wed, Mar 13, 2024 at 11:13 AM shveta malik <[email protected]> wrote:\n> > > >\n> > > > > Thanks. v8-0001 is how it looks. Please see the v8 patch set with this change.\n> > > >\n> > > > JFYI, the patch does not apply to the head. There is a conflict in\n> > > > multiple files.\n> > >\n> > > Thanks for looking into this. I noticed that the v8 patches needed\n> > > rebase. Before I go do anything with the patches, I'm trying to gain\n> > > consensus on the design. Following is the summary of design choices\n> > > we've discussed so far:\n> > > 1) conflict_reason vs invalidation_reason.\n> > > 2) When to compute the XID age?\n> > >\n> >\n> > I feel we should focus on two things (a) one is to introduce a new\n> > column invalidation_reason, and (b) let's try to first complete\n> > invalidation due to timeout. We can look into XID stuff if time\n> > permits, remember, we don't have ample time left.\n>\n> Agree. 
While it makes sense to invalidate slots for wal removal in\n> CreateCheckPoint() (because this is the place where wal is removed), I 'm not\n> sure this is the right place for the 2 new cases.\n>\n> Let's focus on the timeout one as proposed above (as probably the simplest one):\n> as this one is purely related to time and activity what about to invalidate them\n> when?:\n>\n> - their usage resume\n> - in pg_get_replication_slots()\n>\n> The idea is to invalidate the slot when one resumes activity on it or wants to\n> get information about it (and among other things wants to know if the slot is\n> valid or not).\n>\n\nTrying to invalidate at those two places makes sense to me but we\nstill need to cover the cases where it takes very long to resume the\nslot activity and the dangling slot cases where the activity is never\nresumed. How about apart from the above two places, trying to\ninvalidate in CheckPointReplicationSlots() where we are traversing all\nthe slots? This could prevent invalid slots from being marked as\ndirty.\n\nBTW, how will the user use 'inactive_count' to know whether a\nreplication slot is becoming inactive too frequently? The patch just\nkeeps incrementing this counter, one will never know in the last 'n'\nminutes, how many times the slot became inactive unless there is some\nmonitoring tool that keeps capturing this counter from time to time\nand calculates the frequency in some way. Even, if this is useful, it\nis not clear to me whether we need to store 'inactive_count' in the\nslot's persistent data. I understand it could be a metric required by\nthe user but wouldn't it be better to track this via\npg_stat_replication_slots such that we don't need to store this in\nslot's persist data? If this understanding is correct, I would say\nlet's remove 'inactive_count' as well from the main patch and discuss\nit separately.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 19 Mar 2024 10:56:25 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 19, 2024 at 10:56:25AM +0530, Amit Kapila wrote:\n> On Mon, Mar 18, 2024 at 8:19 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> > Agree. 
While it makes sense to invalidate slots for wal removal in\n> > CreateCheckPoint() (because this is the place where wal is removed), I 'm not\n> > sure this is the right place for the 2 new cases.\n> >\n> > Let's focus on the timeout one as proposed above (as probably the simplest one):\n> > as this one is purely related to time and activity what about to invalidate them\n> > when?:\n> >\n> > - their usage resume\n> > - in pg_get_replication_slots()\n> >\n> > The idea is to invalidate the slot when one resumes activity on it or wants to\n> > get information about it (and among other things wants to know if the slot is\n> > valid or not).\n> >\n> \n> Trying to invalidate at those two places makes sense to me but we\n> still need to cover the cases where it takes very long to resume the\n> slot activity and the dangling slot cases where the activity is never\n> resumed.\n\nI understand it's better to have the slot reflecting its real status internally\nbut it is a real issue if that's not the case until the activity on it is resumed?\n(just asking, not saying we should not)\n\n> How about apart from the above two places, trying to\n> invalidate in CheckPointReplicationSlots() where we are traversing all\n> the slots?\n\nI think that's a good place but there is still a window of time (that could also\nbe \"large\" depending of the activity and the checkpoint frequency) during which\nthe slot is not known as invalid internally. But yeah, at leat we know that we'll\nmark it as invalid at some point...\n\nBTW:\n\n if (am_walsender)\n {\n+ if (slot->data.persistency == RS_PERSISTENT)\n+ {\n+ SpinLockAcquire(&slot->mutex);\n+ slot->data.last_inactive_at = GetCurrentTimestamp();\n+ slot->data.inactive_count++;\n+ SpinLockRelease(&slot->mutex);\n\nI'm also feeling the same concern as Shveta mentioned in [1]: that a \"normal\"\nbackend using pg_logical_slot_get_changes() or friends would not set the\nlast_inactive_at.\n\n[1]: https://www.postgresql.org/message-id/CAJpy0uD64X%3D2ENmbHaRiWTKeQawr-rbGoy_GdhQQLVXzUSKTMg%40mail.gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 19 Mar 2024 09:41:10 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 19, 2024 at 3:11 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Tue, Mar 19, 2024 at 10:56:25AM +0530, Amit Kapila wrote:\n> > On Mon, Mar 18, 2024 at 8:19 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > > Agree. 
While it makes sense to invalidate slots for wal removal in\n> > > CreateCheckPoint() (because this is the place where wal is removed), I 'm not\n> > > sure this is the right place for the 2 new cases.\n> > >\n> > > Let's focus on the timeout one as proposed above (as probably the simplest one):\n> > > as this one is purely related to time and activity what about to invalidate them\n> > > when?:\n> > >\n> > > - their usage resume\n> > > - in pg_get_replication_slots()\n> > >\n> > > The idea is to invalidate the slot when one resumes activity on it or wants to\n> > > get information about it (and among other things wants to know if the slot is\n> > > valid or not).\n> > >\n> >\n> > Trying to invalidate at those two places makes sense to me but we\n> > still need to cover the cases where it takes very long to resume the\n> > slot activity and the dangling slot cases where the activity is never\n> > resumed.\n>\n> I understand it's better to have the slot reflecting its real status internally\n> but it is a real issue if that's not the case until the activity on it is resumed?\n> (just asking, not saying we should not)\n>\n\nSorry, I didn't understand your point. Can you try to explain by example?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 19 Mar 2024 16:20:35 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 19, 2024 at 04:20:35PM +0530, Amit Kapila wrote:\n> On Tue, Mar 19, 2024 at 3:11 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Tue, Mar 19, 2024 at 10:56:25AM +0530, Amit Kapila wrote:\n> > > On Mon, Mar 18, 2024 at 8:19 PM Bertrand Drouvot\n> > > <[email protected]> wrote:\n> > > > Agree. While it makes sense to invalidate slots for wal removal in\n> > > > CreateCheckPoint() (because this is the place where wal is removed), I 'm not\n> > > > sure this is the right place for the 2 new cases.\n> > > >\n> > > > Let's focus on the timeout one as proposed above (as probably the simplest one):\n> > > > as this one is purely related to time and activity what about to invalidate them\n> > > > when?:\n> > > >\n> > > > - their usage resume\n> > > > - in pg_get_replication_slots()\n> > > >\n> > > > The idea is to invalidate the slot when one resumes activity on it or wants to\n> > > > get information about it (and among other things wants to know if the slot is\n> > > > valid or not).\n> > > >\n> > >\n> > > Trying to invalidate at those two places makes sense to me but we\n> > > still need to cover the cases where it takes very long to resume the\n> > > slot activity and the dangling slot cases where the activity is never\n> > > resumed.\n> >\n> > I understand it's better to have the slot reflecting its real status internally\n> > but it is a real issue if that's not the case until the activity on it is resumed?\n> > (just asking, not saying we should not)\n> >\n> \n> Sorry, I didn't understand your point. Can you try to explain by example?\n\nSorry if that was not clear, let me try to rephrase it first: what issue to you\nsee if the invalidation of such a slot occurs only when its usage resume or\nwhen pg_get_replication_slots() is triggered? 
I understand that this could lead\nto the slot not being invalidated (maybe forever) but is that an issue for an\ninactive slot?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 19 Mar 2024 12:42:21 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 18, 2024 at 3:02 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> > > Hm. Are you suggesting inactive_timeout to be a slot level parameter\n> > > similar to 'failover' property added recently by\n> > > c393308b69d229b664391ac583b9e07418d411b6 and\n> > > 73292404370c9900a96e2bebdc7144f7010339cf?\n> >\n> > Yeah, I have something like that in mind. You can prepare the patch\n> > but it would be good if others involved in this thread can also share\n> > their opinion.\n>\n> I think it makes sense to put the inactive_timeout granularity at the slot\n> level (as the activity could vary a lot say between one slot linked to a\n> subcription and one linked to some plugins). As far max_slot_xid_age I've the\n> feeling that a new GUC is good enough.\n\nWell, here I'm implementing the above idea. The attached v12 patches\nmajorly have the following changes:\n\n1. inactive_timeout is now slot-level, that is, one can set it while\ncreating the slot either via SQL functions or via replication commands\nor via subscription.\n2. last_inactive_at and inactive_timeout are now tracked in on-disk\nreplication slot data structure.\n3. last_inactive_at is now set even for non-walsenders whenever the\nslot is released as opposed to initial versions of the patches setting\nit only for walsenders.\n4. slot's inactive_timeout parameter is now migrated to the new\ncluster with pg_upgrade.\n5. slot's inactive_timeout parameter is now synced to the standby when\nfailover is enabled for the slot.\n6. Test cases are added to cover most of the above cases including new\ninvalidation mechanisms.\n\nFollowing are some open points:\n\n1. Where to do inactive_timeout invalidation exactly if not the checkpointer.\n2. Where to do XID age invalidation exactly if not the checkpointer.\n3. How to go about recomputing XID horizons based on max_slot_xid_age.\nDoes the slot's horizon's need to be adjusted in ComputeXidHorizons()?\n4. New invalidation mechanisms interaction with slot sync feature.\n5. Review comments on 0001 from Bertrand.\n\nPlease see the attached v12 patches.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 20 Mar 2024 00:48:55 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 19, 2024 at 6:12 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Tue, Mar 19, 2024 at 04:20:35PM +0530, Amit Kapila wrote:\n> > On Tue, Mar 19, 2024 at 3:11 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > On Tue, Mar 19, 2024 at 10:56:25AM +0530, Amit Kapila wrote:\n> > > > On Mon, Mar 18, 2024 at 8:19 PM Bertrand Drouvot\n> > > > <[email protected]> wrote:\n> > > > > Agree. 
While it makes sense to invalidate slots for wal removal in\n> > > > > CreateCheckPoint() (because this is the place where wal is removed), I 'm not\n> > > > > sure this is the right place for the 2 new cases.\n> > > > >\n> > > > > Let's focus on the timeout one as proposed above (as probably the simplest one):\n> > > > > as this one is purely related to time and activity what about to invalidate them\n> > > > > when?:\n> > > > >\n> > > > > - their usage resume\n> > > > > - in pg_get_replication_slots()\n> > > > >\n> > > > > The idea is to invalidate the slot when one resumes activity on it or wants to\n> > > > > get information about it (and among other things wants to know if the slot is\n> > > > > valid or not).\n> > > > >\n> > > >\n> > > > Trying to invalidate at those two places makes sense to me but we\n> > > > still need to cover the cases where it takes very long to resume the\n> > > > slot activity and the dangling slot cases where the activity is never\n> > > > resumed.\n> > >\n> > > I understand it's better to have the slot reflecting its real status internally\n> > > but it is a real issue if that's not the case until the activity on it is resumed?\n> > > (just asking, not saying we should not)\n> > >\n> >\n> > Sorry, I didn't understand your point. Can you try to explain by example?\n>\n> Sorry if that was not clear, let me try to rephrase it first: what issue to you\n> see if the invalidation of such a slot occurs only when its usage resume or\n> when pg_get_replication_slots() is triggered? I understand that this could lead\n> to the slot not being invalidated (maybe forever) but is that an issue for an\n> inactive slot?\n>\n\nIt has the risk of preventing WAL and row removal. I think this is the\nprimary reason we are at the first place planning to have such a\nparameter. So, we should have some way to invalidate it even when the\nwalsender/backend process doesn't use it again.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 20 Mar 2024 07:58:20 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 20, 2024 at 12:49 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n>\n> Following are some open points:\n>\n> 1. Where to do inactive_timeout invalidation exactly if not the checkpointer.\n>\n\nI have suggested to do it at the time of CheckpointReplicationSlots()\nand Bertrand suggested to do it whenever we resume using the slot. I\nthink we should follow both the suggestions.\n\n> 2. Where to do XID age invalidation exactly if not the checkpointer.\n> 3. How to go about recomputing XID horizons based on max_slot_xid_age.\n> Does the slot's horizon's need to be adjusted in ComputeXidHorizons()?\n>\n\nI suggest postponing the patch for xid based invalidation for a later\ndiscussion.\n\n> 4. New invalidation mechanisms interaction with slot sync feature.\n>\n\nYeah, this is important. My initial thoughts are that synced slots\nshouldn't be invalidated on the standby due to timeout.\n\n> 5. 
Review comments on 0001 from Bertrand.\n>\n> Please see the attached v12 patches.\n>\n\nThanks for quickly updating the patches.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 20 Mar 2024 08:58:05 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 18, 2024 at 3:42 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Looking at 0001:\n\nThanks for reviewing.\n\n> 1 ===\n>\n> + True if this logical slot conflicted with recovery (and so is now\n> + invalidated). When this column is true, check\n>\n> Worth to add back the physical slot mention \"Always NULL for physical slots.\"?\n\nWill change.\n\n> 2 ===\n>\n> @@ -1023,9 +1023,10 @@ CREATE VIEW pg_replication_slots AS\n> L.wal_status,\n> L.safe_wal_size,\n> L.two_phase,\n> - L.conflict_reason,\n> + L.conflicting,\n> L.failover,\n> - L.synced\n> + L.synced,\n> + L.invalidation_reason\n>\n> What about making invalidation_reason close to conflict_reason?\n\nNot required I think. One can pick the required columns in the SELECT\nclause anyways.\n\n> 3 ===\n>\n> - * Maps a conflict reason for a replication slot to\n> + * Maps a invalidation reason for a replication slot to\n>\n> s/a invalidation/an invalidation/?\n\nWill change.\n\n> 4 ===\n>\n> While at it, shouldn't we also rename \"conflict\" to say \"invalidation_cause\" in\n> InvalidatePossiblyObsoleteSlot()?\n\nThat's inline with our understanding about conflict vs invalidation,\nand keeps the function generic. Will change.\n\n> 5 ===\n>\n> + * rows_removed and wal_level_insufficient are only two reasons\n>\n> s/are only two/are the only two/?\n\nWill change..\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 20 Mar 2024 11:17:47 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 20, 2024 at 08:58:05AM +0530, Amit Kapila wrote:\n> On Wed, Mar 20, 2024 at 12:49 AM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> >\n> > Following are some open points:\n> >\n> > 1. Where to do inactive_timeout invalidation exactly if not the checkpointer.\n> >\n> \n> I have suggested to do it at the time of CheckpointReplicationSlots()\n> and Bertrand suggested to do it whenever we resume using the slot. I\n> think we should follow both the suggestions.\n\nAgree. I also think that pg_get_replication_slots() would be a good place, so\nthat queries would return the right invalidation status.\n\n> > 4. New invalidation mechanisms interaction with slot sync feature.\n> >\n> \n> Yeah, this is important. My initial thoughts are that synced slots\n> shouldn't be invalidated on the standby due to timeout.\n\n+1\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 20 Mar 2024 07:34:04 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 20, 2024 at 12:48:55AM +0530, Bharath Rupireddy wrote:\n> On Mon, Mar 18, 2024 at 3:02 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > > > Hm. 
Are you suggesting inactive_timeout to be a slot level parameter\n> > > > similar to 'failover' property added recently by\n> > > > c393308b69d229b664391ac583b9e07418d411b6 and\n> > > > 73292404370c9900a96e2bebdc7144f7010339cf?\n> > >\n> > > Yeah, I have something like that in mind. You can prepare the patch\n> > > but it would be good if others involved in this thread can also share\n> > > their opinion.\n> >\n> > I think it makes sense to put the inactive_timeout granularity at the slot\n> > level (as the activity could vary a lot say between one slot linked to a\n> > subcription and one linked to some plugins). As far max_slot_xid_age I've the\n> > feeling that a new GUC is good enough.\n> \n> Well, here I'm implementing the above idea.\n\nThanks!\n\n> The attached v12 patches\n> majorly have the following changes:\n> \n> 2. last_inactive_at and inactive_timeout are now tracked in on-disk\n> replication slot data structure.\n\nShould last_inactive_at be tracked on disk? Say the engine is down for a period\nof time > inactive_timeout then the slot will be invalidated after the engine\nre-start (if no activity before we invalidate the slot). Should the time the\nengine is down be counted as \"inactive\" time? I've the feeling it should not, and\nthat we should only take into account inactive time while the engine is up.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 20 Mar 2024 08:21:54 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 20, 2024 at 12:48:55AM +0530, Bharath Rupireddy wrote:\n> On Mon, Mar 18, 2024 at 3:02 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > > > Hm. Are you suggesting inactive_timeout to be a slot level parameter\n> > > > similar to 'failover' property added recently by\n> > > > c393308b69d229b664391ac583b9e07418d411b6 and\n> > > > 73292404370c9900a96e2bebdc7144f7010339cf?\n> > >\n> > > Yeah, I have something like that in mind. You can prepare the patch\n> > > but it would be good if others involved in this thread can also share\n> > > their opinion.\n> >\n> > I think it makes sense to put the inactive_timeout granularity at the slot\n> > level (as the activity could vary a lot say between one slot linked to a\n> > subcription and one linked to some plugins). As far max_slot_xid_age I've the\n> > feeling that a new GUC is good enough.\n> \n> Well, here I'm implementing the above idea. The attached v12 patches\n> majorly have the following changes:\n> \n\nRegarding v12-0004: \"Allow setting inactive_timeout in the replication command\",\nshouldn't we also add an new SQL API say: pg_alter_replication_slot() that would\nallow to change the timeout property? \n\nThat would allow users to alter this property without the need to make a\nreplication connection. \n\nBut the issue is that it would make it inconsistent with the new inactivetimeout\nin the subscription that is added in \"v12-0005\". But do we need to display\nsubinactivetimeout in pg_subscription (and even allow it at subscription creation\n/ alter) after all? 
(I've the feeling there is less such a need as compare to\nsubfailover, subtwophasestate for example).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 20 Mar 2024 13:38:18 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 20, 2024 at 1:04 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Wed, Mar 20, 2024 at 08:58:05AM +0530, Amit Kapila wrote:\n> > On Wed, Mar 20, 2024 at 12:49 AM Bharath Rupireddy\n> > <[email protected]> wrote:\n> > >\n> > > Following are some open points:\n> > >\n> > > 1. Where to do inactive_timeout invalidation exactly if not the checkpointer.\n> > >\n> > I have suggested to do it at the time of CheckpointReplicationSlots()\n> > and Bertrand suggested to do it whenever we resume using the slot. I\n> > think we should follow both the suggestions.\n>\n> Agree. I also think that pg_get_replication_slots() would be a good place, so\n> that queries would return the right invalidation status.\n\nI've addressed review comments and attaching the v13 patches with the\nfollowing changes:\n\n1. Invalidate replication slot due to inactive_timeout:\n1.1 In CheckpointReplicationSlots() to help with automatic invalidation.\n1.2 In pg_get_replication_slots to help readers see the latest slot information.\n1.3 In ReplicationSlotAcquire for walsenders as typically walsenders\nare the ones that use slots for longer durations for streaming\nstandbys and logical subscribers.\n1.4 In ReplicationSlotAcquire when called from\npg_logical_slot_get_changes_guts to help with logical decoding clients\nto disallow decoding from invalidated slots.\n1.5 In ReplicationSlotAcquire when called from\npg_replication_slot_advance to help with disallowing advancing\ninvalidated slots.\n2. Have a new input parameter bool check_for_invalidation for\nReplicationSlotAcquire(). When true, check for the inactive_timeout\ninvalidation, if invalidated, error out.\n3. Have a new function to just do inactive_timeout invalidation.\n4. Do not update last_inactive_at for failover slots on standby to not\ninvalidate failover slots on the standby.\n5. In ReplicationSlotAcquire(), invalidate the slot before making it active.\n6. Make last_inactive_at a shared-memory parameter as opposed to an\non-disk parameter to help not count the server downtime for inactive\ntime.\n7. 
Let the failover slot on standby and pg_upgraded slots get\ninactive_timeout parameter from the primary and old cluster\nrespectively.\n\nPlease see the attached v13 patches.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 21 Mar 2024 05:05:46 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 20, 2024 at 7:08 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Regarding v12-0004: \"Allow setting inactive_timeout in the replication command\",\n> shouldn't we also add an new SQL API say: pg_alter_replication_slot() that would\n> allow to change the timeout property?\n>\n> That would allow users to alter this property without the need to make a\n> replication connection.\n\n+1 to add a new SQL function pg_alter_replication_slot(). It helps\nfirst create the slots and then later decide the appropriate\ninactive_timeout. It might grow into altering other slot parameters\nsuch as failover (I'm not sure if altering failover property on the\nprimary after a while makes it the right candidate for syncing on the\nstandby). Perhaps, we can add it for altering just inactive_timeout\nfor now and be done with it.\n\nFWIW, ALTER_REPLICATION_SLOT was added keeping in mind just the\nfailover property for logical slots, that's why it emits an error\n\"cannot use ALTER_REPLICATION_SLOT with a physical replication slot\"\n\n> But the issue is that it would make it inconsistent with the new inactivetimeout\n> in the subscription that is added in \"v12-0005\".\n\nCan you please elaborate what the inconsistency it causes with inactivetimeout?\n\n> But do we need to display\n> subinactivetimeout in pg_subscription (and even allow it at subscription creation\n> / alter) after all? (I've the feeling there is less such a need as compare to\n> subfailover, subtwophasestate for example).\n\nMaybe we don't need to. One can always trace down to the replication\nslot associated with the subscription on the publisher, and get to\nknow what the slot's inactive_timeout setting is. However, it looks to\nme that it avoids one going to the publisher to know the\ninactive_timeout value for a subscription. Moreover, we are allowing\nthe inactive_timeout to be set via CREATE/ALTER SUBSCRIPTION command,\nI believe there's nothing wrong if it's also part of the\npg_subscription catalog.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 05:19:05 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 20, 2024 at 1:51 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Wed, Mar 20, 2024 at 12:48:55AM +0530, Bharath Rupireddy wrote:\n> >\n> > 2. last_inactive_at and inactive_timeout are now tracked in on-disk\n> > replication slot data structure.\n>\n> Should last_inactive_at be tracked on disk? Say the engine is down for a period\n> of time > inactive_timeout then the slot will be invalidated after the engine\n> re-start (if no activity before we invalidate the slot). Should the time the\n> engine is down be counted as \"inactive\" time? 
I've the feeling it should not, and\n> that we should only take into account inactive time while the engine is up.\n>\n\nGood point. The question is how do we achieve this without persisting\nthe 'last_inactive_at'? Say, 'last_inactive_at' for a particular slot\nhad some valid value before we shut down but it still didn't cross the\nconfigured 'inactive_timeout' value, so, we won't be able to\ninvalidate it. Now, after the restart, as we don't know the\nlast_inactive_at's value before the shutdown, we will initialize it\nwith 0 (this is what Bharath seems to have done in the latest\nv13-0002* patch). After this, even if walsender or backend never\nacquires the slot, we won't invalidate it. OTOH, if we track\n'last_inactive_at' on the disk, after, restart, we could initialize it\nto the current time if the value is non-zero. Do you have any better\nideas?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 21 Mar 2024 08:47:18 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 21, 2024 at 5:19 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Mar 20, 2024 at 7:08 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Regarding v12-0004: \"Allow setting inactive_timeout in the replication command\",\n> > shouldn't we also add an new SQL API say: pg_alter_replication_slot() that would\n> > allow to change the timeout property?\n> >\n> > That would allow users to alter this property without the need to make a\n> > replication connection.\n>\n> +1 to add a new SQL function pg_alter_replication_slot().\n>\n\nI also don't see any obvious problem with such an API. However, this\nis not a good time to invent new APIs. Let's keep the feature simple\nand then we can extend it in the next version after more discussion\nand probably by that time we will get some feedback from the field as\nwell.\n\n>\n> It helps\n> first create the slots and then later decide the appropriate\n> inactive_timeout. It might grow into altering other slot parameters\n> such as failover (I'm not sure if altering failover property on the\n> primary after a while makes it the right candidate for syncing on the\n> standby). Perhaps, we can add it for altering just inactive_timeout\n> for now and be done with it.\n>\n> FWIW, ALTER_REPLICATION_SLOT was added keeping in mind just the\n> failover property for logical slots, that's why it emits an error\n> \"cannot use ALTER_REPLICATION_SLOT with a physical replication slot\"\n>\n> > But the issue is that it would make it inconsistent with the new inactivetimeout\n> > in the subscription that is added in \"v12-0005\".\n>\n> Can you please elaborate what the inconsistency it causes with inactivetimeout?\n>\n\nI think the inconsistency can arise from the fact that on publisher\none can change the inactive_timeout for the slot corresponding to a\nsubscription but the subscriber won't know, so it will still show the\nold value. If we want we can document this as a limitation and let\nusers be aware of it. However, I feel at this stage, let's not even\nexpose this from the subscription or maybe we can discuss it once/if\nwe are done with other patches. 
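
To spell out that inconsistency with a purely hypothetical example (assuming
the v12-0005 subscription option and a pg_alter_replication_slot(slot_name,
timeout) signature, neither of which is settled yet):

    -- subscriber side: pg_subscription would record 600
    ALTER SUBSCRIPTION sub1 SET (inactive_timeout = 600);

    -- publisher side, later: the slot (named sub1 by default) now says 1200,
    -- but the subscriber-side catalog still shows the stale 600
    SELECT pg_alter_replication_slot('sub1', 1200);
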
Anyway, if one wants to use this\nfeature with a subscription, she can create a slot first on the\npublisher with inactive_timeout value and then associate such a slot\nwith a required subscription.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 21 Mar 2024 09:07:18 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 21, 2024 at 9:07 AM Amit Kapila <[email protected]> wrote:\n>\n> I also don't see any obvious problem with such an API. However, this\n> is not a good time to invent new APIs. Let's keep the feature simple\n> and then we can extend it in the next version after more discussion\n> and probably by that time we will get some feedback from the field as\n> well.\n\nI couldn't agree more.\n\n> > > But the issue is that it would make it inconsistent with the new inactivetimeout\n> > > in the subscription that is added in \"v12-0005\".\n> >\n> > Can you please elaborate what the inconsistency it causes with inactivetimeout?\n> >\n> I think the inconsistency can arise from the fact that on publisher\n> one can change the inactive_timeout for the slot corresponding to a\n> subscription but the subscriber won't know, so it will still show the\n> old value.\n\nUnderstood.\n\n> If we want we can document this as a limitation and let\n> users be aware of it. However, I feel at this stage, let's not even\n> expose this from the subscription or maybe we can discuss it once/if\n> we are done with other patches. Anyway, if one wants to use this\n> feature with a subscription, she can create a slot first on the\n> publisher with inactive_timeout value and then associate such a slot\n> with a required subscription.\n\nIf we are not exposing it via subscription (meaning, we don't consider\nv13-0004 and v13-0005 patches), I feel we can have a new SQL API\npg_alter_replication_slot(int inactive_timeout) for now just altering\nthe inactive_timeout of a given slot.\n\nWith this approach, one can do either of the following:\n1) Create a slot with SQL API with inactive_timeout set, and use it\nfor subscriptions or for streaming standbys.\n2) Create a slot with SQL API without inactive_timeout set, use it for\nsubscriptions or for streaming standbys, and set inactive_timeout\nlater via pg_alter_replication_slot() depending on how the slot is\nconsumed\n3) Create a subscription with create_slot=true, and set\ninactive_timeout via pg_alter_replication_slot() depending on how the\nslot is consumed.\n\nThis approach seems consistent and minimal to start with.\n\nIf we agree on this, I'll drop both 0004 and 0005 that are allowing\ninactive_timeout to be set via replication commands and via\ncreate/alter subscription respectively, and implement\npg_alter_replication_slot().\n\nFWIW, adding the new SQL API pg_alter_replication_slot() isn't that hard.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 10:53:54 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 21, 2024 at 8:47 AM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Mar 20, 2024 at 1:51 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Wed, Mar 20, 2024 at 12:48:55AM +0530, Bharath Rupireddy 
wrote:\n> > >\n> > > 2. last_inactive_at and inactive_timeout are now tracked in on-disk\n> > > replication slot data structure.\n> >\n> > Should last_inactive_at be tracked on disk? Say the engine is down for a period\n> > of time > inactive_timeout then the slot will be invalidated after the engine\n> > re-start (if no activity before we invalidate the slot). Should the time the\n> > engine is down be counted as \"inactive\" time? I've the feeling it should not, and\n> > that we should only take into account inactive time while the engine is up.\n> >\n>\n> Good point. The question is how do we achieve this without persisting\n> the 'last_inactive_at'? Say, 'last_inactive_at' for a particular slot\n> had some valid value before we shut down but it still didn't cross the\n> configured 'inactive_timeout' value, so, we won't be able to\n> invalidate it. Now, after the restart, as we don't know the\n> last_inactive_at's value before the shutdown, we will initialize it\n> with 0 (this is what Bharath seems to have done in the latest\n> v13-0002* patch). After this, even if walsender or backend never\n> acquires the slot, we won't invalidate it. OTOH, if we track\n> 'last_inactive_at' on the disk, after, restart, we could initialize it\n> to the current time if the value is non-zero. Do you have any better\n> ideas?\n\nThis sounds reasonable to me at least.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 10:55:31 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Thu, Mar 21, 2024 at 08:47:18AM +0530, Amit Kapila wrote:\n> On Wed, Mar 20, 2024 at 1:51 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Wed, Mar 20, 2024 at 12:48:55AM +0530, Bharath Rupireddy wrote:\n> > >\n> > > 2. last_inactive_at and inactive_timeout are now tracked in on-disk\n> > > replication slot data structure.\n> >\n> > Should last_inactive_at be tracked on disk? Say the engine is down for a period\n> > of time > inactive_timeout then the slot will be invalidated after the engine\n> > re-start (if no activity before we invalidate the slot). Should the time the\n> > engine is down be counted as \"inactive\" time? I've the feeling it should not, and\n> > that we should only take into account inactive time while the engine is up.\n> >\n> \n> Good point. The question is how do we achieve this without persisting\n> the 'last_inactive_at'? Say, 'last_inactive_at' for a particular slot\n> had some valid value before we shut down but it still didn't cross the\n> configured 'inactive_timeout' value, so, we won't be able to\n> invalidate it. Now, after the restart, as we don't know the\n> last_inactive_at's value before the shutdown, we will initialize it\n> with 0 (this is what Bharath seems to have done in the latest\n> v13-0002* patch). After this, even if walsender or backend never\n> acquires the slot, we won't invalidate it. OTOH, if we track\n> 'last_inactive_at' on the disk, after, restart, we could initialize it\n> to the current time if the value is non-zero. Do you have any better\n> ideas?\n> \n\nI think that setting last_inactive_at when we restart makes sense if the slot\nhas been active previously. 
I think the idea is because it's holding xmin/catalog_xmin\nand that we don't want to prevent rows removal longer that the timeout.\n\nSo what about relying on xmin/catalog_xmin instead that way?\n\n- For physical slots if xmin is set then set last_inactive_at to the current\ntime at restart (else zero).\n\n- For logical slot, it's not the same as the catalog_xmin is set at the slot\ncreation time. So what about setting last_inactive_at at the current time at \nrestart but also at creation time for logical slot? (Setting it to zero at\ncreation time (as we do in v13) does not look right, given the fact that it's\n\"already\" holding a catalog_xmin).\n\nThat way, we'd ensure that we are not holding rows for longer that the timeout\nand we don't need to persist last_inactive_at.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 05:53:48 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Thu, Mar 21, 2024 at 10:53:54AM +0530, Bharath Rupireddy wrote:\n> On Thu, Mar 21, 2024 at 9:07 AM Amit Kapila <[email protected]> wrote:\n> > > > But the issue is that it would make it inconsistent with the new inactivetimeout\n> > > > in the subscription that is added in \"v12-0005\".\n> > >\n> > > Can you please elaborate what the inconsistency it causes with inactivetimeout?\n> > >\n> > I think the inconsistency can arise from the fact that on publisher\n> > one can change the inactive_timeout for the slot corresponding to a\n> > subscription but the subscriber won't know, so it will still show the\n> > old value.\n\nYeah, that was what I had in mind.\n\n> > If we want we can document this as a limitation and let\n> > users be aware of it. 
However, I feel at this stage, let's not even\n> > expose this from the subscription or maybe we can discuss it once/if\n> > we are done with other patches.\n\nI agree, it's important to expose it for things like \"failover\" but I think we\ncan get rid of it for the timeout one.\n\n>> Anyway, if one wants to use this\n> > feature with a subscription, she can create a slot first on the\n> > publisher with inactive_timeout value and then associate such a slot\n> > with a required subscription.\n\nRight.\n\n> \n> If we are not exposing it via subscription (meaning, we don't consider\n> v13-0004 and v13-0005 patches), I feel we can have a new SQL API\n> pg_alter_replication_slot(int inactive_timeout) for now just altering\n> the inactive_timeout of a given slot.\n\nAgree, that seems more \"natural\" that going through a replication connection.\n\n> With this approach, one can do either of the following:\n> 1) Create a slot with SQL API with inactive_timeout set, and use it\n> for subscriptions or for streaming standbys.\n\nYes.\n\n> 2) Create a slot with SQL API without inactive_timeout set, use it for\n> subscriptions or for streaming standbys, and set inactive_timeout\n> later via pg_alter_replication_slot() depending on how the slot is\n> consumed\n\nYes.\n\n> 3) Create a subscription with create_slot=true, and set\n> inactive_timeout via pg_alter_replication_slot() depending on how the\n> slot is consumed.\n\nYes.\n\nWe could also do the above 3 and altering the timeout with a replication\nconnection but the SQL API seems more natural to me.\n\n> \n> This approach seems consistent and minimal to start with.\n> \n> If we agree on this, I'll drop both 0004 and 0005 that are allowing\n> inactive_timeout to be set via replication commands and via\n> create/alter subscription respectively, and implement\n> pg_alter_replication_slot().\n\n+1 on this.\n\n> FWIW, adding the new SQL API pg_alter_replication_slot() isn't that hard.\n\nAlso I think we should ensure that one could \"only\" alter the timeout property\nfor the time being (if not that could lead to the subscription inconsistency \nmentioned above).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 06:07:24 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 21, 2024 at 11:23 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Thu, Mar 21, 2024 at 08:47:18AM +0530, Amit Kapila wrote:\n> > On Wed, Mar 20, 2024 at 1:51 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > On Wed, Mar 20, 2024 at 12:48:55AM +0530, Bharath Rupireddy wrote:\n> > > >\n> > > > 2. last_inactive_at and inactive_timeout are now tracked in on-disk\n> > > > replication slot data structure.\n> > >\n> > > Should last_inactive_at be tracked on disk? Say the engine is down for a period\n> > > of time > inactive_timeout then the slot will be invalidated after the engine\n> > > re-start (if no activity before we invalidate the slot). Should the time the\n> > > engine is down be counted as \"inactive\" time? I've the feeling it should not, and\n> > > that we should only take into account inactive time while the engine is up.\n> > >\n> >\n> > Good point. The question is how do we achieve this without persisting\n> > the 'last_inactive_at'? 
Say, 'last_inactive_at' for a particular slot\n> > had some valid value before we shut down but it still didn't cross the\n> > configured 'inactive_timeout' value, so, we won't be able to\n> > invalidate it. Now, after the restart, as we don't know the\n> > last_inactive_at's value before the shutdown, we will initialize it\n> > with 0 (this is what Bharath seems to have done in the latest\n> > v13-0002* patch). After this, even if walsender or backend never\n> > acquires the slot, we won't invalidate it. OTOH, if we track\n> > 'last_inactive_at' on the disk, after, restart, we could initialize it\n> > to the current time if the value is non-zero. Do you have any better\n> > ideas?\n> >\n>\n> I think that setting last_inactive_at when we restart makes sense if the slot\n> has been active previously. I think the idea is because it's holding xmin/catalog_xmin\n> and that we don't want to prevent rows removal longer that the timeout.\n>\n> So what about relying on xmin/catalog_xmin instead that way?\n>\n\nThat doesn't sound like a great idea because xmin/catalog_xmin values\nwon't tell us before restart whether it was active or not. It could\nhave been inactive for long time before restart but the xmin values\ncould still be valid. What about we always set 'last_inactive_at' at\nrestart (if the slot's inactive_timeout has non-zero value) and reset\nit as soon as someone acquires that slot? Now, if the slot doesn't get\nacquired till 'inactive_timeout', checkpointer will invalidate the\nslot.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 21 Mar 2024 11:43:54 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 21, 2024 at 11:37 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Thu, Mar 21, 2024 at 10:53:54AM +0530, Bharath Rupireddy wrote:\n> > On Thu, Mar 21, 2024 at 9:07 AM Amit Kapila <[email protected]> wrote:\n> > > > > But the issue is that it would make it inconsistent with the new inactivetimeout\n> > > > > in the subscription that is added in \"v12-0005\".\n> > > >\n> > > > Can you please elaborate what the inconsistency it causes with inactivetimeout?\n> > > >\n> > > I think the inconsistency can arise from the fact that on publisher\n> > > one can change the inactive_timeout for the slot corresponding to a\n> > > subscription but the subscriber won't know, so it will still show the\n> > > old value.\n>\n> Yeah, that was what I had in mind.\n>\n> > > If we want we can document this as a limitation and let\n> > > users be aware of it. 
However, I feel at this stage, let's not even\n> > > expose this from the subscription or maybe we can discuss it once/if\n> > > we are done with other patches.\n>\n> I agree, it's important to expose it for things like \"failover\" but I think we\n> can get rid of it for the timeout one.\n>\n> >> Anyway, if one wants to use this\n> > > feature with a subscription, she can create a slot first on the\n> > > publisher with inactive_timeout value and then associate such a slot\n> > > with a required subscription.\n>\n> Right.\n>\n> >\n> > If we are not exposing it via subscription (meaning, we don't consider\n> > v13-0004 and v13-0005 patches), I feel we can have a new SQL API\n> > pg_alter_replication_slot(int inactive_timeout) for now just altering\n> > the inactive_timeout of a given slot.\n>\n> Agree, that seems more \"natural\" that going through a replication connection.\n>\n> > With this approach, one can do either of the following:\n> > 1) Create a slot with SQL API with inactive_timeout set, and use it\n> > for subscriptions or for streaming standbys.\n>\n> Yes.\n>\n> > 2) Create a slot with SQL API without inactive_timeout set, use it for\n> > subscriptions or for streaming standbys, and set inactive_timeout\n> > later via pg_alter_replication_slot() depending on how the slot is\n> > consumed\n>\n> Yes.\n>\n> > 3) Create a subscription with create_slot=true, and set\n> > inactive_timeout via pg_alter_replication_slot() depending on how the\n> > slot is consumed.\n>\n> Yes.\n>\n> We could also do the above 3 and altering the timeout with a replication\n> connection but the SQL API seems more natural to me.\n>\n\nIf we want to go with this then I think we should at least ensure that\nif one specified timeout via CREATE_REPLICATION_SLOT or\nALTER_REPLICATION_SLOT that should be honored.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 21 Mar 2024 11:53:32 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Thu, Mar 21, 2024 at 11:43:54AM +0530, Amit Kapila wrote:\n> On Thu, Mar 21, 2024 at 11:23 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Thu, Mar 21, 2024 at 08:47:18AM +0530, Amit Kapila wrote:\n> > > On Wed, Mar 20, 2024 at 1:51 PM Bertrand Drouvot\n> > > <[email protected]> wrote:\n> > > >\n> > > > On Wed, Mar 20, 2024 at 12:48:55AM +0530, Bharath Rupireddy wrote:\n> > > > >\n> > > > > 2. last_inactive_at and inactive_timeout are now tracked in on-disk\n> > > > > replication slot data structure.\n> > > >\n> > > > Should last_inactive_at be tracked on disk? Say the engine is down for a period\n> > > > of time > inactive_timeout then the slot will be invalidated after the engine\n> > > > re-start (if no activity before we invalidate the slot). Should the time the\n> > > > engine is down be counted as \"inactive\" time? I've the feeling it should not, and\n> > > > that we should only take into account inactive time while the engine is up.\n> > > >\n> > >\n> > > Good point. The question is how do we achieve this without persisting\n> > > the 'last_inactive_at'? Say, 'last_inactive_at' for a particular slot\n> > > had some valid value before we shut down but it still didn't cross the\n> > > configured 'inactive_timeout' value, so, we won't be able to\n> > > invalidate it. 
Now, after the restart, as we don't know the\n> > > last_inactive_at's value before the shutdown, we will initialize it\n> > > with 0 (this is what Bharath seems to have done in the latest\n> > > v13-0002* patch). After this, even if walsender or backend never\n> > > acquires the slot, we won't invalidate it. OTOH, if we track\n> > > 'last_inactive_at' on the disk, after, restart, we could initialize it\n> > > to the current time if the value is non-zero. Do you have any better\n> > > ideas?\n> > >\n> >\n> > I think that setting last_inactive_at when we restart makes sense if the slot\n> > has been active previously. I think the idea is because it's holding xmin/catalog_xmin\n> > and that we don't want to prevent rows removal longer that the timeout.\n> >\n> > So what about relying on xmin/catalog_xmin instead that way?\n> >\n> \n> That doesn't sound like a great idea because xmin/catalog_xmin values\n> won't tell us before restart whether it was active or not. It could\n> have been inactive for long time before restart but the xmin values\n> could still be valid.\n\nRight, the idea here was more like \"don't hold xmin/catalog_xmin\" for longer\nthan timeout.\n\nMy concern was that we set catalog_xmin at logical slot creation time. So if we\nset last_inactive_at to zero at creation time and the slot is not used for a long\nperiod of time > timeout, then I think it's not helping there.\n\n> What about we always set 'last_inactive_at' at\n> restart (if the slot's inactive_timeout has non-zero value) and reset\n> it as soon as someone acquires that slot? Now, if the slot doesn't get\n> acquired till 'inactive_timeout', checkpointer will invalidate the\n> slot.\n\nYeah that sounds good to me, but I think we should set last_inactive_at at creation\ntime too, if not:\n\n- physical slot could remain valid for long time after creation (which is fine)\nbut the behavior would change at restart.\n- logical slot would have the \"issue\" reported above (holding catalog_xmin). 
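
To make that concrete, here is the kind of behavior I have in mind, as an
untested illustration using the column names from the current patch set (so
not necessarily what the posted patches emit as-is):

    -- create a logical slot and leave it idle: catalog_xmin is held from now
    -- on, so last_inactive_at should already be set at creation time
    SELECT pg_create_logical_replication_slot('idle_slot', 'test_decoding');

    SELECT slot_name,
           last_inactive_at IS NOT NULL AS inactive_ts_set,
           inactive_timeout
      FROM pg_replication_slots
     WHERE slot_name = 'idle_slot';

and the same query run right after a restart should still report
inactive_ts_set as true, so the slot can eventually be invalidated even if
nobody ever acquires it.
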
\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 06:45:28 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Thu, Mar 21, 2024 at 11:53:32AM +0530, Amit Kapila wrote:\n> On Thu, Mar 21, 2024 at 11:37 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > We could also do the above 3 and altering the timeout with a replication\n> > connection but the SQL API seems more natural to me.\n> >\n> \n> If we want to go with this then I think we should at least ensure that\n> if one specified timeout via CREATE_REPLICATION_SLOT or\n> ALTER_REPLICATION_SLOT that should be honored.\n\nYeah, agree.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 06:50:12 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Thu, Mar 21, 2024 at 05:05:46AM +0530, Bharath Rupireddy wrote:\n> On Wed, Mar 20, 2024 at 1:04 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Wed, Mar 20, 2024 at 08:58:05AM +0530, Amit Kapila wrote:\n> > > On Wed, Mar 20, 2024 at 12:49 AM Bharath Rupireddy\n> > > <[email protected]> wrote:\n> > > >\n> > > > Following are some open points:\n> > > >\n> > > > 1. Where to do inactive_timeout invalidation exactly if not the checkpointer.\n> > > >\n> > > I have suggested to do it at the time of CheckpointReplicationSlots()\n> > > and Bertrand suggested to do it whenever we resume using the slot. I\n> > > think we should follow both the suggestions.\n> >\n> > Agree. I also think that pg_get_replication_slots() would be a good place, so\n> > that queries would return the right invalidation status.\n> \n> I've addressed review comments and attaching the v13 patches with the\n> following changes:\n\nThanks!\n\nv13-0001 looks good to me. The only Nit (that I've mentioned up-thread) is that\nin the pg_replication_slots view, the invalidation_reason is \"far away\" from the\nconflicting field. I understand that one could query the fields individually but\nwhen describing the view or reading the doc, it seems more appropriate to see\nthem closer. Also as \"failover\" and \"synced\" are also new in version 17, there\nis no risk to break order by \"17,18\" kind of queries (which are the failover\nand sync positions).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 07:10:16 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 21, 2024 at 12:40 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> v13-0001 looks good to me. The only Nit (that I've mentioned up-thread) is that\n> in the pg_replication_slots view, the invalidation_reason is \"far away\" from the\n> conflicting field. I understand that one could query the fields individually but\n> when describing the view or reading the doc, it seems more appropriate to see\n> them closer. 
Also as \"failover\" and \"synced\" are also new in version 17, there\n> is no risk to break order by \"17,18\" kind of queries (which are the failover\n> and sync positions).\n\nHm, yeah, I can change that in the next version of the patches. Thanks.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 14:43:46 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 21, 2024 at 12:15 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Thu, Mar 21, 2024 at 11:43:54AM +0530, Amit Kapila wrote:\n> > On Thu, Mar 21, 2024 at 11:23 AM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > On Thu, Mar 21, 2024 at 08:47:18AM +0530, Amit Kapila wrote:\n> > > > On Wed, Mar 20, 2024 at 1:51 PM Bertrand Drouvot\n> > > > <[email protected]> wrote:\n> > > > >\n> > > > > On Wed, Mar 20, 2024 at 12:48:55AM +0530, Bharath Rupireddy wrote:\n> > > > > >\n> > > > > > 2. last_inactive_at and inactive_timeout are now tracked in on-disk\n> > > > > > replication slot data structure.\n> > > > >\n> > > > > Should last_inactive_at be tracked on disk? Say the engine is down for a period\n> > > > > of time > inactive_timeout then the slot will be invalidated after the engine\n> > > > > re-start (if no activity before we invalidate the slot). Should the time the\n> > > > > engine is down be counted as \"inactive\" time? I've the feeling it should not, and\n> > > > > that we should only take into account inactive time while the engine is up.\n> > > > >\n> > > >\n> > > > Good point. The question is how do we achieve this without persisting\n> > > > the 'last_inactive_at'? Say, 'last_inactive_at' for a particular slot\n> > > > had some valid value before we shut down but it still didn't cross the\n> > > > configured 'inactive_timeout' value, so, we won't be able to\n> > > > invalidate it. Now, after the restart, as we don't know the\n> > > > last_inactive_at's value before the shutdown, we will initialize it\n> > > > with 0 (this is what Bharath seems to have done in the latest\n> > > > v13-0002* patch). After this, even if walsender or backend never\n> > > > acquires the slot, we won't invalidate it. OTOH, if we track\n> > > > 'last_inactive_at' on the disk, after, restart, we could initialize it\n> > > > to the current time if the value is non-zero. Do you have any better\n> > > > ideas?\n> > > >\n> > >\n> > > I think that setting last_inactive_at when we restart makes sense if the slot\n> > > has been active previously. I think the idea is because it's holding xmin/catalog_xmin\n> > > and that we don't want to prevent rows removal longer that the timeout.\n> > >\n> > > So what about relying on xmin/catalog_xmin instead that way?\n> > >\n> >\n> > That doesn't sound like a great idea because xmin/catalog_xmin values\n> > won't tell us before restart whether it was active or not. It could\n> > have been inactive for long time before restart but the xmin values\n> > could still be valid.\n>\n> Right, the idea here was more like \"don't hold xmin/catalog_xmin\" for longer\n> than timeout.\n>\n> My concern was that we set catalog_xmin at logical slot creation time. 
So if we\n> set last_inactive_at to zero at creation time and the slot is not used for a long\n> period of time > timeout, then I think it's not helping there.\n>\n\nBut, we do call ReplicationSlotRelease() after slot creation. For\nexample, see CreateReplicationSlot(). So wouldn't that take care of\nthe case you are worried about?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 21 Mar 2024 15:20:01 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 21, 2024 at 3:20 PM Amit Kapila <[email protected]> wrote:\n>\n> > My concern was that we set catalog_xmin at logical slot creation time. So if we\n> > set last_inactive_at to zero at creation time and the slot is not used for a long\n> > period of time > timeout, then I think it's not helping there.\n>\n> But, we do call ReplicationSlotRelease() after slot creation. For\n> example, see CreateReplicationSlot(). So wouldn't that take care of\n> the case you are worried about?\n\nRight. That's true even for pg_create_physical_replication_slot and\npg_create_logical_replication_slot. AFAICS, setting it to the current\ntimestamp in ReplicationSlotRelease suffices unless I'm missing\nsomething.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 16:13:31 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 21, 2024 at 2:44 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Mar 21, 2024 at 12:40 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > v13-0001 looks good to me. The only Nit (that I've mentioned up-thread) is that\n> > in the pg_replication_slots view, the invalidation_reason is \"far away\" from the\n> > conflicting field. I understand that one could query the fields individually but\n> > when describing the view or reading the doc, it seems more appropriate to see\n> > them closer. Also as \"failover\" and \"synced\" are also new in version 17, there\n> > is no risk to break order by \"17,18\" kind of queries (which are the failover\n> > and sync positions).\n>\n> Hm, yeah, I can change that in the next version of the patches. Thanks.\n>\n\nThis makes sense to me. Apart from this, few more comments on 0001.\n1.\n--- a/src/bin/pg_upgrade/info.c\n+++ b/src/bin/pg_upgrade/info.c\n@@ -676,13 +676,13 @@ get_old_cluster_logical_slot_infos(DbInfo\n*dbinfo, bool live_check)\n * removed.\n */\n res = executeQueryOrDie(conn, \"SELECT slot_name, plugin, two_phase,\nfailover, \"\n- \"%s as caught_up, conflict_reason IS NOT NULL as invalid \"\n+ \"%s as caught_up, invalidation_reason IS NOT NULL as invalid \"\n \"FROM pg_catalog.pg_replication_slots \"\n \"WHERE slot_type = 'logical' AND \"\n \"database = current_database() AND \"\n \"temporary IS FALSE;\",\n live_check ? \"FALSE\" :\n- \"(CASE WHEN conflict_reason IS NOT NULL THEN FALSE \"\n+ \"(CASE WHEN conflicting THEN FALSE \"\n\nI think here at both places we need to change 'conflict_reason' to\n'conflicting'.\n\n2.\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>invalidation_reason</structfield> <type>text</type>\n+ </para>\n+ <para>\n+ The reason for the slot's invalidation. 
It is set for both logical and\n+ physical slots. <literal>NULL</literal> if the slot is not invalidated.\n+ Possible values are:\n+ <itemizedlist spacing=\"compact\">\n+ <listitem>\n+ <para>\n+ <literal>wal_removed</literal> means that the required WAL has been\n+ removed.\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ <literal>rows_removed</literal> means that the required rows have\n+ been removed.\n+ </para>\n+ </listitem>\n+ <listitem>\n+ <para>\n+ <literal>wal_level_insufficient</literal> means that the\n+ primary doesn't have a <xref linkend=\"guc-wal-level\"/> sufficient to\n+ perform logical decoding.\n+ </para>\n\nCan the reasons 'rows_removed' and 'wal_level_insufficient' appear for\nphysical slots? If not, then it is not clear from above text.\n\n3.\n-# Verify slots are reported as non conflicting in pg_replication_slots\n+# Verify slots are reported as valid in pg_replication_slots\n is( $node_standby->safe_psql(\n 'postgres',\n q[select bool_or(conflicting) from\n- (select conflict_reason is not NULL as conflicting\n- from pg_replication_slots WHERE slot_type = 'logical')]),\n+ (select conflicting from pg_replication_slots\n+ where slot_type = 'logical')]),\n 'f',\n- 'Logical slots are reported as non conflicting');\n+ 'Logical slots are reported as valid');\n\nI don't think we need to change the comment or success message in this test.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 21 Mar 2024 16:25:46 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Thu, Mar 21, 2024 at 04:13:31PM +0530, Bharath Rupireddy wrote:\n> On Thu, Mar 21, 2024 at 3:20 PM Amit Kapila <[email protected]> wrote:\n> >\n> > > My concern was that we set catalog_xmin at logical slot creation time. So if we\n> > > set last_inactive_at to zero at creation time and the slot is not used for a long\n> > > period of time > timeout, then I think it's not helping there.\n> >\n> > But, we do call ReplicationSlotRelease() after slot creation. For\n> > example, see CreateReplicationSlot(). So wouldn't that take care of\n> > the case you are worried about?\n> \n> Right. That's true even for pg_create_physical_replication_slot and\n> pg_create_logical_replication_slot. AFAICS, setting it to the current\n> timestamp in ReplicationSlotRelease suffices unless I'm missing\n> something.\n\nRight, but we have:\n\n\"\n if (set_last_inactive_at &&\n slot->data.persistency == RS_PERSISTENT)\n {\n /*\n * There's no point in allowing failover slots to get invalidated\n * based on slot's inactive_timeout parameter on standby. The failover\n * slots simply get synced from the primary on the standby.\n */\n if (!(RecoveryInProgress() && slot->data.failover))\n {\n SpinLockAcquire(&slot->mutex);\n slot->last_inactive_at = GetCurrentTimestamp();\n SpinLockRelease(&slot->mutex);\n }\n }\n\"\n\nwhile we set set_last_inactive_at to false at creation time so that last_inactive_at\nis not set to GetCurrentTimestamp(). 
We should set set_last_inactive_at to true\nif a timeout is provided during the slot creation.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 21 Mar 2024 11:20:50 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 21, 2024 at 4:25 PM Amit Kapila <[email protected]> wrote:\n>\n> This makes sense to me. Apart from this, few more comments on 0001.\n\nThanks for looking into it.\n\n> 1.\n> - \"%s as caught_up, conflict_reason IS NOT NULL as invalid \"\n> + \"%s as caught_up, invalidation_reason IS NOT NULL as invalid \"\n> live_check ? \"FALSE\" :\n> - \"(CASE WHEN conflict_reason IS NOT NULL THEN FALSE \"\n> + \"(CASE WHEN conflicting THEN FALSE \"\n>\n> I think here at both places we need to change 'conflict_reason' to\n> 'conflicting'.\n\nBasically, the idea there is to not live_check for invalidated logical\nslots. It has nothing to do with conflicting. Up until now,\nconflict_reason is also reporting wal_removed (although wrongly\nincluding rows_removed, wal_level_insufficient, the two reasons for\nconflicts). So, I think invalidation_reason is right for invalid\ncolumn. Also, I think we need to change conflicting to\ninvalidation_reason for live_check. So, I've changed that to use\ninvalidation_reason for both columns.\n\n> 2.\n>\n> Can the reasons 'rows_removed' and 'wal_level_insufficient' appear for\n> physical slots?\n\nNo. They can only occur for logical slots, check\nInvalidatePossiblyObsoleteSlot, only the logical slots get\ninvalidated.\n\n> If not, then it is not clear from above text.\n\nI've stated that \"It is set only for logical slots.\" for rows_removed\nand wal_level_insufficient. Other reasons can occur for both slots.\n\n> 3.\n> -# Verify slots are reported as non conflicting in pg_replication_slots\n> +# Verify slots are reported as valid in pg_replication_slots\n> is( $node_standby->safe_psql(\n> 'postgres',\n> q[select bool_or(conflicting) from\n> - (select conflict_reason is not NULL as conflicting\n> - from pg_replication_slots WHERE slot_type = 'logical')]),\n> + (select conflicting from pg_replication_slots\n> + where slot_type = 'logical')]),\n> 'f',\n> - 'Logical slots are reported as non conflicting');\n> + 'Logical slots are reported as valid');\n>\n> I don't think we need to change the comment or success message in this test.\n\nYes. There the intention of the test case is to verify logical slots\nare reported as non conflicting. So, I changed them.\n\nPlease find the v14-0001 patch for now. I'll post the other patches soon.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 21 Mar 2024 23:21:03 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 21, 2024 at 11:21 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n>\n> Please find the v14-0001 patch for now. I'll post the other patches soon.\n>\n\nLGTM. 
Let's wait for Bertrand to see if he has more comments on 0001\nand then I'll push it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 22 Mar 2024 10:49:17 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Fri, Mar 22, 2024 at 10:49:17AM +0530, Amit Kapila wrote:\n> On Thu, Mar 21, 2024 at 11:21 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> >\n> > Please find the v14-0001 patch for now.\n\nThanks!\n\n> LGTM. Let's wait for Bertrand to see if he has more comments on 0001\n> and then I'll push it.\n\nLGTM too.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 07:09:14 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 22, 2024 at 12:39 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> > > Please find the v14-0001 patch for now.\n>\n> Thanks!\n>\n> > LGTM. Let's wait for Bertrand to see if he has more comments on 0001\n> > and then I'll push it.\n>\n> LGTM too.\n\nThanks. Here I'm implementing the following:\n\n0001 Track invalidation_reason in pg_replication_slots\n0002 Track last_inactive_at in pg_replication_slots\n0003 Allow setting inactive_timeout for replication slots via SQL API\n0004 Introduce new SQL funtion pg_alter_replication_slot\n0005 Allow setting inactive_timeout in the replication command\n0006 Add inactive_timeout based replication slot invalidation\n\n1. Keep it last_inactive_at as a shared memory variable, but always\nset it at restart if the slot's inactive_timeout has non-zero value\nand reset it as soon as someone acquires that slot so that if the slot\ndoesn't get acquired till inactive_timeout, checkpointer will\ninvalidate the slot.\n2. Ensure with pg_alter_replication_slot one could \"only\" alter the\ntimeout property for the time being, if not that could lead to the\nsubscription inconsistency.\n3. Have some notes in the CREATE and ALTER SUBSCRIPTION docs about\nusing an existing slot to leverage inactive_timeout feature.\n4. last_inactive_at should also be set to the current time during slot\ncreation because if one creates a slot and does nothing with it then\nit's the time it starts to be inactive.\n5. We don't set last_inactive_at to GetCurrentTimestamp() for failover slots.\n6. Leave the patch that added support for inactive_timeout in subscriptions.\n\nPlease see the attached v14 patch set. No change in the attached\nv14-0001 from the previous patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 22 Mar 2024 13:45:01 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Fri, Mar 22, 2024 at 01:45:01PM +0530, Bharath Rupireddy wrote:\n> On Fri, Mar 22, 2024 at 12:39 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > > > Please find the v14-0001 patch for now.\n> >\n> > Thanks!\n> >\n> > > LGTM. Let's wait for Bertrand to see if he has more comments on 0001\n> > > and then I'll push it.\n> >\n> > LGTM too.\n> \n> Thanks. 
Here I'm implementing the following:\n\nThanks!\n\n> 0001 Track invalidation_reason in pg_replication_slots\n> 0002 Track last_inactive_at in pg_replication_slots\n> 0003 Allow setting inactive_timeout for replication slots via SQL API\n> 0004 Introduce new SQL funtion pg_alter_replication_slot\n> 0005 Allow setting inactive_timeout in the replication command\n> 0006 Add inactive_timeout based replication slot invalidation\n> \n> 1. Keep it last_inactive_at as a shared memory variable, but always\n> set it at restart if the slot's inactive_timeout has non-zero value\n> and reset it as soon as someone acquires that slot so that if the slot\n> doesn't get acquired till inactive_timeout, checkpointer will\n> invalidate the slot.\n> 4. last_inactive_at should also be set to the current time during slot\n> creation because if one creates a slot and does nothing with it then\n> it's the time it starts to be inactive.\n\nI did not look at the code yet but just tested the behavior. It works as you\ndescribe it but I think this behavior is weird because:\n\n- when we create a slot without a timeout then last_inactive_at is set. I think\nthat's fine, but then:\n- when we restart the engine, then last_inactive_at is gone (as timeout is not\nset).\n\nI think last_inactive_at should be set also at engine restart even if there is\nno timeout. I don't think we should link both. Changing my mind here on this\nsubject due to the testing.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 08:57:33 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 22, 2024 at 2:27 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Fri, Mar 22, 2024 at 01:45:01PM +0530, Bharath Rupireddy wrote:\n>\n> > 0001 Track invalidation_reason in pg_replication_slots\n> > 0002 Track last_inactive_at in pg_replication_slots\n> > 0003 Allow setting inactive_timeout for replication slots via SQL API\n> > 0004 Introduce new SQL funtion pg_alter_replication_slot\n> > 0005 Allow setting inactive_timeout in the replication command\n> > 0006 Add inactive_timeout based replication slot invalidation\n> >\n> > 1. Keep it last_inactive_at as a shared memory variable, but always\n> > set it at restart if the slot's inactive_timeout has non-zero value\n> > and reset it as soon as someone acquires that slot so that if the slot\n> > doesn't get acquired till inactive_timeout, checkpointer will\n> > invalidate the slot.\n> > 4. last_inactive_at should also be set to the current time during slot\n> > creation because if one creates a slot and does nothing with it then\n> > it's the time it starts to be inactive.\n>\n> I did not look at the code yet but just tested the behavior. It works as you\n> describe it but I think this behavior is weird because:\n>\n> - when we create a slot without a timeout then last_inactive_at is set. I think\n> that's fine, but then:\n> - when we restart the engine, then last_inactive_at is gone (as timeout is not\n> set).\n>\n> I think last_inactive_at should be set also at engine restart even if there is\n> no timeout.\n\nI think it is the opposite. Why do we need to set 'last_inactive_at'\nwhen inactive_timeout is not set? 
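For reference, the behavior being debated is easy to observe from SQL once the
v14 patches are applied (a sketch against the proposed columns; neither
inactive_timeout nor last_inactive_at exists in released servers):

  -- immediately after creating a slot, with or without the proposed timeout:
  SELECT slot_name, inactive_timeout, last_inactive_at
    FROM pg_replication_slots;
  -- under v14, last_inactive_at shows a value here, but after a server restart
  -- it is gone unless the slot was created with a non-zero inactive_timeout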
BTW, haven't we discussed that we\ndon't need to set 'last_inactive_at' at the time of slot creation as\nit is sufficient to set it at the time ReplicationSlotRelease()?\n\nA few other comments:\n==================\n1.\n@@ -1027,7 +1027,8 @@ CREATE VIEW pg_replication_slots AS\n L.invalidation_reason,\n L.failover,\n L.synced,\n- L.last_inactive_at\n+ L.last_inactive_at,\n+ L.inactive_timeout\n\nI think it would be better to keep 'inactive_timeout' ahead of\n'last_inactive_at' as that is the primary field. In major versions, we\ndon't have to strictly keep the new fields at the end. In this case,\nit seems better to keep these two new fields after two_phase so that\nthese are before invalidation_reason where we can show the\ninvalidation due to these fields.\n\n2.\n void\n-ReplicationSlotRelease(void)\n+ReplicationSlotRelease(bool set_last_inactive_at)\n\nWhy do we need a parameter here? Can't we directly check from the slot\nwhether 'inactive_timeout' has a non-zero value?\n\n3.\n+ /*\n+ * There's no point in allowing failover slots to get invalidated\n+ * based on slot's inactive_timeout parameter on standby. The failover\n+ * slots simply get synced from the primary on the standby.\n+ */\n+ if (!(RecoveryInProgress() && slot->data.failover))\n\nI think you need to check 'sync' flag instead of 'failover'.\nGenerally, failover marker slots should be invalidated either on\nprimary or standby unless on standby the 'failover' marked slot is\nsynced from the primary.\n\n4. I feel the patches should be arranged like 0003->0001, 0002->0002,\n0006->0003. We can leave remaining for the time being till we get\nthese three patches (all three need to be committed as one but it is\nokay to keep them separate for review) committed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 22 Mar 2024 14:59:21 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Fri, Mar 22, 2024 at 01:45:01PM +0530, Bharath Rupireddy wrote:\n> On Fri, Mar 22, 2024 at 12:39 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > > > Please find the v14-0001 patch for now.\n> >\n> > Thanks!\n> >\n> > > LGTM. Let's wait for Bertrand to see if he has more comments on 0001\n> > > and then I'll push it.\n> >\n> > LGTM too.\n> \n> \n> Please see the attached v14 patch set. No change in the attached\n> v14-0001 from the previous patch.\n\nLooking at v14-0002:\n\n1 ===\n\n@@ -691,6 +699,13 @@ ReplicationSlotRelease(void)\n ConditionVariableBroadcast(&slot->active_cv);\n }\n\n+ if (slot->data.persistency == RS_PERSISTENT)\n+ {\n+ SpinLockAcquire(&slot->mutex);\n+ slot->last_inactive_at = GetCurrentTimestamp();\n+ SpinLockRelease(&slot->mutex);\n+ }\n\nI'm not sure we should do system calls while we're holding a spinlock.\nAssign a variable before?\n\n2 ===\n\nAlso, what about moving this here?\n\n\"\n if (slot->data.persistency == RS_PERSISTENT)\n {\n /*\n * Mark persistent slot inactive. 
We're not freeing it, just\n * disconnecting, but wake up others that may be waiting for it.\n */\n SpinLockAcquire(&slot->mutex);\n slot->active_pid = 0;\n SpinLockRelease(&slot->mutex);\n ConditionVariableBroadcast(&slot->active_cv);\n }\n\"\n\nThat would avoid testing twice \"slot->data.persistency == RS_PERSISTENT\".\n\n3 ===\n\n@@ -2341,6 +2356,7 @@ RestoreSlotFromDisk(const char *name)\n\n slot->in_use = true;\n slot->active_pid = 0;\n+ slot->last_inactive_at = 0;\n\nI think we should put GetCurrentTimestamp() here. It's done in v14-0006 but I\nthink it's better to do it in 0002 (and not taking care of inactive_timeout).\n\n4 ===\n\n Track last_inactive_at in pg_replication_slots\n\n doc/src/sgml/system-views.sgml | 11 +++++++++++\n src/backend/catalog/system_views.sql | 3 ++-\n src/backend/replication/slot.c | 16 ++++++++++++++++\n src/backend/replication/slotfuncs.c | 7 ++++++-\n src/include/catalog/pg_proc.dat | 6 +++---\n src/include/replication/slot.h | 3 +++\n src/test/regress/expected/rules.out | 5 +++--\n 7 files changed, 44 insertions(+), 7 deletions(-)\n\nWorth to add some tests too (or we postpone them in future commits because we're\nconfident enough they will follow soon)?\n\n5 ===\n\nMost of the fields that reflect a time (not duration) in the system views are\nxxxx_time, so I'm wondering if instead of \"last_inactive_at\" we should use\nsomething like \"last_inactive_time\"?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 09:45:29 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Fri, Mar 22, 2024 at 02:59:21PM +0530, Amit Kapila wrote:\n> On Fri, Mar 22, 2024 at 2:27 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Fri, Mar 22, 2024 at 01:45:01PM +0530, Bharath Rupireddy wrote:\n> >\n> > > 0001 Track invalidation_reason in pg_replication_slots\n> > > 0002 Track last_inactive_at in pg_replication_slots\n> > > 0003 Allow setting inactive_timeout for replication slots via SQL API\n> > > 0004 Introduce new SQL funtion pg_alter_replication_slot\n> > > 0005 Allow setting inactive_timeout in the replication command\n> > > 0006 Add inactive_timeout based replication slot invalidation\n> > >\n> > > 1. Keep it last_inactive_at as a shared memory variable, but always\n> > > set it at restart if the slot's inactive_timeout has non-zero value\n> > > and reset it as soon as someone acquires that slot so that if the slot\n> > > doesn't get acquired till inactive_timeout, checkpointer will\n> > > invalidate the slot.\n> > > 4. last_inactive_at should also be set to the current time during slot\n> > > creation because if one creates a slot and does nothing with it then\n> > > it's the time it starts to be inactive.\n> >\n> > I did not look at the code yet but just tested the behavior. It works as you\n> > describe it but I think this behavior is weird because:\n> >\n> > - when we create a slot without a timeout then last_inactive_at is set. I think\n> > that's fine, but then:\n> > - when we restart the engine, then last_inactive_at is gone (as timeout is not\n> > set).\n> >\n> > I think last_inactive_at should be set also at engine restart even if there is\n> > no timeout.\n> \n> I think it is the opposite. 
Why do we need to set 'last_inactive_at'\n> when inactive_timeout is not set?\n\nI think those are unrelated, one could want to know when a slot has been inactive\neven if no timeout is set. I understand that for this patch series we have in mind \nto use them both to invalidate slots but I think that there is use case to not\nuse both in correlation. Also not setting last_inactive_at could give the \"false\"\nimpression that the slot is active.\n\n> BTW, haven't we discussed that we\n> don't need to set 'last_inactive_at' at the time of slot creation as\n> it is sufficient to set it at the time ReplicationSlotRelease()?\n\nRight.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 09:53:13 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 22, 2024 at 3:15 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Fri, Mar 22, 2024 at 01:45:01PM +0530, Bharath Rupireddy wrote:\n>\n> 1 ===\n>\n> @@ -691,6 +699,13 @@ ReplicationSlotRelease(void)\n> ConditionVariableBroadcast(&slot->active_cv);\n> }\n>\n> + if (slot->data.persistency == RS_PERSISTENT)\n> + {\n> + SpinLockAcquire(&slot->mutex);\n> + slot->last_inactive_at = GetCurrentTimestamp();\n> + SpinLockRelease(&slot->mutex);\n> + }\n>\n> I'm not sure we should do system calls while we're holding a spinlock.\n> Assign a variable before?\n>\n> 2 ===\n>\n> Also, what about moving this here?\n>\n> \"\n> if (slot->data.persistency == RS_PERSISTENT)\n> {\n> /*\n> * Mark persistent slot inactive. We're not freeing it, just\n> * disconnecting, but wake up others that may be waiting for it.\n> */\n> SpinLockAcquire(&slot->mutex);\n> slot->active_pid = 0;\n> SpinLockRelease(&slot->mutex);\n> ConditionVariableBroadcast(&slot->active_cv);\n> }\n> \"\n>\n> That would avoid testing twice \"slot->data.persistency == RS_PERSISTENT\".\n>\n\nThat sounds like a good idea. Also, don't we need to consider physical\nslots where we don't reserve WAL during slot creation? I don't think\nthere is a need to set inactive_at for such slots. If we agree,\nprobably checking restart_lsn should suffice the need to know whether\nthe WAL is reserved or not.\n\n>\n> 5 ===\n>\n> Most of the fields that reflect a time (not duration) in the system views are\n> xxxx_time, so I'm wondering if instead of \"last_inactive_at\" we should use\n> something like \"last_inactive_time\"?\n>\n\nHow about naming it as last_active_time? 
This will indicate the time\nat which the slot was last active.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 22 Mar 2024 15:56:23 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 22, 2024 at 3:23 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Fri, Mar 22, 2024 at 02:59:21PM +0530, Amit Kapila wrote:\n> > On Fri, Mar 22, 2024 at 2:27 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > On Fri, Mar 22, 2024 at 01:45:01PM +0530, Bharath Rupireddy wrote:\n> > >\n> > > > 0001 Track invalidation_reason in pg_replication_slots\n> > > > 0002 Track last_inactive_at in pg_replication_slots\n> > > > 0003 Allow setting inactive_timeout for replication slots via SQL API\n> > > > 0004 Introduce new SQL funtion pg_alter_replication_slot\n> > > > 0005 Allow setting inactive_timeout in the replication command\n> > > > 0006 Add inactive_timeout based replication slot invalidation\n> > > >\n> > > > 1. Keep it last_inactive_at as a shared memory variable, but always\n> > > > set it at restart if the slot's inactive_timeout has non-zero value\n> > > > and reset it as soon as someone acquires that slot so that if the slot\n> > > > doesn't get acquired till inactive_timeout, checkpointer will\n> > > > invalidate the slot.\n> > > > 4. last_inactive_at should also be set to the current time during slot\n> > > > creation because if one creates a slot and does nothing with it then\n> > > > it's the time it starts to be inactive.\n> > >\n> > > I did not look at the code yet but just tested the behavior. It works as you\n> > > describe it but I think this behavior is weird because:\n> > >\n> > > - when we create a slot without a timeout then last_inactive_at is set. I think\n> > > that's fine, but then:\n> > > - when we restart the engine, then last_inactive_at is gone (as timeout is not\n> > > set).\n> > >\n> > > I think last_inactive_at should be set also at engine restart even if there is\n> > > no timeout.\n> >\n> > I think it is the opposite. Why do we need to set 'last_inactive_at'\n> > when inactive_timeout is not set?\n>\n> I think those are unrelated, one could want to know when a slot has been inactive\n> even if no timeout is set. I understand that for this patch series we have in mind\n> to use them both to invalidate slots but I think that there is use case to not\n> use both in correlation. Also not setting last_inactive_at could give the \"false\"\n> impression that the slot is active.\n>\n\nI see your point and agree with this. I feel we can commit this part\nfirst then, probably that is the reason Bharath has kept it as a\nseparate patch. 
It would be good add the use case for this patch in\nthe commit message.\n\nA minor comment:\n\n if (SlotIsLogical(s))\n pgstat_acquire_replslot(s);\n\n+ if (s->data.persistency == RS_PERSISTENT)\n+ {\n+ SpinLockAcquire(&s->mutex);\n+ s->last_inactive_at = 0;\n+ SpinLockRelease(&s->mutex);\n+ }\n+\n\nI think this part of the change needs a comment.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 22 Mar 2024 16:16:19 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Fri, Mar 22, 2024 at 03:56:23PM +0530, Amit Kapila wrote:\n> On Fri, Mar 22, 2024 at 3:15 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Fri, Mar 22, 2024 at 01:45:01PM +0530, Bharath Rupireddy wrote:\n> >\n> > 1 ===\n> >\n> > @@ -691,6 +699,13 @@ ReplicationSlotRelease(void)\n> > ConditionVariableBroadcast(&slot->active_cv);\n> > }\n> >\n> > + if (slot->data.persistency == RS_PERSISTENT)\n> > + {\n> > + SpinLockAcquire(&slot->mutex);\n> > + slot->last_inactive_at = GetCurrentTimestamp();\n> > + SpinLockRelease(&slot->mutex);\n> > + }\n> >\n> > I'm not sure we should do system calls while we're holding a spinlock.\n> > Assign a variable before?\n> >\n> > 2 ===\n> >\n> > Also, what about moving this here?\n> >\n> > \"\n> > if (slot->data.persistency == RS_PERSISTENT)\n> > {\n> > /*\n> > * Mark persistent slot inactive. We're not freeing it, just\n> > * disconnecting, but wake up others that may be waiting for it.\n> > */\n> > SpinLockAcquire(&slot->mutex);\n> > slot->active_pid = 0;\n> > SpinLockRelease(&slot->mutex);\n> > ConditionVariableBroadcast(&slot->active_cv);\n> > }\n> > \"\n> >\n> > That would avoid testing twice \"slot->data.persistency == RS_PERSISTENT\".\n> >\n> \n> That sounds like a good idea. Also, don't we need to consider physical\n> slots where we don't reserve WAL during slot creation? I don't think\n> there is a need to set inactive_at for such slots.\n\nIf the slot is not active, why shouldn't we set inactive_at? I can understand\nthat such a slots do not present \"any risks\" but I think we should still set\ninactive_at (also to not give the false impression that the slot is active).\n\n> > 5 ===\n> >\n> > Most of the fields that reflect a time (not duration) in the system views are\n> > xxxx_time, so I'm wondering if instead of \"last_inactive_at\" we should use\n> > something like \"last_inactive_time\"?\n> >\n> \n> How about naming it as last_active_time? 
This will indicate the time\n> at which the slot was last active.\n\nI thought about it too but I think it could be missleading as one could think that \nit should be updated each time WAL record decoding is happening.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 12:00:07 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Fri, Mar 22, 2024 at 04:16:19PM +0530, Amit Kapila wrote:\n> On Fri, Mar 22, 2024 at 3:23 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Fri, Mar 22, 2024 at 02:59:21PM +0530, Amit Kapila wrote:\n> > > On Fri, Mar 22, 2024 at 2:27 PM Bertrand Drouvot\n> > > <[email protected]> wrote:\n> > > >\n> > > > On Fri, Mar 22, 2024 at 01:45:01PM +0530, Bharath Rupireddy wrote:\n> > > >\n> > > > > 0001 Track invalidation_reason in pg_replication_slots\n> > > > > 0002 Track last_inactive_at in pg_replication_slots\n> > > > > 0003 Allow setting inactive_timeout for replication slots via SQL API\n> > > > > 0004 Introduce new SQL funtion pg_alter_replication_slot\n> > > > > 0005 Allow setting inactive_timeout in the replication command\n> > > > > 0006 Add inactive_timeout based replication slot invalidation\n> > > > >\n> > > > > 1. Keep it last_inactive_at as a shared memory variable, but always\n> > > > > set it at restart if the slot's inactive_timeout has non-zero value\n> > > > > and reset it as soon as someone acquires that slot so that if the slot\n> > > > > doesn't get acquired till inactive_timeout, checkpointer will\n> > > > > invalidate the slot.\n> > > > > 4. last_inactive_at should also be set to the current time during slot\n> > > > > creation because if one creates a slot and does nothing with it then\n> > > > > it's the time it starts to be inactive.\n> > > >\n> > > > I did not look at the code yet but just tested the behavior. It works as you\n> > > > describe it but I think this behavior is weird because:\n> > > >\n> > > > - when we create a slot without a timeout then last_inactive_at is set. I think\n> > > > that's fine, but then:\n> > > > - when we restart the engine, then last_inactive_at is gone (as timeout is not\n> > > > set).\n> > > >\n> > > > I think last_inactive_at should be set also at engine restart even if there is\n> > > > no timeout.\n> > >\n> > > I think it is the opposite. Why do we need to set 'last_inactive_at'\n> > > when inactive_timeout is not set?\n> >\n> > I think those are unrelated, one could want to know when a slot has been inactive\n> > even if no timeout is set. I understand that for this patch series we have in mind\n> > to use them both to invalidate slots but I think that there is use case to not\n> > use both in correlation. Also not setting last_inactive_at could give the \"false\"\n> > impression that the slot is active.\n> >\n> \n> I see your point and agree with this. 
I feel we can commit this part\n> first then,\n\nAgree that in this case the current ordering makes sense (as setting\nlast_inactive_at would be completly unrelated to the timeout).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 12:03:36 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 22, 2024 at 7:15 PM Bharath Rupireddy <\[email protected]> wrote:\n\n> On Fri, Mar 22, 2024 at 12:39 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > > > Please find the v14-0001 patch for now.\n> >\n> > Thanks!\n> >\n> > > LGTM. Let's wait for Bertrand to see if he has more comments on 0001\n> > > and then I'll push it.\n> >\n> > LGTM too.\n>\n> Thanks. Here I'm implementing the following:\n>\n> 0001 Track invalidation_reason in pg_replication_slots\n> 0002 Track last_inactive_at in pg_replication_slots\n> 0003 Allow setting inactive_timeout for replication slots via SQL API\n> 0004 Introduce new SQL funtion pg_alter_replication_slot\n> 0005 Allow setting inactive_timeout in the replication command\n> 0006 Add inactive_timeout based replication slot invalidation\n>\n> 1. Keep it last_inactive_at as a shared memory variable, but always\n> set it at restart if the slot's inactive_timeout has non-zero value\n> and reset it as soon as someone acquires that slot so that if the slot\n> doesn't get acquired till inactive_timeout, checkpointer will\n> invalidate the slot.\n> 2. Ensure with pg_alter_replication_slot one could \"only\" alter the\n> timeout property for the time being, if not that could lead to the\n> subscription inconsistency.\n> 3. Have some notes in the CREATE and ALTER SUBSCRIPTION docs about\n> using an existing slot to leverage inactive_timeout feature.\n> 4. last_inactive_at should also be set to the current time during slot\n> creation because if one creates a slot and does nothing with it then\n> it's the time it starts to be inactive.\n> 5. We don't set last_inactive_at to GetCurrentTimestamp() for failover\n> slots.\n> 6. Leave the patch that added support for inactive_timeout in\n> subscriptions.\n>\n> Please see the attached v14 patch set. No change in the attached\n> v14-0001 from the previous patch.\n>\n>\n>\nSome comments:\n1. 
In patch 0005:\nIn ReplicationSlotAlter():\n+ lock_acquired = false;\n if (MyReplicationSlot->data.failover != failover)\n {\n SpinLockAcquire(&MyReplicationSlot->mutex);\n+ lock_acquired = true;\n MyReplicationSlot->data.failover = failover;\n+ }\n+\n+ if (MyReplicationSlot->data.inactive_timeout != inactive_timeout)\n+ {\n+ if (!lock_acquired)\n+ {\n+ SpinLockAcquire(&MyReplicationSlot->mutex);\n+ lock_acquired = true;\n+ }\n+\n+ MyReplicationSlot->data.inactive_timeout = inactive_timeout;\n+ }\n+\n+ if (lock_acquired)\n+ {\n SpinLockRelease(&MyReplicationSlot->mutex);\n\nCan't you make it shorter like below:\nlock_acquired = false;\n\nif (MyReplicationSlot->data.failover != failover ||\nMyReplicationSlot->data.inactive_timeout != inactive_timeout) {\n SpinLockAcquire(&MyReplicationSlot->mutex);\n lock_acquired = true;\n}\n\nif (MyReplicationSlot->data.failover != failover) {\n MyReplicationSlot->data.failover = failover;\n}\n\nif (MyReplicationSlot->data.inactive_timeout != inactive_timeout) {\n MyReplicationSlot->data.inactive_timeout = inactive_timeout;\n}\n\nif (lock_acquired) {\n SpinLockRelease(&MyReplicationSlot->mutex);\n ReplicationSlotMarkDirty();\n ReplicationSlotSave();\n}\n\n2. In patch 0005: why change walrcv_alter_slot option? it doesn't seem to\nbe used anywhere, any use case for it? If required, would the intention be\nto add this as a Create Subscription option?\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Fri, Mar 22, 2024 at 7:15 PM Bharath Rupireddy <[email protected]> wrote:On Fri, Mar 22, 2024 at 12:39 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> > > Please find the v14-0001 patch for now.\n>\n> Thanks!\n>\n> > LGTM. Let's wait for Bertrand to see if he has more comments on 0001\n> > and then I'll push it.\n>\n> LGTM too.\n\nThanks. Here I'm implementing the following:\n\n0001 Track invalidation_reason in pg_replication_slots\n0002 Track last_inactive_at in pg_replication_slots\n0003 Allow setting inactive_timeout for replication slots via SQL API\n0004 Introduce new SQL funtion pg_alter_replication_slot\n0005 Allow setting inactive_timeout in the replication command\n0006 Add inactive_timeout based replication slot invalidation\n\n1. Keep it last_inactive_at as a shared memory variable, but always\nset it at restart if the slot's inactive_timeout has non-zero value\nand reset it as soon as someone acquires that slot so that if the slot\ndoesn't get acquired  till inactive_timeout, checkpointer will\ninvalidate the slot.\n2. Ensure with pg_alter_replication_slot one could \"only\" alter the\ntimeout property for the time being, if not that could lead to the\nsubscription inconsistency.\n3. Have some notes in the CREATE and ALTER SUBSCRIPTION docs about\nusing an existing slot to leverage inactive_timeout feature.\n4. last_inactive_at should also be set to the current time during slot\ncreation because if one creates a slot and does nothing with it then\nit's the time it starts to be inactive.\n5. We don't set last_inactive_at to GetCurrentTimestamp() for failover slots.\n6. Leave the patch that added support for inactive_timeout in subscriptions.\n\nPlease see the attached v14 patch set. No change in the attached\nv14-0001 from the previous patch.\nSome comments:1. 
In patch 0005: In ReplicationSlotAlter():+\tlock_acquired = false; \tif (MyReplicationSlot->data.failover != failover) \t{ \t\tSpinLockAcquire(&MyReplicationSlot->mutex);+\t\tlock_acquired = true; \t\tMyReplicationSlot->data.failover = failover;+\t}++\tif (MyReplicationSlot->data.inactive_timeout != inactive_timeout)+\t{+\t\tif (!lock_acquired)+\t\t{+\t\t\tSpinLockAcquire(&MyReplicationSlot->mutex);+\t\t\tlock_acquired = true;+\t\t}++\t\tMyReplicationSlot->data.inactive_timeout = inactive_timeout;+\t}++\tif (lock_acquired)+\t{ \t\tSpinLockRelease(&MyReplicationSlot->mutex);Can't you make it shorter like below:lock_acquired = false;if (MyReplicationSlot->data.failover != failover || MyReplicationSlot->data.inactive_timeout != inactive_timeout) {    SpinLockAcquire(&MyReplicationSlot->mutex);    lock_acquired = true;}if (MyReplicationSlot->data.failover != failover) {    MyReplicationSlot->data.failover = failover;}if (MyReplicationSlot->data.inactive_timeout != inactive_timeout) {    MyReplicationSlot->data.inactive_timeout = inactive_timeout;}if (lock_acquired) {    SpinLockRelease(&MyReplicationSlot->mutex);    ReplicationSlotMarkDirty();    ReplicationSlotSave();}2. In patch 0005:  why change walrcv_alter_slot option? it doesn't seem to be used anywhere, any use case for it? If required, would the intention be to add this as a Create Subscription option?regards,Ajin CherianFujitsu Australia", "msg_date": "Fri, 22 Mar 2024 23:24:43 +1100", "msg_from": "Ajin Cherian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 22, 2024 at 5:30 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Fri, Mar 22, 2024 at 03:56:23PM +0530, Amit Kapila wrote:\n> > On Fri, Mar 22, 2024 at 3:15 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > On Fri, Mar 22, 2024 at 01:45:01PM +0530, Bharath Rupireddy wrote:\n> > >\n> > > 1 ===\n> > >\n> > > @@ -691,6 +699,13 @@ ReplicationSlotRelease(void)\n> > > ConditionVariableBroadcast(&slot->active_cv);\n> > > }\n> > >\n> > > + if (slot->data.persistency == RS_PERSISTENT)\n> > > + {\n> > > + SpinLockAcquire(&slot->mutex);\n> > > + slot->last_inactive_at = GetCurrentTimestamp();\n> > > + SpinLockRelease(&slot->mutex);\n> > > + }\n> > >\n> > > I'm not sure we should do system calls while we're holding a spinlock.\n> > > Assign a variable before?\n> > >\n> > > 2 ===\n> > >\n> > > Also, what about moving this here?\n> > >\n> > > \"\n> > > if (slot->data.persistency == RS_PERSISTENT)\n> > > {\n> > > /*\n> > > * Mark persistent slot inactive. We're not freeing it, just\n> > > * disconnecting, but wake up others that may be waiting for it.\n> > > */\n> > > SpinLockAcquire(&slot->mutex);\n> > > slot->active_pid = 0;\n> > > SpinLockRelease(&slot->mutex);\n> > > ConditionVariableBroadcast(&slot->active_cv);\n> > > }\n> > > \"\n> > >\n> > > That would avoid testing twice \"slot->data.persistency == RS_PERSISTENT\".\n> > >\n> >\n> > That sounds like a good idea. Also, don't we need to consider physical\n> > slots where we don't reserve WAL during slot creation? I don't think\n> > there is a need to set inactive_at for such slots.\n>\n> If the slot is not active, why shouldn't we set inactive_at? 
I can understand\n> that such a slots do not present \"any risks\" but I think we should still set\n> inactive_at (also to not give the false impression that the slot is active).\n>\n\nBut OTOH, there is a chance that we will invalidate such slots even\nthough they have never reserved WAL in the first place which doesn't\nappear to be a good thing.\n\n> > > 5 ===\n> > >\n> > > Most of the fields that reflect a time (not duration) in the system views are\n> > > xxxx_time, so I'm wondering if instead of \"last_inactive_at\" we should use\n> > > something like \"last_inactive_time\"?\n> > >\n> >\n> > How about naming it as last_active_time? This will indicate the time\n> > at which the slot was last active.\n>\n> I thought about it too but I think it could be missleading as one could think that\n> it should be updated each time WAL record decoding is happening.\n>\n\nFair enough.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 22 Mar 2024 18:02:11 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Fri, Mar 22, 2024 at 06:02:11PM +0530, Amit Kapila wrote:\n> On Fri, Mar 22, 2024 at 5:30 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> > On Fri, Mar 22, 2024 at 03:56:23PM +0530, Amit Kapila wrote:\n> > > >\n> > > > That would avoid testing twice \"slot->data.persistency == RS_PERSISTENT\".\n> > > >\n> > >\n> > > That sounds like a good idea. Also, don't we need to consider physical\n> > > slots where we don't reserve WAL during slot creation? I don't think\n> > > there is a need to set inactive_at for such slots.\n> >\n> > If the slot is not active, why shouldn't we set inactive_at? I can understand\n> > that such a slots do not present \"any risks\" but I think we should still set\n> > inactive_at (also to not give the false impression that the slot is active).\n> >\n> \n> But OTOH, there is a chance that we will invalidate such slots even\n> though they have never reserved WAL in the first place which doesn't\n> appear to be a good thing.\n\nThat's right but I don't think it is not a good thing. I think we should treat\ninactive_at as an independent field (like if the timeout one does not exist at\nall) and just focus on its meaning (slot being inactive). If one sets a timeout\n(> 0) and gets an invalidation then I think it works as designed (even if the\nslot does not present any \"risk\" as it does not hold any rows or WAL). \n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 22 Mar 2024 13:47:34 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 22, 2024 at 3:15 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Looking at v14-0002:\n\nThanks for reviewing. 
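For what it's worth, the case of a physical slot that has never reserved WAL is
visible from SQL via restart_lsn, so a sketch like the following (last_inactive_at
being the column added by this series) shows which slots the above concern
applies to:

  SELECT slot_name,
         restart_lsn IS NULL AS never_reserved_wal,
         last_inactive_at
    FROM pg_replication_slots
   WHERE slot_type = 'physical';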
I agree that 0002 with last_inactive_at can go\nindependently and be of use on its own in addition to helping\nimplement inactive_timeout based invalidation.\n\n> 1 ===\n>\n> @@ -691,6 +699,13 @@ ReplicationSlotRelease(void)\n> ConditionVariableBroadcast(&slot->active_cv);\n> }\n>\n> + if (slot->data.persistency == RS_PERSISTENT)\n> + {\n> + SpinLockAcquire(&slot->mutex);\n> + slot->last_inactive_at = GetCurrentTimestamp();\n> + SpinLockRelease(&slot->mutex);\n> + }\n>\n> I'm not sure we should do system calls while we're holding a spinlock.\n> Assign a variable before?\n\nCan do that. Then, the last_inactive_at = current_timestamp + mutex\nacquire time. But, that shouldn't be a problem than doing system calls\nwhile holding the mutex. So, done that way.\n\n> 2 ===\n>\n> Also, what about moving this here?\n>\n> \"\n> if (slot->data.persistency == RS_PERSISTENT)\n> {\n> /*\n> * Mark persistent slot inactive. We're not freeing it, just\n> * disconnecting, but wake up others that may be waiting for it.\n> */\n> SpinLockAcquire(&slot->mutex);\n> slot->active_pid = 0;\n> SpinLockRelease(&slot->mutex);\n> ConditionVariableBroadcast(&slot->active_cv);\n> }\n> \"\n>\n> That would avoid testing twice \"slot->data.persistency == RS_PERSISTENT\".\n\nUgh. Done that now.\n\n> 3 ===\n>\n> @@ -2341,6 +2356,7 @@ RestoreSlotFromDisk(const char *name)\n>\n> slot->in_use = true;\n> slot->active_pid = 0;\n> + slot->last_inactive_at = 0;\n>\n> I think we should put GetCurrentTimestamp() here. It's done in v14-0006 but I\n> think it's better to do it in 0002 (and not taking care of inactive_timeout).\n\nDone.\n\n> 4 ===\n>\n> Track last_inactive_at in pg_replication_slots\n>\n> doc/src/sgml/system-views.sgml | 11 +++++++++++\n> src/backend/catalog/system_views.sql | 3 ++-\n> src/backend/replication/slot.c | 16 ++++++++++++++++\n> src/backend/replication/slotfuncs.c | 7 ++++++-\n> src/include/catalog/pg_proc.dat | 6 +++---\n> src/include/replication/slot.h | 3 +++\n> src/test/regress/expected/rules.out | 5 +++--\n> 7 files changed, 44 insertions(+), 7 deletions(-)\n>\n> Worth to add some tests too (or we postpone them in future commits because we're\n> confident enough they will follow soon)?\n\nYes. Added some tests in a new TAP test file named\nsrc/test/recovery/t/043_replslot_misc.pl. This new file can be used to\nadd miscellaneous replication tests in future as well. I couldn't find\na better place in existing test files - tried having the new tests for\nphysical slots in t/001_stream_rep.pl and I didn't find a right place\nfor logical slots.\n\n> 5 ===\n>\n> Most of the fields that reflect a time (not duration) in the system views are\n> xxxx_time, so I'm wondering if instead of \"last_inactive_at\" we should use\n> something like \"last_inactive_time\"?\n\nYeah, I can see that. So, I changed it to last_inactive_time.\n\nI agree with treating last_inactive_time as a separate property of the\nslot having its own use in addition to helping implement\ninactive_timeout based invalidation. I think it can go separately.\n\nI tried to address the review comments received for this patch alone\nand attached v15-0001. 
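To make the intent of the new checks concrete, the kind of sanity test being
added boils down to psql along these lines (last_inactive_time is the column
added by this patch, so this is only a sketch that works with it applied):

  SELECT now() AS slot_creation_time \gset
  SELECT pg_create_physical_replication_slot('sb_slot');
  SELECT last_inactive_time IS NOT NULL
         AND last_inactive_time >= :'slot_creation_time'::timestamptz
    FROM pg_replication_slots
   WHERE slot_name = 'sb_slot';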
I'll post other patches soon.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 23 Mar 2024 03:02:26 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 22, 2024 at 7:17 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Fri, Mar 22, 2024 at 06:02:11PM +0530, Amit Kapila wrote:\n> > On Fri, Mar 22, 2024 at 5:30 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > > On Fri, Mar 22, 2024 at 03:56:23PM +0530, Amit Kapila wrote:\n> > > > >\n> > > > > That would avoid testing twice \"slot->data.persistency == RS_PERSISTENT\".\n> > > > >\n> > > >\n> > > > That sounds like a good idea. Also, don't we need to consider physical\n> > > > slots where we don't reserve WAL during slot creation? I don't think\n> > > > there is a need to set inactive_at for such slots.\n> > >\n> > > If the slot is not active, why shouldn't we set inactive_at? I can understand\n> > > that such a slots do not present \"any risks\" but I think we should still set\n> > > inactive_at (also to not give the false impression that the slot is active).\n> > >\n> >\n> > But OTOH, there is a chance that we will invalidate such slots even\n> > though they have never reserved WAL in the first place which doesn't\n> > appear to be a good thing.\n>\n> That's right but I don't think it is not a good thing. I think we should treat\n> inactive_at as an independent field (like if the timeout one does not exist at\n> all) and just focus on its meaning (slot being inactive). If one sets a timeout\n> (> 0) and gets an invalidation then I think it works as designed (even if the\n> slot does not present any \"risk\" as it does not hold any rows or WAL).\n>\n\nFair point.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 23 Mar 2024 10:36:18 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sat, Mar 23, 2024 at 3:02 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Fri, Mar 22, 2024 at 3:15 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> >\n> > Worth to add some tests too (or we postpone them in future commits because we're\n> > confident enough they will follow soon)?\n>\n> Yes. Added some tests in a new TAP test file named\n> src/test/recovery/t/043_replslot_misc.pl. This new file can be used to\n> add miscellaneous replication tests in future as well. I couldn't find\n> a better place in existing test files - tried having the new tests for\n> physical slots in t/001_stream_rep.pl and I didn't find a right place\n> for logical slots.\n>\n\nHow about adding the test in 019_replslot_limit? It is not a direct\nfit but I feel later we can even add 'invalid_timeout' related tests\nin this file which will use last_inactive_time feature. 
It is also\npossible that some of the tests added by the 'invalid_timeout' feature\nwill obviate the need for some of these tests.\n\nReview of v15\n==============\n1.\n@@ -1026,7 +1026,8 @@ CREATE VIEW pg_replication_slots AS\n L.conflicting,\n L.invalidation_reason,\n L.failover,\n- L.synced\n+ L.synced,\n+ L.last_inactive_time\n FROM pg_get_replication_slots() AS L\n\nAs mentioned previously, let's keep these new fields before\nconflicting and after two_phase.\n\n2.\n+# Get last_inactive_time value after slot's creation. Note that the\nslot is still\n+# inactive unless it's used by the standby below.\n+my $last_inactive_time_1 = $primary->safe_psql('postgres',\n+ qq(SELECT last_inactive_time FROM pg_replication_slots WHERE\nslot_name = '$sb_slot' AND last_inactive_time IS NOT NULL;)\n+);\n\nWe should check $last_inactive_time_1 to be a valid value and add a\nsimilar check for logical slots.\n\n3. BTW, why don't we set last_inactive_time for temporary slots\n(RS_TEMPORARY) as well? Don't we even invalidate temporary slots? If\nso, then I think we should set last_inactive_time for those as well\nand later allow them to be invalidated based on timeout parameter.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 23 Mar 2024 11:27:20 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sat, Mar 23, 2024 at 11:27 AM Amit Kapila <[email protected]> wrote:\n>\n> How about adding the test in 019_replslot_limit? It is not a direct\n> fit but I feel later we can even add 'invalid_timeout' related tests\n> in this file which will use last_inactive_time feature.\n\nI'm thinking the other way. Now, the new TAP file 043_replslot_misc.pl\ncan have last_inactive_time tests, and later invalid_timeout ones too.\nThis way 019_replslot_limit.pl is not cluttered.\n\n> It is also\n> possible that some of the tests added by the 'invalid_timeout' feature\n> will obviate the need for some of these tests.\n\nMight be. But, I prefer to keep both these tests separate but in the\nsame file 043_replslot_misc.pl. Because we cover some corner cases the\nlast_inactive_time is set upon loading the slot from disk.\n\n> Review of v15\n> ==============\n> 1.\n> @@ -1026,7 +1026,8 @@ CREATE VIEW pg_replication_slots AS\n> L.conflicting,\n> L.invalidation_reason,\n> L.failover,\n> - L.synced\n> + L.synced,\n> + L.last_inactive_time\n> FROM pg_get_replication_slots() AS L\n>\n> As mentioned previously, let's keep these new fields before\n> conflicting and after two_phase.\n\nSorry, I forgot to notice that comment (out of a flood of comments\nreally :)). Now, done that way.\n\n> 2.\n> +# Get last_inactive_time value after slot's creation. Note that the\n> slot is still\n> +# inactive unless it's used by the standby below.\n> +my $last_inactive_time_1 = $primary->safe_psql('postgres',\n> + qq(SELECT last_inactive_time FROM pg_replication_slots WHERE\n> slot_name = '$sb_slot' AND last_inactive_time IS NOT NULL;)\n> +);\n>\n> We should check $last_inactive_time_1 to be a valid value and add a\n> similar check for logical slots.\n\nThat's taken care by the type cast we do, right? 
Isn't that enough?\n\nis( $primary->safe_psql(\n 'postgres',\n qq[SELECT last_inactive_time >\n'$last_inactive_time'::timestamptz FROM pg_replication_slots WHERE\nslot_name = '$sb_slot' AND last_inactive_time IS NOT NULL;]\n ),\n 't',\n 'last inactive time for an inactive physical slot is updated correctly');\n\nFor instance, setting last_inactive_time_1 to an invalid value fails\nwith the following error:\n\nerror running SQL: 'psql:<stdin>:1: ERROR: invalid input syntax for\ntype timestamp with time zone: \"foo\"\nLINE 1: SELECT last_inactive_time > 'foo'::timestamptz FROM pg_repli...\n\n> 3. BTW, why don't we set last_inactive_time for temporary slots\n> (RS_TEMPORARY) as well? Don't we even invalidate temporary slots? If\n> so, then I think we should set last_inactive_time for those as well\n> and later allow them to be invalidated based on timeout parameter.\n\nWFM. Done that way.\n\nPlease see the attached v16 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 23 Mar 2024 13:11:50 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Sat, Mar 23, 2024 at 01:11:50PM +0530, Bharath Rupireddy wrote:\n> On Sat, Mar 23, 2024 at 11:27 AM Amit Kapila <[email protected]> wrote:\n> >\n> > How about adding the test in 019_replslot_limit? It is not a direct\n> > fit but I feel later we can even add 'invalid_timeout' related tests\n> > in this file which will use last_inactive_time feature.\n> \n> I'm thinking the other way. Now, the new TAP file 043_replslot_misc.pl\n> can have last_inactive_time tests, and later invalid_timeout ones too.\n> This way 019_replslot_limit.pl is not cluttered.\n\nI share the same opinion as Amit: I think 019_replslot_limit would be a better\nplace, because I see the timeout as another kind of limit.\n\n> \n> > It is also\n> > possible that some of the tests added by the 'invalid_timeout' feature\n> > will obviate the need for some of these tests.\n> \n> Might be. But, I prefer to keep both these tests separate but in the\n> same file 043_replslot_misc.pl. Because we cover some corner cases the\n> last_inactive_time is set upon loading the slot from disk.\n\nRight but I think that this test does not necessary have to be in the same .pl\nas the one testing the timeout. Could be added in one of the existing .pl like\n001_stream_rep.pl for example.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 23 Mar 2024 09:04:45 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sat, Mar 23, 2024 at 2:34 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> > > How about adding the test in 019_replslot_limit? It is not a direct\n> > > fit but I feel later we can even add 'invalid_timeout' related tests\n> > > in this file which will use last_inactive_time feature.\n> >\n> > I'm thinking the other way. 
Now, the new TAP file 043_replslot_misc.pl\n> > can have last_inactive_time tests, and later invalid_timeout ones too.\n> > This way 019_replslot_limit.pl is not cluttered.\n>\n> I share the same opinion as Amit: I think 019_replslot_limit would be a better\n> place, because I see the timeout as another kind of limit.\n\nHm. Done that way.\n\nPlease see the attached v17 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 24 Mar 2024 08:00:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sat, Mar 23, 2024 at 1:12 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Sat, Mar 23, 2024 at 11:27 AM Amit Kapila <[email protected]> wrote:\n> >\n>\n> > 2.\n> > +# Get last_inactive_time value after slot's creation. Note that the\n> > slot is still\n> > +# inactive unless it's used by the standby below.\n> > +my $last_inactive_time_1 = $primary->safe_psql('postgres',\n> > + qq(SELECT last_inactive_time FROM pg_replication_slots WHERE\n> > slot_name = '$sb_slot' AND last_inactive_time IS NOT NULL;)\n> > +);\n> >\n> > We should check $last_inactive_time_1 to be a valid value and add a\n> > similar check for logical slots.\n>\n> That's taken care by the type cast we do, right? Isn't that enough?\n>\n> is( $primary->safe_psql(\n> 'postgres',\n> qq[SELECT last_inactive_time >\n> '$last_inactive_time'::timestamptz FROM pg_replication_slots WHERE\n> slot_name = '$sb_slot' AND last_inactive_time IS NOT NULL;]\n> ),\n> 't',\n> 'last inactive time for an inactive physical slot is updated correctly');\n>\n> For instance, setting last_inactive_time_1 to an invalid value fails\n> with the following error:\n>\n> error running SQL: 'psql:<stdin>:1: ERROR: invalid input syntax for\n> type timestamp with time zone: \"foo\"\n> LINE 1: SELECT last_inactive_time > 'foo'::timestamptz FROM pg_repli...\n>\n\nIt would be found at a later point. It would be probably better to\nverify immediately after the test that fetches the last_inactive_time\nvalue.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sun, 24 Mar 2024 10:40:19 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sun, Mar 24, 2024 at 10:40 AM Amit Kapila <[email protected]> wrote:\n>\n> > For instance, setting last_inactive_time_1 to an invalid value fails\n> > with the following error:\n> >\n> > error running SQL: 'psql:<stdin>:1: ERROR: invalid input syntax for\n> > type timestamp with time zone: \"foo\"\n> > LINE 1: SELECT last_inactive_time > 'foo'::timestamptz FROM pg_repli...\n> >\n>\n> It would be found at a later point. It would be probably better to\n> verify immediately after the test that fetches the last_inactive_time\n> value.\n\nAgree. I've added a few more checks explicitly to verify the\nlast_inactive_time is sane with the following:\n\n qq[SELECT '$last_inactive_time'::timestamptz > to_timestamp(0)\nAND '$last_inactive_time'::timestamptz >\n'$slot_creation_time'::timestamptz;]\n\nI've attached the v18 patch set here. I've also addressed earlier\nreview comments from Amit, Ajin Cherian. 
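As an aside, the end-to-end behavior these tests target can be sketched with
the proposed SQL API (the trailing argument is the in-progress inactive_timeout
in seconds; none of this is committed syntax):

  SELECT pg_create_logical_replication_slot('idle_slot', 'pgoutput', false, true, true, 60);
  -- leave the slot unused for longer than 60 seconds, let a checkpoint run, then:
  SELECT slot_name, invalidation_reason
    FROM pg_replication_slots
   WHERE slot_name = 'idle_slot';
  -- expected under the proposal: invalidation_reason reports inactive_timeout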
Note that I've added new\ninvalidation mechanism tests in a separate TAP test file just because\nI don't want to clutter or bloat any of the existing files and spread\ntests for physical slots and logical slots into separate existing TAP\nfiles.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 24 Mar 2024 15:05:44 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sun, Mar 24, 2024 at 3:05 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Sun, Mar 24, 2024 at 10:40 AM Amit Kapila <[email protected]> wrote:\n> >\n> > > For instance, setting last_inactive_time_1 to an invalid value fails\n> > > with the following error:\n> > >\n> > > error running SQL: 'psql:<stdin>:1: ERROR: invalid input syntax for\n> > > type timestamp with time zone: \"foo\"\n> > > LINE 1: SELECT last_inactive_time > 'foo'::timestamptz FROM pg_repli...\n> > >\n> >\n> > It would be found at a later point. It would be probably better to\n> > verify immediately after the test that fetches the last_inactive_time\n> > value.\n>\n> Agree. I've added a few more checks explicitly to verify the\n> last_inactive_time is sane with the following:\n>\n> qq[SELECT '$last_inactive_time'::timestamptz > to_timestamp(0)\n> AND '$last_inactive_time'::timestamptz >\n> '$slot_creation_time'::timestamptz;]\n>\n\nSuch a test looks reasonable but shall we add equal to in the second\npart of the test (like '$last_inactive_time'::timestamptz >=\n> '$slot_creation_time'::timestamptz;). This is just to be sure that even if the test ran fast enough to give the same time, the test shouldn't fail. I think it won't matter for correctness as well.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 25 Mar 2024 09:48:23 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 25, 2024 at 9:48 AM Amit Kapila <[email protected]> wrote:\n>\n>\n> Such a test looks reasonable but shall we add equal to in the second\n> part of the test (like '$last_inactive_time'::timestamptz >=\n> > '$slot_creation_time'::timestamptz;). This is just to be sure that even if the test ran fast enough to give the same time, the test shouldn't fail. I think it won't matter for correctness as well.\n>\n\nApart from this, I have made minor changes in the comments. See and\nlet me know what you think of attached.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 25 Mar 2024 10:28:31 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sun, Mar 24, 2024 at 3:06 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> I've attached the v18 patch set here.\n\nThanks for the patches. 
Please find few comments:\n\npatch 001:\n--------\n\n1)\nslot.h:\n\n+ /* The time at which this slot become inactive */\n+ TimestampTz last_inactive_time;\n\nbecome -->became\n\n---------\npatch 002:\n\n2)\nslotsync.c:\n\n ReplicationSlotCreate(remote_slot->name, true, RS_TEMPORARY,\n remote_slot->two_phase,\n remote_slot->failover,\n- true);\n+ true, 0);\n\n+ slot->data.inactive_timeout = remote_slot->inactive_timeout;\n\nIs there a reason we are not passing 'remote_slot->inactive_timeout'\nto ReplicationSlotCreate() directly?\n\n---------\n\n3)\nslotfuncs.c\npg_create_logical_replication_slot():\n+ int inactive_timeout = PG_GETARG_INT32(5);\n\nCan we mention here that timeout is in seconds either in comment or\nrename variable to inactive_timeout_secs?\n\nPlease do this for create_physical_replication_slot(),\ncreate_logical_replication_slot(),\npg_create_physical_replication_slot() as well.\n\n---------\n4)\n+ int inactive_timeout; /* The amount of time in seconds the slot\n+ * is allowed to be inactive. */\n } LogicalSlotInfo;\n\n Do we need to mention \"before getting invalided\" like other places\n(in last patch)?\n\n----------\n\n 5)\nSame at these two places. \"before getting invalided\" to be added in\nthe last patch otherwise the info is incompleted.\n\n+\n+ /* The amount of time in seconds the slot is allowed to be inactive */\n+ int inactive_timeout;\n } ReplicationSlotPersistentData;\n\n\n+ * inactive_timeout: The amount of time in seconds the slot is allowed to be\n+ * inactive.\n */\n void\n ReplicationSlotCreate(const char *name, bool db_specific,\n Same here. \"before getting invalidated\" ?\n\n--------\n\nReviewing more..\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 25 Mar 2024 10:33:30 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 25, 2024 at 10:33 AM shveta malik <[email protected]> wrote:\n>\n> On Sun, Mar 24, 2024 at 3:06 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > I've attached the v18 patch set here.\n>\n\nI have a question. Don't we allow creating subscriptions on an\nexisting slot with a non-null 'inactive_timeout' set where\n'inactive_timeout' of the slot is retained even after subscription\ncreation?\n\nI tried this:\n\n===================\n--On publisher, create slot with 120sec inactive_timeout:\nSELECT * FROM pg_create_logical_replication_slot('logical_slot1',\n'pgoutput', false, true, true, 120);\n\n--On subscriber, create sub using logical_slot1\ncreate subscription mysubnew1_1 connection 'dbname=newdb1\nhost=localhost user=shveta port=5433' publication mypubnew1_1 WITH\n(failover = true, create_slot=false, slot_name='logical_slot1');\n\n--Before creating sub, pg_replication_slots output:\n slot_name | failover | synced | active | temp | conf |\n lat | inactive_timeout\n---------------+----------+--------+--------+------+------+----------------------------------+------------------\n logical_slot1 | t | f | f | f | f | 2024-03-25\n11:11:55.375736+05:30 | 120\n\n--After creating sub pg_replication_slots output: (inactive_timeout is 0 now):\n slot_name |failover | synced | active | temp | conf | | lat |\ninactive_timeout\n---------------+---------+--------+--------+------+------+-+-----+------------------\n logical_slot1 |t | f | t | f | f | | |\n 0\n===================\n\nIn CreateSubscription, we call 'walrcv_alter_slot()' /\n'ReplicationSlotAlter()' when create_slot is false. 
This call ends up\nsetting active_timeout from 120sec to 0. Is it intentional?\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 25 Mar 2024 11:53:53 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 25, 2024 at 10:28 AM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Mar 25, 2024 at 9:48 AM Amit Kapila <[email protected]> wrote:\n> >\n> >\n> > Such a test looks reasonable but shall we add equal to in the second\n> > part of the test (like '$last_inactive_time'::timestamptz >=\n> > > '$slot_creation_time'::timestamptz;). This is just to be sure that even if the test ran fast enough to give the same time, the test shouldn't fail. I think it won't matter for correctness as well.\n\nAgree. I added that in v19 patch. I was having that concern in my\nmind. That's the reason I wasn't capturing current_time something like\nbelow for the same worry that current_timestamp might be the same (or\nnearly the same) as the slot creation time. That's why I ended up\ncapturing current_timestamp in a separate query than clubbing it up\nwith pg_create_physical_replication_slot.\n\nSELECT current_timestamp FROM pg_create_physical_replication_slot('foo');\n\n> Apart from this, I have made minor changes in the comments. See and\n> let me know what you think of attached.\n\nLGTM. I've merged the diff into v19 patch.\n\nPlease find the attached v19 patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 25 Mar 2024 12:25:21 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 25, 2024 at 11:53 AM shveta malik <[email protected]> wrote:\n>\n> On Mon, Mar 25, 2024 at 10:33 AM shveta malik <[email protected]> wrote:\n> >\n> > On Sun, Mar 24, 2024 at 3:06 PM Bharath Rupireddy\n> > <[email protected]> wrote:\n> > >\n> > > I've attached the v18 patch set here.\n\nI have one concern, for synced slots on standby, how do we disallow\ninvalidation due to inactive-timeout immediately after promotion?\n\nFor synced slots, last_inactive_time and inactive_timeout are both\nset. Let's say I bring down primary for promotion of standby and then\npromote standby, there are chances that it may end up invalidating\nsynced slots (considering standby is not brought down during promotion\nand thus inactive_timeout may already be past 'last_inactive_time'). 
I\ntried with smaller unit of inactive_timeout:\n\n--Shutdown primary to prepare for planned promotion.\n\n--On standby, one synced slot with last_inactive_time (lat) as 12:21\n slot_name | failover | synced | active | temp | conf | res |\n lat | inactive_timeout\n---------------+----------+--------+--------+------+------+-----+----------------------------------+------------------\n logical_slot1 | t | t | f | f |\nf | | 2024-03-25 12:21:09.020757+05:30 | 60\n\n--wait for some time, now the time is 12:24\npostgres=# select now();\n now\n----------------------------------\n 2024-03-25 12:24:17.616716+05:30\n\n-- promote immediately:\n./pg_ctl -D ../../standbydb/ promote -w\n\n--on promoted standby:\npostgres=# select pg_is_in_recovery();\n pg_is_in_recovery\n-------------------\n f\n\n--synced slot is invalidated immediately on promotion.\n slot_name | failover | synced | active | temp | conf\n | res | lat |\ninactive_timeout\n---------------+----------+--------+--------+------+------+------------------+----------------------------------+--------\n logical_slot1 | t | t | f | f\n| f | inactive_timeout | 2024-03-25\n12:21:09.020757+05:30 |\n\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 25 Mar 2024 12:43:19 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 25, 2024 at 12:43 PM shveta malik <[email protected]> wrote:\n>\n> On Mon, Mar 25, 2024 at 11:53 AM shveta malik <[email protected]> wrote:\n> >\n> > On Mon, Mar 25, 2024 at 10:33 AM shveta malik <[email protected]> wrote:\n> > >\n> > > On Sun, Mar 24, 2024 at 3:06 PM Bharath Rupireddy\n> > > <[email protected]> wrote:\n> > > >\n> > > > I've attached the v18 patch set here.\n>\n> I have one concern, for synced slots on standby, how do we disallow\n> invalidation due to inactive-timeout immediately after promotion?\n>\n> For synced slots, last_inactive_time and inactive_timeout are both\n> set. Let's say I bring down primary for promotion of standby and then\n> promote standby, there are chances that it may end up invalidating\n> synced slots (considering standby is not brought down during promotion\n> and thus inactive_timeout may already be past 'last_inactive_time').\n>\n\nThis raises the question of whether we need to set\n'last_inactive_time' synced slots on the standby?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 25 Mar 2024 12:59:52 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Mon, Mar 25, 2024 at 12:25:21PM +0530, Bharath Rupireddy wrote:\n> On Mon, Mar 25, 2024 at 10:28 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Mon, Mar 25, 2024 at 9:48 AM Amit Kapila <[email protected]> wrote:\n> > >\n> > >\n> > > Such a test looks reasonable but shall we add equal to in the second\n> > > part of the test (like '$last_inactive_time'::timestamptz >=\n> > > > '$slot_creation_time'::timestamptz;). This is just to be sure that even if the test ran fast enough to give the same time, the test shouldn't fail. I think it won't matter for correctness as well.\n> \n> Agree. I added that in v19 patch. I was having that concern in my\n> mind. 
That's the reason I wasn't capturing current_time something like\n> below for the same worry that current_timestamp might be the same (or\n> nearly the same) as the slot creation time. That's why I ended up\n> capturing current_timestamp in a separate query than clubbing it up\n> with pg_create_physical_replication_slot.\n> \n> SELECT current_timestamp FROM pg_create_physical_replication_slot('foo');\n> \n> > Apart from this, I have made minor changes in the comments. See and\n> > let me know what you think of attached.\n> \n\nThanks!\n\nv19-0001 LGTM, just one Nit comment for 019_replslot_limit.pl:\n\nThe code for \"Get last_inactive_time value after the slot's creation\" and \n\"Check that the captured time is sane\" is somehow duplicated: is it worth creating\n2 functions?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 07:35:45 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Mon, Mar 25, 2024 at 12:59:52PM +0530, Amit Kapila wrote:\n> On Mon, Mar 25, 2024 at 12:43 PM shveta malik <[email protected]> wrote:\n> >\n> > On Mon, Mar 25, 2024 at 11:53 AM shveta malik <[email protected]> wrote:\n> > >\n> > > On Mon, Mar 25, 2024 at 10:33 AM shveta malik <[email protected]> wrote:\n> > > >\n> > > > On Sun, Mar 24, 2024 at 3:06 PM Bharath Rupireddy\n> > > > <[email protected]> wrote:\n> > > > >\n> > > > > I've attached the v18 patch set here.\n> >\n> > I have one concern, for synced slots on standby, how do we disallow\n> > invalidation due to inactive-timeout immediately after promotion?\n> >\n> > For synced slots, last_inactive_time and inactive_timeout are both\n> > set.\n\nYeah, and I can see last_inactive_time is moving on the standby (while not the\ncase on the primary), probably due to the sync worker slot acquisition/release\nwhich does not seem right.\n\n> Let's say I bring down primary for promotion of standby and then\n> > promote standby, there are chances that it may end up invalidating\n> > synced slots (considering standby is not brought down during promotion\n> > and thus inactive_timeout may already be past 'last_inactive_time').\n> >\n> \n> This raises the question of whether we need to set\n> 'last_inactive_time' synced slots on the standby?\n\nYeah, I think that last_inactive_time should stay at 0 on synced slots on the\nstandby because such slots are not usable anyway (until the standby gets promoted).\n\nSo, I think that last_inactive_time does not make sense if the slot never had\nthe chance to be active.\n\nOTOH I think the timeout invalidation (if any) should be synced from primary.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 08:07:35 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 25, 2024 at 1:37 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Mon, Mar 25, 2024 at 12:59:52PM +0530, Amit Kapila wrote:\n> > On Mon, Mar 25, 2024 at 12:43 PM shveta malik <[email protected]> wrote:\n> > >\n> > > On Mon, Mar 25, 2024 at 11:53 AM shveta malik <[email protected]> wrote:\n> > > 
>\n> > > > On Mon, Mar 25, 2024 at 10:33 AM shveta malik <[email protected]> wrote:\n> > > > >\n> > > > > On Sun, Mar 24, 2024 at 3:06 PM Bharath Rupireddy\n> > > > > <[email protected]> wrote:\n> > > > > >\n> > > > > > I've attached the v18 patch set here.\n> > >\n> > > I have one concern, for synced slots on standby, how do we disallow\n> > > invalidation due to inactive-timeout immediately after promotion?\n> > >\n> > > For synced slots, last_inactive_time and inactive_timeout are both\n> > > set.\n>\n> Yeah, and I can see last_inactive_time is moving on the standby (while not the\n> case on the primary), probably due to the sync worker slot acquisition/release\n> which does not seem right.\n>\n> > Let's say I bring down primary for promotion of standby and then\n> > > promote standby, there are chances that it may end up invalidating\n> > > synced slots (considering standby is not brought down during promotion\n> > > and thus inactive_timeout may already be past 'last_inactive_time').\n> > >\n> >\n> > This raises the question of whether we need to set\n> > 'last_inactive_time' synced slots on the standby?\n>\n> Yeah, I think that last_inactive_time should stay at 0 on synced slots on the\n> standby because such slots are not usable anyway (until the standby gets promoted).\n>\n> So, I think that last_inactive_time does not make sense if the slot never had\n> the chance to be active.\n>\n> OTOH I think the timeout invalidation (if any) should be synced from primary.\n\nYes, even I feel that last_inactive_time makes sense only when the\nslot is available to be used. Synced slots are not available to be\nused until standby is promoted and thus last_inactive_time can be\nskipped to be set for synced_slots. But once primay is invalidated due\nto inactive-timeout, that invalidation should be synced to standby\n(which is happening currently).\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 25 Mar 2024 14:07:21 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Mon, Mar 25, 2024 at 02:07:21PM +0530, shveta malik wrote:\n> On Mon, Mar 25, 2024 at 1:37 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On Mon, Mar 25, 2024 at 12:59:52PM +0530, Amit Kapila wrote:\n> > > On Mon, Mar 25, 2024 at 12:43 PM shveta malik <[email protected]> wrote:\n> > > >\n> > > > On Mon, Mar 25, 2024 at 11:53 AM shveta malik <[email protected]> wrote:\n> > > > >\n> > > > > On Mon, Mar 25, 2024 at 10:33 AM shveta malik <[email protected]> wrote:\n> > > > > >\n> > > > > > On Sun, Mar 24, 2024 at 3:06 PM Bharath Rupireddy\n> > > > > > <[email protected]> wrote:\n> > > > > > >\n> > > > > > > I've attached the v18 patch set here.\n> > > >\n> > > > I have one concern, for synced slots on standby, how do we disallow\n> > > > invalidation due to inactive-timeout immediately after promotion?\n> > > >\n> > > > For synced slots, last_inactive_time and inactive_timeout are both\n> > > > set.\n> >\n> > Yeah, and I can see last_inactive_time is moving on the standby (while not the\n> > case on the primary), probably due to the sync worker slot acquisition/release\n> > which does not seem right.\n> >\n> > > Let's say I bring down primary for promotion of standby and then\n> > > > promote standby, there are chances that it may end up invalidating\n> > > > synced slots (considering standby is not brought down during promotion\n> > > > and thus inactive_timeout 
may already be past 'last_inactive_time').\n> > > >\n> > >\n> > > This raises the question of whether we need to set\n> > > 'last_inactive_time' synced slots on the standby?\n> >\n> > Yeah, I think that last_inactive_time should stay at 0 on synced slots on the\n> > standby because such slots are not usable anyway (until the standby gets promoted).\n> >\n> > So, I think that last_inactive_time does not make sense if the slot never had\n> > the chance to be active.\n> >\n> > OTOH I think the timeout invalidation (if any) should be synced from primary.\n> \n> Yes, even I feel that last_inactive_time makes sense only when the\n> slot is available to be used. Synced slots are not available to be\n> used until standby is promoted and thus last_inactive_time can be\n> skipped to be set for synced_slots. But once primay is invalidated due\n> to inactive-timeout, that invalidation should be synced to standby\n> (which is happening currently).\n> \n\nyeah, syncing the invalidation and always keeping last_inactive_time to zero \nfor synced slots looks right to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 08:51:11 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 25, 2024 at 1:37 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Yeah, and I can see last_inactive_time is moving on the standby (while not the\n> case on the primary), probably due to the sync worker slot acquisition/release\n> which does not seem right.\n>\n\nYes, you are right, last_inactive_time keeps on moving for synced\nslots on standby. Once I disabled slot-sync worker, then it is\nconstant. Then it only changes if I call pg_sync_replication_slots().\n\nOn a different note, I noticed that we allow altering\ninactive_timeout for synced-slots on standby. And again overwrite it\nwith the primary's value in the next sync cycle. Steps:\n\n====================\n--Check pg_replication_slots for synced slot on standby, inactive_timeout is 120\n slot_name | failover | synced | active | inactive_timeout\n---------------+----------+--------+--------+------------------\n logical_slot1 | t | t | f | 120\n\n--Alter on standby\nSELECT 'alter' FROM pg_alter_replication_slot('logical_slot1', 900);\n\n--Check pg_replication_slots:\n slot_name | failover | synced | active | inactive_timeout\n---------------+----------+--------+--------+------------------\n logical_slot1 | t | t | f | 900\n\n--Run sync function\nSELECT pg_sync_replication_slots();\n\n--check again, inactive_timeout is set back to primary's value.\n slot_name | failover | synced | active | inactive_timeout\n---------------+----------+--------+--------+------------------\n logical_slot1 | t | t | f | 120\n\n ====================\n\nI feel altering synced slot's inactive_timeout should be prohibited on\nstandby. It should be in sync with primary always. Thoughts?\n\nI am listing the concerns raised by me:\n1) create-subscription with create_slot=false overwriting\ninactive_timeout of existing slot ([1])\n2) last_inactive_time set for synced slots may result in invalidation\nof slot on promotion. 
([2])\n3) alter replication slot to alter inactive_timout for synced slots on\nstandby, should this be allowed?\n\n[1]: https://www.postgresql.org/message-id/CAJpy0uAqBi%2BGbNn2ngJ-A_Z905CD3ss896bqY2ACUjGiF1Gkng%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/CAJpy0uCLu%2BmqAwAMum%3DpXE9YYsy0BE7hOSw_Wno5vjwpFY%3D63g%40mail.gmail.com\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 25 Mar 2024 14:39:50 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Mon, Mar 25, 2024 at 02:39:50PM +0530, shveta malik wrote:\n> I am listing the concerns raised by me:\n> 3) alter replication slot to alter inactive_timout for synced slots on\n> standby, should this be allowed?\n\nI don't think it should be allowed.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 09:23:53 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 25, 2024 at 1:37 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> > > I have one concern, for synced slots on standby, how do we disallow\n> > > invalidation due to inactive-timeout immediately after promotion?\n> > >\n> > > For synced slots, last_inactive_time and inactive_timeout are both\n> > > set.\n>\n> Yeah, and I can see last_inactive_time is moving on the standby (while not the\n> case on the primary), probably due to the sync worker slot acquisition/release\n> which does not seem right.\n>\n> > Let's say I bring down primary for promotion of standby and then\n> > > promote standby, there are chances that it may end up invalidating\n> > > synced slots (considering standby is not brought down during promotion\n> > > and thus inactive_timeout may already be past 'last_inactive_time').\n> > >\n> >\n> > This raises the question of whether we need to set\n> > 'last_inactive_time' synced slots on the standby?\n>\n> Yeah, I think that last_inactive_time should stay at 0 on synced slots on the\n> standby because such slots are not usable anyway (until the standby gets promoted).\n>\n> So, I think that last_inactive_time does not make sense if the slot never had\n> the chance to be active.\n\nRight. Done that way i.e. not setting the last_inactive_time for slots\nboth while releasing the slot and restoring from the disk.\n\nAlso, I've added a TAP function to check if the captured times are\nsane per Bertrand's review comment.\n\nPlease see the attached v20 patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 25 Mar 2024 15:31:15 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 25, 2024 at 3:31 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Right. Done that way i.e. not setting the last_inactive_time for slots\n> both while releasing the slot and restoring from the disk.\n>\n> Also, I've added a TAP function to check if the captured times are\n> sane per Bertrand's review comment.\n>\n> Please see the attached v20 patch.\n\nThanks for the patch. 
The issue of unnecessary invalidation of synced\nslots on promotion is resolved in this patch.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 25 Mar 2024 16:08:59 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 25, 2024 at 3:31 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Right. Done that way i.e. not setting the last_inactive_time for slots\n> both while releasing the slot and restoring from the disk.\n>\n> Also, I've added a TAP function to check if the captured times are\n> sane per Bertrand's review comment.\n>\n> Please see the attached v20 patch.\n>\n\nPushed, after minor changes.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 25 Mar 2024 17:03:59 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 25, 2024 at 2:40 PM shveta malik <[email protected]> wrote:\n>\n> On Mon, Mar 25, 2024 at 1:37 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > Yeah, and I can see last_inactive_time is moving on the standby (while not the\n> > case on the primary), probably due to the sync worker slot acquisition/release\n> > which does not seem right.\n> >\n>\n> Yes, you are right, last_inactive_time keeps on moving for synced\n> slots on standby. Once I disabled slot-sync worker, then it is\n> constant. Then it only changes if I call pg_sync_replication_slots().\n>\n> On a different note, I noticed that we allow altering\n> inactive_timeout for synced-slots on standby. And again overwrite it\n> with the primary's value in the next sync cycle. Steps:\n>\n> ====================\n> --Check pg_replication_slots for synced slot on standby, inactive_timeout is 120\n> slot_name | failover | synced | active | inactive_timeout\n> ---------------+----------+--------+--------+------------------\n> logical_slot1 | t | t | f | 120\n>\n> --Alter on standby\n> SELECT 'alter' FROM pg_alter_replication_slot('logical_slot1', 900);\n>\n\nI think we should keep pg_alter_replication_slot() as the last\npriority among the remaining patches for this release. Let's try to\nfirst finish the primary functionality of inactive_timeout patch.\nOtherwise, I agree that the problem reported by you should be fixed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 25 Mar 2024 17:10:11 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 25, 2024 at 5:10 PM Amit Kapila <[email protected]> wrote:\n>\n> I think we should keep pg_alter_replication_slot() as the last\n> priority among the remaining patches for this release. Let's try to\n> first finish the primary functionality of inactive_timeout patch.\n> Otherwise, I agree that the problem reported by you should be fixed.\n\nNoted. Will focus on v18-002 patch now.\n\nI was debugging the flow and just noticed that RecoveryInProgress()\nalways returns 'true' during\nStartupReplicationSlots()-->RestoreSlotFromDisk() (even on primary) as\n'xlogctl->SharedRecoveryState' is always 'RECOVERY_STATE_CRASH' at\nthat time. 
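(For reference, RecoveryInProgress() only stops reporting true once it sees
RECOVERY_STATE_DONE; a rough paraphrase of the xlog.c logic, not the exact code:

bool
RecoveryInProgress(void)
{
	/* once recovery has been seen to finish, we never re-enter it */
	if (!LocalRecoveryInProgress)
		return false;

	/* otherwise re-read the shared state maintained by the startup process */
	LocalRecoveryInProgress =
		(XLogCtl->SharedRecoveryState != RECOVERY_STATE_DONE);

	return LocalRecoveryInProgress;
}

so while the shared state is still RECOVERY_STATE_CRASH it reports true everywhere,
primary included.)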
The 'xlogctl->SharedRecoveryState' is changed to\n'RECOVERY_STATE_DONE' on primary and to 'RECOVERY_STATE_ARCHIVE' on\nstandby at a later stage in StartupXLOG() (after we are done loading\nslots).\n\nThe impact of this is, the condition in RestoreSlotFromDisk() in v20-001:\n\nif (!(RecoveryInProgress() && slot->data.synced))\n slot->last_inactive_time = GetCurrentTimestamp();\n\nis merely equivalent to:\n\nif (!slot->data.synced)\n slot->last_inactive_time = GetCurrentTimestamp();\n\nThus on primary, after restart, last_inactive_at is set correctly,\nwhile on promoted standby (new primary), last_inactive_at is always\nNULL after restart for the synced slots.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 25 Mar 2024 17:24:25 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "I apologize that I haven't been able to keep up with this thread for a\nwhile, but I'm happy to see the continued interest in $SUBJECT.\n\nOn Sun, Mar 24, 2024 at 03:05:44PM +0530, Bharath Rupireddy wrote:\n> This commit particularly lets one specify the inactive_timeout for\n> a slot via SQL functions pg_create_physical_replication_slot and\n> pg_create_logical_replication_slot.\n\nOff-list, Bharath brought to my attention that the current proposal was to\nset the timeout at the slot level. While I think that is an entirely\nreasonable thing to support, the main use-case I have in mind for this\nfeature is for an administrator that wants to prevent inactive slots from\ncausing problems (e.g., transaction ID wraparound) on a server or a number\nof servers. For that use-case, I think a GUC would be much more\nconvenient. Perhaps there could be a default inactive slot timeout GUC\nthat would be used in the absence of a slot-level setting. Thoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 25 Mar 2024 14:54:43 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Mar 25, 2024 at 12:43 PM shveta malik <[email protected]> wrote:\n>\n> I have one concern, for synced slots on standby, how do we disallow\n> invalidation due to inactive-timeout immediately after promotion?\n>\n> For synced slots, last_inactive_time and inactive_timeout are both\n> set. Let's say I bring down primary for promotion of standby and then\n> promote standby, there are chances that it may end up invalidating\n> synced slots (considering standby is not brought down during promotion\n> and thus inactive_timeout may already be past 'last_inactive_time').\n>\n\nOn standby, if we decide to maintain valid last_inactive_time for\nsynced slots, then invalidation is correctly restricted in\nInvalidateSlotForInactiveTimeout() for synced slots using the check:\n\n if (RecoveryInProgress() && slot->data.synced)\n return false;\n\nBut immediately after promotion, we can not rely on the above check\nand thus possibility of synced slots invalidation is there. To\nmaintain consistent behavior regarding the setting of\nlast_inactive_time for synced slots, similar to user slots, one\npotential solution to prevent this invalidation issue is to update the\nlast_inactive_time of all synced slots within the ShutDownSlotSync()\nfunction during FinishWalRecovery(). 
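Roughly something like the below (untested sketch; the function name is made up here, and
last_inactive_time is the field this patch set adds):

static void
update_synced_slots_last_inactive_time(void)
{
	TimestampTz now = GetCurrentTimestamp();

	LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);

	for (int i = 0; i < max_replication_slots; i++)
	{
		ReplicationSlot *s = &ReplicationSlotCtl->replication_slots[i];

		/* only the slots synced from the primary need the fresh stamp */
		if (s->in_use && s->data.synced)
		{
			SpinLockAcquire(&s->mutex);
			s->last_inactive_time = now;
			SpinLockRelease(&s->mutex);
		}
	}

	LWLockRelease(ReplicationSlotControlLock);
}
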
This approach ensures that\npromotion doesn't immediately invalidate slots, and henceforth, we\npossess a correct last_inactive_time as a basis for invalidation going\nforward. This will be equivalent to updating last_inactive_time during\nrestart (but without actual restart during promotion).\nThe plus point of maintaining last_inactive_time for synced slots\ncould be, this can provide data to the user on when last time the sync\nwas attempted on that particular slot by background slot sync worker\nor SQl function. Thoughts?\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 26 Mar 2024 09:30:32 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 1:24 AM Nathan Bossart <[email protected]> wrote:\n>\n>\n> On Sun, Mar 24, 2024 at 03:05:44PM +0530, Bharath Rupireddy wrote:\n> > This commit particularly lets one specify the inactive_timeout for\n> > a slot via SQL functions pg_create_physical_replication_slot and\n> > pg_create_logical_replication_slot.\n>\n> Off-list, Bharath brought to my attention that the current proposal was to\n> set the timeout at the slot level. While I think that is an entirely\n> reasonable thing to support, the main use-case I have in mind for this\n> feature is for an administrator that wants to prevent inactive slots from\n> causing problems (e.g., transaction ID wraparound) on a server or a number\n> of servers. For that use-case, I think a GUC would be much more\n> convenient. Perhaps there could be a default inactive slot timeout GUC\n> that would be used in the absence of a slot-level setting. Thoughts?\n>\n\nYeah, that is a valid point. One of the reasons for keeping it at slot\nlevel was to allow different subscribers/output plugins to have a\ndifferent setting for invalid_timeout for their respective slots based\non their usage. Now, having it as a GUC also has some valid use cases\nas pointed out by you but I am not sure having both at slot level and\nat GUC level is required. I was a bit inclined to have it at slot\nlevel for now and then based on some field usage report we can later\nadd GUC as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 26 Mar 2024 10:13:55 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 9:30 AM shveta malik <[email protected]> wrote:\n>\n> On Mon, Mar 25, 2024 at 12:43 PM shveta malik <[email protected]> wrote:\n> >\n> > I have one concern, for synced slots on standby, how do we disallow\n> > invalidation due to inactive-timeout immediately after promotion?\n> >\n> > For synced slots, last_inactive_time and inactive_timeout are both\n> > set. 
Let's say I bring down primary for promotion of standby and then\n> > promote standby, there are chances that it may end up invalidating\n> > synced slots (considering standby is not brought down during promotion\n> > and thus inactive_timeout may already be past 'last_inactive_time').\n> >\n>\n> On standby, if we decide to maintain valid last_inactive_time for\n> synced slots, then invalidation is correctly restricted in\n> InvalidateSlotForInactiveTimeout() for synced slots using the check:\n>\n> if (RecoveryInProgress() && slot->data.synced)\n> return false;\n>\n> But immediately after promotion, we can not rely on the above check\n> and thus possibility of synced slots invalidation is there. To\n> maintain consistent behavior regarding the setting of\n> last_inactive_time for synced slots, similar to user slots, one\n> potential solution to prevent this invalidation issue is to update the\n> last_inactive_time of all synced slots within the ShutDownSlotSync()\n> function during FinishWalRecovery(). This approach ensures that\n> promotion doesn't immediately invalidate slots, and henceforth, we\n> possess a correct last_inactive_time as a basis for invalidation going\n> forward. This will be equivalent to updating last_inactive_time during\n> restart (but without actual restart during promotion).\n> The plus point of maintaining last_inactive_time for synced slots\n> could be, this can provide data to the user on when last time the sync\n> was attempted on that particular slot by background slot sync worker\n> or SQl function. Thoughts?\n\nPlease find the attached v21 patch implementing the above idea. It\nalso has changes for renaming last_inactive_time to inactive_since.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 26 Mar 2024 11:07:51 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 26, 2024 at 09:30:32AM +0530, shveta malik wrote:\n> On Mon, Mar 25, 2024 at 12:43 PM shveta malik <[email protected]> wrote:\n> >\n> > I have one concern, for synced slots on standby, how do we disallow\n> > invalidation due to inactive-timeout immediately after promotion?\n> >\n> > For synced slots, last_inactive_time and inactive_timeout are both\n> > set. Let's say I bring down primary for promotion of standby and then\n> > promote standby, there are chances that it may end up invalidating\n> > synced slots (considering standby is not brought down during promotion\n> > and thus inactive_timeout may already be past 'last_inactive_time').\n> >\n> \n> On standby, if we decide to maintain valid last_inactive_time for\n> synced slots, then invalidation is correctly restricted in\n> InvalidateSlotForInactiveTimeout() for synced slots using the check:\n> \n> if (RecoveryInProgress() && slot->data.synced)\n> return false;\n\nRight.\n\n> But immediately after promotion, we can not rely on the above check\n> and thus possibility of synced slots invalidation is there. To\n> maintain consistent behavior regarding the setting of\n> last_inactive_time for synced slots, similar to user slots, one\n> potential solution to prevent this invalidation issue is to update the\n> last_inactive_time of all synced slots within the ShutDownSlotSync()\n> function during FinishWalRecovery(). 
This approach ensures that\n> promotion doesn't immediately invalidate slots, and henceforth, we\n> possess a correct last_inactive_time as a basis for invalidation going\n> forward. This will be equivalent to updating last_inactive_time during\n> restart (but without actual restart during promotion).\n> The plus point of maintaining last_inactive_time for synced slots\n> could be, this can provide data to the user on when last time the sync\n> was attempted on that particular slot by background slot sync worker\n> or SQl function. Thoughts?\n\nYeah, another plus point is that if the primary is down then one could look\nat the synced \"active_since\" on the standby to get an idea of it (depends of the\nlast sync though).\n\nThe issue that I can see with your proposal is: what if one synced the slots\nmanually (with pg_sync_replication_slots()) but does not use the sync worker?\nThen I think ShutDownSlotSync() is not going to help in that case.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 05:55:11 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sun, Mar 24, 2024 at 3:05 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> I've attached the v18 patch set here. I've also addressed earlier\n> review comments from Amit, Ajin Cherian. Note that I've added new\n> invalidation mechanism tests in a separate TAP test file just because\n> I don't want to clutter or bloat any of the existing files and spread\n> tests for physical slots and logical slots into separate existing TAP\n> files.\n>\n\nReview comments on v18_0002 and v18_0005\n=======================================\n1.\n ReplicationSlotCreate(const char *name, bool db_specific,\n ReplicationSlotPersistency persistency,\n- bool two_phase, bool failover, bool synced)\n+ bool two_phase, bool failover, bool synced,\n+ int inactive_timeout)\n {\n ReplicationSlot *slot = NULL;\n int i;\n@@ -345,6 +348,18 @@ ReplicationSlotCreate(const char *name, bool db_specific,\n errmsg(\"cannot enable failover for a temporary replication slot\"));\n }\n\n+ if (inactive_timeout > 0)\n+ {\n+ /*\n+ * Do not allow users to set inactive_timeout for temporary slots,\n+ * because temporary slots will not be saved to the disk.\n+ */\n+ if (persistency == RS_TEMPORARY)\n+ ereport(ERROR,\n+ errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot set inactive_timeout for a temporary replication slot\"));\n+ }\n\nWe have decided to update inactive_since for temporary slots. So,\nunless there is some reason, we should allow inactive_timeout to also\nbe set for temporary slots.\n\n2.\n--- a/src/backend/catalog/system_views.sql\n+++ b/src/backend/catalog/system_views.sql\n@@ -1024,6 +1024,7 @@ CREATE VIEW pg_replication_slots AS\n L.safe_wal_size,\n L.two_phase,\n L.last_inactive_time,\n+ L.inactive_timeout,\n\nShall we keep inactive_timeout before\nlast_inactive_time/inactive_since? 
I don't have any strong reason to\npropose that way apart from that the former is provided by the user.\n\n3.\n@@ -287,6 +288,13 @@ pg_get_replication_slots(PG_FUNCTION_ARGS)\n slot_contents = *slot;\n SpinLockRelease(&slot->mutex);\n\n+ /*\n+ * Here's an opportunity to invalidate inactive replication slots\n+ * based on timeout, so let's do it.\n+ */\n+ if (InvalidateReplicationSlotForInactiveTimeout(slot, false, true, true))\n+ invalidated = true;\n\nI don't think we should try to invalidate the slots in\npg_get_replication_slots. This function's purpose is to get the\ncurrent information on slots and has no intention to perform any work\nfor slots. Any error due to invalidation won't be what the user would\nbe expecting here.\n\n4.\n+static bool\n+InvalidateSlotForInactiveTimeout(ReplicationSlot *slot,\n+ bool need_control_lock,\n+ bool need_mutex)\n{\n...\n...\n+ if (need_control_lock)\n+ LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);\n+\n+ Assert(LWLockHeldByMeInMode(ReplicationSlotControlLock, LW_SHARED));\n+\n+ /*\n+ * Check if the slot needs to be invalidated due to inactive_timeout. We\n+ * do this with the spinlock held to avoid race conditions -- for example\n+ * the restart_lsn could move forward, or the slot could be dropped.\n+ */\n+ if (need_mutex)\n+ SpinLockAcquire(&slot->mutex);\n...\n\nI find this combination of parameters a bit strange. Because, say if\nneed_mutex is false and need_control_lock is true then that means this\nfunction will acquire LWlock after acquiring spinlock which is\nunacceptable. Now, this may not happen in practice as the callers\nwon't pass such a combination but still, this functionality should be\nimproved.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 26 Mar 2024 11:26:38 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 26, 2024 at 05:55:11AM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Tue, Mar 26, 2024 at 09:30:32AM +0530, shveta malik wrote:\n> > On Mon, Mar 25, 2024 at 12:43 PM shveta malik <[email protected]> wrote:\n> > >\n> > > I have one concern, for synced slots on standby, how do we disallow\n> > > invalidation due to inactive-timeout immediately after promotion?\n> > >\n> > > For synced slots, last_inactive_time and inactive_timeout are both\n> > > set. Let's say I bring down primary for promotion of standby and then\n> > > promote standby, there are chances that it may end up invalidating\n> > > synced slots (considering standby is not brought down during promotion\n> > > and thus inactive_timeout may already be past 'last_inactive_time').\n> > >\n> > \n> > On standby, if we decide to maintain valid last_inactive_time for\n> > synced slots, then invalidation is correctly restricted in\n> > InvalidateSlotForInactiveTimeout() for synced slots using the check:\n> > \n> > if (RecoveryInProgress() && slot->data.synced)\n> > return false;\n> \n> Right.\n> \n> > But immediately after promotion, we can not rely on the above check\n> > and thus possibility of synced slots invalidation is there. To\n> > maintain consistent behavior regarding the setting of\n> > last_inactive_time for synced slots, similar to user slots, one\n> > potential solution to prevent this invalidation issue is to update the\n> > last_inactive_time of all synced slots within the ShutDownSlotSync()\n> > function during FinishWalRecovery(). 
This approach ensures that\n> > promotion doesn't immediately invalidate slots, and henceforth, we\n> > possess a correct last_inactive_time as a basis for invalidation going\n> > forward. This will be equivalent to updating last_inactive_time during\n> > restart (but without actual restart during promotion).\n> > The plus point of maintaining last_inactive_time for synced slots\n> > could be, this can provide data to the user on when last time the sync\n> > was attempted on that particular slot by background slot sync worker\n> > or SQl function. Thoughts?\n> \n> Yeah, another plus point is that if the primary is down then one could look\n> at the synced \"active_since\" on the standby to get an idea of it (depends of the\n> last sync though).\n> \n> The issue that I can see with your proposal is: what if one synced the slots\n> manually (with pg_sync_replication_slots()) but does not use the sync worker?\n> Then I think ShutDownSlotSync() is not going to help in that case.\n\nIt looks like ShutDownSlotSync() is always called (even if sync_replication_slots = off),\nso that sounds ok to me (I should have checked the code, I was under the impression\nShutDownSlotSync() was not called if sync_replication_slots = off).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 06:06:12 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 11:36 AM Bertrand Drouvot\n<[email protected]> wrote:\n> >\n> > The issue that I can see with your proposal is: what if one synced the slots\n> > manually (with pg_sync_replication_slots()) but does not use the sync worker?\n> > Then I think ShutDownSlotSync() is not going to help in that case.\n>\n> It looks like ShutDownSlotSync() is always called (even if sync_replication_slots = off),\n> so that sounds ok to me (I should have checked the code, I was under the impression\n> ShutDownSlotSync() was not called if sync_replication_slots = off).\n\nRight, it is called irrespective of sync_replication_slots.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 26 Mar 2024 11:50:45 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 11:08 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Tue, Mar 26, 2024 at 9:30 AM shveta malik <[email protected]> wrote:\n> >\n> > On Mon, Mar 25, 2024 at 12:43 PM shveta malik <[email protected]> wrote:\n> > >\n> > > I have one concern, for synced slots on standby, how do we disallow\n> > > invalidation due to inactive-timeout immediately after promotion?\n> > >\n> > > For synced slots, last_inactive_time and inactive_timeout are both\n> > > set. 
Let's say I bring down primary for promotion of standby and then\n> > > promote standby, there are chances that it may end up invalidating\n> > > synced slots (considering standby is not brought down during promotion\n> > > and thus inactive_timeout may already be past 'last_inactive_time').\n> > >\n> >\n> > On standby, if we decide to maintain valid last_inactive_time for\n> > synced slots, then invalidation is correctly restricted in\n> > InvalidateSlotForInactiveTimeout() for synced slots using the check:\n> >\n> > if (RecoveryInProgress() && slot->data.synced)\n> > return false;\n> >\n> > But immediately after promotion, we can not rely on the above check\n> > and thus possibility of synced slots invalidation is there. To\n> > maintain consistent behavior regarding the setting of\n> > last_inactive_time for synced slots, similar to user slots, one\n> > potential solution to prevent this invalidation issue is to update the\n> > last_inactive_time of all synced slots within the ShutDownSlotSync()\n> > function during FinishWalRecovery(). This approach ensures that\n> > promotion doesn't immediately invalidate slots, and henceforth, we\n> > possess a correct last_inactive_time as a basis for invalidation going\n> > forward. This will be equivalent to updating last_inactive_time during\n> > restart (but without actual restart during promotion).\n> > The plus point of maintaining last_inactive_time for synced slots\n> > could be, this can provide data to the user on when last time the sync\n> > was attempted on that particular slot by background slot sync worker\n> > or SQl function. Thoughts?\n>\n> Please find the attached v21 patch implementing the above idea. It\n> also has changes for renaming last_inactive_time to inactive_since.\n>\n\nThanks for the patch. I have tested this patch alone, and it does what\nit says. One additional thing which I noticed is that now it sets\ninactive_since for temp slots as well, but that idea looks fine to me.\n\nI could not test 'invalidation on promotion bug' with this change, as\nthat needed rebasing of the rest of the patches.\n\nFew trivial things:\n\n1)\nCommti msg:\n\nensures the value is set to current timestamp during the\nshutdown to help correctly interpret the time if the standby gets\npromoted without a restart.\n\nshutdown --> shutdown of slot sync worker (as it was not clear if it\nis instance shutdown or something else)\n\n2)\n'The time since the slot has became inactive'.\n\nhas became-->has become\nor just became\n\nPlease check it in all the files. There are multiple places.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 26 Mar 2024 12:04:26 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 26, 2024 at 11:07:51AM +0530, Bharath Rupireddy wrote:\n> On Tue, Mar 26, 2024 at 9:30 AM shveta malik <[email protected]> wrote:\n> > But immediately after promotion, we can not rely on the above check\n> > and thus possibility of synced slots invalidation is there. To\n> > maintain consistent behavior regarding the setting of\n> > last_inactive_time for synced slots, similar to user slots, one\n> > potential solution to prevent this invalidation issue is to update the\n> > last_inactive_time of all synced slots within the ShutDownSlotSync()\n> > function during FinishWalRecovery(). 
This approach ensures that\n> > promotion doesn't immediately invalidate slots, and henceforth, we\n> > possess a correct last_inactive_time as a basis for invalidation going\n> > forward. This will be equivalent to updating last_inactive_time during\n> > restart (but without actual restart during promotion).\n> > The plus point of maintaining last_inactive_time for synced slots\n> > could be, this can provide data to the user on when last time the sync\n> > was attempted on that particular slot by background slot sync worker\n> > or SQl function. Thoughts?\n> \n> Please find the attached v21 patch implementing the above idea. It\n> also has changes for renaming last_inactive_time to inactive_since.\n\nThanks!\n\nA few comments:\n\n1 ===\n\nOne trailing whitespace:\n\nApplying: Fix review comments for slot's last_inactive_time property\n.git/rebase-apply/patch:433: trailing whitespace.\n# got a valid inactive_since value representing the last slot sync time.\nwarning: 1 line adds whitespace errors.\n\n2 ===\n\nIt looks like inactive_since is set to the current timestamp on the standby\neach time the sync worker does a cycle:\n\nprimary:\n\npostgres=# select slot_name,inactive_since from pg_replication_slots where failover = 't';\n slot_name | inactive_since\n-------------+-------------------------------\n lsub27_slot | 2024-03-26 07:39:19.745517+00\n lsub28_slot | 2024-03-26 07:40:24.953826+00\n\nstandby:\n\npostgres=# select slot_name,inactive_since from pg_replication_slots where failover = 't';\n slot_name | inactive_since\n-------------+-------------------------------\n lsub27_slot | 2024-03-26 07:43:56.387324+00\n lsub28_slot | 2024-03-26 07:43:56.387338+00\n\nI don't think that should be the case.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 07:45:19 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 1:15 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> 2 ===\n>\n> It looks like inactive_since is set to the current timestamp on the standby\n> each time the sync worker does a cycle:\n>\n> primary:\n>\n> postgres=# select slot_name,inactive_since from pg_replication_slots where failover = 't';\n> slot_name | inactive_since\n> -------------+-------------------------------\n> lsub27_slot | 2024-03-26 07:39:19.745517+00\n> lsub28_slot | 2024-03-26 07:40:24.953826+00\n>\n> standby:\n>\n> postgres=# select slot_name,inactive_since from pg_replication_slots where failover = 't';\n> slot_name | inactive_since\n> -------------+-------------------------------\n> lsub27_slot | 2024-03-26 07:43:56.387324+00\n> lsub28_slot | 2024-03-26 07:43:56.387338+00\n>\n> I don't think that should be the case.\n>\n\nBut why? This is exactly what we discussed in another thread where we\nagreed to update inactive_since even for sync slots. In each sync\ncycle, we acquire/release the slot, so the inactive_since gets\nupdated. 
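(Roughly, each sync cycle's slot release boils down to something like this for a
persistent slot — simplified, not the exact ReplicationSlotRelease() code:

	TimestampTz now = GetCurrentTimestamp();

	SpinLockAcquire(&slot->mutex);
	slot->active_pid = 0;
	slot->inactive_since = now;
	SpinLockRelease(&slot->mutex);

so every acquire/release pair moves inactive_since forward.)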
See synchronize_one_slot().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 26 Mar 2024 13:37:21 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 26, 2024 at 01:37:21PM +0530, Amit Kapila wrote:\n> On Tue, Mar 26, 2024 at 1:15 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > 2 ===\n> >\n> > It looks like inactive_since is set to the current timestamp on the standby\n> > each time the sync worker does a cycle:\n> >\n> > primary:\n> >\n> > postgres=# select slot_name,inactive_since from pg_replication_slots where failover = 't';\n> > slot_name | inactive_since\n> > -------------+-------------------------------\n> > lsub27_slot | 2024-03-26 07:39:19.745517+00\n> > lsub28_slot | 2024-03-26 07:40:24.953826+00\n> >\n> > standby:\n> >\n> > postgres=# select slot_name,inactive_since from pg_replication_slots where failover = 't';\n> > slot_name | inactive_since\n> > -------------+-------------------------------\n> > lsub27_slot | 2024-03-26 07:43:56.387324+00\n> > lsub28_slot | 2024-03-26 07:43:56.387338+00\n> >\n> > I don't think that should be the case.\n> >\n> \n> But why? This is exactly what we discussed in another thread where we\n> agreed to update inactive_since even for sync slots.\n\nHum, I thought we agreed to \"sync\" it and to \"update it to current time\"\nonly at promotion time.\n\nI don't think updating inactive_since to current time during each cycle makes\nsense (I mean I understand the use case: being able to say when slots have been\nsync, but if this is what we want then we should consider an extra view or an\nextra field but not relying on the inactive_since one).\n\nIf the primary goes down, not updating inactive_since to the current time could\nalso provide benefit such as knowing the inactive_since of the primary slots\n(from the standby) the last time it has been synced. If we update it to the current\ntime then this information is lost.\n\n> In each sync\n> cycle, we acquire/release the slot, so the inactive_since gets\n> updated. See synchronize_one_slot().\n\nRight, and I think we should put an extra condition if in recovery.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 08:24:00 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 11:26 AM Amit Kapila <[email protected]> wrote:\n>\n> Review comments on v18_0002 and v18_0005\n> =======================================\n>\n> 1.\n> We have decided to update inactive_since for temporary slots. So,\n> unless there is some reason, we should allow inactive_timeout to also\n> be set for temporary slots.\n\nWFM. A temporary slot that's inactive for a long time before even the\nserver isn't shutdown can utilize this inactive_timeout based\ninvalidation mechanism. And, I'd also vote for we being consistent for\ntemporary and synced slots.\n\n> L.last_inactive_time,\n> + L.inactive_timeout,\n>\n> Shall we keep inactive_timeout before\n> last_inactive_time/inactive_since? 
I don't have any strong reason to\n> propose that way apart from that the former is provided by the user.\n\nDone.\n\n> + if (InvalidateReplicationSlotForInactiveTimeout(slot, false, true, true))\n> + invalidated = true;\n>\n> I don't think we should try to invalidate the slots in\n> pg_get_replication_slots. This function's purpose is to get the\n> current information on slots and has no intention to perform any work\n> for slots. Any error due to invalidation won't be what the user would\n> be expecting here.\n\nAgree. Removed.\n\n> 4.\n> +static bool\n> +InvalidateSlotForInactiveTimeout(ReplicationSlot *slot,\n> + bool need_control_lock,\n> + bool need_mutex)\n> {\n> ...\n> ...\n> + if (need_control_lock)\n> + LWLockAcquire(ReplicationSlotControlLock, LW_SHARED);\n> +\n> + Assert(LWLockHeldByMeInMode(ReplicationSlotControlLock, LW_SHARED));\n> +\n> + /*\n> + * Check if the slot needs to be invalidated due to inactive_timeout. We\n> + * do this with the spinlock held to avoid race conditions -- for example\n> + * the restart_lsn could move forward, or the slot could be dropped.\n> + */\n> + if (need_mutex)\n> + SpinLockAcquire(&slot->mutex);\n> ...\n>\n> I find this combination of parameters a bit strange. Because, say if\n> need_mutex is false and need_control_lock is true then that means this\n> function will acquire LWlock after acquiring spinlock which is\n> unacceptable. Now, this may not happen in practice as the callers\n> won't pass such a combination but still, this functionality should be\n> improved.\n\nRight. Either we need two locks or not. So, changed it to use just one\nbool need_locks, upon set both control lock and spin lock are acquired\nand released.\n\nOn Mon, Mar 25, 2024 at 10:33 AM shveta malik <[email protected]> wrote:\n>\n> patch 002:\n>\n> 2)\n> slotsync.c:\n>\n> ReplicationSlotCreate(remote_slot->name, true, RS_TEMPORARY,\n> remote_slot->two_phase,\n> remote_slot->failover,\n> - true);\n> + true, 0);\n>\n> + slot->data.inactive_timeout = remote_slot->inactive_timeout;\n>\n> Is there a reason we are not passing 'remote_slot->inactive_timeout'\n> to ReplicationSlotCreate() directly?\n\nThe slot there gets created temporarily for which we were not\nsupporting inactive_timeout being set. But, in the latest v22 patch we\nare supporting, so passing the remote_slot->inactive_timeout directly.\n\n> 3)\n> slotfuncs.c\n> pg_create_logical_replication_slot():\n> + int inactive_timeout = PG_GETARG_INT32(5);\n>\n> Can we mention here that timeout is in seconds either in comment or\n> rename variable to inactive_timeout_secs?\n>\n> Please do this for create_physical_replication_slot(),\n> create_logical_replication_slot(),\n> pg_create_physical_replication_slot() as well.\n\nAdded /* in seconds */ next the variable declaration.\n\n> ---------\n> 4)\n> + int inactive_timeout; /* The amount of time in seconds the slot\n> + * is allowed to be inactive. */\n> } LogicalSlotInfo;\n>\n> Do we need to mention \"before getting invalided\" like other places\n> (in last patch)?\n\nDone.\n\n> 5)\n> Same at these two places. \"before getting invalided\" to be added in\n> the last patch otherwise the info is incompleted.\n>\n> +\n> + /* The amount of time in seconds the slot is allowed to be inactive */\n> + int inactive_timeout;\n> } ReplicationSlotPersistentData;\n>\n>\n> + * inactive_timeout: The amount of time in seconds the slot is allowed to be\n> + * inactive.\n> */\n> void\n> ReplicationSlotCreate(const char *name, bool db_specific,\n> Same here. 
\"before getting invalidated\" ?\n\nDone.\n\nOn Tue, Mar 26, 2024 at 12:04 PM shveta malik <[email protected]> wrote:\n>\n> > Please find the attached v21 patch implementing the above idea. It\n> > also has changes for renaming last_inactive_time to inactive_since.\n>\n> Thanks for the patch. I have tested this patch alone, and it does what\n> it says. One additional thing which I noticed is that now it sets\n> inactive_since for temp slots as well, but that idea looks fine to me.\n\nRight. Let's be consistent by treating all slots the same.\n\n> I could not test 'invalidation on promotion bug' with this change, as\n> that needed rebasing of the rest of the patches.\n\nPlease use the v22 patch set.\n\n> Few trivial things:\n>\n> 1)\n> Commti msg:\n>\n> ensures the value is set to current timestamp during the\n> shutdown to help correctly interpret the time if the standby gets\n> promoted without a restart.\n>\n> shutdown --> shutdown of slot sync worker (as it was not clear if it\n> is instance shutdown or something else)\n\nChanged it to \"shutdown of slot sync machinery\" to be consistent with\nthe comments.\n\n> 2)\n> 'The time since the slot has became inactive'.\n>\n> has became-->has become\n> or just became\n>\n> Please check it in all the files. There are multiple places.\n\nFixed.\n\nPlease see the attached v23 patches. I've addressed all the review\ncomments received so far from Amit and Shveta.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 26 Mar 2024 14:27:17 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 2:27 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> >\n> > 1)\n> > Commti msg:\n> >\n> > ensures the value is set to current timestamp during the\n> > shutdown to help correctly interpret the time if the standby gets\n> > promoted without a restart.\n> >\n> > shutdown --> shutdown of slot sync worker (as it was not clear if it\n> > is instance shutdown or something else)\n>\n> Changed it to \"shutdown of slot sync machinery\" to be consistent with\n> the comments.\n\nThanks for addressing the comments. Just to give more clarity here (so\nthat you take a informed decision), I am not sure if we actually shut\ndown slot-sync machinery. We only shot down slot sync worker.\nSlot-sync machinery can still be used using\n'pg_sync_replication_slots' SQL function. 
I can easily reproduce the\nscenario where SQL function and reset_synced_slots_info() are going\nin parallel where the latter hits 'Assert(s->active_pid == 0)' due to\nthe fact that parallel SQL sync function is active on that slot.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 26 Mar 2024 14:52:11 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 26, 2024 at 02:27:17PM +0530, Bharath Rupireddy wrote:\n> Please use the v22 patch set.\n\nThanks!\n\n1 ===\n\n+reset_synced_slots_info(void)\n\nI'm not sure \"reset\" is the right word, what about slot_sync_shutdown_update()?\n\n2 ===\n\n+ for (int i = 0; i < max_replication_slots; i++)\n+ {\n+ ReplicationSlot *s = &ReplicationSlotCtl->replication_slots[i];\n+\n+ /* Check if it is a synchronized slot */\n+ if (s->in_use && s->data.synced)\n+ {\n+ TimestampTz now;\n+\n+ Assert(SlotIsLogical(s));\n+ Assert(s->active_pid == 0);\n+\n+ /*\n+ * Set the time since the slot has become inactive after shutting\n+ * down slot sync machinery. This helps correctly interpret the\n+ * time if the standby gets promoted without a restart. We get the\n+ * current time beforehand to avoid a system call while holding\n+ * the lock.\n+ */\n+ now = GetCurrentTimestamp();\n\nWhat about moving \"now = GetCurrentTimestamp()\" outside of the for loop? (it\nwould be less costly and probably good enough).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 09:42:40 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 1:54 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, Mar 26, 2024 at 01:37:21PM +0530, Amit Kapila wrote:\n> > On Tue, Mar 26, 2024 at 1:15 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > 2 ===\n> > >\n> > > It looks like inactive_since is set to the current timestamp on the standby\n> > > each time the sync worker does a cycle:\n> > >\n> > > primary:\n> > >\n> > > postgres=# select slot_name,inactive_since from pg_replication_slots where failover = 't';\n> > > slot_name | inactive_since\n> > > -------------+-------------------------------\n> > > lsub27_slot | 2024-03-26 07:39:19.745517+00\n> > > lsub28_slot | 2024-03-26 07:40:24.953826+00\n> > >\n> > > standby:\n> > >\n> > > postgres=# select slot_name,inactive_since from pg_replication_slots where failover = 't';\n> > > slot_name | inactive_since\n> > > -------------+-------------------------------\n> > > lsub27_slot | 2024-03-26 07:43:56.387324+00\n> > > lsub28_slot | 2024-03-26 07:43:56.387338+00\n> > >\n> > > I don't think that should be the case.\n> > >\n> >\n> > But why? This is exactly what we discussed in another thread where we\n> > agreed to update inactive_since even for sync slots.\n>\n> Hum, I thought we agreed to \"sync\" it and to \"update it to current time\"\n> only at promotion time.\n\nI think there may have been some misunderstanding here. But now if I\nrethink this, I am fine with 'inactive_since' getting synced from\nprimary to standby. But if we do that, we need to add docs stating\n\"inactive_since\" represents primary's inactivity and not standby's\nslots inactivity for synced slots. 
The reason for this clarification\nis that the synced slot might be generated much later, yet\n'inactive_since' is synced from the primary, potentially indicating a\ntime considerably earlier than when the synced slot was actually\ncreated.\n\nAnother approach could be that \"inactive_since\" for synced slot\nactually gives its own inactivity data rather than giving primary's\nslot data. We update inactive_since on standby only at 3 occasions:\n1) at the time of creation of the synced slot.\n2) during standby restart.\n3) during promotion of standby.\n\nI have attached a sample patch for this idea as.txt file.\n\nI am fine with any of these approaches. One gives data synced from\nprimary for synced slots, while another gives actual inactivity data\nof synced slots.\n\nthanks\nShveta", "msg_date": "Tue, 26 Mar 2024 15:17:36 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 26, 2024 at 03:17:36PM +0530, shveta malik wrote:\n> On Tue, Mar 26, 2024 at 1:54 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On Tue, Mar 26, 2024 at 01:37:21PM +0530, Amit Kapila wrote:\n> > > On Tue, Mar 26, 2024 at 1:15 PM Bertrand Drouvot\n> > > <[email protected]> wrote:\n> > > >\n> > > > 2 ===\n> > > >\n> > > > It looks like inactive_since is set to the current timestamp on the standby\n> > > > each time the sync worker does a cycle:\n> > > >\n> > > > primary:\n> > > >\n> > > > postgres=# select slot_name,inactive_since from pg_replication_slots where failover = 't';\n> > > > slot_name | inactive_since\n> > > > -------------+-------------------------------\n> > > > lsub27_slot | 2024-03-26 07:39:19.745517+00\n> > > > lsub28_slot | 2024-03-26 07:40:24.953826+00\n> > > >\n> > > > standby:\n> > > >\n> > > > postgres=# select slot_name,inactive_since from pg_replication_slots where failover = 't';\n> > > > slot_name | inactive_since\n> > > > -------------+-------------------------------\n> > > > lsub27_slot | 2024-03-26 07:43:56.387324+00\n> > > > lsub28_slot | 2024-03-26 07:43:56.387338+00\n> > > >\n> > > > I don't think that should be the case.\n> > > >\n> > >\n> > > But why? This is exactly what we discussed in another thread where we\n> > > agreed to update inactive_since even for sync slots.\n> >\n> > Hum, I thought we agreed to \"sync\" it and to \"update it to current time\"\n> > only at promotion time.\n> \n> I think there may have been some misunderstanding here.\n\nIndeed ;-)\n\n> But now if I\n> rethink this, I am fine with 'inactive_since' getting synced from\n> primary to standby. But if we do that, we need to add docs stating\n> \"inactive_since\" represents primary's inactivity and not standby's\n> slots inactivity for synced slots.\n\nYeah sure.\n\n> The reason for this clarification\n> is that the synced slot might be generated much later, yet\n> 'inactive_since' is synced from the primary, potentially indicating a\n> time considerably earlier than when the synced slot was actually\n> created.\n\nRight.\n\n> Another approach could be that \"inactive_since\" for synced slot\n> actually gives its own inactivity data rather than giving primary's\n> slot data. 
We update inactive_since on standby only at 3 occasions:\n> 1) at the time of creation of the synced slot.\n> 2) during standby restart.\n> 3) during promotion of standby.\n> \n> I have attached a sample patch for this idea as.txt file.\n\nThanks!\n\n> I am fine with any of these approaches. One gives data synced from\n> primary for synced slots, while another gives actual inactivity data\n> of synced slots.\n\nWhat about another approach?: inactive_since gives data synced from primary for\nsynced slots and another dedicated field (could be added later...) could\nrepresent what you suggest as the other option.\n\nAnother cons of updating inactive_since at the current time during each slot\nsync cycle is that calling GetCurrentTimestamp() very frequently\n(during each sync cycle of very active slots) could be too costly.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 10:20:50 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 3:12 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Tue, Mar 26, 2024 at 02:27:17PM +0530, Bharath Rupireddy wrote:\n> > Please use the v22 patch set.\n>\n> Thanks!\n>\n> 1 ===\n>\n> +reset_synced_slots_info(void)\n>\n> I'm not sure \"reset\" is the right word, what about slot_sync_shutdown_update()?\n>\n\n*shutdown_update() sounds generic. How about\nupdate_synced_slots_inactive_time()? I think it is a bit longer but\nconveys the meaning.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 26 Mar 2024 15:51:02 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 7:57 PM Bharath Rupireddy <\[email protected]> wrote:\n\n> Please see the attached v23 patches. I've addressed all the review\n> comments received so far from Amit and Shveta.\n>\n>\nIn patch 0003:\n+ SpinLockAcquire(&slot->mutex);\n+ }\n+\n+ Assert(LWLockHeldByMeInMode(ReplicationSlotControlLock, LW_SHARED));\n+\n+ if (slot->inactive_since > 0 &&\n+ slot->data.inactive_timeout > 0)\n+ {\n+ TimestampTz now;\n+\n+ /* inactive_since is only tracked for inactive slots */\n+ Assert(slot->active_pid == 0);\n+\n+ now = GetCurrentTimestamp();\n+ if (TimestampDifferenceExceeds(slot->inactive_since, now,\n+ slot->data.inactive_timeout * 1000))\n+ inavidation_cause = RS_INVAL_INACTIVE_TIMEOUT;\n+ }\n+\n+ if (need_locks)\n+ {\n+ SpinLockRelease(&slot->mutex);\n\nHere, GetCurrentTimestamp() is still called with SpinLock held. Maybe do\nthis prior to acquiring the spinlock.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Tue, Mar 26, 2024 at 7:57 PM Bharath Rupireddy <[email protected]> wrote:\nPlease see the attached v23 patches. 
I've addressed all the review\ncomments received so far from Amit and Shveta.\nIn patch 0003:+\t\tSpinLockAcquire(&slot->mutex);+\t}++\tAssert(LWLockHeldByMeInMode(ReplicationSlotControlLock, LW_SHARED));++\tif (slot->inactive_since > 0 &&+\t\tslot->data.inactive_timeout > 0)+\t{+\t\tTimestampTz now;++\t\t/* inactive_since is only tracked for inactive slots */+\t\tAssert(slot->active_pid == 0);++\t\tnow = GetCurrentTimestamp();+\t\tif (TimestampDifferenceExceeds(slot->inactive_since, now,+\t\t\t\t\t\t\t\t\t   slot->data.inactive_timeout * 1000))+\t\t\tinavidation_cause = RS_INVAL_INACTIVE_TIMEOUT;+\t}++\tif (need_locks)+\t{+\t\tSpinLockRelease(&slot->mutex);Here, \nGetCurrentTimestamp() is still called with SpinLock held. Maybe do this prior to acquiring the spinlock.regards,Ajin CherianFujitsu Australia", "msg_date": "Tue, 26 Mar 2024 21:22:25 +1100", "msg_from": "Ajin Cherian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 3:50 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> > I think there may have been some misunderstanding here.\n>\n> Indeed ;-)\n>\n> > But now if I\n> > rethink this, I am fine with 'inactive_since' getting synced from\n> > primary to standby. But if we do that, we need to add docs stating\n> > \"inactive_since\" represents primary's inactivity and not standby's\n> > slots inactivity for synced slots.\n>\n> Yeah sure.\n>\n> > The reason for this clarification\n> > is that the synced slot might be generated much later, yet\n> > 'inactive_since' is synced from the primary, potentially indicating a\n> > time considerably earlier than when the synced slot was actually\n> > created.\n>\n> Right.\n>\n> > Another approach could be that \"inactive_since\" for synced slot\n> > actually gives its own inactivity data rather than giving primary's\n> > slot data. We update inactive_since on standby only at 3 occasions:\n> > 1) at the time of creation of the synced slot.\n> > 2) during standby restart.\n> > 3) during promotion of standby.\n> >\n> > I have attached a sample patch for this idea as.txt file.\n>\n> Thanks!\n>\n> > I am fine with any of these approaches. One gives data synced from\n> > primary for synced slots, while another gives actual inactivity data\n> > of synced slots.\n>\n> What about another approach?: inactive_since gives data synced from primary for\n> synced slots and another dedicated field (could be added later...) could\n> represent what you suggest as the other option.\n\nYes, okay with me. I think there is some confusion here as well. In my\nsecond approach above, I have not suggested anything related to\nsync-worker. We can think on that later if we really need another\nfield which give us sync time. In my second approach, I have tried to\navoid updating inactive_since for synced slots during sync process. We\nupdate that field during creation of synced slot so that\ninactive_since reflects correct info even for synced slots (rather\nthan copying from primary). Please have a look at my patch and let me\nknow your thoughts. 
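\n\nIn code, the idea is roughly as below (just a sketch of the intent; the\n
attached .txt patch is the actual change):\n\n
    /* 1) synchronize_one_slot(): when the synced slot is first created */\n
    TimestampTz now = GetCurrentTimestamp();\n\n
    SpinLockAcquire(&slot->mutex);\n
    ...\n
    slot->inactive_since = now;\n
    SpinLockRelease(&slot->mutex);\n\n
    /* 2) RestoreSlotFromDisk(): set it as usual during standby restart */\n
    /* 3) promotion: reset it to the current time for synced slots */\n\n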
I am fine with copying it from primary as well and\ndocumenting this behaviour.\n\n> Another cons of updating inactive_since at the current time during each slot\n> sync cycle is that calling GetCurrentTimestamp() very frequently\n> (during each sync cycle of very active slots) could be too costly.\n\nRight.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 26 Mar 2024 16:17:53 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 4:18 PM shveta malik <[email protected]> wrote:\n>\n> > What about another approach?: inactive_since gives data synced from primary for\n> > synced slots and another dedicated field (could be added later...) could\n> > represent what you suggest as the other option.\n>\n> Yes, okay with me. I think there is some confusion here as well. In my\n> second approach above, I have not suggested anything related to\n> sync-worker. We can think on that later if we really need another\n> field which give us sync time. In my second approach, I have tried to\n> avoid updating inactive_since for synced slots during sync process. We\n> update that field during creation of synced slot so that\n> inactive_since reflects correct info even for synced slots (rather\n> than copying from primary). Please have a look at my patch and let me\n> know your thoughts. I am fine with copying it from primary as well and\n> documenting this behaviour.\n\nI took a look at your patch.\n\n--- a/src/backend/replication/logical/slotsync.c\n+++ b/src/backend/replication/logical/slotsync.c\n@@ -628,6 +628,7 @@ synchronize_one_slot(RemoteSlot *remote_slot, Oid\nremote_dbid)\n SpinLockAcquire(&slot->mutex);\n slot->effective_catalog_xmin = xmin_horizon;\n slot->data.catalog_xmin = xmin_horizon;\n+ slot->inactive_since = GetCurrentTimestamp();\n SpinLockRelease(&slot->mutex);\n\nIf we just sync inactive_since value for synced slots while in\nrecovery from the primary, so be it. Why do we need to update it to\nthe current time when the slot is being created? We don't expose slot\ncreation time, no? Aren't we fine if we just sync the value from\nprimary and document that fact? After the promotion, we can reset it\nto the current time so that it gets its own time. Do you see any\nissues with it?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 16:35:16 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 4:35 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Tue, Mar 26, 2024 at 4:18 PM shveta malik <[email protected]> wrote:\n> >\n> > > What about another approach?: inactive_since gives data synced from primary for\n> > > synced slots and another dedicated field (could be added later...) could\n> > > represent what you suggest as the other option.\n> >\n> > Yes, okay with me. I think there is some confusion here as well. In my\n> > second approach above, I have not suggested anything related to\n> > sync-worker. We can think on that later if we really need another\n> > field which give us sync time. In my second approach, I have tried to\n> > avoid updating inactive_since for synced slots during sync process. 
We\n> > update that field during creation of synced slot so that\n> > inactive_since reflects correct info even for synced slots (rather\n> > than copying from primary). Please have a look at my patch and let me\n> > know your thoughts. I am fine with copying it from primary as well and\n> > documenting this behaviour.\n>\n> I took a look at your patch.\n>\n> --- a/src/backend/replication/logical/slotsync.c\n> +++ b/src/backend/replication/logical/slotsync.c\n> @@ -628,6 +628,7 @@ synchronize_one_slot(RemoteSlot *remote_slot, Oid\n> remote_dbid)\n> SpinLockAcquire(&slot->mutex);\n> slot->effective_catalog_xmin = xmin_horizon;\n> slot->data.catalog_xmin = xmin_horizon;\n> + slot->inactive_since = GetCurrentTimestamp();\n> SpinLockRelease(&slot->mutex);\n>\n> If we just sync inactive_since value for synced slots while in\n> recovery from the primary, so be it. Why do we need to update it to\n> the current time when the slot is being created?\n\nIf we update inactive_since at synced slot's creation or during\nrestart (skipping setting it during sync), then this time reflects\nactual 'inactive_since' for that particular synced slot. Isn't that a\nclear info for the user and in alignment of what the name\n'inactive_since' actually suggests?\n\n> We don't expose slot\n> creation time, no?\n\nNo, we don't. But for synced slot, that is the time since that slot is\ninactive (unless promoted), so we are exposing inactive_since and not\ncreation time.\n\n>Aren't we fine if we just sync the value from\n> primary and document that fact? After the promotion, we can reset it\n> to the current time so that it gets its own time. Do you see any\n> issues with it?\n\nYes, we can do that. But curious to know, do we see any additional\nbenefit of reflecting primary's inactive_since at standby which I\nmight be missing?\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 26 Mar 2024 16:49:18 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 26, 2024 at 04:49:18PM +0530, shveta malik wrote:\n> On Tue, Mar 26, 2024 at 4:35 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Tue, Mar 26, 2024 at 4:18 PM shveta malik <[email protected]> wrote:\n> > >\n> > > > What about another approach?: inactive_since gives data synced from primary for\n> > > > synced slots and another dedicated field (could be added later...) could\n> > > > represent what you suggest as the other option.\n> > >\n> > > Yes, okay with me. I think there is some confusion here as well. In my\n> > > second approach above, I have not suggested anything related to\n> > > sync-worker. We can think on that later if we really need another\n> > > field which give us sync time. In my second approach, I have tried to\n> > > avoid updating inactive_since for synced slots during sync process. We\n> > > update that field during creation of synced slot so that\n> > > inactive_since reflects correct info even for synced slots (rather\n> > > than copying from primary). Please have a look at my patch and let me\n> > > know your thoughts. 
I am fine with copying it from primary as well and\n> > > documenting this behaviour.\n> >\n> > I took a look at your patch.\n> >\n> > --- a/src/backend/replication/logical/slotsync.c\n> > +++ b/src/backend/replication/logical/slotsync.c\n> > @@ -628,6 +628,7 @@ synchronize_one_slot(RemoteSlot *remote_slot, Oid\n> > remote_dbid)\n> > SpinLockAcquire(&slot->mutex);\n> > slot->effective_catalog_xmin = xmin_horizon;\n> > slot->data.catalog_xmin = xmin_horizon;\n> > + slot->inactive_since = GetCurrentTimestamp();\n> > SpinLockRelease(&slot->mutex);\n> >\n> > If we just sync inactive_since value for synced slots while in\n> > recovery from the primary, so be it. Why do we need to update it to\n> > the current time when the slot is being created?\n> \n> If we update inactive_since at synced slot's creation or during\n> restart (skipping setting it during sync), then this time reflects\n> actual 'inactive_since' for that particular synced slot. Isn't that a\n> clear info for the user and in alignment of what the name\n> 'inactive_since' actually suggests?\n> \n> > We don't expose slot\n> > creation time, no?\n> \n> No, we don't. But for synced slot, that is the time since that slot is\n> inactive (unless promoted), so we are exposing inactive_since and not\n> creation time.\n> \n> >Aren't we fine if we just sync the value from\n> > primary and document that fact? After the promotion, we can reset it\n> > to the current time so that it gets its own time. Do you see any\n> > issues with it?\n> \n> Yes, we can do that. But curious to know, do we see any additional\n> benefit of reflecting primary's inactive_since at standby which I\n> might be missing?\n\nIn case the primary goes down, then one could use the value on the standby\nto get the value coming from the primary. I think that could be useful info to\nhave.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 12:31:08 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 26, 2024 at 04:17:53PM +0530, shveta malik wrote:\n> On Tue, Mar 26, 2024 at 3:50 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > > I think there may have been some misunderstanding here.\n> >\n> > Indeed ;-)\n> >\n> > > But now if I\n> > > rethink this, I am fine with 'inactive_since' getting synced from\n> > > primary to standby. But if we do that, we need to add docs stating\n> > > \"inactive_since\" represents primary's inactivity and not standby's\n> > > slots inactivity for synced slots.\n> >\n> > Yeah sure.\n> >\n> > > The reason for this clarification\n> > > is that the synced slot might be generated much later, yet\n> > > 'inactive_since' is synced from the primary, potentially indicating a\n> > > time considerably earlier than when the synced slot was actually\n> > > created.\n> >\n> > Right.\n> >\n> > > Another approach could be that \"inactive_since\" for synced slot\n> > > actually gives its own inactivity data rather than giving primary's\n> > > slot data. 
We update inactive_since on standby only at 3 occasions:\n> > > 1) at the time of creation of the synced slot.\n> > > 2) during standby restart.\n> > > 3) during promotion of standby.\n> > >\n> > > I have attached a sample patch for this idea as.txt file.\n> >\n> > Thanks!\n> >\n> > > I am fine with any of these approaches. One gives data synced from\n> > > primary for synced slots, while another gives actual inactivity data\n> > > of synced slots.\n> >\n> > What about another approach?: inactive_since gives data synced from primary for\n> > synced slots and another dedicated field (could be added later...) could\n> > represent what you suggest as the other option.\n> \n> Yes, okay with me. I think there is some confusion here as well. In my\n> second approach above, I have not suggested anything related to\n> sync-worker.\n\nYeah, no confusion, understood that way.\n\n> We can think on that later if we really need another\n> field which give us sync time.\n\nI think that calling GetCurrentTimestamp() so frequently could be too costly, so\nI'm not sure we should.\n\n> In my second approach, I have tried to\n> avoid updating inactive_since for synced slots during sync process. We\n> update that field during creation of synced slot so that\n> inactive_since reflects correct info even for synced slots (rather\n> than copying from primary). \n\nYeah, and I think we could create a dedicated field with this information\nif we feel the need.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 12:35:04 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 4:35 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> If we just sync inactive_since value for synced slots while in\n> recovery from the primary, so be it. Why do we need to update it to\n> the current time when the slot is being created? We don't expose slot\n> creation time, no? Aren't we fine if we just sync the value from\n> primary and document that fact? After the promotion, we can reset it\n> to the current time so that it gets its own time.\n\nI'm attaching v24 patches. It implements the above idea proposed\nupthread for synced slots. I've now separated\ns/last_inactive_time/inactive_since and synced slots behaviour. Please\nhave a look.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 26 Mar 2024 21:59:23 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 26, 2024 at 09:59:23PM +0530, Bharath Rupireddy wrote:\n> On Tue, Mar 26, 2024 at 4:35 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > If we just sync inactive_since value for synced slots while in\n> > recovery from the primary, so be it. Why do we need to update it to\n> > the current time when the slot is being created? We don't expose slot\n> > creation time, no? Aren't we fine if we just sync the value from\n> > primary and document that fact? After the promotion, we can reset it\n> > to the current time so that it gets its own time.\n> \n> I'm attaching v24 patches. 
It implements the above idea proposed\n> upthread for synced slots. I've now separated\n> s/last_inactive_time/inactive_since and synced slots behaviour. Please\n> have a look.\n\nThanks!\n\n==== v24-0001\n\nIt's now pure mechanical changes and it looks good to me.\n\n==== v24-0002\n\n1 ===\n\n This commit does two things:\n 1) Updates inactive_since for sync slots with the value\n received from the primary's slot.\n\nTested it and it does that.\n\n2 ===\n\n 2) Ensures the value is set to current timestamp during the\n shutdown of slot sync machinery to help correctly interpret the\n time if the standby gets promoted without a restart.\n\nTested it and it does that.\n\n3 ===\n\n+/*\n+ * Reset the synced slots info such as inactive_since after shutting\n+ * down the slot sync machinery.\n+ */\n+static void\n+update_synced_slots_inactive_time(void)\n\nLooks like the comment \"reset\" is not matching the name of the function and\nwhat it does.\n\n4 ===\n\n+ /*\n+ * We get the current time beforehand and only once to avoid\n+ * system calls overhead while holding the lock.\n+ */\n+ if (now == 0)\n+ now = GetCurrentTimestamp();\n\nAlso +1 of having GetCurrentTimestamp() just called one time within the loop.\n\n5 ===\n\n- if (!(RecoveryInProgress() && slot->data.synced))\n+ if (!(InRecovery && slot->data.synced))\n slot->inactive_since = GetCurrentTimestamp();\n else\n slot->inactive_since = 0;\n\nNot related to this change but more the way RestoreSlotFromDisk() behaves here:\n\nFor a sync slot on standby it will be set to zero and then later will be\nsynchronized with the one coming from the primary. I think that's fine to have\nit to zero for this window of time.\n\nNow, if the standby is down and one sets sync_replication_slots to off,\nthen inactive_since will be set to zero on the standby at startup and not \nsynchronized (unless one triggers a manual sync). I also think that's fine but\nit might be worth to document this behavior (that after a standby startup\ninactive_since is zero until the next sync...). \n\n6 ===\n\n+ print \"HI $slot_name $name $inactive_since $slot_creation_time\\n\";\n\ngarbage?\n\n7 ===\n\n+# Capture and validate inactive_since of a given slot.\n+sub capture_and_validate_slot_inactive_since\n+{\n+ my ($node, $slot_name, $slot_creation_time) = @_;\n+ my $name = $node->name;\n\nWe know have capture_and_validate_slot_inactive_since at 2 places:\n040_standby_failover_slots_sync.pl and 019_replslot_limit.pl.\n\nWorth to create a sub in Cluster.pm?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 26 Mar 2024 17:52:12 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 9:59 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Tue, Mar 26, 2024 at 4:35 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > If we just sync inactive_since value for synced slots while in\n> > recovery from the primary, so be it. Why do we need to update it to\n> > the current time when the slot is being created? We don't expose slot\n> > creation time, no? Aren't we fine if we just sync the value from\n> > primary and document that fact? After the promotion, we can reset it\n> > to the current time so that it gets its own time.\n>\n> I'm attaching v24 patches. 
It implements the above idea proposed\n> upthread for synced slots. I've now separated\n> s/last_inactive_time/inactive_since and synced slots behaviour. Please\n> have a look.\n\nThanks for the patches. Few trivial comments for v24-002:\n\n1)\nslot.c:\n+ * data from the remote slot. We use InRecovery flag instead of\n+ * RecoveryInProgress() as it always returns true even for normal\n+ * server startup.\n\na) Not clear what 'it' refers to. Better to use 'the latter'\nb) Is it better to mention the primary here:\n 'as the latter always returns true even on the primary server during startup'.\n\n\n2)\nupdate_local_synced_slot():\n\n- strcmp(remote_slot->plugin, NameStr(slot->data.plugin)) == 0)\n+ strcmp(remote_slot->plugin, NameStr(slot->data.plugin)) == 0 &&\n+ remote_slot->inactive_since == slot->inactive_since)\n\nWhen this code was written initially, the intent was to do strcmp at\nthe end (only if absolutely needed). It will be good if we maintain\nthe same and add new checks before strcmp.\n\n3)\nupdate_synced_slots_inactive_time():\n\nThis assert is removed, is it intentional?\nAssert(s->active_pid == 0);\n\n\n4)\n040_standby_failover_slots_sync.pl:\n\n+# Capture the inactive_since of the slot from the standby the logical failover\n+# slots are synced/created on the standby.\n\nThe comment is unclear, something seems missing.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 27 Mar 2024 09:01:50 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 11:22 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> > I'm attaching v24 patches. It implements the above idea proposed\n> > upthread for synced slots.\n>\n> ==== v24-0002\n>\n> 1 ===\n>\n> This commit does two things:\n> 1) Updates inactive_since for sync slots with the value\n> received from the primary's slot.\n>\n> Tested it and it does that.\n\nThanks. I've added a test case for this.\n\n> 2 ===\n>\n> 2) Ensures the value is set to current timestamp during the\n> shutdown of slot sync machinery to help correctly interpret the\n> time if the standby gets promoted without a restart.\n>\n> Tested it and it does that.\n\nThanks. I've added a test case for this.\n\n> 3 ===\n>\n> +/*\n> + * Reset the synced slots info such as inactive_since after shutting\n> + * down the slot sync machinery.\n> + */\n> +static void\n> +update_synced_slots_inactive_time(void)\n>\n> Looks like the comment \"reset\" is not matching the name of the function and\n> what it does.\n\nChanged. I've also changed the function name to\nupdate_synced_slots_inactive_since to be precise on what it exactly\ndoes.\n\n> 4 ===\n>\n> + /*\n> + * We get the current time beforehand and only once to avoid\n> + * system calls overhead while holding the lock.\n> + */\n> + if (now == 0)\n> + now = GetCurrentTimestamp();\n>\n> Also +1 of having GetCurrentTimestamp() just called one time within the loop.\n\nRight.\n\n> 5 ===\n>\n> - if (!(RecoveryInProgress() && slot->data.synced))\n> + if (!(InRecovery && slot->data.synced))\n> slot->inactive_since = GetCurrentTimestamp();\n> else\n> slot->inactive_since = 0;\n>\n> Not related to this change but more the way RestoreSlotFromDisk() behaves here:\n>\n> For a sync slot on standby it will be set to zero and then later will be\n> synchronized with the one coming from the primary. 
I think that's fine to have\n> it to zero for this window of time.\n\nRight.\n\n> Now, if the standby is down and one sets sync_replication_slots to off,\n> then inactive_since will be set to zero on the standby at startup and not\n> synchronized (unless one triggers a manual sync). I also think that's fine but\n> it might be worth to document this behavior (that after a standby startup\n> inactive_since is zero until the next sync...).\n\nIsn't this behaviour applicable for other slot parameters that the\nslot syncs from the remote slot on the primary?\n\nI've added the following note in the comments when we update\ninactive_since in RestoreSlotFromDisk.\n\n * Note that for synced slots after the standby starts up (i.e. after\n * the slots are loaded from the disk), the inactive_since will remain\n * zero until the next slot sync cycle.\n */\n if (!(InRecovery && slot->data.synced))\n slot->inactive_since = GetCurrentTimestamp();\n else\n slot->inactive_since = 0;\n\n> 6 ===\n>\n> + print \"HI $slot_name $name $inactive_since $slot_creation_time\\n\";\n>\n> garbage?\n\nRemoved.\n\n> 7 ===\n>\n> +# Capture and validate inactive_since of a given slot.\n> +sub capture_and_validate_slot_inactive_since\n> +{\n> + my ($node, $slot_name, $slot_creation_time) = @_;\n> + my $name = $node->name;\n>\n> We know have capture_and_validate_slot_inactive_since at 2 places:\n> 040_standby_failover_slots_sync.pl and 019_replslot_limit.pl.\n>\n> Worth to create a sub in Cluster.pm?\n\nI'd second that thought for now. We might have to debate first if it's\nuseful for all the nodes even without replication, and if yes, the\nnaming stuff and all that. Historically, we've had such duplicated\nfunctions until recently, for instance advance_wal and log_contains.\nWe\nmoved them over to a common perl library Cluster.pm very recently. I'm\nsure we can come back later to move it to Cluster.pm.\n\nOn Wed, Mar 27, 2024 at 9:02 AM shveta malik <[email protected]> wrote:\n>\n> 1)\n> slot.c:\n> + * data from the remote slot. We use InRecovery flag instead of\n> + * RecoveryInProgress() as it always returns true even for normal\n> + * server startup.\n>\n> a) Not clear what 'it' refers to. Better to use 'the latter'\n> b) Is it better to mention the primary here:\n> 'as the latter always returns true even on the primary server during startup'.\n\nModified.\n\n> 2)\n> update_local_synced_slot():\n>\n> - strcmp(remote_slot->plugin, NameStr(slot->data.plugin)) == 0)\n> + strcmp(remote_slot->plugin, NameStr(slot->data.plugin)) == 0 &&\n> + remote_slot->inactive_since == slot->inactive_since)\n>\n> When this code was written initially, the intent was to do strcmp at\n> the end (only if absolutely needed). It will be good if we maintain\n> the same and add new checks before strcmp.\n\nDone.\n\n> 3)\n> update_synced_slots_inactive_time():\n>\n> This assert is removed, is it intentional?\n> Assert(s->active_pid == 0);\n\nYes, the slot can get acquired in the corner case when someone runs\npg_sync_replication_slots concurrently at this time. I'm referring to\nthe issue reported upthread. 
We don't prevent one running\npg_sync_replication_slots in promotion/ShutDownSlotSync phase right?\nMaybe we should prevent that otherwise some of the slots are synced\nand the standby gets promoted while others are yet-to-be-synced.\n\n> 4)\n> 040_standby_failover_slots_sync.pl:\n>\n> +# Capture the inactive_since of the slot from the standby the logical failover\n> +# slots are synced/created on the standby.\n>\n> The comment is unclear, something seems missing.\n\nNice catch. Yes, that was wrong. I've modified it now.\n\nPlease find the attached v25-0001 (made this 0001 patch now as\ninactive_since patch is committed) patch with the above changes.\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 27 Mar 2024 10:08:33 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 27, 2024 at 10:08 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Tue, Mar 26, 2024 at 11:22 PM Bertrand Drouvot\n> <[email protected]> wrote:\n>\n> > 3)\n> > update_synced_slots_inactive_time():\n> >\n> > This assert is removed, is it intentional?\n> > Assert(s->active_pid == 0);\n>\n> Yes, the slot can get acquired in the corner case when someone runs\n> pg_sync_replication_slots concurrently at this time. I'm referring to\n> the issue reported upthread. We don't prevent one running\n> pg_sync_replication_slots in promotion/ShutDownSlotSync phase right?\n> Maybe we should prevent that otherwise some of the slots are synced\n> and the standby gets promoted while others are yet-to-be-synced.\n>\n\nWe should do something about it but that shouldn't be done in this\npatch. We can handle it separately and then add such an assert.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 27 Mar 2024 10:22:30 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 27, 2024 at 10:22 AM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Mar 27, 2024 at 10:08 AM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Tue, Mar 26, 2024 at 11:22 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> >\n> > > 3)\n> > > update_synced_slots_inactive_time():\n> > >\n> > > This assert is removed, is it intentional?\n> > > Assert(s->active_pid == 0);\n> >\n> > Yes, the slot can get acquired in the corner case when someone runs\n> > pg_sync_replication_slots concurrently at this time. I'm referring to\n> > the issue reported upthread. We don't prevent one running\n> > pg_sync_replication_slots in promotion/ShutDownSlotSync phase right?\n> > Maybe we should prevent that otherwise some of the slots are synced\n> > and the standby gets promoted while others are yet-to-be-synced.\n> >\n>\n> We should do something about it but that shouldn't be done in this\n> patch. We can handle it separately and then add such an assert.\n\nAgreed. 
Once this patch is concluded, I can fix the slot sync shutdown\nissue and will also add this 'assert' back.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 27 Mar 2024 10:24:32 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 27, 2024 at 10:24 AM shveta malik <[email protected]> wrote:\n>\n> On Wed, Mar 27, 2024 at 10:22 AM Amit Kapila <[email protected]> wrote:\n> >\n> > On Wed, Mar 27, 2024 at 10:08 AM Bharath Rupireddy\n> > <[email protected]> wrote:\n> > >\n> > > On Tue, Mar 26, 2024 at 11:22 PM Bertrand Drouvot\n> > > <[email protected]> wrote:\n> > >\n> > > > 3)\n> > > > update_synced_slots_inactive_time():\n> > > >\n> > > > This assert is removed, is it intentional?\n> > > > Assert(s->active_pid == 0);\n> > >\n> > > Yes, the slot can get acquired in the corner case when someone runs\n> > > pg_sync_replication_slots concurrently at this time. I'm referring to\n> > > the issue reported upthread. We don't prevent one running\n> > > pg_sync_replication_slots in promotion/ShutDownSlotSync phase right?\n> > > Maybe we should prevent that otherwise some of the slots are synced\n> > > and the standby gets promoted while others are yet-to-be-synced.\n> > >\n> >\n> > We should do something about it but that shouldn't be done in this\n> > patch. We can handle it separately and then add such an assert.\n>\n> Agreed. Once this patch is concluded, I can fix the slot sync shutdown\n> issue and will also add this 'assert' back.\n\nAgreed. Thanks.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 27 Mar 2024 10:26:01 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Mar 26, 2024 at 6:05 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n>\n> > We can think on that later if we really need another\n> > field which give us sync time.\n>\n> I think that calling GetCurrentTimestamp() so frequently could be too costly, so\n> I'm not sure we should.\n\nAgreed.\n\n> > In my second approach, I have tried to\n> > avoid updating inactive_since for synced slots during sync process. We\n> > update that field during creation of synced slot so that\n> > inactive_since reflects correct info even for synced slots (rather\n> > than copying from primary).\n>\n> Yeah, and I think we could create a dedicated field with this information\n> if we feel the need.\n\nOkay.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 27 Mar 2024 10:55:29 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 27, 2024 at 10:08 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Please find the attached v25-0001 (made this 0001 patch now as\n> inactive_since patch is committed) patch with the above changes.\n\nFixed an issue in synchronize_slots where DatumGetLSN is being used in\nplace of DatumGetTimestampTz. 
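\n\nFor clarity, the corrected fetch of that column in synchronize_slots() looks\n
roughly like this (illustrative only, see the patch for the exact hunk):\n\n
    d = slot_getattr(tupslot, ++col, &isnull);\n
    remote_slot->inactive_since = isnull ? 0 : DatumGetTimestampTz(d);\n\n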
Found this via CF bot member [1], not on\nmy dev system.\n\nPlease find the attached v6 patch.\n\n\n[1]\n[05:14:39.281] #7 DatumGetLSN (X=<optimized out>) at\n../src/include/utils/pg_lsn.h:24\n[05:14:39.281] No locals.\n[05:14:39.281] #8 synchronize_slots (wrconn=wrconn@entry=0x583cd170)\nat ../src/backend/replication/logical/slotsync.c:757\n[05:14:39.281] isnull = false\n[05:14:39.281] remote_slot = 0x583ce1a8\n[05:14:39.281] d = <optimized out>\n[05:14:39.281] col = 10\n[05:14:39.281] slotRow = {25, 25, 3220, 3220, 28, 16, 16, 25, 25, 1184}\n[05:14:39.281] res = 0x583cd1b8\n[05:14:39.281] tupslot = 0x583ce11c\n[05:14:39.281] remote_slot_list = 0x0\n[05:14:39.281] some_slot_updated = false\n[05:14:39.281] started_tx = false\n[05:14:39.281] query = 0x57692bc4 \"SELECT slot_name, plugin,\nconfirmed_flush_lsn, restart_lsn, catalog_xmin, two_phase, failover,\ndatabase, invalidation_reason, inactive_since FROM\npg_catalog.pg_replication_slots WHERE failover and NOT\"...\n[05:14:39.281] __func__ = \"synchronize_slots\"\n[05:14:39.281] #9 0x56ff9d1e in SyncReplicationSlots\n(wrconn=0x583cd170) at\n../src/backend/replication/logical/slotsync.c:1504\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 27 Mar 2024 11:05:04 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 27, 2024 at 10:08:33AM +0530, Bharath Rupireddy wrote:\n> On Tue, Mar 26, 2024 at 11:22 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > - if (!(RecoveryInProgress() && slot->data.synced))\n> > + if (!(InRecovery && slot->data.synced))\n> > slot->inactive_since = GetCurrentTimestamp();\n> > else\n> > slot->inactive_since = 0;\n> >\n> > Not related to this change but more the way RestoreSlotFromDisk() behaves here:\n> >\n> > For a sync slot on standby it will be set to zero and then later will be\n> > synchronized with the one coming from the primary. I think that's fine to have\n> > it to zero for this window of time.\n> \n> Right.\n> \n> > Now, if the standby is down and one sets sync_replication_slots to off,\n> > then inactive_since will be set to zero on the standby at startup and not\n> > synchronized (unless one triggers a manual sync). I also think that's fine but\n> > it might be worth to document this behavior (that after a standby startup\n> > inactive_since is zero until the next sync...).\n> \n> Isn't this behaviour applicable for other slot parameters that the\n> slot syncs from the remote slot on the primary?\n\nNo they are persisted on disk. If not, we'd not know where to resume the decoding\nfrom on the standby in case primary is down and/or sync is off.\n\n> I've added the following note in the comments when we update\n> inactive_since in RestoreSlotFromDisk.\n> \n> * Note that for synced slots after the standby starts up (i.e. 
after\n> * the slots are loaded from the disk), the inactive_since will remain\n> * zero until the next slot sync cycle.\n> */\n> if (!(InRecovery && slot->data.synced))\n> slot->inactive_since = GetCurrentTimestamp();\n> else\n> slot->inactive_since = 0;\n\nI think we should add some words in the doc too and also about what the meaning\nof inactive_since on the standby is (as suggested by Shveta in [1]).\n\n[1]: https://www.postgresql.org/message-id/CAJpy0uDkTW%2Bt1k3oPkaipFBzZePfFNB5DmiA%3D%3DpxRGcAdpF%3DPg%40mail.gmail.com\n\n> > 7 ===\n> >\n> > +# Capture and validate inactive_since of a given slot.\n> > +sub capture_and_validate_slot_inactive_since\n> > +{\n> > + my ($node, $slot_name, $slot_creation_time) = @_;\n> > + my $name = $node->name;\n> >\n> > We know have capture_and_validate_slot_inactive_since at 2 places:\n> > 040_standby_failover_slots_sync.pl and 019_replslot_limit.pl.\n> >\n> > Worth to create a sub in Cluster.pm?\n> \n> I'd second that thought for now. We might have to debate first if it's\n> useful for all the nodes even without replication, and if yes, the\n> naming stuff and all that. Historically, we've had such duplicated\n> functions until recently, for instance advance_wal and log_contains.\n> We\n> moved them over to a common perl library Cluster.pm very recently. I'm\n> sure we can come back later to move it to Cluster.pm.\n\nI thought that would be the right time not to introduce duplicated code.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 27 Mar 2024 05:48:46 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 27, 2024 at 11:05 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Fixed an issue in synchronize_slots where DatumGetLSN is being used in\n> place of DatumGetTimestampTz. Found this via CF bot member [1], not on\n> my dev system.\n>\n> Please find the attached v6 patch.\n\nThanks for the patch. Few trivial things:\n\n----------\n1)\nsystem-views.sgml:\n\na) \"Note that the slots\" --> \"Note that the slots on the standbys,\"\n--it is good to mention \"standbys\" as synced could be true on primary\nas well (promoted standby)\n\nb) If you plan to add more info which Bertrand suggested, then it will\nbe better to make a <note> section instead of using \"Note\"\n\n2)\ncommit msg:\n\n\"The impact of this\non a promoted standby inactive_since is always NULL for all\nsynced slots even after server restart.\n\"\nSentence looks broken.\n---------\n\nApart from the above trivial things, v26-001 looks good to me.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 27 Mar 2024 11:39:04 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 27, 2024 at 11:39 AM shveta malik <[email protected]> wrote:\n>\n> Thanks for the patch. 
Few trivial things:\n\nThanks for reviewing.\n\n> ----------\n> 1)\n> system-views.sgml:\n>\n> a) \"Note that the slots\" --> \"Note that the slots on the standbys,\"\n> --it is good to mention \"standbys\" as synced could be true on primary\n> as well (promoted standby)\n\nDone.\n\n> b) If you plan to add more info which Bertrand suggested, then it will\n> be better to make a <note> section instead of using \"Note\"\n\nI added the note that Bertrand specified upthread. But, I couldn't\nfind an instance of adding <note> ... </note> within a table. Hence\nwith \"Note that ....\" statments just like any other notes in the\nsystem-views.sgml. pg_replication_slot in system-vews.sgml renders as\ntable, so having <note> ... </note> may not be a great idea.\n\n> 2)\n> commit msg:\n>\n> \"The impact of this\n> on a promoted standby inactive_since is always NULL for all\n> synced slots even after server restart.\n> \"\n> Sentence looks broken.\n> ---------\n\nReworded.\n\n> Apart from the above trivial things, v26-001 looks good to me.\n\nPlease check the attached v27 patch which also has Bertrand's comment\non deduplicating the TAP function. I've now moved it to Cluster.pm.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 27 Mar 2024 14:55:17 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 27, 2024 at 02:55:17PM +0530, Bharath Rupireddy wrote:\n> Please check the attached v27 patch which also has Bertrand's comment\n> on deduplicating the TAP function. I've now moved it to Cluster.pm.\n\nThanks!\n\n1 ===\n\n+ Note that the slots on the standbys that are being synced from a\n+ primary server (whose <structfield>synced</structfield> field is\n+ <literal>true</literal>), will get the\n+ <structfield>inactive_since</structfield> value from the\n+ corresponding remote slot on the primary. Also, note that for the\n+ synced slots on the standby, after the standby starts up (i.e. after\n+ the slots are loaded from the disk), the inactive_since will remain\n+ zero until the next slot sync cycle.\n\nNot sure we should mention the \"(i.e. after the slots are loaded from the disk)\"\nand also \"cycle\" (as that does not sound right in case of manual sync).\n\nMy proposal (in text) but feel free to reword it:\n\nNote that the slots on the standbys that are being synced from a\nprimary server (whose synced field is true), will get the inactive_since value\nfrom the corresponding remote slot on the primary. 
Also, after the standby starts\nup, the inactive_since (for such synced slots) will remain zero until the next\nsynchronization.\n\n2 ===\n\n+=item $node->create_logical_slot_on_standby(self, primary, slot_name, dbname)\n\nget_slot_inactive_since_value instead?\n\n3 ===\n\n+against given reference time.\n\ns/given reference/optional given reference/?\n\n\nApart from the above, LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 27 Mar 2024 10:11:58 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 27, 2024 at 2:55 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Mar 27, 2024 at 11:39 AM shveta malik <[email protected]> wrote:\n> >\n> > Thanks for the patch. Few trivial things:\n>\n> Thanks for reviewing.\n>\n> > ----------\n> > 1)\n> > system-views.sgml:\n> >\n> > a) \"Note that the slots\" --> \"Note that the slots on the standbys,\"\n> > --it is good to mention \"standbys\" as synced could be true on primary\n> > as well (promoted standby)\n>\n> Done.\n>\n> > b) If you plan to add more info which Bertrand suggested, then it will\n> > be better to make a <note> section instead of using \"Note\"\n>\n> I added the note that Bertrand specified upthread. But, I couldn't\n> find an instance of adding <note> ... </note> within a table. Hence\n> with \"Note that ....\" statments just like any other notes in the\n> system-views.sgml. pg_replication_slot in system-vews.sgml renders as\n> table, so having <note> ... </note> may not be a great idea.\n>\n> > 2)\n> > commit msg:\n> >\n> > \"The impact of this\n> > on a promoted standby inactive_since is always NULL for all\n> > synced slots even after server restart.\n> > \"\n> > Sentence looks broken.\n> > ---------\n>\n> Reworded.\n>\n> > Apart from the above trivial things, v26-001 looks good to me.\n>\n> Please check the attached v27 patch which also has Bertrand's comment\n> on deduplicating the TAP function. I've now moved it to Cluster.pm.\n>\n\nThanks for the patch. Regarding doc, I have few comments.\n\n+ Note that the slots on the standbys that are being synced from a\n+ primary server (whose <structfield>synced</structfield> field is\n+ <literal>true</literal>), will get the\n+ <structfield>inactive_since</structfield> value from the\n+ corresponding remote slot on the primary. Also, note that for the\n+ synced slots on the standby, after the standby starts up (i.e. 
after\n+ the slots are loaded from the disk), the inactive_since will remain\n+ zero until the next slot sync cycle.\n\na) \"inactive_since will remain zero\"\nSince it is user exposed info and the user finds it NULL in\npg_replication_slots, shall we mention NULL instead of 0?\n\nb) Since we are referring to the sync cycle here, I feel it will be\ngood to give a link to that page.\n+ zero until the next slot sync cycle (see\n+ <xref linkend=\"logicaldecoding-replication-slots-synchronization\"/> for\n+ slot synchronization details).\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 27 Mar 2024 15:43:22 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 27, 2024 at 3:42 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> 1 ===\n>\n> My proposal (in text) but feel free to reword it:\n>\n> Note that the slots on the standbys that are being synced from a\n> primary server (whose synced field is true), will get the inactive_since value\n> from the corresponding remote slot on the primary. Also, after the standby starts\n> up, the inactive_since (for such synced slots) will remain zero until the next\n> synchronization.\n\nWFM.\n\n> 2 ===\n>\n> +=item $node->create_logical_slot_on_standby(self, primary, slot_name, dbname)\n>\n> get_slot_inactive_since_value instead?\n\nUgh. Changed.\n\n> 3 ===\n>\n> +against given reference time.\n>\n> s/given reference/optional given reference/?\n\nDone.\n\n> Apart from the above, LGTM.\n\nThanks for reviewing.\n\nOn Wed, Mar 27, 2024 at 3:43 PM shveta malik <[email protected]> wrote:\n>\n> Thanks for the patch. Regarding doc, I have few comments.\n\nThanks for reviewing.\n\n> a) \"inactive_since will remain zero\"\n> Since it is user exposed info and the user finds it NULL in\n> pg_replication_slots, shall we mention NULL instead of 0?\n\nRight. 
Changed.\n\n> b) Since we are referring to the sync cycle here, I feel it will be\n> good to give a link to that page.\n> + zero until the next slot sync cycle (see\n> + <xref linkend=\"logicaldecoding-replication-slots-synchronization\"/> for\n> + slot synchronization details).\n\nWFM.\n\nPlease see the attached v28 patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 27 Mar 2024 17:55:05 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 27, 2024 at 05:55:05PM +0530, Bharath Rupireddy wrote:\n> On Wed, Mar 27, 2024 at 3:42 PM Bertrand Drouvot\n> Please see the attached v28 patch.\n\nThanks!\n\n1 === sorry I missed it in the previous review\n\n if (!(RecoveryInProgress() && slot->data.synced))\n+ {\n now = GetCurrentTimestamp();\n+ update_inactive_since = true;\n+ }\n+ else\n+ update_inactive_since = false;\n\nI think update_inactive_since is not needed, we could rely on (now > 0) instead.\n\n2 ===\n\n+=item $node->get_slot_inactive_since_value(self, primary, slot_name, dbname)\n+\n+Get inactive_since column value for a given replication slot validating it\n+against optional reference time.\n+\n+=cut\n+\n+sub get_slot_inactive_since_value\n+{\n\nshouldn't be \"=item $node->get_slot_inactive_since_value(self, slot_name, reference_time)\"\ninstead?\n\nApart from the above, LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 27 Mar 2024 13:24:52 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 27, 2024 at 6:54 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Wed, Mar 27, 2024 at 05:55:05PM +0530, Bharath Rupireddy wrote:\n> > On Wed, Mar 27, 2024 at 3:42 PM Bertrand Drouvot\n> > Please see the attached v28 patch.\n>\n> Thanks!\n>\n> 1 === sorry I missed it in the previous review\n>\n> if (!(RecoveryInProgress() && slot->data.synced))\n> + {\n> now = GetCurrentTimestamp();\n> + update_inactive_since = true;\n> + }\n> + else\n> + update_inactive_since = false;\n>\n> I think update_inactive_since is not needed, we could rely on (now > 0) instead.\n\nThought of using it, but, at the expense of readability. I prefer to\nuse a variable instead. However, I changed the variable to be more\nmeaningful to is_slot_being_synced.\n\n> 2 ===\n>\n> +=item $node->get_slot_inactive_since_value(self, primary, slot_name, dbname)\n> +\n> +Get inactive_since column value for a given replication slot validating it\n> +against optional reference time.\n> +\n> +=cut\n> +\n> +sub get_slot_inactive_since_value\n> +{\n>\n> shouldn't be \"=item $node->get_slot_inactive_since_value(self, slot_name, reference_time)\"\n> instead?\n\nUgh. Changed.\n\n> Apart from the above, LGTM.\n\nThanks. I'm attaching v29 patches. 0001 managing inactive_since on the\nstandby for sync slots. 
0002 implementing inactive timeout GUC based\ninvalidation mechanism.\n\nPlease have a look.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 27 Mar 2024 21:00:37 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 27, 2024 at 09:00:37PM +0530, Bharath Rupireddy wrote:\n> On Wed, Mar 27, 2024 at 6:54 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On Wed, Mar 27, 2024 at 05:55:05PM +0530, Bharath Rupireddy wrote:\n> > > On Wed, Mar 27, 2024 at 3:42 PM Bertrand Drouvot\n> > > Please see the attached v28 patch.\n> >\n> > Thanks!\n> >\n> > 1 === sorry I missed it in the previous review\n> >\n> > if (!(RecoveryInProgress() && slot->data.synced))\n> > + {\n> > now = GetCurrentTimestamp();\n> > + update_inactive_since = true;\n> > + }\n> > + else\n> > + update_inactive_since = false;\n> >\n> > I think update_inactive_since is not needed, we could rely on (now > 0) instead.\n> \n> Thought of using it, but, at the expense of readability. I prefer to\n> use a variable instead.\n\nThat's fine too.\n\n> However, I changed the variable to be more meaningful to is_slot_being_synced.\n\nYeah makes sense and even easier to read.\n\nv29-0001 LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 27 Mar 2024 16:12:23 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 27, 2024 at 9:00 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Thanks. I'm attaching v29 patches. 0001 managing inactive_since on the\n> standby for sync slots. 0002 implementing inactive timeout GUC based\n> invalidation mechanism.\n>\n> Please have a look.\n\nThanks for the patches. v29-001 looks good to me.\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 28 Mar 2024 09:16:17 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 27, 2024 at 09:00:37PM +0530, Bharath Rupireddy wrote:\n> standby for sync slots. 0002 implementing inactive timeout GUC based\n> invalidation mechanism.\n> \n> Please have a look.\n\nThanks!\n\nRegarding 0002:\n\nSome testing:\n\nT1 ===\n\nWhen the slot is invalidated on the primary, then the reason is propagated to\nthe sync slot (if any). 
That's fine but we are loosing the inactive_since on the\nstandby:\n\nPrimary:\n\npostgres=# select slot_name,inactive_since,conflicting,invalidation_reason from pg_replication_slots where slot_name='lsub29_slot';\n slot_name | inactive_since | conflicting | invalidation_reason\n-------------+-------------------------------+-------------+---------------------\n lsub29_slot | 2024-03-28 08:24:51.672528+00 | f | inactive_timeout\n(1 row)\n\nStandby:\n\npostgres=# select slot_name,inactive_since,conflicting,invalidation_reason from pg_replication_slots where slot_name='lsub29_slot';\n slot_name | inactive_since | conflicting | invalidation_reason\n-------------+----------------+-------------+---------------------\n lsub29_slot | | f | inactive_timeout\n(1 row)\n\nI think in this case it should always reflect the value from the primary (so\nthat one can understand why it is invalidated).\n\nT2 ===\n\nAnd it is set to a value during promotion:\n\npostgres=# select pg_promote();\n pg_promote\n------------\n t\n(1 row)\n\npostgres=# select slot_name,inactive_since,conflicting,invalidation_reason from pg_replication_slots where slot_name='lsub29_slot';\n slot_name | inactive_since | conflicting | invalidation_reason\n-------------+------------------------------+-------------+---------------------\n lsub29_slot | 2024-03-28 08:30:11.74505+00 | f | inactive_timeout\n(1 row)\n\nI think when it is invalidated it should always reflect the value from the\nprimary (so that one can understand why it is invalidated).\n\nT3 ===\n\nAs far the slot invalidation on the primary:\n\npostgres=# SELECT * FROM pg_logical_slot_get_changes('lsub29_slot', NULL, NULL, 'include-xids', '0');\nERROR: cannot acquire invalidated replication slot \"lsub29_slot\"\n\nCan we make the message more consistent with what can be found in CreateDecodingContext()\nfor example?\n\nT4 ===\n\nAlso, it looks like querying pg_replication_slots() does not trigger an\ninvalidation: I think it should if the slot is not invalidated yet (and matches\nthe invalidation criteria).\n\nCode review:\n\nCR1 ===\n\n+ Invalidate replication slots that are inactive for longer than this\n+ amount of time. If this value is specified without units, it is taken\n\ns/Invalidate/Invalidates/?\n\nShould we mention the relationship with inactive_since?\n\nCR2 ===\n\n+ *\n+ * If check_for_invalidation is true, the slot is checked for invalidation\n+ * based on replication_slot_inactive_timeout GUC and an error is raised after making the slot ours.\n */\n void\n-ReplicationSlotAcquire(const char *name, bool nowait)\n+ReplicationSlotAcquire(const char *name, bool nowait,\n+ bool check_for_invalidation)\n\n\ns/check_for_invalidation/check_for_timeout_invalidation/?\n\nCR3 ===\n\n+ if (slot->inactive_since == 0 ||\n+ replication_slot_inactive_timeout == 0)\n+ return false;\n\nBetter to test replication_slot_inactive_timeout first? 
(I mean there is no\npoint of testing inactive_since if replication_slot_inactive_timeout == 0)\n\nCR4 ===\n\n+ if (slot->inactive_since > 0 &&\n+ replication_slot_inactive_timeout > 0)\n+ {\n\nSame.\n\nSo, instead of CR3 === and CR4 ===, I wonder if it wouldn't be better to do\nsomething like:\n\nif (replication_slot_inactive_timeout == 0)\n\treturn false;\nelse if (slot->inactive_since > 0)\n.\n.\n.\n.\nelse\n\treturn false;\n\nThat would avoid checking replication_slot_inactive_timeout and inactive_since\nmultiple times.\n\nCR5 ===\n\n+ * held to avoid race conditions -- for example the restart_lsn could move\n+ * forward, or the slot could be dropped.\n\nDoes the restart_lsn example makes sense here?\n\nCR6 ===\n\n+static bool\n+InvalidateSlotForInactiveTimeout(ReplicationSlot *slot, bool need_locks)\n+{\n\nInvalidatePossiblyInactiveSlot() maybe?\n\nCR7 ===\n\n+ /* Make sure the invalidated state persists across server restart */\n+ slot->just_dirtied = true;\n+ slot->dirty = true;\n+ SpinLockRelease(&slot->mutex);\n\nMaybe we could create a new function say MarkGivenReplicationSlotDirty()\nwith a slot as parameter, that ReplicationSlotMarkDirty could call too?\n\nThen maybe we could set slot->data.invalidated = RS_INVAL_INACTIVE_TIMEOUT in\nInvalidateSlotForInactiveTimeout()? (to avoid multiple SpinLockAcquire/SpinLockRelease).\n\nCR8 ===\n\n+ if (persist_state)\n+ {\n+ char path[MAXPGPATH];\n+\n+ sprintf(path, \"pg_replslot/%s\", NameStr(slot->data.name));\n+ SaveSlotToPath(slot, path, ERROR);\n+ }\n\nMaybe we could create a new function say GivenReplicationSlotSave()\nwith a slot as parameter, that ReplicationSlotSave() could call too?\n\nCR9 ===\n\n+ if (check_for_invalidation)\n+ {\n+ /* The slot is ours by now */\n+ Assert(s->active_pid == MyProcPid);\n+\n+ /*\n+ * Well, the slot is not yet ours really unless we check for the\n+ * invalidation below.\n+ */\n+ s->active_pid = 0;\n+ if (InvalidateReplicationSlotForInactiveTimeout(s, true, true))\n+ {\n+ /*\n+ * If the slot has been invalidated, recalculate the resource\n+ * limits.\n+ */\n+ ReplicationSlotsComputeRequiredXmin(false);\n+ ReplicationSlotsComputeRequiredLSN();\n+\n+ /* Might need it for slot clean up on error, so restore it */\n+ s->active_pid = MyProcPid;\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot acquire invalidated replication slot \\\"%s\\\"\",\n+ NameStr(MyReplicationSlot->data.name))));\n+ }\n+ s->active_pid = MyProcPid;\n\nAre we not missing some SpinLockAcquire/Release on the slot's mutex here? 
(the\nplaces where we set the active_pid).\n\nCR10 ===\n\n@@ -1628,6 +1674,10 @@ InvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,\n if (SlotIsLogical(s))\n invalidation_cause = cause;\n break;\n+ case RS_INVAL_INACTIVE_TIMEOUT:\n+ if (InvalidateReplicationSlotForInactiveTimeout(s, false, false))\n+ invalidation_cause = cause;\n+ break;\n\nInvalidatePossiblyObsoleteSlot() is not called with such a reason, better to use\nan Assert here and in the caller too?\n\nCR11 ===\n\n+++ b/src/test/recovery/t/050_invalidate_slots.pl\n\nwhy not using 019_replslot_limit.pl?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 28 Mar 2024 09:43:44 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Mar 27, 2024 at 9:00 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n>\n> Thanks. I'm attaching v29 patches. 0001 managing inactive_since on the\n> standby for sync slots.\n>\n\nCommit message states: \"why we can't just update inactive_since for\nsynced slots on the standby with the value received from remote slot\non the primary. This is consistent with any other slot parameter i.e.\nall of them are synced from the primary.\"\n\nThe inactive_since is not consistent with other slot parameters which\nwe copy. We don't perform anything related to those other parameters\nlike say two_phase phase which can change that property. However, we\ndo acquire the slot, advance the slot (as per recent discussion [1]),\nand release it. Since these operations can impact inactive_since, it\nseems to me that inactive_since is not the same as other parameters.\nIt can have a different value than the primary. Why would anyone want\nto know the value of inactive_since from primary after the standby is\npromoted? Now, the other concern is that calling GetCurrentTimestamp()\ncould be costly when the values for the slot are not going to be\nupdated but if that happens we can optimize such that before acquiring\nthe slot we can have some minimal pre-checks to ensure whether we need\nto update the slot or not.\n\n[1] - https://www.postgresql.org/message-id/OS0PR01MB571615D35F486080616CA841943A2%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 29 Mar 2024 09:39:31 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Fri, Mar 29, 2024 at 09:39:31AM +0530, Amit Kapila wrote:\n> On Wed, Mar 27, 2024 at 9:00 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> >\n> > Thanks. I'm attaching v29 patches. 0001 managing inactive_since on the\n> > standby for sync slots.\n> >\n> \n> Commit message states: \"why we can't just update inactive_since for\n> synced slots on the standby with the value received from remote slot\n> on the primary. This is consistent with any other slot parameter i.e.\n> all of them are synced from the primary.\"\n> \n> The inactive_since is not consistent with other slot parameters which\n> we copy. We don't perform anything related to those other parameters\n> like say two_phase phase which can change that property. 
However, we\n> do acquire the slot, advance the slot (as per recent discussion [1]),\n> and release it. Since these operations can impact inactive_since, it\n> seems to me that inactive_since is not the same as other parameters.\n> It can have a different value than the primary. Why would anyone want\n> to know the value of inactive_since from primary after the standby is\n> promoted?\n\nI think it can be useful \"before\" it is promoted and in case the primary is down.\nI agree that tracking the activity time of a synced slot can be useful, why\nnot creating a dedicated field for that purpose (and keep inactive_since a\nperfect \"copy\" of the primary)?\n\n> Now, the other concern is that calling GetCurrentTimestamp()\n> could be costly when the values for the slot are not going to be\n> updated but if that happens we can optimize such that before acquiring\n> the slot we can have some minimal pre-checks to ensure whether we need\n> to update the slot or not.\n\nRight, but for a very active slot it is likely that we call GetCurrentTimestamp()\nduring almost each sync cycle.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 29 Mar 2024 06:19:05 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 29, 2024 at 11:49 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Fri, Mar 29, 2024 at 09:39:31AM +0530, Amit Kapila wrote:\n> >\n> > Commit message states: \"why we can't just update inactive_since for\n> > synced slots on the standby with the value received from remote slot\n> > on the primary. This is consistent with any other slot parameter i.e.\n> > all of them are synced from the primary.\"\n> >\n> > The inactive_since is not consistent with other slot parameters which\n> > we copy. We don't perform anything related to those other parameters\n> > like say two_phase phase which can change that property. However, we\n> > do acquire the slot, advance the slot (as per recent discussion [1]),\n> > and release it. Since these operations can impact inactive_since, it\n> > seems to me that inactive_since is not the same as other parameters.\n> > It can have a different value than the primary. Why would anyone want\n> > to know the value of inactive_since from primary after the standby is\n> > promoted?\n>\n> I think it can be useful \"before\" it is promoted and in case the primary is down.\n>\n\nIt is not clear to me what is user going to do by checking the\ninactivity time for slots when the corresponding server is down. 
I\nthought the idea was to check such slots and see if they need to be\ndropped or enabled again to avoid excessive disk usage, etc.\n\n> I agree that tracking the activity time of a synced slot can be useful, why\n> not creating a dedicated field for that purpose (and keep inactive_since a\n> perfect \"copy\" of the primary)?\n>\n\nWe can have a separate field for this but not sure if it is worth it.\n\n> > Now, the other concern is that calling GetCurrentTimestamp()\n> > could be costly when the values for the slot are not going to be\n> > updated but if that happens we can optimize such that before acquiring\n> > the slot we can have some minimal pre-checks to ensure whether we need\n> > to update the slot or not.\n>\n> Right, but for a very active slot it is likely that we call GetCurrentTimestamp()\n> during almost each sync cycle.\n>\n\nTrue, but if we have to save a slot to disk each time to persist the\nchanges (for an active slot) then probably GetCurrentTimestamp()\nshouldn't be costly enough to matter.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 29 Mar 2024 15:03:01 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Fri, Mar 29, 2024 at 03:03:01PM +0530, Amit Kapila wrote:\n> On Fri, Mar 29, 2024 at 11:49 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Fri, Mar 29, 2024 at 09:39:31AM +0530, Amit Kapila wrote:\n> > >\n> > > Commit message states: \"why we can't just update inactive_since for\n> > > synced slots on the standby with the value received from remote slot\n> > > on the primary. This is consistent with any other slot parameter i.e.\n> > > all of them are synced from the primary.\"\n> > >\n> > > The inactive_since is not consistent with other slot parameters which\n> > > we copy. We don't perform anything related to those other parameters\n> > > like say two_phase phase which can change that property. However, we\n> > > do acquire the slot, advance the slot (as per recent discussion [1]),\n> > > and release it. Since these operations can impact inactive_since, it\n> > > seems to me that inactive_since is not the same as other parameters.\n> > > It can have a different value than the primary. Why would anyone want\n> > > to know the value of inactive_since from primary after the standby is\n> > > promoted?\n> >\n> > I think it can be useful \"before\" it is promoted and in case the primary is down.\n> >\n> \n> It is not clear to me what is user going to do by checking the\n> inactivity time for slots when the corresponding server is down.\n\nSay a failover needs to be done, then it could be useful to know for which\nslots the activity needs to be resumed (thinking about external logical decoding\nplugin, not about pub/sub here). If one see an inactive slot (since long \"enough\")\nthen he can start to reasonate about what to do with it.\n\n> I thought the idea was to check such slots and see if they need to be\n> dropped or enabled again to avoid excessive disk usage, etc.\n\nYeah that's the case but it does not mean inactive_since can't be useful in other\nways.\n\nAlso, say the slot has been invalidated on the primary (due to inactivity timeout),\nprimary is down and there is a failover. 
By keeping the inactive_since from\nthe primary, one could know when the inactivity that lead to the timeout started.\n\nAgain, more concerned about external logical decoding plugin than pub/sub here.\n\n> > I agree that tracking the activity time of a synced slot can be useful, why\n> > not creating a dedicated field for that purpose (and keep inactive_since a\n> > perfect \"copy\" of the primary)?\n> >\n> \n> We can have a separate field for this but not sure if it is worth it.\n\nOTOH I'm not sure that erasing this information from the primary is useful. I\nthink that 2 fields would be the best option and would be less subject of\nmisinterpretation.\n\n> > > Now, the other concern is that calling GetCurrentTimestamp()\n> > > could be costly when the values for the slot are not going to be\n> > > updated but if that happens we can optimize such that before acquiring\n> > > the slot we can have some minimal pre-checks to ensure whether we need\n> > > to update the slot or not.\n> >\n> > Right, but for a very active slot it is likely that we call GetCurrentTimestamp()\n> > during almost each sync cycle.\n> >\n> \n> True, but if we have to save a slot to disk each time to persist the\n> changes (for an active slot) then probably GetCurrentTimestamp()\n> shouldn't be costly enough to matter.\n\nRight, persisting the changes to disk would be even more costly.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 29 Mar 2024 12:47:51 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Mar 28, 2024 at 3:13 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Regarding 0002:\n\nThanks for reviewing it.\n\n> Some testing:\n>\n> T1 ===\n>\n> When the slot is invalidated on the primary, then the reason is propagated to\n> the sync slot (if any). 
That's fine but we are loosing the inactive_since on the\n> standby:\n>\n> Primary:\n>\n> postgres=# select slot_name,inactive_since,conflicting,invalidation_reason from pg_replication_slots where slot_name='lsub29_slot';\n> slot_name | inactive_since | conflicting | invalidation_reason\n> -------------+-------------------------------+-------------+---------------------\n> lsub29_slot | 2024-03-28 08:24:51.672528+00 | f | inactive_timeout\n> (1 row)\n>\n> Standby:\n>\n> postgres=# select slot_name,inactive_since,conflicting,invalidation_reason from pg_replication_slots where slot_name='lsub29_slot';\n> slot_name | inactive_since | conflicting | invalidation_reason\n> -------------+----------------+-------------+---------------------\n> lsub29_slot | | f | inactive_timeout\n> (1 row)\n>\n> I think in this case it should always reflect the value from the primary (so\n> that one can understand why it is invalidated).\n\nI'll come back to this as soon as we all agree on inactive_since\nbehavior for synced slots.\n\n> T2 ===\n>\n> And it is set to a value during promotion:\n>\n> postgres=# select pg_promote();\n> pg_promote\n> ------------\n> t\n> (1 row)\n>\n> postgres=# select slot_name,inactive_since,conflicting,invalidation_reason from pg_replication_slots where slot_name='lsub29_slot';\n> slot_name | inactive_since | conflicting | invalidation_reason\n> -------------+------------------------------+-------------+---------------------\n> lsub29_slot | 2024-03-28 08:30:11.74505+00 | f | inactive_timeout\n> (1 row)\n>\n> I think when it is invalidated it should always reflect the value from the\n> primary (so that one can understand why it is invalidated).\n\nI'll come back to this as soon as we all agree on inactive_since\nbehavior for synced slots.\n\n> T3 ===\n>\n> As far the slot invalidation on the primary:\n>\n> postgres=# SELECT * FROM pg_logical_slot_get_changes('lsub29_slot', NULL, NULL, 'include-xids', '0');\n> ERROR: cannot acquire invalidated replication slot \"lsub29_slot\"\n>\n> Can we make the message more consistent with what can be found in CreateDecodingContext()\n> for example?\n\nHm, that makes sense because slot acquisition and release is something\ninternal to the server.\n\n> T4 ===\n>\n> Also, it looks like querying pg_replication_slots() does not trigger an\n> invalidation: I think it should if the slot is not invalidated yet (and matches\n> the invalidation criteria).\n\nThere's a different opinion on this, check comment #3 from\nhttps://www.postgresql.org/message-id/CAA4eK1LLj%2BeaMN-K8oeOjfG%2BUuzTY%3DL5PXbcMJURZbFm%2B_aJSA%40mail.gmail.com.\n\n> Code review:\n>\n> CR1 ===\n>\n> + Invalidate replication slots that are inactive for longer than this\n> + amount of time. If this value is specified without units, it is taken\n>\n> s/Invalidate/Invalidates/?\n\nDone.\n\n> Should we mention the relationship with inactive_since?\n\nDone.\n\n> CR2 ===\n>\n> + *\n> + * If check_for_invalidation is true, the slot is checked for invalidation\n> + * based on replication_slot_inactive_timeout GUC and an error is raised after making the slot ours.\n> */\n> void\n> -ReplicationSlotAcquire(const char *name, bool nowait)\n> +ReplicationSlotAcquire(const char *name, bool nowait,\n> + bool check_for_invalidation)\n>\n>\n> s/check_for_invalidation/check_for_timeout_invalidation/?\n\nDone.\n\n> CR3 ===\n>\n> + if (slot->inactive_since == 0 ||\n> + replication_slot_inactive_timeout == 0)\n> + return false;\n>\n> Better to test replication_slot_inactive_timeout first? 
(I mean there is no\n> point of testing inactive_since if replication_slot_inactive_timeout == 0)\n>\n> CR4 ===\n>\n> + if (slot->inactive_since > 0 &&\n> + replication_slot_inactive_timeout > 0)\n> + {\n>\n> Same.\n>\n> So, instead of CR3 === and CR4 ===, I wonder if it wouldn't be better to do\n> something like:\n>\n> if (replication_slot_inactive_timeout == 0)\n> return false;\n> else if (slot->inactive_since > 0)\n> .\n> else\n> return false;\n>\n> That would avoid checking replication_slot_inactive_timeout and inactive_since\n> multiple times.\n\nDone.\n\n> CR5 ===\n>\n> + * held to avoid race conditions -- for example the restart_lsn could move\n> + * forward, or the slot could be dropped.\n>\n> Does the restart_lsn example makes sense here?\n\nNo, it doesn't. Modified that.\n\n> CR6 ===\n>\n> +static bool\n> +InvalidateSlotForInactiveTimeout(ReplicationSlot *slot, bool need_locks)\n> +{\n>\n> InvalidatePossiblyInactiveSlot() maybe?\n\nI think we will lose the essence i.e. timeout from the suggested\nfunction name, otherwise just the inactive doesn't give a clearer\nmeaning. I kept it that way unless anyone suggests otherwise.\n\n> CR7 ===\n>\n> + /* Make sure the invalidated state persists across server restart */\n> + slot->just_dirtied = true;\n> + slot->dirty = true;\n> + SpinLockRelease(&slot->mutex);\n>\n> Maybe we could create a new function say MarkGivenReplicationSlotDirty()\n> with a slot as parameter, that ReplicationSlotMarkDirty could call too?\n\nDone that.\n\n> Then maybe we could set slot->data.invalidated = RS_INVAL_INACTIVE_TIMEOUT in\n> InvalidateSlotForInactiveTimeout()? (to avoid multiple SpinLockAcquire/SpinLockRelease).\n\nDone that.\n\n> CR8 ===\n>\n> + if (persist_state)\n> + {\n> + char path[MAXPGPATH];\n> +\n> + sprintf(path, \"pg_replslot/%s\", NameStr(slot->data.name));\n> + SaveSlotToPath(slot, path, ERROR);\n> + }\n>\n> Maybe we could create a new function say GivenReplicationSlotSave()\n> with a slot as parameter, that ReplicationSlotSave() could call too?\n\nDone that.\n\n> CR9 ===\n>\n> + if (check_for_invalidation)\n> + {\n> + /* The slot is ours by now */\n> + Assert(s->active_pid == MyProcPid);\n> +\n> + /*\n> + * Well, the slot is not yet ours really unless we check for the\n> + * invalidation below.\n> + */\n> + s->active_pid = 0;\n> + if (InvalidateReplicationSlotForInactiveTimeout(s, true, true))\n> + {\n> + /*\n> + * If the slot has been invalidated, recalculate the resource\n> + * limits.\n> + */\n> + ReplicationSlotsComputeRequiredXmin(false);\n> + ReplicationSlotsComputeRequiredLSN();\n> +\n> + /* Might need it for slot clean up on error, so restore it */\n> + s->active_pid = MyProcPid;\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"cannot acquire invalidated replication slot \\\"%s\\\"\",\n> + NameStr(MyReplicationSlot->data.name))));\n> + }\n> + s->active_pid = MyProcPid;\n>\n> Are we not missing some SpinLockAcquire/Release on the slot's mutex here? (the\n> places where we set the active_pid).\n\nHm, yes. But, shall I acquire the mutex, set active_pid to 0 for a\nmoment just to satisfy Assert(slot->active_pid == 0); in\nInvalidateReplicationSlotForInactiveTimeout and\nInvalidateSlotForInactiveTimeout? 
I just removed the assertions\nbecause being replication_slot_inactive_timeout > 0 and inactive_since\n> 0 is enough for these functions to think and decide on inactive\ntimeout invalidation.\n\n> CR10 ===\n>\n> @@ -1628,6 +1674,10 @@ InvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,\n> if (SlotIsLogical(s))\n> invalidation_cause = cause;\n> break;\n> + case RS_INVAL_INACTIVE_TIMEOUT:\n> + if (InvalidateReplicationSlotForInactiveTimeout(s, false, false))\n> + invalidation_cause = cause;\n> + break;\n>\n> InvalidatePossiblyObsoleteSlot() is not called with such a reason, better to use\n> an Assert here and in the caller too?\n\nDone.\n\n> CR11 ===\n>\n> +++ b/src/test/recovery/t/050_invalidate_slots.pl\n>\n> why not using 019_replslot_limit.pl?\n\nI understand that 019_replslot_limit covers wal_removed related\ninvalidations. But, I don't want to kludge it with a bunch of other\ntests. The new tests anyway need a bunch of new nodes and a couple of\nhelper functions. Any future invalidation mechanisms can be added here\nin this new file. Also, having a separate file quickly helps isolate\nany test failures that BF animals might report in future. I don't\nthink a separate test file here hurts anyone unless there's a strong\nreason against it.\n\nPlease see the attached v30 patch. 0002 is where all of the above\nreview comments have been addressed.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 31 Mar 2024 10:25:46 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 29, 2024 at 9:39 AM Amit Kapila <[email protected]> wrote:\n>\n> Commit message states: \"why we can't just update inactive_since for\n> synced slots on the standby with the value received from remote slot\n> on the primary. This is consistent with any other slot parameter i.e.\n> all of them are synced from the primary.\"\n>\n> The inactive_since is not consistent with other slot parameters which\n> we copy. We don't perform anything related to those other parameters\n> like say two_phase phase which can change that property. However, we\n> do acquire the slot, advance the slot (as per recent discussion [1]),\n> and release it. Since these operations can impact inactive_since, it\n> seems to me that inactive_since is not the same as other parameters.\n> It can have a different value than the primary. Why would anyone want\n> to know the value of inactive_since from primary after the standby is\n> promoted?\n\nAfter thinking about it for a while now, it feels to me that the\nsynced slots (slots on the standby that are being synced from the\nprimary) can have their own inactive_sicne value. Fundamentally,\ninactive_sicne is set to 0 when slot is acquired and set to current\ntime when slot is released, no matter who acquires and releases it -\nbe it walsenders for replication, or backends for slot advance, or\nbackends for slot sync using pg_sync_replication_slots, or backends\nfor other slot functions, or background sync worker. Remember the\nearlier patch was updating inactive_since just for walsenders, but\nthen the suggestion was to update it unconditionally -\nhttps://www.postgresql.org/message-id/CAJpy0uD64X%3D2ENmbHaRiWTKeQawr-rbGoy_GdhQQLVXzUSKTMg%40mail.gmail.com.\nWhoever syncs the slot, *acutally* acquires the slot i.e. 
makes it\ntheirs, syncs it from the primary, and releases it. IMO, no\ndifferentiation is to be made for synced slots.\n\nThere was a suggestion on using inactive_since of the synced slot on\nthe standby to know the inactivity of the slot on the primary. If one\nwants to do that, they better look at/monitor the primary slot\ninfo/logs/pg_replication_slot/whatever. I really don't see a point in\nhaving two different meanings for a single property of a replication\nslot - inactive_since for a regular slot tells since when this slot\nhas become inactive, and for a synced slot since when the\ncorresponding remote slot has become inactive. I think this will\nconfuse users for sure.\n\nAlso, if inactive_since is being changed on the primary so frequently,\nand none of the other parameters are changing, if we copy\ninactive_since to the synced slots, then standby will just be doing\n*sync* work (mark the slots dirty and save to disk) for updating\ninactive_since. I think this is unnecessary behaviour for sure.\n\nComing to a future patch for inactive timeout based slot invalidation,\nwe can either allow invalidation without any differentiation for\nsynced slots or restrict invalidation to avoid more sync work. For\ninstance, if inactive timeout is kept low on the standby, the sync\nworker will be doing more work as it drops and recreates a slot\nrepeatedly if it keeps getting invalidated. Another thing is that the\nstandby takes independent invalidation decisions for synced slots.\nAFAICS, invalidation due to wal_removal is the only sole reason (out\nof all available invalidation reasons) for a synced slot to get\ninvalidated independently of the primary. Check\nhttps://www.postgresql.org/message-id/CAA4eK1JXBwTaDRD_%3D8t6UB1fhRNjC1C%2BgH4YdDxj_9U6djLnXw%40mail.gmail.com\nfor the suggestion on we better not differentiaing invalidation\ndecisions for synced slots.\n\nThe assumption of letting synced slots have their own inactive_since\nnot only simplifies the code, but also looks less-confusing and more\nmeaningful to the user. The only code that we put in on top of the\ncommitted code is to use InRecovery in place of\nRecoveryInProgress() in RestoreSlotFromDisk() to fix the issue raised\nby Shveta upthread.\n\n> Now, the other concern is that calling GetCurrentTimestamp()\n> could be costly when the values for the slot are not going to be\n> updated but if that happens we can optimize such that before acquiring\n> the slot we can have some minimal pre-checks to ensure whether we need\n> to update the slot or not.\n>\n> [1] - https://www.postgresql.org/message-id/OS0PR01MB571615D35F486080616CA841943A2%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nA quick test with a function to measure the cost of\nGetCurrentTimestamp [1] on my Ubuntu dev system (an AWS EC2 c5.4xlarge\ninstance), gives me [2]. It took 0.388 ms, 2.269 ms, 21.144 ms,\n209.333 ms, 2091.174 ms, 20908.942 ms for 10K, 100K, 1million,\n10million, 100million, 1billion times respectively. Costs might be\ndifferent on various systems with different OS, but it gives us a\nrough idea.\n\nIf we are too much concerned about the cost of GetCurrentTimestamp(),\na possible approach is just don't set inactive_since for slots being\nsynced on the standby. Just let the first acquisition and release\nafter the promotion do that job. 
We can always call this out in the\ndocs saying \"replication slots on the streaming standbys which are\nbeing synced from the primary are not inactive in practice, so the\ninactive_since is always NULL for them unless the standby is\npromoted\".\n\n[1]\nDatum\npg_get_current_timestamp(PG_FUNCTION_ARGS)\n{\n int loops = PG_GETARG_INT32(0);\n TimestampTz ctime;\n\n for (int i = 0; i < loops; i++)\n ctime = GetCurrentTimestamp();\n\n PG_RETURN_TIMESTAMPTZ(ctime);\n}\n\n[2]\npostgres=# \\timing\nTiming is on.\npostgres=# SELECT pg_get_current_timestamp(1000000000);\n pg_get_current_timestamp\n-------------------------------\n 2024-03-30 19:07:57.374797+00\n(1 row)\n\nTime: 20908.942 ms (00:20.909)\npostgres=# SELECT pg_get_current_timestamp(100000000);\n pg_get_current_timestamp\n-------------------------------\n 2024-03-30 19:08:21.038064+00\n(1 row)\n\nTime: 2091.174 ms (00:02.091)\npostgres=# SELECT pg_get_current_timestamp(10000000);\n pg_get_current_timestamp\n-------------------------------\n 2024-03-30 19:08:24.329949+00\n(1 row)\n\nTime: 209.333 ms\npostgres=# SELECT pg_get_current_timestamp(1000000);\n pg_get_current_timestamp\n-------------------------------\n 2024-03-30 19:08:26.978016+00\n(1 row)\n\nTime: 21.144 ms\npostgres=# SELECT pg_get_current_timestamp(100000);\n pg_get_current_timestamp\n-------------------------------\n 2024-03-30 19:08:29.142248+00\n(1 row)\n\nTime: 2.269 ms\npostgres=# SELECT pg_get_current_timestamp(10000);\n pg_get_current_timestamp\n------------------------------\n 2024-03-30 19:08:31.34621+00\n(1 row)\n\nTime: 0.388 ms\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 1 Apr 2024 08:47:59 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Mar 29, 2024 at 6:17 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Fri, Mar 29, 2024 at 03:03:01PM +0530, Amit Kapila wrote:\n> > On Fri, Mar 29, 2024 at 11:49 AM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > On Fri, Mar 29, 2024 at 09:39:31AM +0530, Amit Kapila wrote:\n> > > >\n> > > > Commit message states: \"why we can't just update inactive_since for\n> > > > synced slots on the standby with the value received from remote slot\n> > > > on the primary. This is consistent with any other slot parameter i.e.\n> > > > all of them are synced from the primary.\"\n> > > >\n> > > > The inactive_since is not consistent with other slot parameters which\n> > > > we copy. We don't perform anything related to those other parameters\n> > > > like say two_phase phase which can change that property. However, we\n> > > > do acquire the slot, advance the slot (as per recent discussion [1]),\n> > > > and release it. Since these operations can impact inactive_since, it\n> > > > seems to me that inactive_since is not the same as other parameters.\n> > > > It can have a different value than the primary. 
Why would anyone want\n> > > > to know the value of inactive_since from primary after the standby is\n> > > > promoted?\n> > >\n> > > I think it can be useful \"before\" it is promoted and in case the primary is down.\n> > >\n> >\n> > It is not clear to me what is user going to do by checking the\n> > inactivity time for slots when the corresponding server is down.\n>\n> Say a failover needs to be done, then it could be useful to know for which\n> slots the activity needs to be resumed (thinking about external logical decoding\n> plugin, not about pub/sub here). If one see an inactive slot (since long \"enough\")\n> then he can start to reasonate about what to do with it.\n>\n> > I thought the idea was to check such slots and see if they need to be\n> > dropped or enabled again to avoid excessive disk usage, etc.\n>\n> Yeah that's the case but it does not mean inactive_since can't be useful in other\n> ways.\n>\n> Also, say the slot has been invalidated on the primary (due to inactivity timeout),\n> primary is down and there is a failover. By keeping the inactive_since from\n> the primary, one could know when the inactivity that lead to the timeout started.\n>\n\nSo, this means at promotion, we won't set the current_time for\ninactive_since which is not what the currently proposed patch is\ndoing. Moreover, doing the invalidation on promoted standby based on\ninactive_since of the primary node sounds debatable because the\ninactive_timeout could be different on the new node (promoted\nstandby).\n\n> Again, more concerned about external logical decoding plugin than pub/sub here.\n>\n> > > I agree that tracking the activity time of a synced slot can be useful, why\n> > > not creating a dedicated field for that purpose (and keep inactive_since a\n> > > perfect \"copy\" of the primary)?\n> > >\n> >\n> > We can have a separate field for this but not sure if it is worth it.\n>\n> OTOH I'm not sure that erasing this information from the primary is useful. I\n> think that 2 fields would be the best option and would be less subject of\n> misinterpretation.\n>\n> > > > Now, the other concern is that calling GetCurrentTimestamp()\n> > > > could be costly when the values for the slot are not going to be\n> > > > updated but if that happens we can optimize such that before acquiring\n> > > > the slot we can have some minimal pre-checks to ensure whether we need\n> > > > to update the slot or not.\n> > >\n> > > Right, but for a very active slot it is likely that we call GetCurrentTimestamp()\n> > > during almost each sync cycle.\n> > >\n> >\n> > True, but if we have to save a slot to disk each time to persist the\n> > changes (for an active slot) then probably GetCurrentTimestamp()\n> > shouldn't be costly enough to matter.\n>\n> Right, persisting the changes to disk would be even more costly.\n>\n\nThe point I was making is that currently after copying the\nremote_node's values, we always persist the slots to disk, so the cost\nof current_time shouldn't be much. Now, if the values won't change\nthen probably there is some cost but in most cases (active slots), the\nvalues will always change. Also, if all the slots are inactive then we\nwill slow down the speed of sync. 
We also need to consider if we want\nto copy the value of inactive_since from the primary and if that is\nthe only value changed then shall we persist the slot or not?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 1 Apr 2024 09:04:43 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Mon, Apr 01, 2024 at 09:04:43AM +0530, Amit Kapila wrote:\n> On Fri, Mar 29, 2024 at 6:17 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Fri, Mar 29, 2024 at 03:03:01PM +0530, Amit Kapila wrote:\n> > > On Fri, Mar 29, 2024 at 11:49 AM Bertrand Drouvot\n> > > <[email protected]> wrote:\n> > > >\n> > > > On Fri, Mar 29, 2024 at 09:39:31AM +0530, Amit Kapila wrote:\n> > > > >\n> > > > > Commit message states: \"why we can't just update inactive_since for\n> > > > > synced slots on the standby with the value received from remote slot\n> > > > > on the primary. This is consistent with any other slot parameter i.e.\n> > > > > all of them are synced from the primary.\"\n> > > > >\n> > > > > The inactive_since is not consistent with other slot parameters which\n> > > > > we copy. We don't perform anything related to those other parameters\n> > > > > like say two_phase phase which can change that property. However, we\n> > > > > do acquire the slot, advance the slot (as per recent discussion [1]),\n> > > > > and release it. Since these operations can impact inactive_since, it\n> > > > > seems to me that inactive_since is not the same as other parameters.\n> > > > > It can have a different value than the primary. Why would anyone want\n> > > > > to know the value of inactive_since from primary after the standby is\n> > > > > promoted?\n> > > >\n> > > > I think it can be useful \"before\" it is promoted and in case the primary is down.\n> > > >\n> > >\n> > > It is not clear to me what is user going to do by checking the\n> > > inactivity time for slots when the corresponding server is down.\n> >\n> > Say a failover needs to be done, then it could be useful to know for which\n> > slots the activity needs to be resumed (thinking about external logical decoding\n> > plugin, not about pub/sub here). If one see an inactive slot (since long \"enough\")\n> > then he can start to reasonate about what to do with it.\n> >\n> > > I thought the idea was to check such slots and see if they need to be\n> > > dropped or enabled again to avoid excessive disk usage, etc.\n> >\n> > Yeah that's the case but it does not mean inactive_since can't be useful in other\n> > ways.\n> >\n> > Also, say the slot has been invalidated on the primary (due to inactivity timeout),\n> > primary is down and there is a failover. 
By keeping the inactive_since from\n> > the primary, one could know when the inactivity that lead to the timeout started.\n> >\n> \n> So, this means at promotion, we won't set the current_time for\n> inactive_since which is not what the currently proposed patch is\n> doing.\n\nYeah, that's why I made the comment T2 in [1].\n\n> Moreover, doing the invalidation on promoted standby based on\n> inactive_since of the primary node sounds debatable because the\n> inactive_timeout could be different on the new node (promoted\n> standby).\n\nI think that if the slot is not invalidated before the promotion then we should\nerase the value from the primary and use the promotion time.\n\n> > Again, more concerned about external logical decoding plugin than pub/sub here.\n> >\n> > > > I agree that tracking the activity time of a synced slot can be useful, why\n> > > > not creating a dedicated field for that purpose (and keep inactive_since a\n> > > > perfect \"copy\" of the primary)?\n> > > >\n> > >\n> > > We can have a separate field for this but not sure if it is worth it.\n> >\n> > OTOH I'm not sure that erasing this information from the primary is useful. I\n> > think that 2 fields would be the best option and would be less subject of\n> > misinterpretation.\n> >\n> > > > > Now, the other concern is that calling GetCurrentTimestamp()\n> > > > > could be costly when the values for the slot are not going to be\n> > > > > updated but if that happens we can optimize such that before acquiring\n> > > > > the slot we can have some minimal pre-checks to ensure whether we need\n> > > > > to update the slot or not.\n> > > >\n> > > > Right, but for a very active slot it is likely that we call GetCurrentTimestamp()\n> > > > during almost each sync cycle.\n> > > >\n> > >\n> > > True, but if we have to save a slot to disk each time to persist the\n> > > changes (for an active slot) then probably GetCurrentTimestamp()\n> > > shouldn't be costly enough to matter.\n> >\n> > Right, persisting the changes to disk would be even more costly.\n> >\n> \n> The point I was making is that currently after copying the\n> remote_node's values, we always persist the slots to disk, so the cost\n> of current_time shouldn't be much.\n\nOh right, I missed this (was focusing only on inactive_since that we don't persist\nto disk IIRC).\n\nBTW, If we are going this way, maybe we could accept a bit less accuracy\nand use GetCurrentTransactionStopTimestamp() instead?\n\n> Now, if the values won't change\n> then probably there is some cost but in most cases (active slots), the\n> values will always change.\n\nRight.\n\n> Also, if all the slots are inactive then we\n> will slow down the speed of sync.\n\nYes.\n\n> We also need to consider if we want\n> to copy the value of inactive_since from the primary and if that is\n> the only value changed then shall we persist the slot or not?\n\nGood point, then I don't think we should as inactive_since is not persisted on disk.\n\n[1]: https://www.postgresql.org/message-id/ZgU70MjdOfO6l0O0%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 1 Apr 2024 06:51:08 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Mon, Apr 01, 2024 at 08:47:59AM +0530, Bharath Rupireddy wrote:\n> On Fri, 
Mar 29, 2024 at 9:39 AM Amit Kapila <[email protected]> wrote:\n> >\n> > Commit message states: \"why we can't just update inactive_since for\n> > synced slots on the standby with the value received from remote slot\n> > on the primary. This is consistent with any other slot parameter i.e.\n> > all of them are synced from the primary.\"\n> >\n> > The inactive_since is not consistent with other slot parameters which\n> > we copy. We don't perform anything related to those other parameters\n> > like say two_phase phase which can change that property. However, we\n> > do acquire the slot, advance the slot (as per recent discussion [1]),\n> > and release it. Since these operations can impact inactive_since, it\n> > seems to me that inactive_since is not the same as other parameters.\n> > It can have a different value than the primary. Why would anyone want\n> > to know the value of inactive_since from primary after the standby is\n> > promoted?\n> \n> After thinking about it for a while now, it feels to me that the\n> synced slots (slots on the standby that are being synced from the\n> primary) can have their own inactive_sicne value. Fundamentally,\n> inactive_sicne is set to 0 when slot is acquired and set to current\n> time when slot is released, no matter who acquires and releases it -\n> be it walsenders for replication, or backends for slot advance, or\n> backends for slot sync using pg_sync_replication_slots, or backends\n> for other slot functions, or background sync worker. Remember the\n> earlier patch was updating inactive_since just for walsenders, but\n> then the suggestion was to update it unconditionally -\n> https://www.postgresql.org/message-id/CAJpy0uD64X%3D2ENmbHaRiWTKeQawr-rbGoy_GdhQQLVXzUSKTMg%40mail.gmail.com.\n> Whoever syncs the slot, *acutally* acquires the slot i.e. makes it\n> theirs, syncs it from the primary, and releases it. IMO, no\n> differentiation is to be made for synced slots.\n> \n> There was a suggestion on using inactive_since of the synced slot on\n> the standby to know the inactivity of the slot on the primary. If one\n> wants to do that, they better look at/monitor the primary slot\n> info/logs/pg_replication_slot/whatever.\n\nYeah but the use case was in case the primary is down for whatever reason.\n\n> I really don't see a point in\n> having two different meanings for a single property of a replication\n> slot - inactive_since for a regular slot tells since when this slot\n> has become inactive, and for a synced slot since when the\n> corresponding remote slot has become inactive. I think this will\n> confuse users for sure.\n\nI'm not sure as we are speaking about \"synced\" slots. I can also see some confusion\nif this value is not \"synced\".\n\n> Also, if inactive_since is being changed on the primary so frequently,\n> and none of the other parameters are changing, if we copy\n> inactive_since to the synced slots, then standby will just be doing\n> *sync* work (mark the slots dirty and save to disk) for updating\n> inactive_since. I think this is unnecessary behaviour for sure.\n\nRight, I think we should avoid the save slot to disk in that case (question raised\nby Amit in [1]).\n\n> Coming to a future patch for inactive timeout based slot invalidation,\n> we can either allow invalidation without any differentiation for\n> synced slots or restrict invalidation to avoid more sync work. 
For\n> instance, if inactive timeout is kept low on the standby, the sync\n> worker will be doing more work as it drops and recreates a slot\n> repeatedly if it keeps getting invalidated. Another thing is that the\n> standby takes independent invalidation decisions for synced slots.\n> AFAICS, invalidation due to wal_removal is the only sole reason (out\n> of all available invalidation reasons) for a synced slot to get\n> invalidated independently of the primary. Check\n> https://www.postgresql.org/message-id/CAA4eK1JXBwTaDRD_%3D8t6UB1fhRNjC1C%2BgH4YdDxj_9U6djLnXw%40mail.gmail.com\n> for the suggestion on we better not differentiaing invalidation\n> decisions for synced slots.\n\nYeah, I think the invalidation decision on the standby is highly linked to\nwhat inactive_since on the standby is: synced from primary or not.\n\n> The assumption of letting synced slots have their own inactive_since\n> not only simplifies the code, but also looks less-confusing and more\n> meaningful to the user.\n\nI'm not sure at all. But if the majority of us thinks it's the case then let's\ngo that way.\n\n> > Now, the other concern is that calling GetCurrentTimestamp()\n> > could be costly when the values for the slot are not going to be\n> > updated but if that happens we can optimize such that before acquiring\n> > the slot we can have some minimal pre-checks to ensure whether we need\n> > to update the slot or not.\n\nAlso maybe we could accept a bit less accuracy and use\nGetCurrentTransactionStopTimestamp() instead?\n\n> If we are too much concerned about the cost of GetCurrentTimestamp(),\n> a possible approach is just don't set inactive_since for slots being\n> synced on the standby.\n> Just let the first acquisition and release\n> after the promotion do that job. We can always call this out in the\n> docs saying \"replication slots on the streaming standbys which are\n> being synced from the primary are not inactive in practice, so the\n> inactive_since is always NULL for them unless the standby is\n> promoted\".\n\nI think that was the initial behavior that lead to Robert's remark (see [2]):\n\n\"\nAnd I'm suspicious that having an exception for slots being synced is\na bad idea. That makes too much of a judgement about how the user will\nuse this field. It's usually better to just expose the data, and if\nthe user needs helps to make sense of that data, then give them that\nhelp separately.\n\"\n\n[1]: https://www.postgresql.org/message-id/CAA4eK1JtKieWMivbswYg5FVVB5FugCftLvQKVsxh%3Dm_8nk04vw%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/CA%2BTgmob_Ta-t2ty8QrKHBGnNLrf4ZYcwhGHGFsuUoFrAEDw4sA%40mail.gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 1 Apr 2024 07:18:55 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Sun, Mar 31, 2024 at 10:25:46AM +0530, Bharath Rupireddy wrote:\n> On Thu, Mar 28, 2024 at 3:13 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> > I think in this case it should always reflect the value from the primary (so\n> > that one can understand why it is invalidated).\n> \n> I'll come back to this as soon as we all agree on inactive_since\n> behavior for synced slots.\n\nMakes sense. 
Also if the majority of us thinks it's not needed for inactive_since\nto be an exact copy of the primary, then let's go that way.\n\n> > I think when it is invalidated it should always reflect the value from the\n> > primary (so that one can understand why it is invalidated).\n> \n> I'll come back to this as soon as we all agree on inactive_since\n> behavior for synced slots.\n\nYeah.\n\n> > T4 ===\n> >\n> > Also, it looks like querying pg_replication_slots() does not trigger an\n> > invalidation: I think it should if the slot is not invalidated yet (and matches\n> > the invalidation criteria).\n> \n> There's a different opinion on this, check comment #3 from\n> https://www.postgresql.org/message-id/CAA4eK1LLj%2BeaMN-K8oeOjfG%2BUuzTY%3DL5PXbcMJURZbFm%2B_aJSA%40mail.gmail.com.\n\nOh right, I can see Amit's point too. Let's put pg_replication_slots() out of\nthe game then.\n\n> > CR6 ===\n> >\n> > +static bool\n> > +InvalidateSlotForInactiveTimeout(ReplicationSlot *slot, bool need_locks)\n> > +{\n> >\n> > InvalidatePossiblyInactiveSlot() maybe?\n> \n> I think we will lose the essence i.e. timeout from the suggested\n> function name, otherwise just the inactive doesn't give a clearer\n> meaning. I kept it that way unless anyone suggests otherwise.\n\nRight. OTOH I think that \"Possibly\" adds some nuance (like InvalidatePossiblyObsoleteSlot()\nis already doing).\n\n> Please see the attached v30 patch. 0002 is where all of the above\n> review comments have been addressed.\n\nThanks! FYI, I did not look at the content yet, just replied to the above\ncomments.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 1 Apr 2024 09:59:51 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Apr 1, 2024 at 12:18 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Fri, Mar 29, 2024 at 9:39 AM Amit Kapila <[email protected]> wrote:\n> >\n> > Commit message states: \"why we can't just update inactive_since for\n> > synced slots on the standby with the value received from remote slot\n> > on the primary. This is consistent with any other slot parameter i.e.\n> > all of them are synced from the primary.\"\n> >\n> > The inactive_since is not consistent with other slot parameters which\n> > we copy. We don't perform anything related to those other parameters\n> > like say two_phase phase which can change that property. However, we\n> > do acquire the slot, advance the slot (as per recent discussion [1]),\n> > and release it. Since these operations can impact inactive_since, it\n> > seems to me that inactive_since is not the same as other parameters.\n> > It can have a different value than the primary. Why would anyone want\n> > to know the value of inactive_since from primary after the standby is\n> > promoted?\n>\n> After thinking about it for a while now, it feels to me that the\n> synced slots (slots on the standby that are being synced from the\n> primary) can have their own inactive_sicne value. 
Fundamentally,\n> inactive_sicne is set to 0 when slot is acquired and set to current\n> time when slot is released, no matter who acquires and releases it -\n> be it walsenders for replication, or backends for slot advance, or\n> backends for slot sync using pg_sync_replication_slots, or backends\n> for other slot functions, or background sync worker. Remember the\n> earlier patch was updating inactive_since just for walsenders, but\n> then the suggestion was to update it unconditionally -\n> https://www.postgresql.org/message-id/CAJpy0uD64X%3D2ENmbHaRiWTKeQawr-rbGoy_GdhQQLVXzUSKTMg%40mail.gmail.com.\n> Whoever syncs the slot, *acutally* acquires the slot i.e. makes it\n> theirs, syncs it from the primary, and releases it. IMO, no\n> differentiation is to be made for synced slots.\n\nFWIW, coming to this thread late, I think that the inactive_since\nshould not be synchronized from the primary. The wall clocks are\ndifferent on the primary and the standby so having the primary's\ntimestamp on the standby can confuse users, especially when there is a\nbig clock drift. Also, as Amit mentioned, inactive_since seems not to\nbe consistent with other parameters we copy. The\nreplication_slot_inactive_timeout feature should work on the standby\nindependent from the primary, like other slot invalidation mechanisms,\nand it should be based on its own local clock.\n\n> Coming to a future patch for inactive timeout based slot invalidation,\n> we can either allow invalidation without any differentiation for\n> synced slots or restrict invalidation to avoid more sync work. For\n> instance, if inactive timeout is kept low on the standby, the sync\n> worker will be doing more work as it drops and recreates a slot\n> repeatedly if it keeps getting invalidated. Another thing is that the\n> standby takes independent invalidation decisions for synced slots.\n> AFAICS, invalidation due to wal_removal is the only sole reason (out\n> of all available invalidation reasons) for a synced slot to get\n> invalidated independently of the primary. Check\n> https://www.postgresql.org/message-id/CAA4eK1JXBwTaDRD_%3D8t6UB1fhRNjC1C%2BgH4YdDxj_9U6djLnXw%40mail.gmail.com\n> for the suggestion on we better not differentiaing invalidation\n> decisions for synced slots.\n>\n> The assumption of letting synced slots have their own inactive_since\n> not only simplifies the code, but also looks less-confusing and more\n> meaningful to the user. The only code that we put in on top of the\n> committed code is to use InRecovery in place of\n> RecoveryInProgress() in RestoreSlotFromDisk() to fix the issue raised\n> by Shveta upthread.\n\nIf we want to invalidate the synced slots due to the timeout, I think\nwe need to define what is \"inactive\" for synced slots.\n\nSuppose that the slotsync worker updates the local (synced) slot's\ninactive_since whenever releasing the slot, irrespective of the actual\nLSNs (or other slot parameters) having been updated. I think that this\nidea cannot handle a slot that is not acquired on the primary. In this\ncase, the remote slot is inactive but the local slot is regarded as\nactive. WAL files are piled up on the standby (and on the primary) as\nthe slot's LSNs don't move forward. 
I think we want to regard such a\nslot as \"inactive\" also on the standby and invalidate it because of\nthe timeout.\n\n>\n> > Now, the other concern is that calling GetCurrentTimestamp()\n> > could be costly when the values for the slot are not going to be\n> > updated but if that happens we can optimize such that before acquiring\n> > the slot we can have some minimal pre-checks to ensure whether we need\n> > to update the slot or not.\n\nIf we use such pre-checks, another problem might happen; it cannot\nhandle a case where the slot is acquired on the primary but its LSNs\ndon't move forward. Imagine a logical replication conflict happened on\nthe subscriber, and the logical replication enters the retry loop. In\nthis case, the remote slot's inactive_since gets updated for every\nretry, but it looks inactive from the standby since the slot LSNs\ndon't change. Therefore, only the local slot could be invalidated due\nto the timeout but probably we don't want to regard such a slot as\n\"inactive\".\n\nAnother idea I came up with is that the slotsync worker updates the\nlocal slot's inactive_since to the local timestamp only when the\nremote slot might have got inactive. If the remote slot is acquired by\nsomeone, the local slot's inactive_since is also NULL. If the remote\nslot gets inactive, the slotsync worker sets the local timestamp to\nthe local slot's inactive_since. Since the remote slot could be\nacquired and released before the slotsync worker gets the remote slot\ndata again, if the remote slot's inactive_since > the local slot's\ninactive_since, the slotsync worker updates the local one. IOW, we\ndetect whether the remote slot was acquired and released since the\nlast synchronization, by checking the remote slot's inactive_since.\nThis idea seems to handle these cases I mentioned unless I'm missing\nsomething, but it requires for the slotsync worker to update\ninactive_since in a different way than other parameters.\n\nOr a simple solution is that the slotsync worker updates\ninactive_since as it does for non-synced slots, and disables\ntimeout-based slot invalidation for synced slots.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 2 Apr 2024 12:07:54 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Apr 02, 2024 at 12:07:54PM +0900, Masahiko Sawada wrote:\n> On Mon, Apr 1, 2024 at 12:18 PM Bharath Rupireddy\n> \n> FWIW, coming to this thread late, I think that the inactive_since\n> should not be synchronized from the primary. The wall clocks are\n> different on the primary and the standby so having the primary's\n> timestamp on the standby can confuse users, especially when there is a\n> big clock drift. Also, as Amit mentioned, inactive_since seems not to\n> be consistent with other parameters we copy. The\n> replication_slot_inactive_timeout feature should work on the standby\n> independent from the primary, like other slot invalidation mechanisms,\n> and it should be based on its own local clock.\n\nThanks for sharing your thoughts! 
So, it looks like that most of us agree to not\nsync inactive_since from the primary, I'm fine with that.\n\n> If we want to invalidate the synced slots due to the timeout, I think\n> we need to define what is \"inactive\" for synced slots.\n> \n> Suppose that the slotsync worker updates the local (synced) slot's\n> inactive_since whenever releasing the slot, irrespective of the actual\n> LSNs (or other slot parameters) having been updated. I think that this\n> idea cannot handle a slot that is not acquired on the primary. In this\n> case, the remote slot is inactive but the local slot is regarded as\n> active. WAL files are piled up on the standby (and on the primary) as\n> the slot's LSNs don't move forward. I think we want to regard such a\n> slot as \"inactive\" also on the standby and invalidate it because of\n> the timeout.\n\nI think that makes sense to somehow link inactive_since on the standby to \nthe actual LSNs (or other slot parameters) being updated or not.\n\n> > > Now, the other concern is that calling GetCurrentTimestamp()\n> > > could be costly when the values for the slot are not going to be\n> > > updated but if that happens we can optimize such that before acquiring\n> > > the slot we can have some minimal pre-checks to ensure whether we need\n> > > to update the slot or not.\n> \n> If we use such pre-checks, another problem might happen; it cannot\n> handle a case where the slot is acquired on the primary but its LSNs\n> don't move forward. Imagine a logical replication conflict happened on\n> the subscriber, and the logical replication enters the retry loop. In\n> this case, the remote slot's inactive_since gets updated for every\n> retry, but it looks inactive from the standby since the slot LSNs\n> don't change. Therefore, only the local slot could be invalidated due\n> to the timeout but probably we don't want to regard such a slot as\n> \"inactive\".\n> \n> Another idea I came up with is that the slotsync worker updates the\n> local slot's inactive_since to the local timestamp only when the\n> remote slot might have got inactive. If the remote slot is acquired by\n> someone, the local slot's inactive_since is also NULL. If the remote\n> slot gets inactive, the slotsync worker sets the local timestamp to\n> the local slot's inactive_since. Since the remote slot could be\n> acquired and released before the slotsync worker gets the remote slot\n> data again, if the remote slot's inactive_since > the local slot's\n> inactive_since, the slotsync worker updates the local one.\n\nThen I think we would need to be careful about time zone comparison.\n\n> IOW, we\n> detect whether the remote slot was acquired and released since the\n> last synchronization, by checking the remote slot's inactive_since.\n> This idea seems to handle these cases I mentioned unless I'm missing\n> something, but it requires for the slotsync worker to update\n> inactive_since in a different way than other parameters.\n> \n> Or a simple solution is that the slotsync worker updates\n> inactive_since as it does for non-synced slots, and disables\n> timeout-based slot invalidation for synced slots.\n\nYeah, I think the main question to help us decide is: do we want to invalidate\n\"inactive\" synced slots locally (in addition to synchronizing the invalidation\nfrom the primary)? 
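If we end up with the simple approach quoted above (synced slots keep getting their own inactive_since, but the timeout never invalidates them locally), the check itself stays tiny. Below is a rough standalone sketch of that shape, not the actual slot.c code: the ModelSlot struct, its field names and the helper are invented for illustration, with a plain time_t standing in for TimestampTz and an int of seconds standing in for the GUC.

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Made-up miniature of a replication slot, for illustration only. */
typedef struct ModelSlot
{
    bool        synced;             /* was this slot synced from the primary? */
    time_t      inactive_since;     /* 0 means the slot is active right now */
} ModelSlot;

/* Stand-in for the GUC: timeout in seconds, 0 disables the mechanism. */
static int  model_inactive_timeout = 86400;

/*
 * Return true only when the slot is a candidate for inactive-timeout
 * invalidation: the feature is enabled, the slot is not a synced slot,
 * it is currently inactive, and it has been inactive long enough.
 */
static bool
slot_inactive_timeout_exceeded(const ModelSlot *slot, time_t now)
{
    if (model_inactive_timeout == 0)
        return false;           /* feature disabled */
    if (slot->synced)
        return false;           /* synced slots never time out locally */
    if (slot->inactive_since == 0)
        return false;           /* slot is in use at the moment */
    return difftime(now, slot->inactive_since) >= model_inactive_timeout;
}

int
main(void)
{
    time_t      now = time(NULL);
    ModelSlot   synced_slot = {true, now - 2 * 86400};
    ModelSlot   local_slot = {false, now - 2 * 86400};

    printf("synced slot times out: %d\n",
           slot_inactive_timeout_exceeded(&synced_slot, now));
    printf("plain local slot times out: %d\n",
           slot_inactive_timeout_exceeded(&local_slot, now));
    return 0;
}

Whether the early return for synced slots should exist at all is exactly the question being asked here, so read that branch as one of the two options on the table rather than a settled decision.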
\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 2 Apr 2024 06:28:40 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Apr 2, 2024 at 11:58 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> > Or a simple solution is that the slotsync worker updates\n> > inactive_since as it does for non-synced slots, and disables\n> > timeout-based slot invalidation for synced slots.\n>\n> Yeah, I think the main question to help us decide is: do we want to invalidate\n> \"inactive\" synced slots locally (in addition to synchronizing the invalidation\n> from the primary)?\n\nI think this approach looks way simpler than the other one. The other\napproach of linking inactive_since on the standby for synced slots to\nthe actual LSNs (or other slot parameters) being updated or not looks\nmore complicated, and might not go well with the end user. However,\nwe need to be able to say why we don't invalidate synced slots due to\ninactive timeout unlike the wal_removed invalidation that can happen\nright now on the standby for synced slots. This leads us to define\nactually what a slot being active means. Is syncing the data from the\nremote slot considered as the slot being active?\n\nOn the other hand, it may not sound great if we don't invalidate\nsynced slots due to inactive timeout even though they hold resources\nsuch as WAL and XIDs.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 2 Apr 2024 12:41:35 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Tue, Apr 02, 2024 at 12:41:35PM +0530, Bharath Rupireddy wrote:\n> On Tue, Apr 2, 2024 at 11:58 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > > Or a simple solution is that the slotsync worker updates\n> > > inactive_since as it does for non-synced slots, and disables\n> > > timeout-based slot invalidation for synced slots.\n> >\n> > Yeah, I think the main question to help us decide is: do we want to invalidate\n> > \"inactive\" synced slots locally (in addition to synchronizing the invalidation\n> > from the primary)?\n> \n> I think this approach looks way simpler than the other one. The other\n> approach of linking inactive_since on the standby for synced slots to\n> the actual LSNs (or other slot parameters) being updated or not looks\n> more complicated, and might not go well with the end user. However,\n> we need to be able to say why we don't invalidate synced slots due to\n> inactive timeout unlike the wal_removed invalidation that can happen\n> right now on the standby for synced slots. This leads us to define\n> actually what a slot being active means. 
Is syncing the data from the\n> remote slot considered as the slot being active?\n> \n> On the other hand, it may not sound great if we don't invalidate\n> synced slots due to inactive timeout even though they hold resources\n> such as WAL and XIDs.\n\nRight and the \"only\" benefit then would be to give an idea as to when the last\nsync did occur on the local slot.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 2 Apr 2024 08:33:40 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Apr 2, 2024 at 11:58 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, Apr 02, 2024 at 12:07:54PM +0900, Masahiko Sawada wrote:\n> > On Mon, Apr 1, 2024 at 12:18 PM Bharath Rupireddy\n> >\n> > FWIW, coming to this thread late, I think that the inactive_since\n> > should not be synchronized from the primary. The wall clocks are\n> > different on the primary and the standby so having the primary's\n> > timestamp on the standby can confuse users, especially when there is a\n> > big clock drift. Also, as Amit mentioned, inactive_since seems not to\n> > be consistent with other parameters we copy. The\n> > replication_slot_inactive_timeout feature should work on the standby\n> > independent from the primary, like other slot invalidation mechanisms,\n> > and it should be based on its own local clock.\n>\n> Thanks for sharing your thoughts! So, it looks like that most of us agree to not\n> sync inactive_since from the primary, I'm fine with that.\n\n+1 on not syncing slots from primary.\n\n> > If we want to invalidate the synced slots due to the timeout, I think\n> > we need to define what is \"inactive\" for synced slots.\n> >\n> > Suppose that the slotsync worker updates the local (synced) slot's\n> > inactive_since whenever releasing the slot, irrespective of the actual\n> > LSNs (or other slot parameters) having been updated. I think that this\n> > idea cannot handle a slot that is not acquired on the primary. In this\n> > case, the remote slot is inactive but the local slot is regarded as\n> > active. WAL files are piled up on the standby (and on the primary) as\n> > the slot's LSNs don't move forward. I think we want to regard such a\n> > slot as \"inactive\" also on the standby and invalidate it because of\n> > the timeout.\n>\n> I think that makes sense to somehow link inactive_since on the standby to\n> the actual LSNs (or other slot parameters) being updated or not.\n>\n> > > > Now, the other concern is that calling GetCurrentTimestamp()\n> > > > could be costly when the values for the slot are not going to be\n> > > > updated but if that happens we can optimize such that before acquiring\n> > > > the slot we can have some minimal pre-checks to ensure whether we need\n> > > > to update the slot or not.\n> >\n> > If we use such pre-checks, another problem might happen; it cannot\n> > handle a case where the slot is acquired on the primary but its LSNs\n> > don't move forward. Imagine a logical replication conflict happened on\n> > the subscriber, and the logical replication enters the retry loop. In\n> > this case, the remote slot's inactive_since gets updated for every\n> > retry, but it looks inactive from the standby since the slot LSNs\n> > don't change. 
Therefore, only the local slot could be invalidated due\n> > to the timeout but probably we don't want to regard such a slot as\n> > \"inactive\".\n> >\n> > Another idea I came up with is that the slotsync worker updates the\n> > local slot's inactive_since to the local timestamp only when the\n> > remote slot might have got inactive. If the remote slot is acquired by\n> > someone, the local slot's inactive_since is also NULL. If the remote\n> > slot gets inactive, the slotsync worker sets the local timestamp to\n> > the local slot's inactive_since. Since the remote slot could be\n> > acquired and released before the slotsync worker gets the remote slot\n> > data again, if the remote slot's inactive_since > the local slot's\n> > inactive_since, the slotsync worker updates the local one.\n>\n> Then I think we would need to be careful about time zone comparison.\n\nYes. Also we need to consider the case when a user is relying on\npg_sync_replication_slots() and has not enabled slot-sync worker. In\nsuch a case if synced slot's inactive_since is derived from inactivity\nof remote-slot, it might not be that frequently updated (based on when\nthe user actually runs the SQL function) and thus may be misleading.\nOTOH, if inactivty_since of synced slots represents its own\ninactivity, then it will give correct info even for the case when the\nSQL function is run after a long time and slot-sync worker is\ndisabled.\n\n> > IOW, we\n> > detect whether the remote slot was acquired and released since the\n> > last synchronization, by checking the remote slot's inactive_since.\n> > This idea seems to handle these cases I mentioned unless I'm missing\n> > something, but it requires for the slotsync worker to update\n> > inactive_since in a different way than other parameters.\n> >\n> > Or a simple solution is that the slotsync worker updates\n> > inactive_since as it does for non-synced slots, and disables\n> > timeout-based slot invalidation for synced slots.\n\nI like this idea better, it takes care of such a case too when the\nuser is relying on sync-function rather than worker and does not want\nto get the slots invalidated in between 2 sync function calls.\n\n> Yeah, I think the main question to help us decide is: do we want to invalidate\n> \"inactive\" synced slots locally (in addition to synchronizing the invalidation\n> from the primary)?\n\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 3 Apr 2024 08:38:10 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Apr 3, 2024 at 8:38 AM shveta malik <[email protected]> wrote:\n>\n> > > Or a simple solution is that the slotsync worker updates\n> > > inactive_since as it does for non-synced slots, and disables\n> > > timeout-based slot invalidation for synced slots.\n>\n> I like this idea better, it takes care of such a case too when the\n> user is relying on sync-function rather than worker and does not want\n> to get the slots invalidated in between 2 sync function calls.\n\nPlease find the attached v31 patches implementing the above idea:\n\n- synced slots get their on inactive_since just like any other slot\n- synced slots don't get invalidated due to inactive timeout because\nsuch slots not considered active at all as they don't perform logical\ndecoding (of course, they will perform in fast_forward mode to fix the\nother data loss issue, but they don't generate changes for them to be\ncalled as 
*active* slots)\n- synced slots inactive_since is set to current timestamp after the\nstandby gets promoted to help inactive_since interpret correctly just\nlike any other slot.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 3 Apr 2024 11:17:41 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Apr 03, 2024 at 11:17:41AM +0530, Bharath Rupireddy wrote:\n> On Wed, Apr 3, 2024 at 8:38 AM shveta malik <[email protected]> wrote:\n> >\n> > > > Or a simple solution is that the slotsync worker updates\n> > > > inactive_since as it does for non-synced slots, and disables\n> > > > timeout-based slot invalidation for synced slots.\n> >\n> > I like this idea better, it takes care of such a case too when the\n> > user is relying on sync-function rather than worker and does not want\n> > to get the slots invalidated in between 2 sync function calls.\n> \n> Please find the attached v31 patches implementing the above idea:\n\nThanks!\n\nSome comments related to v31-0001:\n\n=== testing the behavior\n\nT1 ===\n\n> - synced slots get their on inactive_since just like any other slot\n\nIt behaves as described.\n\nT2 ===\n\n> - synced slots inactive_since is set to current timestamp after the\n> standby gets promoted to help inactive_since interpret correctly just\n> like any other slot.\n \nIt behaves as described.\n\nCR1 ===\n\n+ <structfield>inactive_since</structfield> value will get updated\n+ after every synchronization\n\nindicates the last synchronization time? (I think that after every synchronization\ncould lead to confusion).\n\nCR2 ===\n\n+ /*\n+ * Set the time since the slot has become inactive after shutting\n+ * down slot sync machinery. This helps correctly interpret the\n+ * time if the standby gets promoted without a restart.\n+ */\n\nIt looks to me that this comment is not at the right place because there is\nnothing after the comment that indicates that we shutdown the \"slot sync machinery\".\n\nMaybe a better place is before the function definition and mention that this is\ncurrently called when we shutdown the \"slot sync machinery\"?\n\nCR3 ===\n\n+ * We get the current time beforehand and only once to avoid\n+ * system calls overhead while holding the lock.\n\ns/avoid system calls overhead while holding the lock/avoid system calls while holding the spinlock/?\n\nCR4 ===\n\n+ * Set the time since the slot has become inactive. 
We get the current\n+ * time beforehand to avoid system call overhead while holding the lock\n\nSame.\n\nCR5 ===\n\n+ # Check that the captured time is sane\n+ if (defined $reference_time)\n+ {\n\ns/Check that the captured time is sane/Check that the inactive_since is sane/?\n\nSorry if some of those comments could have been done while I did review v29-0001.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 3 Apr 2024 06:50:19 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Apr 03, 2024 at 11:17:41AM +0530, Bharath Rupireddy wrote:\n> On Wed, Apr 3, 2024 at 8:38 AM shveta malik <[email protected]> wrote:\n> >\n> > > > Or a simple solution is that the slotsync worker updates\n> > > > inactive_since as it does for non-synced slots, and disables\n> > > > timeout-based slot invalidation for synced slots.\n> >\n> > I like this idea better, it takes care of such a case too when the\n> > user is relying on sync-function rather than worker and does not want\n> > to get the slots invalidated in between 2 sync function calls.\n> \n> Please find the attached v31 patches implementing the above idea:\n\nThanks!\n\nSome comments regarding v31-0002:\n\n=== testing the behavior\n\nT1 ===\n\n> - synced slots don't get invalidated due to inactive timeout because\n> such slots not considered active at all as they don't perform logical\n> decoding (of course, they will perform in fast_forward mode to fix the\n> other data loss issue, but they don't generate changes for them to be\n> called as *active* slots)\n\nIt behaves as described. OTOH non synced logical slots on the standby and\nphysical slots on the standby are invalidated which is what is expected.\n\nT2 ===\n\nIn case the slot is invalidated on the primary,\n\nprimary:\n\npostgres=# select slot_name, inactive_since, invalidation_reason from pg_replication_slots where slot_name = 's1';\n slot_name | inactive_since | invalidation_reason\n-----------+-------------------------------+---------------------\n s1 | 2024-04-03 06:56:28.075637+00 | inactive_timeout\n\nthen on the standby we get:\n\nstandby:\n\npostgres=# select slot_name, inactive_since, invalidation_reason from pg_replication_slots where slot_name = 's1';\n slot_name | inactive_since | invalidation_reason\n-----------+------------------------------+---------------------\n s1 | 2024-04-03 07:06:43.37486+00 | inactive_timeout\n\nshouldn't the slot be dropped/recreated instead of updating inactive_since?\n\n=== code\n\nCR1 ===\n\n+ Invalidates replication slots that are inactive for longer the\n+ specified amount of time\n\ns/for longer the/for longer that/?\n\nCR2 ===\n\n+ <literal>true</literal>) as such synced slots don't actually perform\n+ logical decoding.\n\nWe're switching in fast forward logical due to [1], so I'm not sure that's 100%\naccurate here. 
I'm not sure we need to specify a reason.\n\nCR3 ===\n\n+ errdetail(\"This slot has been invalidated because it was inactive for more than the time specified by replication_slot_inactive_timeout parameter.\")));\n\nI think we can remove \"parameter\" (see for example the error message in\nvalidate_remote_info()) and reduce it a bit, something like?\n\n\"This slot has been invalidated because it was inactive for more than replication_slot_inactive_timeout\"?\n\nCR4 ===\n\n+ appendStringInfoString(&err_detail, _(\"The slot has been inactive for more than the time specified by replication_slot_inactive_timeout parameter.\"));\n\nSame.\n\nCR5 ===\n\n+ /*\n+ * This function isn't expected to be called for inactive timeout based\n+ * invalidation. A separate function InvalidateInactiveReplicationSlot is\n+ * to be used for that.\n\nDo you think it's worth to explain why?\n\nCR6 ===\n\n+ if (replication_slot_inactive_timeout == 0)\n+ return false;\n+ else if (slot->inactive_since > 0)\n\n\"else\" is not needed here.\n\nCR7 ===\n\n+ SpinLockAcquire(&slot->mutex);\n+\n+ /*\n+ * Check if the slot needs to be invalidated due to\n+ * replication_slot_inactive_timeout GUC. We do this with the spinlock\n+ * held to avoid race conditions -- for example the inactive_since\n+ * could change, or the slot could be dropped.\n+ */\n+ now = GetCurrentTimestamp();\n\nWe should not call GetCurrentTimestamp() while holding a spinlock.\n\nCR8 ===\n\n+# Testcase start: Invalidate streaming standby's slot as well as logical\n+# failover slot on primary due to inactive timeout GUC. Also, check the logical\n\ns/inactive timeout GUC/replication_slot_inactive_timeout/?\n\nCR9 ===\n\n+# Start: Helper functions used for this test file\n+# End: Helper functions used for this test file\n\nI think that's the first TAP test with this comment. Not saying we should not but\nwhy did you feel the need to add those?\n\n[1]: https://www.postgresql.org/message-id/OS0PR01MB5716B3942AE49F3F725ACA92943B2@OS0PR01MB5716.jpnprd01.prod.outlook.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 3 Apr 2024 08:17:05 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Apr 3, 2024 at 11:17 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Apr 3, 2024 at 8:38 AM shveta malik <[email protected]> wrote:\n> >\n> > > > Or a simple solution is that the slotsync worker updates\n> > > > inactive_since as it does for non-synced slots, and disables\n> > > > timeout-based slot invalidation for synced slots.\n> >\n> > I like this idea better, it takes care of such a case too when the\n> > user is relying on sync-function rather than worker and does not want\n> > to get the slots invalidated in between 2 sync function calls.\n>\n> Please find the attached v31 patches implementing the above idea:\n>\n\nThanks for the patches, please find few comments:\n\nv31-001:\n\n1)\nsystem-views.sgml:\nvalue will get updated after every synchronization from the\ncorresponding remote slot on the primary.\n\n--This is confusing. 
It will be good to rephrase it.\n\n2)\nupdate_synced_slots_inactive_since()\n\n--May be, we should mention in the header that this function is called\nonly during promotion.\n\n3) 040_standby_failover_slots_sync.pl:\nWe capture inactive_since_on_primary when we do this for the first time at #175\nALTER SUBSCRIPTION regress_mysub1 DISABLE\"\n\nBut we again recreate the sub and disable it at line #280.\nDo you think we shall get inactive_since_on_primary again here, to be\ncompared with inactive_since_on_new_primary later?\n\n\nv31-002:\n(I had reviewed v29-002 but missed to post comments, I think these\nare still applicable)\n\n1) I think replication_slot_inactivity_timeout was recommended here\n(instead of replication_slot_inactive_timeout, so please give it a\nthought):\nhttps://www.postgresql.org/message-id/202403260739.udlp7lxixktx%40alvherre.pgsql\n\n2) Commit msg:\na)\n\"It is often easy for developers to set a timeout of say 1\nor 2 or 3 days at slot level, after which the inactive slots get\ndropped.\"\n\nShall we say invalidated rather than dropped?\n\nb)\n\"To achieve the above, postgres introduces a GUC allowing users\nset inactive timeout and then a slot stays inactive for this much\namount of time it invalidates the slot.\"\n\nBroken sentence.\n\n<have not reviewed 002 patch in detail yet>\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 3 Apr 2024 14:57:55 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Apr 3, 2024 at 12:20 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Wed, Apr 03, 2024 at 11:17:41AM +0530, Bharath Rupireddy wrote:\n> > On Wed, Apr 3, 2024 at 8:38 AM shveta malik <[email protected]> wrote:\n> > >\n> > > > > Or a simple solution is that the slotsync worker updates\n> > > > > inactive_since as it does for non-synced slots, and disables\n> > > > > timeout-based slot invalidation for synced slots.\n> > >\n> > > I like this idea better, it takes care of such a case too when the\n> > > user is relying on sync-function rather than worker and does not want\n> > > to get the slots invalidated in between 2 sync function calls.\n> >\n> > Please find the attached v31 patches implementing the above idea:\n>\n> Thanks!\n>\n> Some comments related to v31-0001:\n>\n> === testing the behavior\n>\n> T1 ===\n>\n> > - synced slots get their on inactive_since just like any other slot\n>\n> It behaves as described.\n>\n> T2 ===\n>\n> > - synced slots inactive_since is set to current timestamp after the\n> > standby gets promoted to help inactive_since interpret correctly just\n> > like any other slot.\n>\n> It behaves as described.\n>\n> CR1 ===\n>\n> + <structfield>inactive_since</structfield> value will get updated\n> + after every synchronization\n>\n> indicates the last synchronization time? (I think that after every synchronization\n> could lead to confusion).\n>\n\n+1.\n\n> CR2 ===\n>\n> + /*\n> + * Set the time since the slot has become inactive after shutting\n> + * down slot sync machinery. 
This helps correctly interpret the\n> + * time if the standby gets promoted without a restart.\n> + */\n>\n> It looks to me that this comment is not at the right place because there is\n> nothing after the comment that indicates that we shutdown the \"slot sync machinery\".\n>\n> Maybe a better place is before the function definition and mention that this is\n> currently called when we shutdown the \"slot sync machinery\"?\n>\n\nWon't it be better to have an assert for SlotSyncCtx->pid? IIRC, we\nhave some existing issues where we don't ensure that no one is running\nsync API before shutdown is complete but I think we can deal with that\nseparately and here we can still have an Assert.\n\n> CR3 ===\n>\n> + * We get the current time beforehand and only once to avoid\n> + * system calls overhead while holding the lock.\n>\n> s/avoid system calls overhead while holding the lock/avoid system calls while holding the spinlock/?\n>\n\nIs it valid to say that there is overhead of this call while holding\nspinlock? Because I don't think at the time of promotion we expect any\nother concurrent slot activity. The first reason seems good enough.\n\nOne other observation:\n--- a/src/backend/replication/slot.c\n+++ b/src/backend/replication/slot.c\n@@ -42,6 +42,7 @@\n #include \"access/transam.h\"\n #include \"access/xlog_internal.h\"\n #include \"access/xlogrecovery.h\"\n+#include \"access/xlogutils.h\"\n\nIs there a reason for this inclusion? I don't see any change which\nshould need this one.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 3 Apr 2024 15:32:16 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Apr 3, 2024 at 2:58 PM shveta malik <[email protected]> wrote:\n>\n> On Wed, Apr 3, 2024 at 11:17 AM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Wed, Apr 3, 2024 at 8:38 AM shveta malik <[email protected]> wrote:\n> > >\n> > > > > Or a simple solution is that the slotsync worker updates\n> > > > > inactive_since as it does for non-synced slots, and disables\n> > > > > timeout-based slot invalidation for synced slots.\n> > >\n> > > I like this idea better, it takes care of such a case too when the\n> > > user is relying on sync-function rather than worker and does not want\n> > > to get the slots invalidated in between 2 sync function calls.\n> >\n> > Please find the attached v31 patches implementing the above idea:\n> >\n>\n> Thanks for the patches, please find few comments:\n>\n> v31-001:\n>\n> 1)\n> system-views.sgml:\n> value will get updated after every synchronization from the\n> corresponding remote slot on the primary.\n>\n> --This is confusing. 
It will be good to rephrase it.\n>\n> 2)\n> update_synced_slots_inactive_since()\n>\n> --May be, we should mention in the header that this function is called\n> only during promotion.\n>\n> 3) 040_standby_failover_slots_sync.pl:\n> We capture inactive_since_on_primary when we do this for the first time at #175\n> ALTER SUBSCRIPTION regress_mysub1 DISABLE\"\n>\n> But we again recreate the sub and disable it at line #280.\n> Do you think we shall get inactive_since_on_primary again here, to be\n> compared with inactive_since_on_new_primary later?\n>\n\nI think so.\n\nFew additional comments on tests:\n1.\n+is( $standby1->safe_psql(\n+ 'postgres',\n+ \"SELECT '$inactive_since_on_primary'::timestamptz <\n'$inactive_since_on_standby'::timestamptz AND\n+ '$inactive_since_on_standby'::timestamptz < '$slot_sync_time'::timestamptz;\"\n\nShall we do <= check as we are doing in the main function\nget_slot_inactive_since_value as the time duration is less so it can\nbe the same as well? Similarly, please check other tests.\n\n2.\n+=item $node->get_slot_inactive_since_value(self, slot_name, reference_time)\n+\n+Get inactive_since column value for a given replication slot validating it\n+against optional reference time.\n+\n+=cut\n+\n+sub get_slot_inactive_since_value\n\nI see that all callers validate against reference time. It is better\nto name it validate_slot_inactive_since rather than using get_* as the\nmain purpose is to validate the passed value.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 3 Apr 2024 16:19:46 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Apr 3, 2024 at 12:20 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> > Please find the attached v31 patches implementing the above idea:\n>\n> Some comments related to v31-0001:\n>\n> === testing the behavior\n>\n> T1 ===\n>\n> > - synced slots get their on inactive_since just like any other slot\n>\n> It behaves as described.\n>\n> T2 ===\n>\n> > - synced slots inactive_since is set to current timestamp after the\n> > standby gets promoted to help inactive_since interpret correctly just\n> > like any other slot.\n>\n> It behaves as described.\n\nThanks for testing.\n\n> CR1 ===\n>\n> + <structfield>inactive_since</structfield> value will get updated\n> + after every synchronization\n>\n> indicates the last synchronization time? (I think that after every synchronization\n> could lead to confusion).\n\nDone.\n\n> CR2 ===\n>\n> + /*\n> + * Set the time since the slot has become inactive after shutting\n> + * down slot sync machinery. This helps correctly interpret the\n> + * time if the standby gets promoted without a restart.\n> + */\n>\n> It looks to me that this comment is not at the right place because there is\n> nothing after the comment that indicates that we shutdown the \"slot sync machinery\".\n>\n> Maybe a better place is before the function definition and mention that this is\n> currently called when we shutdown the \"slot sync machinery\"?\n\nDone.\n\n> CR3 ===\n>\n> + * We get the current time beforehand and only once to avoid\n> + * system calls overhead while holding the lock.\n>\n> s/avoid system calls overhead while holding the lock/avoid system calls while holding the spinlock/?\n\nDone.\n\n> CR4 ===\n>\n> + * Set the time since the slot has become inactive. 
We get the current\n> + * time beforehand to avoid system call overhead while holding the lock\n>\n> Same.\n\nDone.\n\n> CR5 ===\n>\n> + # Check that the captured time is sane\n> + if (defined $reference_time)\n> + {\n>\n> s/Check that the captured time is sane/Check that the inactive_since is sane/?\n>\n> Sorry if some of those comments could have been done while I did review v29-0001.\n\nDone.\n\nOn Wed, Apr 3, 2024 at 2:58 PM shveta malik <[email protected]> wrote:\n>\n> Thanks for the patches, please find few comments:\n>\n> v31-001:\n>\n> 1)\n> system-views.sgml:\n> value will get updated after every synchronization from the\n> corresponding remote slot on the primary.\n>\n> --This is confusing. It will be good to rephrase it.\n\nDone as per Bertrand's suggestion.\n\n> 2)\n> update_synced_slots_inactive_since()\n>\n> --May be, we should mention in the header that this function is called\n> only during promotion.\n\nDone as per Bertrand's suggestion.\n\n> 3) 040_standby_failover_slots_sync.pl:\n> We capture inactive_since_on_primary when we do this for the first time at #175\n> ALTER SUBSCRIPTION regress_mysub1 DISABLE\"\n>\n> But we again recreate the sub and disable it at line #280.\n> Do you think we shall get inactive_since_on_primary again here, to be\n> compared with inactive_since_on_new_primary later?\n\nHm. Done that. Recapturing both slot_creation_time_on_primary and\ninactive_since_on_primary before and after CREATE SUBSCRIPTION creates\nthe slot again on the primary/publisher.\n\nOn Wed, Apr 3, 2024 at 3:32 PM Amit Kapila <[email protected]> wrote:\n>\n> > CR2 ===\n> >\n> > + /*\n> > + * Set the time since the slot has become inactive after shutting\n> > + * down slot sync machinery. This helps correctly interpret the\n> > + * time if the standby gets promoted without a restart.\n> > + */\n> >\n> > It looks to me that this comment is not at the right place because there is\n> > nothing after the comment that indicates that we shutdown the \"slot sync machinery\".\n> >\n> > Maybe a better place is before the function definition and mention that this is\n> > currently called when we shutdown the \"slot sync machinery\"?\n> >\n> Won't it be better to have an assert for SlotSyncCtx->pid? IIRC, we\n> have some existing issues where we don't ensure that no one is running\n> sync API before shutdown is complete but I think we can deal with that\n> separately and here we can still have an Assert.\n\nThat can work to ensure the slot sync worker isn't running as\nSlotSyncCtx->pid gets updated only for the slot sync worker. I added\nthis assertion for now.\n\nWe need to ensure (in a separate patch and thread) there is no backend\nacquiring it and performing sync while the slot sync worker is\nshutting down. Otherwise, some of the slots can get resynced and some\nare not while we are shutting down the slot sync worker as part of the\nstandby promotion which might leave the slots in an inconsistent\nstate.\n\n> > CR3 ===\n> >\n> > + * We get the current time beforehand and only once to avoid\n> > + * system calls overhead while holding the lock.\n> >\n> > s/avoid system calls overhead while holding the lock/avoid system calls while holding the spinlock/?\n> >\n> Is it valid to say that there is overhead of this call while holding\n> spinlock? Because I don't think at the time of promotion we expect any\n> other concurrent slot activity. 
The first reason seems good enough.\n\nNo slot activity but why GetCurrentTimestamp needs to be called every\ntime in a loop.\n\n> One other observation:\n> --- a/src/backend/replication/slot.c\n> +++ b/src/backend/replication/slot.c\n> @@ -42,6 +42,7 @@\n> #include \"access/transam.h\"\n> #include \"access/xlog_internal.h\"\n> #include \"access/xlogrecovery.h\"\n> +#include \"access/xlogutils.h\"\n>\n> Is there a reason for this inclusion? I don't see any change which\n> should need this one.\n\nNot anymore. It was earlier needed for using the InRecovery flag in\nthe then approach.\n\nOn Wed, Apr 3, 2024 at 4:19 PM Amit Kapila <[email protected]> wrote:\n>\n> > 3) 040_standby_failover_slots_sync.pl:\n> > We capture inactive_since_on_primary when we do this for the first time at #175\n> > ALTER SUBSCRIPTION regress_mysub1 DISABLE\"\n> >\n> > But we again recreate the sub and disable it at line #280.\n> > Do you think we shall get inactive_since_on_primary again here, to be\n> > compared with inactive_since_on_new_primary later?\n> >\n>\n> I think so.\n\nModified this to recapture the times before and after the slot gets recreated.\n\n> Few additional comments on tests:\n> 1.\n> +is( $standby1->safe_psql(\n> + 'postgres',\n> + \"SELECT '$inactive_since_on_primary'::timestamptz <\n> '$inactive_since_on_standby'::timestamptz AND\n> + '$inactive_since_on_standby'::timestamptz < '$slot_sync_time'::timestamptz;\"\n>\n> Shall we do <= check as we are doing in the main function\n> get_slot_inactive_since_value as the time duration is less so it can\n> be the same as well? Similarly, please check other tests.\n\nI get you. If the tests are so fast that losing a bit of precision\nmight cause tests to fail. So, I'v added equality check for all the\ntests.\n\n> 2.\n> +=item $node->get_slot_inactive_since_value(self, slot_name, reference_time)\n> +\n> +Get inactive_since column value for a given replication slot validating it\n> +against optional reference time.\n> +\n> +=cut\n> +\n> +sub get_slot_inactive_since_value\n>\n> I see that all callers validate against reference time. It is better\n> to name it validate_slot_inactive_since rather than using get_* as the\n> main purpose is to validate the passed value.\n\nExisting callers yes. Also, I've removed the reference time as an\noptional parameter.\n\nPer an offlist chat with Amit, I've added the following note in\nsynchronize_one_slot:\n\n@@ -584,6 +585,11 @@ synchronize_one_slot(RemoteSlot *remote_slot, Oid\nremote_dbid)\n * overwriting 'invalidated' flag to remote_slot's value. See\n * InvalidatePossiblyObsoleteSlot() where it invalidates slot directly\n * if the slot is not acquired by other processes.\n+ *\n+ * XXX: If it ever turns out that slot acquire/release is costly for\n+ * cases when none of the slot property is changed then we can do a\n+ * pre-check to ensure that at least one of the slot property is\n+ * changed before acquiring the slot.\n */\n ReplicationSlotAcquire(remote_slot->name, true);\n\nPlease find the attached v32-0001 patch with the above review comments\naddressed. 
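As an aside on that XXX note: the pre-check it alludes to would boil down to comparing the interesting slot properties before bothering with ReplicationSlotAcquire(), roughly like the hypothetical sketch below. The struct, the field list and the function name here are invented, and a real version would have to compare exactly the properties synchronize_one_slot() copies, so treat this purely as an illustration of the idea.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Made-up, minimal stand-in for the slot data being compared. */
typedef struct ModelSlotData
{
    uint64_t    confirmed_flush;
    uint64_t    restart_lsn;
    uint32_t    catalog_xmin;
    bool        failover;
    char        plugin[64];
} ModelSlotData;

/*
 * Hypothetical pre-check: true only if at least one property differs,
 * i.e. acquiring the slot to update it would actually change something.
 */
static bool
slot_properties_changed(const ModelSlotData *remote, const ModelSlotData *local)
{
    return remote->confirmed_flush != local->confirmed_flush ||
        remote->restart_lsn != local->restart_lsn ||
        remote->catalog_xmin != local->catalog_xmin ||
        remote->failover != local->failover ||
        strcmp(remote->plugin, local->plugin) != 0;
}

int
main(void)
{
    ModelSlotData remote = {100, 90, 1234, true, "pgoutput"};
    ModelSlotData local = remote;

    printf("anything to sync: %d\n", slot_properties_changed(&remote, &local));
    remote.confirmed_flush = 120;
    printf("anything to sync: %d\n", slot_properties_changed(&remote, &local));
    return 0;
}

The XXX comment keeps this as a possible future optimization only, since it is not yet clear that the acquire/release is costly enough to justify it.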
I'm working on review comments for 0002.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 3 Apr 2024 17:12:12 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Apr 03, 2024 at 05:12:12PM +0530, Bharath Rupireddy wrote:\n> On Wed, Apr 3, 2024 at 4:19 PM Amit Kapila <[email protected]> wrote:\n> >\n> > + 'postgres',\n> > + \"SELECT '$inactive_since_on_primary'::timestamptz <\n> > '$inactive_since_on_standby'::timestamptz AND\n> > + '$inactive_since_on_standby'::timestamptz < '$slot_sync_time'::timestamptz;\"\n> >\n> > Shall we do <= check as we are doing in the main function\n> > get_slot_inactive_since_value as the time duration is less so it can\n> > be the same as well? Similarly, please check other tests.\n> \n> I get you. If the tests are so fast that losing a bit of precision\n> might cause tests to fail. So, I'v added equality check for all the\n> tests.\n\n> Please find the attached v32-0001 patch with the above review comments\n> addressed.\n\nThanks!\n\nJust one comment on v32-0001:\n\n+# Synced slot on the standby must get its own inactive_since.\n+is( $standby1->safe_psql(\n+ 'postgres',\n+ \"SELECT '$inactive_since_on_primary'::timestamptz <= '$inactive_since_on_standby'::timestamptz AND\n+ '$inactive_since_on_standby'::timestamptz <= '$slot_sync_time'::timestamptz;\"\n+ ),\n+ \"t\",\n+ 'synchronized slot has got its own inactive_since');\n+\n\nBy using <= we are not testing that it must get its own inactive_since (as we\nallow them to be equal in the test). I think we should just add some usleep()\nwhere appropriate and deny equality during the tests on inactive_since.\n\nExcept for the above, v32-0001 LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 3 Apr 2024 13:16:25 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Apr 3, 2024 at 6:46 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Just one comment on v32-0001:\n>\n> +# Synced slot on the standby must get its own inactive_since.\n> +is( $standby1->safe_psql(\n> + 'postgres',\n> + \"SELECT '$inactive_since_on_primary'::timestamptz <= '$inactive_since_on_standby'::timestamptz AND\n> + '$inactive_since_on_standby'::timestamptz <= '$slot_sync_time'::timestamptz;\"\n> + ),\n> + \"t\",\n> + 'synchronized slot has got its own inactive_since');\n> +\n>\n> By using <= we are not testing that it must get its own inactive_since (as we\n> allow them to be equal in the test). I think we should just add some usleep()\n> where appropriate and deny equality during the tests on inactive_since.\n\nThanks. It looks like we can ignore the equality in all of the\ninactive_since comparisons. IIUC, all the TAP tests do run with\nprimary and standbys on the single BF animals. And, it looks like\nassigning the inactive_since timestamps to perl variables is giving\nthe microseconds precision level\n(./tmp_check/log/regress_log_040_standby_failover_slots_sync:inactive_since\n2024-04-03 14:30:09.691648+00). FWIW, we already have some TAP and SQL\ntests relying on stats_reset timestamps without equality. 
So, I've\nleft the equality for the inactive_since tests.\n\n> Except for the above, v32-0001 LGTM.\n\nThanks. Please see the attached v33-0001 patch after removing equality\non inactive_since TAP tests.\n\nOn Wed, Apr 3, 2024 at 1:47 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Some comments regarding v31-0002:\n>\n> === testing the behavior\n>\n> T1 ===\n>\n> > - synced slots don't get invalidated due to inactive timeout because\n> > such slots not considered active at all as they don't perform logical\n> > decoding (of course, they will perform in fast_forward mode to fix the\n> > other data loss issue, but they don't generate changes for them to be\n> > called as *active* slots)\n>\n> It behaves as described. OTOH non synced logical slots on the standby and\n> physical slots on the standby are invalidated which is what is expected.\n\nRight.\n\n> T2 ===\n>\n> In case the slot is invalidated on the primary,\n>\n> primary:\n>\n> postgres=# select slot_name, inactive_since, invalidation_reason from pg_replication_slots where slot_name = 's1';\n> slot_name | inactive_since | invalidation_reason\n> -----------+-------------------------------+---------------------\n> s1 | 2024-04-03 06:56:28.075637+00 | inactive_timeout\n>\n> then on the standby we get:\n>\n> standby:\n>\n> postgres=# select slot_name, inactive_since, invalidation_reason from pg_replication_slots where slot_name = 's1';\n> slot_name | inactive_since | invalidation_reason\n> -----------+------------------------------+---------------------\n> s1 | 2024-04-03 07:06:43.37486+00 | inactive_timeout\n>\n> shouldn't the slot be dropped/recreated instead of updating inactive_since?\n\nThe sync slots that are invalidated on the primary aren't dropped and\nrecreated on the standby. There's no point in doing so because\ninvalidated slots on the primary can't be made useful. However, I\nfound that the synced slot is acquired and released unnecessarily\nafter the invalidation_reason is synced from the primary. I added a\nskip check in synchronize_one_slot to skip acquiring and releasing the\nslot if it's locally found inactive. With this, inactive_since won't\nget updated for invalidated sync slots on the standby as we don't\nacquire and release the slot.\n\n> === code\n>\n> CR1 ===\n>\n> + Invalidates replication slots that are inactive for longer the\n> + specified amount of time\n>\n> s/for longer the/for longer that/?\n\nFixed.\n\n> CR2 ===\n>\n> + <literal>true</literal>) as such synced slots don't actually perform\n> + logical decoding.\n>\n> We're switching in fast forward logical due to [1], so I'm not sure that's 100%\n> accurate here. I'm not sure we need to specify a reason.\n\nFixed.\n\n> CR3 ===\n>\n> + errdetail(\"This slot has been invalidated because it was inactive for more than the time specified by replication_slot_inactive_timeout parameter.\")));\n>\n> I think we can remove \"parameter\" (see for example the error message in\n> validate_remote_info()) and reduce it a bit, something like?\n>\n> \"This slot has been invalidated because it was inactive for more than replication_slot_inactive_timeout\"?\n\nDone.\n\n> CR4 ===\n>\n> + appendStringInfoString(&err_detail, _(\"The slot has been inactive for more than the time specified by replication_slot_inactive_timeout parameter.\"));\n>\n> Same.\n\nDone. 
Changed it to \"The slot has been inactive for more than\nreplication_slot_inactive_timeout.\"\n\n> CR5 ===\n>\n> + /*\n> + * This function isn't expected to be called for inactive timeout based\n> + * invalidation. A separate function InvalidateInactiveReplicationSlot is\n> + * to be used for that.\n>\n> Do you think it's worth to explain why?\n\nHm, I just wanted to point out the actual function here. I modified it\nto something like the following, if others feel we don't need that, I\ncan remove it.\n\n /*\n * Use InvalidateInactiveReplicationSlot for inactive timeout based\n * invalidation.\n */\n\n> CR6 ===\n>\n> + if (replication_slot_inactive_timeout == 0)\n> + return false;\n> + else if (slot->inactive_since > 0)\n>\n> \"else\" is not needed here.\n\nNothing wrong there, but removed.\n\n> CR7 ===\n>\n> + SpinLockAcquire(&slot->mutex);\n> +\n> + /*\n> + * Check if the slot needs to be invalidated due to\n> + * replication_slot_inactive_timeout GUC. We do this with the spinlock\n> + * held to avoid race conditions -- for example the inactive_since\n> + * could change, or the slot could be dropped.\n> + */\n> + now = GetCurrentTimestamp();\n>\n> We should not call GetCurrentTimestamp() while holding a spinlock.\n\nI was thinking why to add up the wait time to acquire\nLWLockAcquire(ReplicationSlotControlLock, LW_SHARED);. Now that I\nmoved it up before the spinlock but after the LWLockAcquire.\n\n> CR8 ===\n>\n> +# Testcase start: Invalidate streaming standby's slot as well as logical\n> +# failover slot on primary due to inactive timeout GUC. Also, check the logical\n>\n> s/inactive timeout GUC/replication_slot_inactive_timeout/?\n\nDone.\n\n> CR9 ===\n>\n> +# Start: Helper functions used for this test file\n> +# End: Helper functions used for this test file\n>\n> I think that's the first TAP test with this comment. Not saying we should not but\n> why did you feel the need to add those?\n\nHm. Removed.\n\n> [1]: https://www.postgresql.org/message-id/OS0PR01MB5716B3942AE49F3F725ACA92943B2@OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n\nOn Wed, Apr 3, 2024 at 2:58 PM shveta malik <[email protected]> wrote:\n>\n> v31-002:\n> (I had reviewed v29-002 but missed to post comments, I think these\n> are still applicable)\n>\n> 1) I think replication_slot_inactivity_timeout was recommended here\n> (instead of replication_slot_inactive_timeout, so please give it a\n> thought):\n> https://www.postgresql.org/message-id/202403260739.udlp7lxixktx%40alvherre.pgsql\n\nYeah. It's synonymous with inactive_since. If others have an opinion\nto have replication_slot_inactivity_timeout, I'm fine with it.\n\n> 2) Commit msg:\n> a)\n> \"It is often easy for developers to set a timeout of say 1\n> or 2 or 3 days at slot level, after which the inactive slots get\n> dropped.\"\n>\n> Shall we say invalidated rather than dropped?\n\nRight. 
Done that.\n\n> b)\n> \"To achieve the above, postgres introduces a GUC allowing users\n> set inactive timeout and then a slot stays inactive for this much\n> amount of time it invalidates the slot.\"\n>\n> Broken sentence.\n\nReworded it a bit.\n\nPlease find the attached v33 patches.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 3 Apr 2024 20:28:04 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Wed, Apr 03, 2024 at 08:28:04PM +0530, Bharath Rupireddy wrote:\n> On Wed, Apr 3, 2024 at 6:46 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Just one comment on v32-0001:\n> >\n> > +# Synced slot on the standby must get its own inactive_since.\n> > +is( $standby1->safe_psql(\n> > + 'postgres',\n> > + \"SELECT '$inactive_since_on_primary'::timestamptz <= '$inactive_since_on_standby'::timestamptz AND\n> > + '$inactive_since_on_standby'::timestamptz <= '$slot_sync_time'::timestamptz;\"\n> > + ),\n> > + \"t\",\n> > + 'synchronized slot has got its own inactive_since');\n> > +\n> >\n> > By using <= we are not testing that it must get its own inactive_since (as we\n> > allow them to be equal in the test). I think we should just add some usleep()\n> > where appropriate and deny equality during the tests on inactive_since.\n> \n> > Except for the above, v32-0001 LGTM.\n> \n> Thanks. Please see the attached v33-0001 patch after removing equality\n> on inactive_since TAP tests.\n\nThanks! v33-0001 LGTM.\n\n> On Wed, Apr 3, 2024 at 1:47 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> > Some comments regarding v31-0002:\n> >\n> > T2 ===\n> >\n> > In case the slot is invalidated on the primary,\n> >\n> > primary:\n> >\n> > postgres=# select slot_name, inactive_since, invalidation_reason from pg_replication_slots where slot_name = 's1';\n> > slot_name | inactive_since | invalidation_reason\n> > -----------+-------------------------------+---------------------\n> > s1 | 2024-04-03 06:56:28.075637+00 | inactive_timeout\n> >\n> > then on the standby we get:\n> >\n> > standby:\n> >\n> > postgres=# select slot_name, inactive_since, invalidation_reason from pg_replication_slots where slot_name = 's1';\n> > slot_name | inactive_since | invalidation_reason\n> > -----------+------------------------------+---------------------\n> > s1 | 2024-04-03 07:06:43.37486+00 | inactive_timeout\n> >\n> > shouldn't the slot be dropped/recreated instead of updating inactive_since?\n> \n> The sync slots that are invalidated on the primary aren't dropped and\n> recreated on the standby.\n\nYeah, right (I was confused with synced slots that are invalidated locally).\n\n> However, I\n> found that the synced slot is acquired and released unnecessarily\n> after the invalidation_reason is synced from the primary. I added a\n> skip check in synchronize_one_slot to skip acquiring and releasing the\n> slot if it's locally found inactive. With this, inactive_since won't\n> get updated for invalidated sync slots on the standby as we don't\n> acquire and release the slot.\n\nCR1 ===\n\nYeah, I can see:\n\n@@ -575,6 +575,13 @@ synchronize_one_slot(RemoteSlot *remote_slot, Oid remote_dbid)\n \" name slot \\\"%s\\\" already exists on the standby\",\n remote_slot->name));\n\n+ /*\n+ * Skip the sync if the local slot is already invalidated. 
We do this\n+ * beforehand to save on slot acquire and release.\n+ */\n+ if (slot->data.invalidated != RS_INVAL_NONE)\n+ return false;\n\nThanks to the drop_local_obsolete_slots() call I think we are not missing the case\nwhere the slot has been invalidated on the primary, invalidation reason has been\nsynced on the standby and later the slot is dropped/ recreated manually on the\nprimary (then it should be dropped/recreated on the standby too).\n\nAlso it seems we are not missing the case where a sync slot is invalidated\nlocally due to wal removal (it should be dropped/recreated).\n\n> \n> > CR5 ===\n> >\n> > + /*\n> > + * This function isn't expected to be called for inactive timeout based\n> > + * invalidation. A separate function InvalidateInactiveReplicationSlot is\n> > + * to be used for that.\n> >\n> > Do you think it's worth to explain why?\n> \n> Hm, I just wanted to point out the actual function here. I modified it\n> to something like the following, if others feel we don't need that, I\n> can remove it.\n\nSorry If I was not clear but I meant to say \"Do you think it's worth to explain\nwhy we decided to create a dedicated function\"? (currently we \"just\" explain that\nwe created one).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 3 Apr 2024 16:27:45 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Apr 3, 2024 at 11:58 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n>\n> Please find the attached v33 patches.\n\n@@ -1368,6 +1416,7 @@ ShutDownSlotSync(void)\n if (SlotSyncCtx->pid == InvalidPid)\n {\n SpinLockRelease(&SlotSyncCtx->mutex);\n+ update_synced_slots_inactive_since();\n return;\n }\n SpinLockRelease(&SlotSyncCtx->mutex);\n@@ -1400,6 +1449,8 @@ ShutDownSlotSync(void)\n }\n\n SpinLockRelease(&SlotSyncCtx->mutex);\n+\n+ update_synced_slots_inactive_since();\n }\n\nWhy do we want to update all synced slots' inactive_since values at\nshutdown in spite of updating the value every time when releasing the\nslot? It seems to contradict the fact that inactive_since is updated\nwhen releasing or restoring the slot.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 4 Apr 2024 13:11:31 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Apr 4, 2024 at 9:42 AM Masahiko Sawada <[email protected]> wrote:\n>\n> @@ -1368,6 +1416,7 @@ ShutDownSlotSync(void)\n> if (SlotSyncCtx->pid == InvalidPid)\n> {\n> SpinLockRelease(&SlotSyncCtx->mutex);\n> + update_synced_slots_inactive_since();\n> return;\n> }\n> SpinLockRelease(&SlotSyncCtx->mutex);\n> @@ -1400,6 +1449,8 @@ ShutDownSlotSync(void)\n> }\n>\n> SpinLockRelease(&SlotSyncCtx->mutex);\n> +\n> + update_synced_slots_inactive_since();\n> }\n>\n> Why do we want to update all synced slots' inactive_since values at\n> shutdown in spite of updating the value every time when releasing the\n> slot? 
It seems to contradict the fact that inactive_since is updated\n> when releasing or restoring the slot.\n\nIt is to get the inactive_since right for the cases where the standby\nis promoted without a restart similar to when a standby is promoted\nwith restart in which case the inactive_since is set to current time\nin RestoreSlotFromDisk.\n\nImagine the slot is synced last time at time t1 and then a few hours\npassed, the standby is promoted without a restart. If we don't set\ninactive_since to current time in this case in ShutDownSlotSync, the\ninactive timeout invalidation mechanism can kick in immediately.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 4 Apr 2024 10:04:02 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Apr 3, 2024 at 8:28 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Apr 3, 2024 at 6:46 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Just one comment on v32-0001:\n> >\n> > +# Synced slot on the standby must get its own inactive_since.\n> > +is( $standby1->safe_psql(\n> > + 'postgres',\n> > + \"SELECT '$inactive_since_on_primary'::timestamptz <= '$inactive_since_on_standby'::timestamptz AND\n> > + '$inactive_since_on_standby'::timestamptz <= '$slot_sync_time'::timestamptz;\"\n> > + ),\n> > + \"t\",\n> > + 'synchronized slot has got its own inactive_since');\n> > +\n> >\n> > By using <= we are not testing that it must get its own inactive_since (as we\n> > allow them to be equal in the test). I think we should just add some usleep()\n> > where appropriate and deny equality during the tests on inactive_since.\n>\n> Thanks. It looks like we can ignore the equality in all of the\n> inactive_since comparisons. IIUC, all the TAP tests do run with\n> primary and standbys on the single BF animals. And, it looks like\n> assigning the inactive_since timestamps to perl variables is giving\n> the microseconds precision level\n> (./tmp_check/log/regress_log_040_standby_failover_slots_sync:inactive_since\n> 2024-04-03 14:30:09.691648+00). FWIW, we already have some TAP and SQL\n> tests relying on stats_reset timestamps without equality. So, I've\n> left the equality for the inactive_since tests.\n>\n> > Except for the above, v32-0001 LGTM.\n>\n> Thanks. Please see the attached v33-0001 patch after removing equality\n> on inactive_since TAP tests.\n>\n\nThe v33-0001 looks good to me. I have made minor changes in the\ncomments/commit message and removed one part of the test which was a\nbit confusing and didn't seem to add much value. Let me know what you\nthink of the attached?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 4 Apr 2024 10:48:11 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Apr 4, 2024 at 10:48 AM Amit Kapila <[email protected]> wrote:\n>\n> The v33-0001 looks good to me. I have made minor changes in the\n> comments/commit message and removed one part of the test which was a\n> bit confusing and didn't seem to add much value. Let me know what you\n> think of the attached?\n\nThanks for the changes. 
v34-0001 LGTM.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 4 Apr 2024 11:12:11 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Apr 4, 2024 at 1:34 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Apr 4, 2024 at 9:42 AM Masahiko Sawada <[email protected]> wrote:\n> >\n> > @@ -1368,6 +1416,7 @@ ShutDownSlotSync(void)\n> > if (SlotSyncCtx->pid == InvalidPid)\n> > {\n> > SpinLockRelease(&SlotSyncCtx->mutex);\n> > + update_synced_slots_inactive_since();\n> > return;\n> > }\n> > SpinLockRelease(&SlotSyncCtx->mutex);\n> > @@ -1400,6 +1449,8 @@ ShutDownSlotSync(void)\n> > }\n> >\n> > SpinLockRelease(&SlotSyncCtx->mutex);\n> > +\n> > + update_synced_slots_inactive_since();\n> > }\n> >\n> > Why do we want to update all synced slots' inactive_since values at\n> > shutdown in spite of updating the value every time when releasing the\n> > slot? It seems to contradict the fact that inactive_since is updated\n> > when releasing or restoring the slot.\n>\n> It is to get the inactive_since right for the cases where the standby\n> is promoted without a restart similar to when a standby is promoted\n> with restart in which case the inactive_since is set to current time\n> in RestoreSlotFromDisk.\n>\n> Imagine the slot is synced last time at time t1 and then a few hours\n> passed, the standby is promoted without a restart. If we don't set\n> inactive_since to current time in this case in ShutDownSlotSync, the\n> inactive timeout invalidation mechanism can kick in immediately.\n>\n\nThank you for the explanation! I understood the needs.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 4 Apr 2024 17:01:30 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Apr 4, 2024 at 1:32 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Apr 4, 2024 at 1:34 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Thu, Apr 4, 2024 at 9:42 AM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > > @@ -1368,6 +1416,7 @@ ShutDownSlotSync(void)\n> > > if (SlotSyncCtx->pid == InvalidPid)\n> > > {\n> > > SpinLockRelease(&SlotSyncCtx->mutex);\n> > > + update_synced_slots_inactive_since();\n> > > return;\n> > > }\n> > > SpinLockRelease(&SlotSyncCtx->mutex);\n> > > @@ -1400,6 +1449,8 @@ ShutDownSlotSync(void)\n> > > }\n> > >\n> > > SpinLockRelease(&SlotSyncCtx->mutex);\n> > > +\n> > > + update_synced_slots_inactive_since();\n> > > }\n> > >\n> > > Why do we want to update all synced slots' inactive_since values at\n> > > shutdown in spite of updating the value every time when releasing the\n> > > slot? It seems to contradict the fact that inactive_since is updated\n> > > when releasing or restoring the slot.\n> >\n> > It is to get the inactive_since right for the cases where the standby\n> > is promoted without a restart similar to when a standby is promoted\n> > with restart in which case the inactive_since is set to current time\n> > in RestoreSlotFromDisk.\n> >\n> > Imagine the slot is synced last time at time t1 and then a few hours\n> > passed, the standby is promoted without a restart. 
If we don't set\n> > inactive_since to current time in this case in ShutDownSlotSync, the\n> > inactive timeout invalidation mechanism can kick in immediately.\n> >\n>\n> Thank you for the explanation! I understood the needs.\n>\n\nDo you want to review the v34_0001* further or shall I proceed with\nthe commit of the same?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 4 Apr 2024 14:06:42 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Apr 4, 2024 at 5:36 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Apr 4, 2024 at 1:32 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Thu, Apr 4, 2024 at 1:34 PM Bharath Rupireddy\n> > <[email protected]> wrote:\n> > >\n> > > On Thu, Apr 4, 2024 at 9:42 AM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > > @@ -1368,6 +1416,7 @@ ShutDownSlotSync(void)\n> > > > if (SlotSyncCtx->pid == InvalidPid)\n> > > > {\n> > > > SpinLockRelease(&SlotSyncCtx->mutex);\n> > > > + update_synced_slots_inactive_since();\n> > > > return;\n> > > > }\n> > > > SpinLockRelease(&SlotSyncCtx->mutex);\n> > > > @@ -1400,6 +1449,8 @@ ShutDownSlotSync(void)\n> > > > }\n> > > >\n> > > > SpinLockRelease(&SlotSyncCtx->mutex);\n> > > > +\n> > > > + update_synced_slots_inactive_since();\n> > > > }\n> > > >\n> > > > Why do we want to update all synced slots' inactive_since values at\n> > > > shutdown in spite of updating the value every time when releasing the\n> > > > slot? It seems to contradict the fact that inactive_since is updated\n> > > > when releasing or restoring the slot.\n> > >\n> > > It is to get the inactive_since right for the cases where the standby\n> > > is promoted without a restart similar to when a standby is promoted\n> > > with restart in which case the inactive_since is set to current time\n> > > in RestoreSlotFromDisk.\n> > >\n> > > Imagine the slot is synced last time at time t1 and then a few hours\n> > > passed, the standby is promoted without a restart. If we don't set\n> > > inactive_since to current time in this case in ShutDownSlotSync, the\n> > > inactive timeout invalidation mechanism can kick in immediately.\n> > >\n> >\n> > Thank you for the explanation! I understood the needs.\n> >\n>\n> Do you want to review the v34_0001* further or shall I proceed with\n> the commit of the same?\n\nThanks for asking. The v34-0001 patch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 4 Apr 2024 18:05:02 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Apr 4, 2024 at 11:12 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Apr 4, 2024 at 10:48 AM Amit Kapila <[email protected]> wrote:\n> >\n> > The v33-0001 looks good to me. I have made minor changes in the\n> > comments/commit message and removed one part of the test which was a\n> > bit confusing and didn't seem to add much value. Let me know what you\n> > think of the attached?\n>\n> Thanks for the changes. v34-0001 LGTM.\n>\n\nI was doing a final review before pushing 0001 and found that\n'inactive_since' could be set twice during startup after promotion,\nonce while restoring slots and then via ShutDownSlotSync(). 
The reason\nis that ShutDownSlotSync() will be invoked in normal startup on\nprimary though it won't do anything apart from setting inactive_since\nif we have synced slots. I think you need to check 'StandbyMode' in\nupdate_synced_slots_inactive_since() and return if the same is not\nset. We can't use 'InRecovery' flag as that will be set even during\ncrash recovery.\n\nCan you please test this once unless you don't agree with the above theory?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 4 Apr 2024 16:35:01 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Apr 4, 2024 at 4:35 PM Amit Kapila <[email protected]> wrote:\n>\n> > Thanks for the changes. v34-0001 LGTM.\n>\n> I was doing a final review before pushing 0001 and found that\n> 'inactive_since' could be set twice during startup after promotion,\n> once while restoring slots and then via ShutDownSlotSync(). The reason\n> is that ShutDownSlotSync() will be invoked in normal startup on\n> primary though it won't do anything apart from setting inactive_since\n> if we have synced slots. I think you need to check 'StandbyMode' in\n> update_synced_slots_inactive_since() and return if the same is not\n> set. We can't use 'InRecovery' flag as that will be set even during\n> crash recovery.\n>\n> Can you please test this once unless you don't agree with the above theory?\n\nNice catch. I've verified that update_synced_slots_inactive_since is\ncalled even for normal server startups/crash recovery. I've added a\ncheck to exit if the StandbyMode isn't set.\n\nPlease find the attached v35 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 4 Apr 2024 17:52:50 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Apr 4, 2024 at 5:53 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Apr 4, 2024 at 4:35 PM Amit Kapila <[email protected]> wrote:\n> >\n> > > Thanks for the changes. v34-0001 LGTM.\n> >\n> > I was doing a final review before pushing 0001 and found that\n> > 'inactive_since' could be set twice during startup after promotion,\n> > once while restoring slots and then via ShutDownSlotSync(). The reason\n> > is that ShutDownSlotSync() will be invoked in normal startup on\n> > primary though it won't do anything apart from setting inactive_since\n> > if we have synced slots. I think you need to check 'StandbyMode' in\n> > update_synced_slots_inactive_since() and return if the same is not\n> > set. We can't use 'InRecovery' flag as that will be set even during\n> > crash recovery.\n> >\n> > Can you please test this once unless you don't agree with the above theory?\n>\n> Nice catch. I've verified that update_synced_slots_inactive_since is\n> called even for normal server startups/crash recovery. I've added a\n> check to exit if the StandbyMode isn't set.\n>\n> Please find the attached v35 patch.\n\nThanks for the patch. Tested it , works well. Few cosmetic changes needed:\n\nin 040 test file:\n1)\n# Capture the inactive_since of the slot from the primary. Note that the slot\n# will be inactive since the corresponding subscription is disabled..\n\n2 .. at the end. 
Replace with one.\n\n2)\n# Synced slot on the standby must get its own inactive_since.\n\n. not needed in single line comment (to be consistent with\nneighbouring comments)\n\n\n3)\nupdate_synced_slots_inactive_since():\n\nif (!StandbyMode)\nreturn;\n\nIt will be good to add comments here.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 5 Apr 2024 08:39:13 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Apr 3, 2024 at 9:57 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> > > shouldn't the slot be dropped/recreated instead of updating inactive_since?\n> >\n> > The sync slots that are invalidated on the primary aren't dropped and\n> > recreated on the standby.\n>\n> Yeah, right (I was confused with synced slots that are invalidated locally).\n>\n> > However, I\n> > found that the synced slot is acquired and released unnecessarily\n> > after the invalidation_reason is synced from the primary. I added a\n> > skip check in synchronize_one_slot to skip acquiring and releasing the\n> > slot if it's locally found inactive. With this, inactive_since won't\n> > get updated for invalidated sync slots on the standby as we don't\n> > acquire and release the slot.\n>\n> CR1 ===\n>\n> Yeah, I can see:\n>\n> @@ -575,6 +575,13 @@ synchronize_one_slot(RemoteSlot *remote_slot, Oid remote_dbid)\n> \" name slot \\\"%s\\\" already exists on the standby\",\n> remote_slot->name));\n>\n> + /*\n> + * Skip the sync if the local slot is already invalidated. We do this\n> + * beforehand to save on slot acquire and release.\n> + */\n> + if (slot->data.invalidated != RS_INVAL_NONE)\n> + return false;\n>\n> Thanks to the drop_local_obsolete_slots() call I think we are not missing the case\n> where the slot has been invalidated on the primary, invalidation reason has been\n> synced on the standby and later the slot is dropped/ recreated manually on the\n> primary (then it should be dropped/recreated on the standby too).\n>\n> Also it seems we are not missing the case where a sync slot is invalidated\n> locally due to wal removal (it should be dropped/recreated).\n\nRight.\n\n> > > CR5 ===\n> > >\n> > > + /*\n> > > + * This function isn't expected to be called for inactive timeout based\n> > > + * invalidation. A separate function InvalidateInactiveReplicationSlot is\n> > > + * to be used for that.\n> > >\n> > > Do you think it's worth to explain why?\n> >\n> > Hm, I just wanted to point out the actual function here. I modified it\n> > to something like the following, if others feel we don't need that, I\n> > can remove it.\n>\n> Sorry If I was not clear but I meant to say \"Do you think it's worth to explain\n> why we decided to create a dedicated function\"? 
(currently we \"just\" explain that\n> we created one).\n\nWe added a new function (InvalidateInactiveReplicationSlot) to\ninvalidate slot based on inactive timeout because 1) we do the\ninactive timeout invalidation at slot level as opposed to\nInvalidateObsoleteReplicationSlots which does loop over all the slots,\n2)\nInvalidatePossiblyObsoleteSlot does release the lock in some cases,\nhas a lot of unneeded code for inactive timeout invalidation check, 3)\nwe want some control over saving the slot to disk because we hook the\ninactive timeout invalidation into the loop that checkpoints the slot\ninfo to the disk in CheckPointReplicationSlots.\n\nI've added a comment atop InvalidateInactiveReplicationSlot.\n\nPlease find the attached v36 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 5 Apr 2024 11:21:43 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Fri, Apr 05, 2024 at 11:21:43AM +0530, Bharath Rupireddy wrote:\n> On Wed, Apr 3, 2024 at 9:57 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> Please find the attached v36 patch.\n\nThanks!\n\nA few comments:\n\n1 ===\n\n+ <para>\n+ The timeout is measured from the time since the slot has become\n+ inactive (known from its\n+ <structfield>inactive_since</structfield> value) until it gets\n+ used (i.e., its <structfield>active</structfield> is set to true).\n+ </para>\n\nThat's right except when it's invalidated during the checkpoint (as the slot\nis not acquired in CheckPointReplicationSlots()).\n\nSo, what about adding: \"or a checkpoint occurs\"? That would also explain that\nthe invalidation could occur during checkpoint.\n\n2 ===\n\n+ /* If the slot has been invalidated, recalculate the resource limits */\n+ if (invalidated)\n+ {\n\n/If the slot/If a slot/?\n\n3 ===\n\n+ * NB - this function also runs as part of checkpoint, so avoid raising errors\n\ns/NB - this/NB: This function/? (that looks more consistent with other comments\nin the code)\n\n4 ===\n\n+ * Note that having a new function for RS_INVAL_INACTIVE_TIMEOUT cause instead\n\nI understand it's \"the RS_INVAL_INACTIVE_TIMEOUT cause\" but reading \"cause instead\"\nlooks weird to me. Maybe it would make sense to reword this a bit.\n\n5 ===\n\n+ * considered not active as they don't actually perform logical decoding.\n\nNot sure that's 100% accurate as we switched in fast forward logical\nin 2ec005b4e2.\n\n\"as they perform only fast forward logical decoding (or not at all)\", maybe?\n\n6 ===\n\n+ if (RecoveryInProgress() && slot->data.synced)\n+ return false;\n+\n+ if (replication_slot_inactive_timeout == 0)\n+ return false;\n\nWhat about just using one if? 
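For illustration, the single-if variant being suggested could look roughly
like this (names reused from the quoted hunk above; a sketch only, not the
actual patch code):

    if (replication_slot_inactive_timeout == 0 ||
        (RecoveryInProgress() && slot->data.synced))
        return false;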
It's more a matter of taste but it also probably\nreduces the object file size a bit for non optimized build.\n\n7 ===\n\n+ /*\n+ * Do not invalidate the slots which are currently being synced from\n+ * the primary to the standby.\n+ */\n+ if (RecoveryInProgress() && slot->data.synced)\n+ return false;\n\nI think we don't need this check as the exact same one is done just before.\n\n8 ===\n\n+sub check_for_slot_invalidation_in_server_log\n+{\n+ my ($node, $slot_name, $offset) = @_;\n+ my $invalidated = 0;\n+\n+ for (my $i = 0; $i < 10 * $PostgreSQL::Test::Utils::timeout_default; $i++)\n+ {\n+ $node->safe_psql('postgres', \"CHECKPOINT\");\n\nWouldn't be better to wait for the replication_slot_inactive_timeout time before\ninstead of triggering all those checkpoints? (it could be passed as an extra arg\nto wait_for_slot_invalidation()).\n\n9 ===\n\n# Synced slot mustn't get invalidated on the standby, it must sync invalidation\n# from the primary. So, we must not see the slot's invalidation message in server\n# log.\nok( !$standby1->log_contains(\n \"invalidating obsolete replication slot \\\"lsub1_sync_slot\\\"\",\n $standby1_logstart),\n 'check that syned slot has not been invalidated on the standby');\n\nWould that make sense to trigger a checkpoint on the standby before this test?\nI mean I think that without a checkpoint on the standby we should not see the\ninvalidation in the log anyway.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 5 Apr 2024 07:43:58 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Fri, Apr 5, 2024 at 1:14 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> > Please find the attached v36 patch.\n>\n> A few comments:\n>\n> 1 ===\n>\n> + <para>\n> + The timeout is measured from the time since the slot has become\n> + inactive (known from its\n> + <structfield>inactive_since</structfield> value) until it gets\n> + used (i.e., its <structfield>active</structfield> is set to true).\n> + </para>\n>\n> That's right except when it's invalidated during the checkpoint (as the slot\n> is not acquired in CheckPointReplicationSlots()).\n>\n> So, what about adding: \"or a checkpoint occurs\"? That would also explain that\n> the invalidation could occur during checkpoint.\n\nReworded.\n\n> 2 ===\n>\n> + /* If the slot has been invalidated, recalculate the resource limits */\n> + if (invalidated)\n> + {\n>\n> /If the slot/If a slot/?\n\nModified it to be like elsewhere.\n\n> 3 ===\n>\n> + * NB - this function also runs as part of checkpoint, so avoid raising errors\n>\n> s/NB - this/NB: This function/? (that looks more consistent with other comments\n> in the code)\n\nDone.\n\n> 4 ===\n>\n> + * Note that having a new function for RS_INVAL_INACTIVE_TIMEOUT cause instead\n>\n> I understand it's \"the RS_INVAL_INACTIVE_TIMEOUT cause\" but reading \"cause instead\"\n> looks weird to me. Maybe it would make sense to reword this a bit.\n\nReworded.\n\n> 5 ===\n>\n> + * considered not active as they don't actually perform logical decoding.\n>\n> Not sure that's 100% accurate as we switched in fast forward logical\n> in 2ec005b4e2.\n>\n> \"as they perform only fast forward logical decoding (or not at all)\", maybe?\n\nChanged it to \"as they don't perform logical decoding to produce the\nchanges\". 
In fast_forward mode no changes are produced.\n\n> 6 ===\n>\n> + if (RecoveryInProgress() && slot->data.synced)\n> + return false;\n> +\n> + if (replication_slot_inactive_timeout == 0)\n> + return false;\n>\n> What about just using one if? It's more a matter of taste but it also probably\n> reduces the object file size a bit for non optimized build.\n\nChanged.\n\n> 7 ===\n>\n> + /*\n> + * Do not invalidate the slots which are currently being synced from\n> + * the primary to the standby.\n> + */\n> + if (RecoveryInProgress() && slot->data.synced)\n> + return false;\n>\n> I think we don't need this check as the exact same one is done just before.\n\nRight. Removed.\n\n> 8 ===\n>\n> +sub check_for_slot_invalidation_in_server_log\n> +{\n> + my ($node, $slot_name, $offset) = @_;\n> + my $invalidated = 0;\n> +\n> + for (my $i = 0; $i < 10 * $PostgreSQL::Test::Utils::timeout_default; $i++)\n> + {\n> + $node->safe_psql('postgres', \"CHECKPOINT\");\n>\n> Wouldn't be better to wait for the replication_slot_inactive_timeout time before\n> instead of triggering all those checkpoints? (it could be passed as an extra arg\n> to wait_for_slot_invalidation()).\n\nDone.\n\n> 9 ===\n>\n> # Synced slot mustn't get invalidated on the standby, it must sync invalidation\n> # from the primary. So, we must not see the slot's invalidation message in server\n> # log.\n> ok( !$standby1->log_contains(\n> \"invalidating obsolete replication slot \\\"lsub1_sync_slot\\\"\",\n> $standby1_logstart),\n> 'check that syned slot has not been invalidated on the standby');\n>\n> Would that make sense to trigger a checkpoint on the standby before this test?\n> I mean I think that without a checkpoint on the standby we should not see the\n> invalidation in the log anyway.\n\nDone.\n\nPlease find the attached v37 patch for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 6 Apr 2024 11:55:38 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sat, Apr 6, 2024 at 11:55 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n\nWhy the handling w.r.t active_pid in InvalidatePossiblyInactiveSlot()\nis not similar to InvalidatePossiblyObsoleteSlot(). Won't we need to\nensure that there is no other active slot user? Is it sufficient to\ncheck inactive_since for the same? If so, we need some comments to\nexplain the same.\n\nCan we avoid introducing the new functions like\nSaveGivenReplicationSlot() and MarkGivenReplicationSlotDirty(), if we\ndo the required work in the caller?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 6 Apr 2024 12:18:34 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sat, Apr 6, 2024 at 12:18 PM Amit Kapila <[email protected]> wrote:\n>\n> Why the handling w.r.t active_pid in InvalidatePossiblyInactiveSlot()\n> is not similar to InvalidatePossiblyObsoleteSlot(). Won't we need to\n> ensure that there is no other active slot user? Is it sufficient to\n> check inactive_since for the same? 
If so, we need some comments to\n> explain the same.\n\nI removed the separate functions and with minimal changes, I've now\nplaced the RS_INVAL_INACTIVE_TIMEOUT logic into\nInvalidatePossiblyObsoleteSlot and use that even in\nCheckPointReplicationSlots.\n\n> Can we avoid introducing the new functions like\n> SaveGivenReplicationSlot() and MarkGivenReplicationSlotDirty(), if we\n> do the required work in the caller?\n\nHm. Removed them now.\n\nPlease see the attached v38 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 6 Apr 2024 17:10:19 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sat, Apr 6, 2024 at 5:10 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Please see the attached v38 patch.\n\nHi, thanks everyone for reviewing the design and patches so far. Here\nI'm with the v39 patches implementing inactive timeout based (0001)\nand XID age based (0002) invalidation mechanisms.\n\nI'm quoting the hackers who are okay with inactive timeout based\ninvalidation mechanism:\nBertrand Drouvot -\nhttps://www.postgresql.org/message-id/ZgL0N%2BxVJNkyqsKL%40ip-10-97-1-34.eu-west-3.compute.internal\nand https://www.postgresql.org/message-id/ZgPHDAlM79iLtGIH%40ip-10-97-1-34.eu-west-3.compute.internal\nAmit Kapila - https://www.postgresql.org/message-id/CAA4eK1L3awyzWMuymLJUm8SoFEQe%3DDa9KUwCcAfC31RNJ1xdJA%40mail.gmail.com\nNathan Bossart -\nhttps://www.postgresql.org/message-id/20240325195443.GA2923888%40nathanxps13\nRobert Haas - https://www.postgresql.org/message-id/CA%2BTgmoZTbaaEjSZUG1FL0mzxAdN3qmXksO3O9_PZhEuXTkVnRQ%40mail.gmail.com\n\nI'm quoting the hackers who are okay with XID age based invalidation mechanism:\nNathan Bossart -\nhttps://www.postgresql.org/message-id/20240326150918.GB3181099%40nathanxps13\nand https://www.postgresql.org/message-id/20240327150557.GA3994937%40nathanxps13\nAlvaro Herrera -\nhttps://www.postgresql.org/message-id/202403261539.xcjfle7sksz7%40alvherre.pgsql\nBertrand Drouvot -\nhttps://www.postgresql.org/message-id/ZgPHDAlM79iLtGIH%40ip-10-97-1-34.eu-west-3.compute.internal\nAmit Kapila - https://www.postgresql.org/message-id/CAA4eK1L3awyzWMuymLJUm8SoFEQe%3DDa9KUwCcAfC31RNJ1xdJA%40mail.gmail.com\n\nThere was a point raised by Robert\nhttps://www.postgresql.org/message-id/CA%2BTgmoaRECcnyqxAxUhP5dk2S4HX%3DpGh-p-PkA3uc%2BjG_9hiMw%40mail.gmail.com\nfor XID age based invalidation. An issue related to\nvacuum_defer_cleanup_age\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=be504a3e974d75be6f95c8f9b7367126034f2d12\nled to the removal of the GUC\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=1118cd37eb61e6a2428f457a8b2026a7bb3f801a.\nThe same issue may not happen for the XID age based invaliation. This\nis because the XID age is not calculated using FullTransactionId but\nusing TransactionId as the slot's xmin and catalog_xmin are tracked as\nTransactionId.\n\nThere was a point raised by Amit\nhttps://www.postgresql.org/message-id/CAA4eK1K8wqLsMw6j0hE_SFoWAeo3Kw8UNnMfhsWaYDF1GWYQ%2Bg%40mail.gmail.com\non when to do the XID age based invalidation - whether in checkpointer\nor when vacuum is being run or whenever ComputeXIDHorizons gets called\nor in autovacuum process. 
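As a rough illustration of the XID age check itself, independent of where it
runs (GUC and field names as used in this thread; wraparound corner cases
glossed over; a sketch, not the patch code):

    TransactionId next_xid = ReadNextTransactionId();
    TransactionId xmin = slot->data.xmin;   /* same idea applies to catalog_xmin */

    if (replication_slot_xid_age > 0 &&
        TransactionIdIsNormal(xmin) &&
        (uint32) (next_xid - xmin) > (uint32) replication_slot_xid_age)
        invalidated = true;   /* candidate for XID age based invalidation */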
For now, I've chosen the design to do these\nnew invalidation checks in two places - 1) whenever the slot is\nacquired and the slot acquisition errors out if invalidated, 2) during\ncheckpoint. However, I'm open to suggestions on this.\n\nI've also verified the case whether the replication_slot_xid_age\nsetting can help in case of server inching towards the XID wraparound.\nI've created a primary and streaming standby setup with\nhot_standby_feedback set to on (so that the slot gets an xmin). Then,\nI've set replication_slot_xid_age to 2 billion on the primary, and\nused xid_wraparound extension to reach XID wraparound on the primary.\nOnce I start receiving the WARNINGs about VACUUM, I did a checkpoint\nafter which the slot got invalidated enabling my VACUUM to freeze XIDs\nsaving my database from XID wraparound problem.\n\nThanks a lot Masahiko Sawada for an offlist chat about the XID age\ncalculation logic.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 13 Apr 2024 09:36:25 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Thu, Apr 4, 2024 at 9:23 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Thu, Apr 4, 2024 at 4:35 PM Amit Kapila <[email protected]> wrote:\n> >\n> > > Thanks for the changes. v34-0001 LGTM.\n> >\n> > I was doing a final review before pushing 0001 and found that\n> > 'inactive_since' could be set twice during startup after promotion,\n> > once while restoring slots and then via ShutDownSlotSync(). The reason\n> > is that ShutDownSlotSync() will be invoked in normal startup on\n> > primary though it won't do anything apart from setting inactive_since\n> > if we have synced slots. I think you need to check 'StandbyMode' in\n> > update_synced_slots_inactive_since() and return if the same is not\n> > set. We can't use 'InRecovery' flag as that will be set even during\n> > crash recovery.\n> >\n> > Can you please test this once unless you don't agree with the above theory?\n>\n> Nice catch. I've verified that update_synced_slots_inactive_since is\n> called even for normal server startups/crash recovery. I've added a\n> check to exit if the StandbyMode isn't set.\n>\n> Please find the attached v35 patch.\n>\n\nThe documentation says about both 'active' and 'inactive_since'\ncolumns of pg_replication_slots say:\n\n---\nactive bool\nTrue if this slot is currently actively being used\n\ninactive_since timestamptz\nThe time since the slot has become inactive. NULL if the slot is\ncurrently being used. Note that for slots on the standby that are\nbeing synced from a primary server (whose synced field is true), the\ninactive_since indicates the last synchronization (see Section 47.2.3)\ntime.\n---\n\nWhen reading the description I thought if 'active' is true,\n'inactive_since' is NULL, but it doesn't seem to apply for temporary\nslots. Since we don't reset the active_pid field of temporary slots\nwhen the release, the 'active' is still true in the view but\n'inactive_since' is not NULL. Do you think we need to mention it in\nthe documentation?\n\nAs for the timeout-based slot invalidation feature, we could end up\ninvalidating the temporary slots even if they are shown as active,\nwhich could confuse users. 
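For illustration (this query is not part of the patch), the state being
described can be seen with something like:

    SELECT slot_name, temporary, active, active_pid, inactive_since
    FROM pg_replication_slots
    WHERE temporary;

where a released temporary slot can still show active = true together with a
non-NULL inactive_since.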
Do we want to somehow deal with it?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Apr 2024 22:50:37 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Apr 22, 2024 at 7:21 PM Masahiko Sawada <[email protected]> wrote:\n>\n> > Please find the attached v35 patch.\n>\n> The documentation says about both 'active' and 'inactive_since'\n> columns of pg_replication_slots say:\n>\n> ---\n> active bool\n> True if this slot is currently actively being used\n>\n> inactive_since timestamptz\n> The time since the slot has become inactive. NULL if the slot is\n> currently being used. Note that for slots on the standby that are\n> being synced from a primary server (whose synced field is true), the\n> inactive_since indicates the last synchronization (see Section 47.2.3)\n> time.\n> ---\n>\n> When reading the description I thought if 'active' is true,\n> 'inactive_since' is NULL, but it doesn't seem to apply for temporary\n> slots.\n\nRight.\n\n> Since we don't reset the active_pid field of temporary slots\n> when the release, the 'active' is still true in the view but\n> 'inactive_since' is not NULL.\n\nRight. inactive_since is reset whenever the temporary slot is acquired\nagain within the same backend that created the temporary slot.\n\n> Do you think we need to mention it in\n> the documentation?\n\nI think that's the reason we dropped \"active\" from the statement. It\nwas earlier \"NULL if the slot is currently actively being used.\". But,\nper Bertrand's comment\nhttps://www.postgresql.org/message-id/ZehE2IJcsetSJMHC%40ip-10-97-1-34.eu-west-3.compute.internal\nchanged it to \"\"NULL if the slot is currently being used.\".\n\nTemporary slots retain the active = true and active_pid = <pid of the\nbackend that created it> even when the slot is not being used until\nthe lifetime of the backend process. We haven't tied active or\nactive_pid flags to inactive_since, doing so now to represent the\ntemporary slot behaviour for active and active_pid will confuse users\nmore. As far as the inactive_since of a slot is concerned, it is set\nto 0 when the slot is being used (acquired) and set to current\ntimestamp when the slot is not being used (released).\n\n> As for the timeout-based slot invalidation feature, we could end up\n> invalidating the temporary slots even if they are shown as active,\n> which could confuse users. Do we want to somehow deal with it?\n\nYes. 
As long as the temporary slot is lying unused holding up\nresources for more than the specified\nreplication_slot_inactive_timeout, it is bound to get invalidated.\nThis keeps behaviour consistent and less-confusing to the users.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 11:11:35 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Apr 25, 2024 at 11:11 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Mon, Apr 22, 2024 at 7:21 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > > Please find the attached v35 patch.\n> >\n> > The documentation says about both 'active' and 'inactive_since'\n> > columns of pg_replication_slots say:\n> >\n> > ---\n> > active bool\n> > True if this slot is currently actively being used\n> >\n> > inactive_since timestamptz\n> > The time since the slot has become inactive. NULL if the slot is\n> > currently being used. Note that for slots on the standby that are\n> > being synced from a primary server (whose synced field is true), the\n> > inactive_since indicates the last synchronization (see Section 47.2.3)\n> > time.\n> > ---\n> >\n> > When reading the description I thought if 'active' is true,\n> > 'inactive_since' is NULL, but it doesn't seem to apply for temporary\n> > slots.\n>\n> Right.\n>\n> > Since we don't reset the active_pid field of temporary slots\n> > when the release, the 'active' is still true in the view but\n> > 'inactive_since' is not NULL.\n>\n> Right. inactive_since is reset whenever the temporary slot is acquired\n> again within the same backend that created the temporary slot.\n>\n> > Do you think we need to mention it in\n> > the documentation?\n>\n> I think that's the reason we dropped \"active\" from the statement. It\n> was earlier \"NULL if the slot is currently actively being used.\". But,\n> per Bertrand's comment\n> https://www.postgresql.org/message-id/ZehE2IJcsetSJMHC%40ip-10-97-1-34.eu-west-3.compute.internal\n> changed it to \"\"NULL if the slot is currently being used.\".\n>\n> Temporary slots retain the active = true and active_pid = <pid of the\n> backend that created it> even when the slot is not being used until\n> the lifetime of the backend process. We haven't tied active or\n> active_pid flags to inactive_since, doing so now to represent the\n> temporary slot behaviour for active and active_pid will confuse users\n> more.\n>\n\nThis is true and it's probably easy for us to understand as we\ndeveloped this feature but the same may not be true for others. I\nwonder if we can be explicit about the difference of\nactive/inactive_since by adding something like the following for\ninactive_since: Note that this field is not related to the active flag\nas temporary slots can remain active till the session ends even when\nthey are not being used.\n\nSawada-San, do you have any suggestions on the wording?\n\n>\n As far as the inactive_since of a slot is concerned, it is set\n> to 0 when the slot is being used (acquired) and set to current\n> timestamp when the slot is not being used (released).\n>\n> > As for the timeout-based slot invalidation feature, we could end up\n> > invalidating the temporary slots even if they are shown as active,\n> > which could confuse users. Do we want to somehow deal with it?\n>\n> Yes. 
As long as the temporary slot is lying unused holding up\n> resources for more than the specified\n> replication_slot_inactive_timeout, it is bound to get invalidated.\n> This keeps behaviour consistent and less-confusing to the users.\n>\n\nAgreed. We may want to add something in the docs for this to avoid\nconfusion with the active flag.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 29 Apr 2024 09:33:28 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Sat, Apr 13, 2024 at 9:36 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> There was a point raised by Amit\n> https://www.postgresql.org/message-id/CAA4eK1K8wqLsMw6j0hE_SFoWAeo3Kw8UNnMfhsWaYDF1GWYQ%2Bg%40mail.gmail.com\n> on when to do the XID age based invalidation - whether in checkpointer\n> or when vacuum is being run or whenever ComputeXIDHorizons gets called\n> or in autovacuum process. For now, I've chosen the design to do these\n> new invalidation checks in two places - 1) whenever the slot is\n> acquired and the slot acquisition errors out if invalidated, 2) during\n> checkpoint. However, I'm open to suggestions on this.\n\nHere are my thoughts on when to do the XID age invalidation. In all\nthe patches sent so far, the XID age invalidation happens in two\nplaces - one during the slot acquisition, and another during the\ncheckpoint. As the suggestion is to do it during the vacuum (manual\nand auto), so that even if the checkpoint isn't happening in the\ndatabase for whatever reasons, a vacuum command or autovacuum can\ninvalidate the slots whose XID is aged.\n\nAn idea is to check for XID age based invalidation for all the slots\nin ComputeXidHorizons() before it reads replication_slot_xmin and\nreplication_slot_catalog_xmin, and obviously before the proc array\nlock is acquired. A potential problem with this approach is that the\ninvalidation check can become too aggressive as XID horizons are\ncomputed from many places.\n\nAnother idea is to check for XID age based invalidation for all the\nslots in higher levels than ComputeXidHorizons(), for example in\nvacuum() which is an entry point for both vacuum command and\nautovacuum. This approach seems similar to vacuum_failsafe_age GUC\nwhich checks each relation for the failsafe age before vacuum gets\ntriggered on it.\n\nDoes anyone see any issues or risks with the above two approaches or\nhave any other ideas? Thoughts?\n\nI attached v40 patches here. I reworded some of the ERROR messages,\nand did some code clean-up. Note that I haven't implemented any of the\nabove approaches yet.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 17 Jun 2024 17:55:04 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Jun 17, 2024 at 05:55:04PM +0530, Bharath Rupireddy wrote:\n> Here are my thoughts on when to do the XID age invalidation. In all\n> the patches sent so far, the XID age invalidation happens in two\n> places - one during the slot acquisition, and another during the\n> checkpoint. 
As the suggestion is to do it during the vacuum (manual\n> and auto), so that even if the checkpoint isn't happening in the\n> database for whatever reasons, a vacuum command or autovacuum can\n> invalidate the slots whose XID is aged.\n\n+1. IMHO this is a principled choice. The similar max_slot_wal_keep_size\nparameter is considered where it arguably matters most: when we are trying\nto remove/recycle WAL segments. Since this parameter is intended to\nprevent the server from running out of space, it makes sense that we'd\napply it at the point where we are trying to free up space. The proposed\nmax_slot_xid_age parameter is intended to prevent the server from running\nout of transaction IDs, so it follows that we'd apply it at the point where\nwe reclaim them, which happens to be vacuum.\n\n> An idea is to check for XID age based invalidation for all the slots\n> in ComputeXidHorizons() before it reads replication_slot_xmin and\n> replication_slot_catalog_xmin, and obviously before the proc array\n> lock is acquired. A potential problem with this approach is that the\n> invalidation check can become too aggressive as XID horizons are\n> computed from many places.\n>\n> Another idea is to check for XID age based invalidation for all the\n> slots in higher levels than ComputeXidHorizons(), for example in\n> vacuum() which is an entry point for both vacuum command and\n> autovacuum. This approach seems similar to vacuum_failsafe_age GUC\n> which checks each relation for the failsafe age before vacuum gets\n> triggered on it.\n\nI don't presently have any strong opinion on where this logic should go,\nbut in general, I think we should only invalidate slots if invalidating\nthem would allow us to advance the vacuum cutoff. If the cutoff is held\nback by something else, I don't see a point in invalidating slots because\nwe'll just be breaking replication in return for no additional reclaimed\ntransaction IDs.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 17 Jun 2024 10:09:53 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Mon, Jun 17, 2024 at 5:55 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Here are my thoughts on when to do the XID age invalidation. In all\n> the patches sent so far, the XID age invalidation happens in two\n> places - one during the slot acquisition, and another during the\n> checkpoint. As the suggestion is to do it during the vacuum (manual\n> and auto), so that even if the checkpoint isn't happening in the\n> database for whatever reasons, a vacuum command or autovacuum can\n> invalidate the slots whose XID is aged.\n>\n> An idea is to check for XID age based invalidation for all the slots\n> in ComputeXidHorizons() before it reads replication_slot_xmin and\n> replication_slot_catalog_xmin, and obviously before the proc array\n> lock is acquired. A potential problem with this approach is that the\n> invalidation check can become too aggressive as XID horizons are\n> computed from many places.\n>\n> Another idea is to check for XID age based invalidation for all the\n> slots in higher levels than ComputeXidHorizons(), for example in\n> vacuum() which is an entry point for both vacuum command and\n> autovacuum. 
This approach seems similar to vacuum_failsafe_age GUC\n> which checks each relation for the failsafe age before vacuum gets\n> triggered on it.\n\nI am attaching the patches implementing the idea of invalidating\nreplication slots during vacuum when current slot xmin limits\n(procArray->replication_slot_xmin and\nprocArray->replication_slot_catalog_xmin) are aged as per the new XID\nage GUC. When either of these limits are aged, there must be at least\none replication slot that is aged, because the xmin limits, after all,\nare the minimum of xmin or catalog_xmin of all replication slots. In\nthis approach, the new XID age GUC will help vacuum when needed,\nbecause the current slot xmin limits are recalculated after\ninvalidating replication slots that are holding xmins for longer than\nthe age. The code is placed in vacuum() which is common for both\nvacuum command and autovacuum, and gets executed only once every\nvacuum cycle to not be too aggressive in invalidating.\n\nHowever, there might be some concerns with this approach like the following:\n1) Adding more code to vacuum might not be acceptable\n2) What if invalidation of replication slots emits an error, will it\nblock vacuum forever? Currently, InvalidateObsoleteReplicationSlots()\nis also called as part of the checkpoint, and emitting ERRORs from\nwithin is avoided already. Therefore, there is no concern here for\nnow.\n3) What if there are more replication slots to be invalidated, will it\ndelay the vacuum? If yes, by how much? <<TODO>>\n4) Will the invalidation based on just current replication slot xmin\nlimits suffice irrespective of vacuum cutoffs? IOW, if the replication\nslots are invalidated but vacuum isn't going to do any work because\nvacuum cutoffs are not yet met? Is the invalidation work wasteful\nhere?\n5) Is it okay to take just one more time the proc array lock to get\ncurrent replication slot xmin limits via\nProcArrayGetReplicationSlotXmin() once every vacuum cycle? <<TODO>>\n6) Vacuum command can't be run on the standby in recovery. So, to help\ninvalidate replication slots on the standby, I have for now let the\ncheckpointer also do the XID age based invalidation. I know\ninvalidating both in checkpointer and vacuum may not be a great idea,\nbut I'm open to thoughts.\n\nFollowing are some of the alternative approaches which IMHO don't help\nvacuum when needed:\na) Let the checkpointer do the XID age based invalidation, and call it\nout in the documentation that if the checkpoint doesn't happen, the\nnew GUC doesn't help even if the vacuum is run. This has been the\napproach until v40 patch.\nb) Checkpointer and/or other backends add an autovacuum work item via\nAutoVacuumRequestWork(), and autovacuum when it gets to it will\ninvalidate the replication slots. But, what to do for the vacuum\ncommand here?\n\nPlease find the attached v41 patches implementing the idea of vacuum\ndoing the invalidation.\n\nThoughts?\n\nThanks to Sawada-san for a detailed off-list discussion.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 24 Jun 2024 11:30:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Jun 24, 2024 at 11:30:00AM +0530, Bharath Rupireddy wrote:\n> 6) Vacuum command can't be run on the standby in recovery. 
So, to help\n> invalidate replication slots on the standby, I have for now let the\n> checkpointer also do the XID age based invalidation. I know\n> invalidating both in checkpointer and vacuum may not be a great idea,\n> but I'm open to thoughts.\n\nHm. I hadn't considered this angle.\n\n> a) Let the checkpointer do the XID age based invalidation, and call it\n> out in the documentation that if the checkpoint doesn't happen, the\n> new GUC doesn't help even if the vacuum is run. This has been the\n> approach until v40 patch.\n\nMy first reaction is that this is probably okay. I guess you might run\ninto problems if you set max_slot_xid_age to 2B and checkpoint_timeout to 1\nday, but even in that case your transaction ID usage rate would need to be\npretty high for wraparound to occur.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 9 Jul 2024 17:01:23 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Jun 24, 2024 at 4:01 PM Bharath Rupireddy <\[email protected]> wrote:\n\n> Hi,\n>\n> On Mon, Jun 17, 2024 at 5:55 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > Here are my thoughts on when to do the XID age invalidation. In all\n> > the patches sent so far, the XID age invalidation happens in two\n> > places - one during the slot acquisition, and another during the\n> > checkpoint. As the suggestion is to do it during the vacuum (manual\n> > and auto), so that even if the checkpoint isn't happening in the\n> > database for whatever reasons, a vacuum command or autovacuum can\n> > invalidate the slots whose XID is aged.\n> >\n> > An idea is to check for XID age based invalidation for all the slots\n> > in ComputeXidHorizons() before it reads replication_slot_xmin and\n> > replication_slot_catalog_xmin, and obviously before the proc array\n> > lock is acquired. A potential problem with this approach is that the\n> > invalidation check can become too aggressive as XID horizons are\n> > computed from many places.\n> >\n> > Another idea is to check for XID age based invalidation for all the\n> > slots in higher levels than ComputeXidHorizons(), for example in\n> > vacuum() which is an entry point for both vacuum command and\n> > autovacuum. This approach seems similar to vacuum_failsafe_age GUC\n> > which checks each relation for the failsafe age before vacuum gets\n> > triggered on it.\n>\n> I am attaching the patches implementing the idea of invalidating\n> replication slots during vacuum when current slot xmin limits\n> (procArray->replication_slot_xmin and\n> procArray->replication_slot_catalog_xmin) are aged as per the new XID\n> age GUC. When either of these limits are aged, there must be at least\n> one replication slot that is aged, because the xmin limits, after all,\n> are the minimum of xmin or catalog_xmin of all replication slots. In\n> this approach, the new XID age GUC will help vacuum when needed,\n> because the current slot xmin limits are recalculated after\n> invalidating replication slots that are holding xmins for longer than\n> the age. 
The code is placed in vacuum() which is common for both\n> vacuum command and autovacuum, and gets executed only once every\n> vacuum cycle to not be too aggressive in invalidating.\n>\n> However, there might be some concerns with this approach like the\n> following:\n> 1) Adding more code to vacuum might not be acceptable\n> 2) What if invalidation of replication slots emits an error, will it\n> block vacuum forever? Currently, InvalidateObsoleteReplicationSlots()\n> is also called as part of the checkpoint, and emitting ERRORs from\n> within is avoided already. Therefore, there is no concern here for\n> now.\n> 3) What if there are more replication slots to be invalidated, will it\n> delay the vacuum? If yes, by how much? <<TODO>>\n> 4) Will the invalidation based on just current replication slot xmin\n> limits suffice irrespective of vacuum cutoffs? IOW, if the replication\n> slots are invalidated but vacuum isn't going to do any work because\n> vacuum cutoffs are not yet met? Is the invalidation work wasteful\n> here?\n> 5) Is it okay to take just one more time the proc array lock to get\n> current replication slot xmin limits via\n> ProcArrayGetReplicationSlotXmin() once every vacuum cycle? <<TODO>>\n> 6) Vacuum command can't be run on the standby in recovery. So, to help\n> invalidate replication slots on the standby, I have for now let the\n> checkpointer also do the XID age based invalidation. I know\n> invalidating both in checkpointer and vacuum may not be a great idea,\n> but I'm open to thoughts.\n>\n> Following are some of the alternative approaches which IMHO don't help\n> vacuum when needed:\n> a) Let the checkpointer do the XID age based invalidation, and call it\n> out in the documentation that if the checkpoint doesn't happen, the\n> new GUC doesn't help even if the vacuum is run. This has been the\n> approach until v40 patch.\n> b) Checkpointer and/or other backends add an autovacuum work item via\n> AutoVacuumRequestWork(), and autovacuum when it gets to it will\n> invalidate the replication slots. But, what to do for the vacuum\n> command here?\n>\n> Please find the attached v41 patches implementing the idea of vacuum\n> doing the invalidation.\n>\n> Thoughts?\n>\n> Thanks to Sawada-san for a detailed off-list discussion.\n>\n\nThe patch no longer applies on HEAD, please rebase.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Mon, Jun 24, 2024 at 4:01 PM Bharath Rupireddy <[email protected]> wrote:Hi,\n\nOn Mon, Jun 17, 2024 at 5:55 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Here are my thoughts on when to do the XID age invalidation. In all\n> the patches sent so far, the XID age invalidation happens in two\n> places - one during the slot acquisition, and another during the\n> checkpoint. As the suggestion is to do it during the vacuum (manual\n> and auto), so that even if the checkpoint isn't happening in the\n> database for whatever reasons, a vacuum command or autovacuum can\n> invalidate the slots whose XID is aged.\n>\n> An idea is to check for XID age based invalidation for all the slots\n> in ComputeXidHorizons() before it reads replication_slot_xmin and\n> replication_slot_catalog_xmin, and obviously before the proc array\n> lock is acquired. 
A potential problem with this approach is that the\n> invalidation check can become too aggressive as XID horizons are\n> computed from many places.\n>\n> Another idea is to check for XID age based invalidation for all the\n> slots in higher levels than ComputeXidHorizons(), for example in\n> vacuum() which is an entry point for both vacuum command and\n> autovacuum. This approach seems similar to vacuum_failsafe_age GUC\n> which checks each relation for the failsafe age before vacuum gets\n> triggered on it.\n\nI am attaching the patches implementing the idea of invalidating\nreplication slots during vacuum when current slot xmin limits\n(procArray->replication_slot_xmin and\nprocArray->replication_slot_catalog_xmin) are aged as per the new XID\nage GUC. When either of these limits are aged, there must be at least\none replication slot that is aged, because the xmin limits, after all,\nare the minimum of xmin or catalog_xmin of all replication slots. In\nthis approach, the new XID age GUC will help vacuum when needed,\nbecause the current slot xmin limits are recalculated after\ninvalidating replication slots that are holding xmins for longer than\nthe age. The code is placed in vacuum() which is common for both\nvacuum command and autovacuum, and gets executed only once every\nvacuum cycle to not be too aggressive in invalidating.\n\nHowever, there might be some concerns with this approach like the following:\n1) Adding more code to vacuum might not be acceptable\n2) What if invalidation of replication slots emits an error, will it\nblock vacuum forever? Currently, InvalidateObsoleteReplicationSlots()\nis also called as part of the checkpoint, and emitting ERRORs from\nwithin is avoided already. Therefore, there is no concern here for\nnow.\n3) What if there are more replication slots to be invalidated, will it\ndelay the vacuum? If yes, by how much? <<TODO>>\n4) Will the invalidation based on just current replication slot xmin\nlimits suffice irrespective of vacuum cutoffs? IOW, if the replication\nslots are invalidated but vacuum isn't going to do any work because\nvacuum cutoffs are not yet met? Is the invalidation work wasteful\nhere?\n5) Is it okay to take just one more time the proc array lock to get\ncurrent replication slot xmin limits via\nProcArrayGetReplicationSlotXmin() once every vacuum cycle? <<TODO>>\n6) Vacuum command can't be run on the standby in recovery. So, to help\ninvalidate replication slots on the standby, I have for now let the\ncheckpointer also do the XID age based invalidation. I know\ninvalidating both in checkpointer and vacuum may not be a great idea,\nbut I'm open to thoughts.\n\nFollowing are some of the alternative approaches which IMHO don't help\nvacuum when needed:\na) Let the checkpointer do the XID age based invalidation, and call it\nout in the documentation that if the checkpoint doesn't happen, the\nnew GUC doesn't help even if the vacuum is run. This has been the\napproach until v40 patch.\nb) Checkpointer and/or other backends add an autovacuum work item via\nAutoVacuumRequestWork(), and autovacuum when it gets to it will\ninvalidate the replication slots. 
But, what to do for the vacuum\ncommand here?\n\nPlease find the attached v41 patches implementing the idea of vacuum\ndoing the invalidation.\n\nThoughts?\n\nThanks to Sawada-san for a detailed off-list discussion.The patch no longer applies on HEAD, please rebase.regards,Ajin CherianFujitsu Australia", "msg_date": "Mon, 12 Aug 2024 22:17:55 +1000", "msg_from": "Ajin Cherian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Jul 9, 2024 at 3:01 PM Nathan Bossart <[email protected]> wrote:\n>\n> On Mon, Jun 24, 2024 at 11:30:00AM +0530, Bharath Rupireddy wrote:\n> > 6) Vacuum command can't be run on the standby in recovery. So, to help\n> > invalidate replication slots on the standby, I have for now let the\n> > checkpointer also do the XID age based invalidation. I know\n> > invalidating both in checkpointer and vacuum may not be a great idea,\n> > but I'm open to thoughts.\n>\n> Hm. I hadn't considered this angle.\n\nAnother idea would be to let the startup process do slot invalidation\nwhen replaying a RUNNING_XACTS record. Since a RUNNING_XACTS record\nhas the latest XID on the primary, I think the startup process can\ncompare it to the slot-xmin, and invalidate slots which are older than\nthe age limit.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 13 Aug 2024 06:32:45 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Jun 24, 2024 at 4:01 PM Bharath Rupireddy <\[email protected]> wrote:\n\n> Hi,\n>\n> On Mon, Jun 17, 2024 at 5:55 PM Bharath Rupireddy\n> <[email protected]> wrote:\n>\n> Please find the attached v41 patches implementing the idea of vacuum\n> doing the invalidation.\n>\n> Thoughts?\n>\n>\n>\n\nSome minor comments on the patch:\n1.\n+ /*\n+ * Release the lock if it's not yet to keep the cleanup path on\n+ * error happy.\n+ */\n\nI suggest rephrasing to: \" \"Release the lock if it hasn't been already to\nensure smooth cleanup on error.\"\n\n\n2.\n\nelog(DEBUG1, \"performing replication slot invalidation\");\n\nProbably change it to \"performing replication slot invalidation checks\" as\nwe might not actually invalidate any slot here.\n\n3.\n\nIn CheckPointReplicationSlots()\n\n+ invalidated =\nInvalidateObsoleteReplicationSlots(RS_INVAL_INACTIVE_TIMEOUT,\n+ 0,\n+ InvalidOid,\n+ InvalidTransactionId);\n+\n+ if (invalidated)\n+ {\n+ /*\n+ * If any slots have been invalidated, recalculate the resource\n+ * limits.\n+ */\n+ ReplicationSlotsComputeRequiredXmin(false);\n+ ReplicationSlotsComputeRequiredLSN();\n+ }\n\nIs this calculation of resource limits really required here when the same\nis already done inside InvalidateObsoleteReplicationSlots()\n\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Mon, Jun 24, 2024 at 4:01 PM Bharath Rupireddy <[email protected]> wrote:Hi,\n\nOn Mon, Jun 17, 2024 at 5:55 PM Bharath Rupireddy\n<[email protected]> wrote:\n\nPlease find the attached v41 patches implementing the idea of vacuum\ndoing the invalidation.\n\nThoughts?\nSome minor comments on the patch:1.+\t\t\t/*+\t\t\t * Release the lock if it's not yet to keep the cleanup path on+\t\t\t * error happy.+\t\t\t */I suggest rephrasing to: \"\n\n\"Release the lock if it hasn't been already to ensure smooth cleanup on error.\"2.elog(DEBUG1, 
\"performing replication slot invalidation\");Probably change it to \"performing replication slot invalidation checks\" as we might not actually invalidate any slot here.3. In CheckPointReplicationSlots()+\tinvalidated = InvalidateObsoleteReplicationSlots(RS_INVAL_INACTIVE_TIMEOUT,+\t\t\t\t\t\t\t\t\t\t\t\t\t 0,+\t\t\t\t\t\t\t\t\t\t\t\t\t InvalidOid,+\t\t\t\t\t\t\t\t\t\t\t\t\t InvalidTransactionId);++\tif (invalidated)+\t{+\t\t/*+\t\t * If any slots have been invalidated, recalculate the resource+\t\t * limits.+\t\t */+\t\tReplicationSlotsComputeRequiredXmin(false);+\t\tReplicationSlotsComputeRequiredLSN();+\t}Is this calculation of resource limits really required here when the same is already done inside InvalidateObsoleteReplicationSlots()regards,Ajin CherianFujitsu Australia", "msg_date": "Wed, 14 Aug 2024 13:50:38 +1000", "msg_from": "Ajin Cherian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Aug 14, 2024 at 9:20 AM Ajin Cherian <[email protected]> wrote:\n>\n> Some minor comments on the patch:\n\nThanks for reviewing.\n\n> 1.\n> + /*\n> + * Release the lock if it's not yet to keep the cleanup path on\n> + * error happy.\n> + */\n>\n> I suggest rephrasing to: \" \"Release the lock if it hasn't been already to ensure smooth cleanup on error.\"\n\nChanged.\n\n> 2.\n>\n> elog(DEBUG1, \"performing replication slot invalidation\");\n>\n> Probably change it to \"performing replication slot invalidation checks\" as we might not actually invalidate any slot here.\n\nChanged.\n\n> 3.\n>\n> + ReplicationSlotsComputeRequiredXmin(false);\n> + ReplicationSlotsComputeRequiredLSN();\n> + }\n>\n> Is this calculation of resource limits really required here when the same is already done inside InvalidateObsoleteReplicationSlots()\n\nNice catch. Removed.\n\nPlease find the attached v42 patches.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 26 Aug 2024 11:44:05 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Aug 26, 2024 at 11:44 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n\nFew comments on 0001:\n1.\n@@ -651,6 +651,13 @@ synchronize_one_slot(RemoteSlot *remote_slot, Oid\nremote_dbid)\n \" name slot \\\"%s\\\" already exists on the standby\",\n remote_slot->name));\n\n+ /*\n+ * Skip the sync if the local slot is already invalidated. We do this\n+ * beforehand to avoid slot acquire and release.\n+ */\n+ if (slot->data.invalidated != RS_INVAL_NONE)\n+ return false;\n+\n /*\n * The slot has been synchronized before.\n\nI was wondering why you have added this new check as part of this\npatch. If you see the following comments in the related code, you will\nknow why we haven't done this previously.\n\n/*\n* The slot has been synchronized before.\n*\n* It is important to acquire the slot here before checking\n* invalidation. If we don't acquire the slot first, there could be a\n* race condition that the local slot could be invalidated just after\n* checking the 'invalidated' flag here and we could end up\n* overwriting 'invalidated' flag to remote_slot's value. 
See\n* InvalidatePossiblyObsoleteSlot() where it invalidates slot directly\n* if the slot is not acquired by other processes.\n*\n* XXX: If it ever turns out that slot acquire/release is costly for\n* cases when none of the slot properties is changed then we can do a\n* pre-check to ensure that at least one of the slot properties is\n* changed before acquiring the slot.\n*/\nReplicationSlotAcquire(remote_slot->name, true);\n\nWe need some modifications in these comments if you want to add a\npre-check here.\n\n2.\n@@ -1907,6 +2033,31 @@ CheckPointReplicationSlots(bool is_shutdown)\n SaveSlotToPath(s, path, LOG);\n }\n LWLockRelease(ReplicationSlotAllocationLock);\n+\n+ elog(DEBUG1, \"performing replication slot invalidation checks\");\n+\n+ /*\n+ * Note that we will make another pass over replication slots for\n+ * invalidations to keep the code simple. The assumption here is that the\n+ * traversal over replication slots isn't that costly even with hundreds\n+ * of replication slots. If it ever turns out that this assumption is\n+ * wrong, we might have to put the invalidation check logic in the above\n+ * loop, for that we might have to do the following:\n+ *\n+ * - Acqure ControlLock lock once before the loop.\n+ *\n+ * - Call InvalidatePossiblyObsoleteSlot for each slot.\n+ *\n+ * - Handle the cases in which ControlLock gets released just like\n+ * InvalidateObsoleteReplicationSlots does.\n+ *\n+ * - Avoid saving slot info to disk two times for each invalidated slot.\n+ *\n+ * XXX: Should we move inactive_timeout inavalidation check closer to\n+ * wal_removed in CreateCheckPoint and CreateRestartPoint?\n+ */\n+ InvalidateObsoleteReplicationSlots(RS_INVAL_INACTIVE_TIMEOUT,\n+ 0, InvalidOid, InvalidTransactionId);\n\nWhy do we want to call this for shutdown case (when is_shutdown is\ntrue)? I understand trying to invalidate slots during regular\ncheckpoint but not sure if we need it at the time of shutdown. The\nother point is can we try to check the performance impact with 100s of\nslots as mentioned in the code comments?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Aug 2024 16:35:02 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nThanks for looking into this.\n\nOn Mon, Aug 26, 2024 at 4:35 PM Amit Kapila <[email protected]> wrote:\n>\n> Few comments on 0001:\n> 1.\n> @@ -651,6 +651,13 @@ synchronize_one_slot(RemoteSlot *remote_slot, Oid\n>\n> + /*\n> + * Skip the sync if the local slot is already invalidated. We do this\n> + * beforehand to avoid slot acquire and release.\n> + */\n>\n> I was wondering why you have added this new check as part of this\n> patch. If you see the following comments in the related code, you will\n> know why we haven't done this previously.\n\nRemoved. Can deal with optimization separately.\n\n> 2.\n> + */\n> + InvalidateObsoleteReplicationSlots(RS_INVAL_INACTIVE_TIMEOUT,\n> + 0, InvalidOid, InvalidTransactionId);\n>\n> Why do we want to call this for shutdown case (when is_shutdown is\n> true)? 
I understand trying to invalidate slots during regular\n> checkpoint but not sure if we need it at the time of shutdown.\n\nChanged it to invalidate only for non-shutdown checkpoints.\ninactive_timeout invalidation isn't critical for shutdown unlike\nwal_removed which can help shutdown by freeing up some disk space.\n\n> The\n> other point is can we try to check the performance impact with 100s of\n> slots as mentioned in the code comments?\n\nI first checked how much does the wal_removed invalidation check add to the\ncheckpoint (see 2nd and 3rd column). I then checked how much\ninactive_timeout invalidation check adds to the checkpoint (see 4th\ncolumn), it is not more than wal_remove invalidation check. I then checked\nhow much the wal_removed invalidation check adds for replication slots that\nhave already been invalidated due to inactive_timeout (see 5th column),\nlooks like not much.\n\n| # of slots | HEAD (no invalidation) ms | HEAD (wal_removed) ms | PATCHED\n(inactive_timeout) ms | PATCHED (inactive_timeout+wal_removed) ms |\n|------------|----------------------------|-----------------------|-------------------------------|------------------------------------------|\n| 100 | 18.591 | 370.586 | 359.299\n | 373.882 |\n| 1000 | 15.722 | 4834.901 |\n5081.751 | 5072.128 |\n| 10000 | 19.261 | 59801.062 |\n61270.406 | 60270.099 |\n\nHaving said that, I'm okay to implement the optimization specified.\nThoughts?\n\n+ /*\n+ * NB: We will make another pass over replication slots for\n+ * invalidation checks to keep the code simple. Testing shows that\n+ * there is no noticeable overhead (when compared with wal_removed\n+ * invalidation) even if we were to do inactive_timeout invalidation\n+ * of thousands of replication slots here. If it is ever proven that\n+ * this assumption is wrong, we will have to perform the invalidation\n+ * checks in the above for loop with the following changes:\n+ *\n+ * - Acquire ControlLock lock once before the loop.\n+ *\n+ * - Call InvalidatePossiblyObsoleteSlot for each slot.\n+ *\n+ * - Handle the cases in which ControlLock gets released just like\n+ * InvalidateObsoleteReplicationSlots does.\n+ *\n+ * - Avoid saving slot info to disk two times for each invalidated\n+ * slot.\n\nPlease see the attached v43 patches addressing the above review comments.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 29 Aug 2024 11:31:09 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi, here are some review comments for patch v43-0001.\n\n======\nCommit message\n\n1.\n... introduces a GUC allowing users set inactive timeout.\n\n~\n\n1a. You should give the name of the new GUC in the commit message.\n\n1b. /set/to set/\n\n======\ndoc/src/sgml/config.sgml\n\nGUC \"replication_slot_inactive_timeout\"\n\n2.\nInvalidates replication slots that are inactive for longer than\nspecified amount of time\n\nnit - suggest use similar wording as the prior GUC (wal_sender_timeout):\nInvalidate replication slots that are inactive for longer than this\namount of time.\n\n~\n\n3.\nThis invalidation check happens either when the slot is acquired for\nuse or during a checkpoint. The time since the slot has become\ninactive is known from its inactive_since value using which the\ntimeout is measured.\n\nnit - the wording is too complicated. 
suggest:\nThe timeout check occurs when the slot is next acquired for use, or\nduring a checkpoint. The slot's 'inactive_since' field value is when\nthe slot became inactive.\n\n~\n\n4.\nNote that the inactive timeout invalidation mechanism is not\napplicable for slots on the standby that are being synced from a\nprimary server (whose synced field is true).\n\nnit - that word \"whose\" seems ambiguous. suggest:\n(e.g. the standby slot has 'synced' field true).\n\n======\ndoc/src/sgml/system-views.sgml\n\n5.\ninactive_timeout means that the slot has been inactive for the\nduration specified by replication_slot_inactive_timeout parameter.\n\nnit - suggestion (\"longer than\"):\n... the slot has been inactive for longer than the duration specified\nby the replication_slot_inactive_timeout parameter.\n\n======\nsrc/backend/replication/slot.c\n\n6.\n /* Maximum number of invalidation causes */\n-#define RS_INVAL_MAX_CAUSES RS_INVAL_WAL_LEVEL\n+#define RS_INVAL_MAX_CAUSES RS_INVAL_INACTIVE_TIMEOUT\n\nIMO this #define belongs in the slot.h, immediately below where the\nenum is defined.\n\n~~~\n\n7. ReplicationSlotAcquire:\n\nI had a fundamental question about this logic.\n\nIIUC the purpose of the patch was to invalidate replication slots that\nhave been inactive for too long.\n\nSo, it makes sense to me that some periodic processing (e.g.\nCheckPointReplicationSlots) might do a sweep over all the slots, and\ninvalidate the too-long-inactive ones that it finds.\n\nOTOH, it seemed quite strange to me that the patch logic is also\ndetecting and invalidating inactive slots during the\nReplicationSlotAcquire function. This is kind of saying \"ERROR -\nsorry, because this was inactive for too long you can't have it\" at\nthe very moment that you wanted to use it again! IIUC such a slot\nwould be invalidated by the function InvalidatePossiblyObsoleteSlot(),\nbut therein lies my doubt -- how can the slot be considered as\n\"obsolete\" when we are in the very act of trying to acquire/use it?\n\nI guess it might be argued this is not so different to the scenario of\nattempting to acquire a slot that had been invalidated momentarily\nbefore during checkpoint processing. But, somehow that scenario seems\nmore like bad luck to me, versus ReplicationSlotAcquire() deliberately\ninvalidating something we *know* is wanted.\n\n~\n\n8.\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"can no longer get changes from replication slot \\\"%s\\\"\",\n+ NameStr(s->data.name)),\n+ errdetail(\"This slot has been invalidated because it was inactive\nsince %s for more than %d seconds specified by\n\\\"replication_slot_inactive_timeout\\\".\",\n+ timestamptz_to_str(s->inactive_since),\n+ replication_slot_inactive_timeout)));\n\nnit - IMO the info should be split into errdetail + errhint. Like this:\nerrdetail(\"The slot became invalid because it was inactive since %s,\nwhich is more than %d seconds ago.\"...)\nerrhint(\"You might need to increase \\\"%s\\\".\",\n\"replication_slot_inactive_timeout\")\n\n~~~\n\n9. ReportSlotInvalidation\n\n+ appendStringInfo(&err_detail,\n+ _(\"The slot has been inactive since %s for more than %d seconds\nspecified by \\\"replication_slot_inactive_timeout\\\".\"),\n+ timestamptz_to_str(inactive_since),\n+ replication_slot_inactive_timeout);\n+ break;\n\nIMO this error in ReportSlotInvalidation() should be the same as the\nother one from ReplicationSlotAcquire(), which I suggested above\n(comment #8) should include a hint. 
Also, including a hint here will\nmake this new message consistent with the other errhint (for\n\"max_slot_wal_keep_size\") that is already in this function.\n\n~~~\n\n10. InvalidatePossiblyObsoleteSlot\n\n+ if (cause == RS_INVAL_INACTIVE_TIMEOUT &&\n+ (replication_slot_inactive_timeout > 0 &&\n+ s->inactive_since > 0 &&\n+ !(RecoveryInProgress() && s->data.synced)))\n\n10a. Everything here is && so this has some redundant parentheses.\n\n10b. Actually, IMO this complicated condition is overkill. Won't it be\nbetter to just unconditionally assign\nnow = GetCurrentTimestamp(); here?\n\n~\n\n11.\n+ * Note that we don't invalidate synced slots because,\n+ * they are typically considered not active as they don't\n+ * perform logical decoding to produce the changes.\n\nnit - tweaked punctuation\n\n~\n\n12.\n+ * If the slot can be acquired, do so or if the slot is already ours,\n+ * then mark it invalidated. Otherwise we'll signal the owning\n+ * process, below, and retry.\n\nnit - tidied this comment. Suggestion:\nIf the slot can be acquired, do so and mark it as invalidated. If the\nslot is already ours, mark it as invalidated. Otherwise, we'll signal\nthe owning process below and retry.\n\n~\n\n13.\n+ if (active_pid == 0 ||\n+ (MyReplicationSlot != NULL &&\n+ MyReplicationSlot == s &&\n+ active_pid == MyProcPid))\n\nYou are already checking MyReplicationSlot == s here, so that extra\ncheck for MyReplicationSlot != NULL is redundant, isn't it?\n\n~~~\n\n14. CheckPointReplicationSlots\n\n /*\n- * Flush all replication slots to disk.\n+ * Flush all replication slots to disk. Also, invalidate slots during\n+ * non-shutdown checkpoint.\n *\n * It is convenient to flush dirty replication slots at the time of checkpoint.\n * Additionally, in case of a shutdown checkpoint, we also identify the slots\n\nnit - /Also, invalidate slots/Also, invalidate obsolete slots/\n\n======\nsrc/backend/utils/misc/guc_tables.c\n\n15.\n+ {\"replication_slot_inactive_timeout\", PGC_SIGHUP, REPLICATION_SENDING,\n+ gettext_noop(\"Sets the amount of time to wait before invalidating an \"\n+ \"inactive replication slot.\"),\n\nnit - that is maybe a bit misleading because IIUC there is no real\n\"waiting\" happening anywhere. Suggest:\nSets the amount of time a replication slot can remain inactive before\nit will be invalidated.\n\n======\n\nPlease take a look at the attached top-up patches. These include\nchanges for many of the nits above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 30 Aug 2024 12:43:10 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nThanks for looking into this.\n\nOn Fri, Aug 30, 2024 at 8:13 AM Peter Smith <[email protected]> wrote:\n>\n> ======\n> Commit message\n>\n> 1.\n> ... introduces a GUC allowing users set inactive timeout.\n>\n> ~\n>\n> 1a. You should give the name of the new GUC in the commit message.\n\nModified.\n\n> 1b. 
/set/to set/\n\nReworded the commit message.\n\n> ======\n> doc/src/sgml/config.sgml\n>\n> GUC \"replication_slot_inactive_timeout\"\n>\n> 2.\n> Invalidates replication slots that are inactive for longer than\n> specified amount of time\n>\n> nit - suggest use similar wording as the prior GUC (wal_sender_timeout):\n> Invalidate replication slots that are inactive for longer than this\n> amount of time.\n\nModified.\n\n> 3.\n> This invalidation check happens either when the slot is acquired for\n> use or during a checkpoint. The time since the slot has become\n> inactive is known from its inactive_since value using which the\n> timeout is measured.\n>\n> nit - the wording is too complicated. suggest:\n> The timeout check occurs when the slot is next acquired for use, or\n> during a checkpoint. The slot's 'inactive_since' field value is when\n> the slot became inactive.\n\n\n> 4.\n> Note that the inactive timeout invalidation mechanism is not\n> applicable for slots on the standby that are being synced from a\n> primary server (whose synced field is true).\n>\n> nit - that word \"whose\" seems ambiguous. suggest:\n> (e.g. the standby slot has 'synced' field true).\n\nReworded.\n\n> ======\n> doc/src/sgml/system-views.sgml\n>\n> 5.\n> inactive_timeout means that the slot has been inactive for the\n> duration specified by replication_slot_inactive_timeout parameter.\n>\n> nit - suggestion (\"longer than\"):\n> ... the slot has been inactive for longer than the duration specified\n> by the replication_slot_inactive_timeout parameter.\n\nModified.\n\n> ======\n> src/backend/replication/slot.c\n>\n> 6.\n> /* Maximum number of invalidation causes */\n> -#define RS_INVAL_MAX_CAUSES RS_INVAL_WAL_LEVEL\n> +#define RS_INVAL_MAX_CAUSES RS_INVAL_INACTIVE_TIMEOUT\n>\n> IMO this #define belongs in the slot.h, immediately below where the\n> enum is defined.\n\nPlease check the commit that introduced it -\nhttps://www.postgresql.org/message-id/ZdU3CHqza9XJw4P-%40paquier.xyz.\nIt is kept in the file where it's used.\n\n> 7. ReplicationSlotAcquire:\n>\n> I had a fundamental question about this logic.\n>\n> IIUC the purpose of the patch was to invalidate replication slots that\n> have been inactive for too long.\n>\n> So, it makes sense to me that some periodic processing (e.g.\n> CheckPointReplicationSlots) might do a sweep over all the slots, and\n> invalidate the too-long-inactive ones that it finds.\n>\n> OTOH, it seemed quite strange to me that the patch logic is also\n> detecting and invalidating inactive slots during the\n> ReplicationSlotAcquire function. This is kind of saying \"ERROR -\n> sorry, because this was inactive for too long you can't have it\" at\n> the very moment that you wanted to use it again! IIUC such a slot\n> would be invalidated by the function InvalidatePossiblyObsoleteSlot(),\n> but therein lies my doubt -- how can the slot be considered as\n> \"obsolete\" when we are in the very act of trying to acquire/use it?\n>\n> I guess it might be argued this is not so different to the scenario of\n> attempting to acquire a slot that had been invalidated momentarily\n> before during checkpoint processing. But, somehow that scenario seems\n> more like bad luck to me, versus ReplicationSlotAcquire() deliberately\n> invalidating something we *know* is wanted.\n\nHm. TBH, there's no real reason for invalidating the slot in\nReplicationSlotAcquire(). My thinking back then was to take this\nopportunity to do some work. I agree to leave the invalidation work to\nthe checkpointer. 
However, I still think ReplicationSlotAcquire()\nshould error out if the slot has already been invalidated similar to\n\"can no longer get changes from replication slot \\\"%s\\\" for\nwal_removed.\n\n> 8.\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"can no longer get changes from replication slot \\\"%s\\\"\",\n> + NameStr(s->data.name)),\n> + errdetail(\"This slot has been invalidated because it was inactive\n> since %s for more than %d seconds specified by\n> \\\"replication_slot_inactive_timeout\\\".\",\n> + timestamptz_to_str(s->inactive_since),\n> + replication_slot_inactive_timeout)));\n>\n> nit - IMO the info should be split into errdetail + errhint. Like this:\n> errdetail(\"The slot became invalid because it was inactive since %s,\n> which is more than %d seconds ago.\"...)\n> errhint(\"You might need to increase \\\"%s\\\".\",\n> \"replication_slot_inactive_timeout\")\n\n\"invalid\" is being covered by errmsg \"invalidating obsolete\nreplication slot\", so no need to duplicate it in errdetail.\n\n> 9. ReportSlotInvalidation\n>\n> + appendStringInfo(&err_detail,\n> + _(\"The slot has been inactive since %s for more than %d seconds\n> specified by \\\"replication_slot_inactive_timeout\\\".\"),\n> + timestamptz_to_str(inactive_since),\n> + replication_slot_inactive_timeout);\n> + break;\n>\n> IMO this error in ReportSlotInvalidation() should be the same as the\n> other one from ReplicationSlotAcquire(), which I suggested above\n> (comment #8) should include a hint. Also, including a hint here will\n> make this new message consistent with the other errhint (for\n> \"max_slot_wal_keep_size\") that is already in this function.\n\nNot exactly the same but similar. Because ReportSlotInvalidation()\nerrmsg has an \"invalidating\" component, whereas errmsg in\nReplicationSlotAcquire doesn't. Please check latest wordings.\n\n> 10. InvalidatePossiblyObsoleteSlot\n>\n> + if (cause == RS_INVAL_INACTIVE_TIMEOUT &&\n> + (replication_slot_inactive_timeout > 0 &&\n> + s->inactive_since > 0 &&\n> + !(RecoveryInProgress() && s->data.synced)))\n>\n> 10a. Everything here is && so this has some redundant parentheses.\n\nRemoved.\n\n> 10b. Actually, IMO this complicated condition is overkill. Won't it be\n> better to just unconditionally assign\n> now = GetCurrentTimestamp(); here?\n\nGetCurrentTimestamp() can get costlier on certain platforms. I think\nthe fields checking in the condition are pretty straight forward -\ne.g. !RecoveryInProgress() server not in recovery, !s->data.synced\nslot is not being synced and so on. Added a macro\nIsInactiveTimeoutSlotInvalidationApplicable() for better readability\nin two places.\n\n> 11.\n> + * Note that we don't invalidate synced slots because,\n> + * they are typically considered not active as they don't\n> + * perform logical decoding to produce the changes.\n>\n> nit - tweaked punctuation\n\nUsed the consistent wording in the commit message, docs and code comments.\n\n> 12.\n> + * If the slot can be acquired, do so or if the slot is already ours,\n> + * then mark it invalidated. Otherwise we'll signal the owning\n> + * process, below, and retry.\n>\n> nit - tidied this comment. Suggestion:\n> If the slot can be acquired, do so and mark it as invalidated. If the\n> slot is already ours, mark it as invalidated. 
Otherwise, we'll signal\n> the owning process below and retry.\n\nModified.\n\n> 13.\n> + if (active_pid == 0 ||\n> + (MyReplicationSlot != NULL &&\n> + MyReplicationSlot == s &&\n> + active_pid == MyProcPid))\n>\n> You are already checking MyReplicationSlot == s here, so that extra\n> check for MyReplicationSlot != NULL is redundant, isn't it?\n\nRemoved.\n\n> 14. CheckPointReplicationSlots\n>\n> /*\n> - * Flush all replication slots to disk.\n> + * Flush all replication slots to disk. Also, invalidate slots during\n> + * non-shutdown checkpoint.\n> *\n> * It is convenient to flush dirty replication slots at the time of checkpoint.\n> * Additionally, in case of a shutdown checkpoint, we also identify the slots\n>\n> nit - /Also, invalidate slots/Also, invalidate obsolete slots/\n\nModified.\n\n> 15.\n> + {\"replication_slot_inactive_timeout\", PGC_SIGHUP, REPLICATION_SENDING,\n> + gettext_noop(\"Sets the amount of time to wait before invalidating an \"\n> + \"inactive replication slot.\"),\n>\n> nit - that is maybe a bit misleading because IIUC there is no real\n> \"waiting\" happening anywhere. Suggest:\n> Sets the amount of time a replication slot can remain inactive before\n> it will be invalidated.\n\nModified.\n\nPlease find the attached v44 patch with the above changes. I will\ninclude the 0002 xid_age based invalidation patch later.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 31 Aug 2024 13:45:39 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Aug 29, 2024 at 11:31 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Thanks for looking into this.\n>\n> On Mon, Aug 26, 2024 at 4:35 PM Amit Kapila <[email protected]> wrote:\n> >\n> > Few comments on 0001:\n> > 1.\n> > @@ -651,6 +651,13 @@ synchronize_one_slot(RemoteSlot *remote_slot, Oid\n> >\n> > + /*\n> > + * Skip the sync if the local slot is already invalidated. We do this\n> > + * beforehand to avoid slot acquire and release.\n> > + */\n> >\n> > I was wondering why you have added this new check as part of this\n> > patch. If you see the following comments in the related code, you will\n> > know why we haven't done this previously.\n>\n> Removed. Can deal with optimization separately.\n>\n> > 2.\n> > + */\n> > + InvalidateObsoleteReplicationSlots(RS_INVAL_INACTIVE_TIMEOUT,\n> > + 0, InvalidOid, InvalidTransactionId);\n> >\n> > Why do we want to call this for shutdown case (when is_shutdown is\n> > true)? I understand trying to invalidate slots during regular\n> > checkpoint but not sure if we need it at the time of shutdown.\n>\n> Changed it to invalidate only for non-shutdown checkpoints. inactive_timeout invalidation isn't critical for shutdown unlike wal_removed which can help shutdown by freeing up some disk space.\n>\n> > The\n> > other point is can we try to check the performance impact with 100s of\n> > slots as mentioned in the code comments?\n>\n> I first checked how much does the wal_removed invalidation check add to the checkpoint (see 2nd and 3rd column). I then checked how much inactive_timeout invalidation check adds to the checkpoint (see 4th column), it is not more than wal_remove invalidation check. 
I then checked how much the wal_removed invalidation check adds for replication slots that have already been invalidated due to inactive_timeout (see 5th column), looks like not much.\n>\n> | # of slots | HEAD (no invalidation) ms | HEAD (wal_removed) ms | PATCHED (inactive_timeout) ms | PATCHED (inactive_timeout+wal_removed) ms |\n> |------------|----------------------------|-----------------------|-------------------------------|------------------------------------------|\n> | 100 | 18.591 | 370.586 | 359.299 | 373.882 |\n> | 1000 | 15.722 | 4834.901 | 5081.751 | 5072.128 |\n> | 10000 | 19.261 | 59801.062 | 61270.406 | 60270.099 |\n>\n> Having said that, I'm okay to implement the optimization specified. Thoughts?\n>\n\nThe other possibility is to try invalidating due to timeout along with\nwal_removed case during checkpoint. The idea is that if the slot can\nbe invalidated due to WAL then fine, otherwise check if it can be\ninvalidated due to timeout. This can avoid looping the slots and doing\nsimilar work multiple times during the checkpoint.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 2 Sep 2024 12:25:35 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi. Thanks for addressing my previous review comments.\n\nHere are some review comments for v44-0001.\n\n======\nCommit message.\n\n1.\nBecause such synced slots are typically considered not\nactive (for them to be later considered as inactive) as they don't\nperform logical decoding to produce the changes.\n\n~\n\nThis sentence is bad grammar. The docs have the same wording, so\nplease see my doc review comment #4 suggestion below.\n\n======\ndoc/src/sgml/config.sgml\n\n2.\n+ <para>\n+ Invalidates replication slots that are inactive for longer than\n+ specified amount of time. If this value is specified without units,\n+ it is taken as seconds. A value of zero (which is default) disables\n+ the timeout mechanism. This parameter can only be set in\n+ the <filename>postgresql.conf</filename> file or on the server\n+ command line.\n+ </para>\n+\n\nnit - This is OK as-is, but OTOH why not make the wording consistent\nwith the previous GUC description? (e.g. see my v43 [1] #2 review\ncomment)\n\n~~~\n\n3.\n+ <para>\n+ This invalidation check happens either when the slot is acquired\n+ for use or during checkpoint. The time since the slot has become\n+ inactive is known from its\n+ <structfield>inactive_since</structfield> value using which the\n+ timeout is measured.\n+ </para>\n+\n\nI felt this is slightly misleading because slot acquiring has nothing\nto do with setting the slot invalidation anymore. 
Furthermore, the 2nd\nsentence is bad grammar.\n\nnit - IMO something simple like the following rewording can address\nboth of those points:\n\nSlot invalidation due to inactivity timeout occurs during checkpoint.\nThe duration of slot inactivity is calculated using the slot's\n<structfield>inactive_since</structfield> field value.\n\n~\n\n4.\n+ Because such synced slots are typically considered not active\n+ (for them to be later considered as inactive) as they don't perform\n+ logical decoding to produce the changes.\n\nThat sentence has bad grammar.\n\nnit – suggest a much simpler replacement:\nSynced slots are always considered to be inactive because they don't\nperform logical decoding to produce changes.\n\n======\nsrc/backend/replication/slot.c\n\n5.\n+#define IsInactiveTimeoutSlotInvalidationApplicable(s) \\\n+ (replication_slot_inactive_timeout > 0 && \\\n+ s->inactive_since > 0 && \\\n+ !RecoveryInProgress() && \\\n+ !s->data.synced)\n+\n\n5a.\nI felt this would be better implemented as an inline function. Then it\ncan be commented on properly to explain the parts of the condition.\ne.g. the large comment currently in InvalidatePossiblyObsoleteSlot()\nwould be more appropriate in this function.\n\n~\n\n5b.\nThe name is very long. Can't it be something shorter/simpler like:\n'IsSlotATimeoutCandidate()'\n\n~~~\n\n6. ReplicationSlotAcquire\n\n-ReplicationSlotAcquire(const char *name, bool nowait)\n+ReplicationSlotAcquire(const char *name, bool nowait,\n+ bool check_for_invalidation)\n\nnit - Previously this new parameter really did mean to \"check\" for\n[and set the slot] invalidation. But now I suggest renaming it to\n'error_if_invalid' to properly reflect the new usage. And also in the\nslot.h.\n\n~\n\n7.\n+ /*\n+ * Error out if the slot has been invalidated previously. Because there's\n+ * no use in acquiring the invalidated slot.\n+ */\n\nnit - The comment is contrary to the code. If there was no reason to\nskip this error, then you would not have the new parameter allowing\nyou to skip this error. I suggest just repeating the same comment as\nin the function header.\n\n~~~\n\n8. ReportSlotInvalidation\n\nnit - Added some blank lines for consistency.\n\n~~~\n\n9. InvalidatePossiblyObsoleteSlot\n\n+ /*\n+ * Quick exit if inactive timeout invalidation mechanism\n+ * is disabled or slot is currently being used or the\n+ * server is in recovery mode or the slot on standby is\n+ * currently being synced from the primary.\n+ *\n+ * Note that the inactive timeout invalidation mechanism\n+ * is not applicable for slots on the standby server that\n+ * are being synced from primary server. Because such\n+ * synced slots are typically considered not active (for\n+ * them to be later considered as inactive) as they don't\n+ * perform logical decoding to produce the changes.\n+ */\n+ if (!IsInactiveTimeoutSlotInvalidationApplicable(s))\n+ break;\n\n9a.\nConsistency is good (commit message, docs and code comments for this),\nbut the added sentence has bad grammar. Please see the docs review\ncomment #4 above for some alternate phrasing.\n\n~\n\n9b.\nNow that this logic is moved into a macro (I suggested it should be an\ninline function) IMO this comment does not belong here anymore because\nit is commenting code that you cannot see. 
Instead, this comment (or\nsomething like it) should be as comments within the new function.\n\n======\nsrc/include/replication/slot.h\n\n10.\n+extern void ReplicationSlotAcquire(const char *name, bool nowait,\n+ bool check_for_invalidation);\n\nChange the new param name as described in the earlier review comment.\n\n======\nsrc/test/recovery/t/050_invalidate_slots.pl\n\n~~~\n\nPlease refer to the attached file which implements some of the nits\nmentioned above.\n\n======\n[1] v43 review -\nhttps://www.postgresql.org/message-id/CAHut%2BPuFzCHPCiZbpoQX59kgZbebuWT0gR0O7rOe4t_sdYu%3DOA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 2 Sep 2024 18:06:44 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sat, Aug 31, 2024 at 1:45 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Please find the attached v44 patch with the above changes. I will\n> include the 0002 xid_age based invalidation patch later.\n>\n\nIt is better to get the 0001 reviewed and committed first. We can\ndiscuss about 0002 afterwards as 0001 is in itself a complete and\nseparate patch that can be committed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 2 Sep 2024 15:20:06 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi, my previous review posts did not cover the test code.\n\nHere are my review comments for the v44-0001 test code\n\n======\nTEST CASE #1\n\n1.\n+# Wait for the inactive replication slot to be invalidated.\n+$standby1->poll_query_until(\n+ 'postgres', qq[\n+ SELECT COUNT(slot_name) = 1 FROM pg_replication_slots\n+ WHERE slot_name = 'lsub1_sync_slot' AND\n+ invalidation_reason = 'inactive_timeout';\n+])\n+ or die\n+ \"Timed out while waiting for lsub1_sync_slot invalidation to be\nsynced on standby\";\n+\n\nIs that comment correct? IIUC the synced slot should *already* be\ninvalidated from the primary, so here we are not really \"waiting\" for\nit to be invalidated; Instead, we are just \"confirming\" that the\nsynchronized slot is already invalidated with the correct reason as\nexpected.\n\n~~~\n\n2.\n+# Synced slot mustn't get invalidated on the standby even after a checkpoint,\n+# it must sync invalidation from the primary. So, we must not see the slot's\n+# invalidation message in server log.\n+$standby1->safe_psql('postgres', \"CHECKPOINT\");\n+ok( !$standby1->log_contains(\n+ \"invalidating obsolete replication slot \\\"lsub1_sync_slot\\\"\",\n+ $standby1_logstart),\n+ 'check that syned lsub1_sync_slot has not been invalidated on the standby'\n+);\n+\n\nThis test case seemed bogus, for a couple of reasons:\n\n2a. IIUC this 'lsub1_sync_slot' is the same one that is already\ninvalid (from the primary), so nobody should be surprised that an\nalready invalid slot doesn't get flagged as invalid again. i.e.\nShouldn't your test scenario here be done using a valid synced slot?\n\n2b. AFAICT it was only moments above this CHECKPOINT where you\nassigned the standby inactivity timeout to 2s. So even if there was\nsome bug invalidating synced slots I don't think you gave it enough\ntime to happen -- e.g. 
I doubt 2s has elapsed yet.\n\n~\n\n3.\n+# Stop standby to make the standby's replication slot on the primary inactive\n+$standby1->stop;\n+\n+# Wait for the standby's replication slot to become inactive\n+wait_for_slot_invalidation($primary, 'sb1_slot', $logstart,\n+ $inactive_timeout);\n\nThis seems a bit tricky. Both these (the stop and the wait) seem to\nbelong together, so I think maybe a single bigger explanatory comment\ncovering both parts would help for understanding.\n\n======\nTEST CASE #2\n\n4.\n+# Stop subscriber to make the replication slot on publisher inactive\n+$subscriber->stop;\n+\n+# Wait for the replication slot to become inactive and then invalidated due to\n+# timeout.\n+wait_for_slot_invalidation($publisher, 'lsub1_slot', $logstart,\n+ $inactive_timeout);\n\nIIUC, this is just like comment #3 above. Both these (the stop and the\nwait) seem to belong together, so I think maybe a single bigger\nexplanatory comment covering both parts would help for understanding.\n\n~~~\n\n5.\n+# Testcase end: Invalidate logical subscriber's slot due to\n+# replication_slot_inactive_timeout.\n+# =============================================================================\n\n\nIMO the rest of the comment after \"Testcase end\" isn't very useful.\n\n======\nsub wait_for_slot_invalidation\n\n6.\n+sub wait_for_slot_invalidation\n+{\n\nAn explanatory header comment for this subroutine would be helpful.\n\n~~~\n\n7.\n+ # Wait for the replication slot to become inactive\n+ $node->poll_query_until(\n+ 'postgres', qq[\n+ SELECT COUNT(slot_name) = 1 FROM pg_replication_slots\n+ WHERE slot_name = '$slot_name' AND active = 'f';\n+ ])\n+ or die\n+ \"Timed out while waiting for slot $slot_name to become inactive on\nnode $name\";\n+\n+ # Wait for the replication slot info to be updated\n+ $node->poll_query_until(\n+ 'postgres', qq[\n+ SELECT COUNT(slot_name) = 1 FROM pg_replication_slots\n+ WHERE inactive_since IS NOT NULL\n+ AND slot_name = '$slot_name' AND active = 'f';\n+ ])\n+ or die\n+ \"Timed out while waiting for info of slot $slot_name to be updated\non node $name\";\n+\n\nWhy are there are 2 separate poll_query_until's here? Can't those be\ncombined into just one?\n\n~~~\n\n8.\n+ # Sleep at least $inactive_timeout duration to avoid multiple checkpoints\n+ # for the slot to get invalidated.\n+ sleep($inactive_timeout);\n+\n\nMaybe this special sleep to prevent too many CHECKPOINTs should be\nmoved to be inside the other subroutine, which is actually doing those\nCHECKPOINTs.\n\n~~~\n\n9.\n+ # Wait for the inactive replication slot to be invalidated\n+ $node->poll_query_until(\n+ 'postgres', qq[\n+ SELECT COUNT(slot_name) = 1 FROM pg_replication_slots\n+ WHERE slot_name = '$slot_name' AND\n+ invalidation_reason = 'inactive_timeout';\n+ ])\n+ or die\n+ \"Timed out while waiting for inactive slot $slot_name to be\ninvalidated on node $name\";\n+\n\nThe comment seems misleading. IIUC you are not \"waiting\" for the\ninvalidation here, because it is the other subroutine doing the\nwaiting for the invalidation message in the logs. Instead, here I\nthink you are just confirming the 'invalidation_reason' got set\ncorrectly. The comment should say what it is really doing.\n\n======\nsub check_for_slot_invalidation_in_server_log\n\n10.\n+# Check for invalidation of slot in server log\n+sub check_for_slot_invalidation_in_server_log\n+{\n\nI think the main function of this subroutine is the CHECKPOINT and the\nwaiting for the server log to say invalidation happened. 
It is doing a\nloop of a) CHECKPOINT then b) inspecting the server log for the slot\ninvalidation, and c) waiting for a bit. Repeat 10 times.\n\nA comment describing the logic for this subroutine would be helpful.\n\nThe most important side-effect of this function is the CHECKPOINT\nbecause without that nothing will ever get invalidated due to\ninactivity, but this key point is not obvious from the subroutine\nname.\n\nIMO it would be better to name this differently to reflect what it is\nreally doing:\ne.g. \"CHECKPOINT_and_wait_for_slot_invalidation_in_server_log\"\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 3 Sep 2024 16:55:58 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sat, Aug 31, 2024 at 1:45 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Hi,\n>\n>\n> Please find the attached v44 patch with the above changes. I will\n> include the 0002 xid_age based invalidation patch later.\n>\n\nThanks for the patch Bharath. My review and testing is WIP, but please\nfind few comments and queries:\n\n1)\nI see that ReplicationSlotAlter() will error out if the slot is\ninvalidated due to timeout. I have not tested it myself, but do you\nknow if slot-alter errors out for other invalidation causes as well?\nJust wanted to confirm that the behaviour is consistent for all\ninvalidation causes.\n\n2)\nWhen a slot is invalidated, and we try to use that slot, it gives this msg:\n\nERROR: can no longer get changes from replication slot \"mysubnew1_2\"\nDETAIL: The slot became invalid because it was inactive since\n2024-09-03 14:23:34.094067+05:30, which is more than 600 seconds ago.\nHINT: You might need to increase \"replication_slot_inactive_timeout.\".\n\nIsn't HINT misleading? Even if we increase it now, the slot can not be\nreused again.\n\n\n3)\nWhen the slot is invalidated, the' inactive_since' still keeps on\nchanging when there is a subscriber trying to start replication\ncontinuously. I think ReplicationSlotAcquire() keeps on failing and\nthus Release keeps on setting it again and again. Shouldn't we stop\nsetting/chnaging 'inactive_since' once the slot is invalidated\nalready, otherwise it will be misleading.\n\npostgres=# select failover,synced,inactive_since,invalidation_reason\nfrom pg_replication_slots;\n\n failover | synced | inactive_since | invalidation_reason\n----------+--------+----------------------------------+---------------------\n t | f | 2024-09-03 14:23:.. | inactive_timeout\n\nafter sometime:\n failover | synced | inactive_since | invalidation_reason\n----------+--------+----------------------------------+---------------------\n t | f | 2024-09-03 14:26:..| inactive_timeout\n\n\n4)\nsrc/sgml/config.sgml:\n\n4a)\n+ A value of zero (which is default) disables the timeout mechanism.\n\nBetter will be:\nA value of zero (which is default) disables the inactive timeout\ninvalidation mechanism .\nor\nA value of zero (which is default) disables the slot invalidation due\nto the inactive timeout mechanism.\n\ni.e. 
rephrase to indicate that invalidation is disabled.\n\n4b)\n'synced' and inactive_since should point to pg_replication_slots:\n\nexample:\n<link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>synced</structfield>\n\n5)\nsrc/sgml/system-views.sgml:\n+ ..the slot has been inactive for longer than the duration specified\nby replication_slot_inactive_timeout parameter.\n\nBetter to have:\n..the slot has been inactive for a time longer than the duration\nspecified by the replication_slot_inactive_timeout parameter.\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 3 Sep 2024 15:01:06 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Sep 3, 2024 at 3:01 PM shveta malik <[email protected]> wrote:\n>\n>\n> 1)\n> I see that ReplicationSlotAlter() will error out if the slot is\n> invalidated due to timeout. I have not tested it myself, but do you\n> know if slot-alter errors out for other invalidation causes as well?\n> Just wanted to confirm that the behaviour is consistent for all\n> invalidation causes.\n\nI was able to test this and as anticipated behavior is different. When\nslot is invalidated due to say 'wal_removed', I am still able to do\n'alter' of that slot.\nPlease see:\n\nPub:\n slot_name | failover | synced | inactive_since |\ninvalidation_reason\n-------------+----------+--------+----------------------------------+---------------------\n mysubnew1_1 | t | f | 2024-09-04 08:58:12.802278+05:30 |\nwal_removed\n\nSub:\nnewdb1=# alter subscription mysubnew1_1 disable;\nALTER SUBSCRIPTION\n\nnewdb1=# alter subscription mysubnew1_1 set (failover=false);\nALTER SUBSCRIPTION\n\nPub: (failover altered)\n slot_name | failover | synced | inactive_since |\ninvalidation_reason\n-------------+----------+--------+----------------------------------+---------------------\n mysubnew1_1 | f | f | 2024-09-04 08:58:47.824471+05:30 |\nwal_removed\n\n\nwhile when invalidation_reason is 'inactive_timeout', it fails:\n\nPub:\n slot_name | failover | synced | inactive_since |\ninvalidation_reason\n-------------+----------+--------+----------------------------------+---------------------\n mysubnew1_1 | t | f | 2024-09-03 14:30:57.532206+05:30 |\ninactive_timeout\n\nSub:\nnewdb1=# alter subscription mysubnew1_1 disable;\nALTER SUBSCRIPTION\n\nnewdb1=# alter subscription mysubnew1_1 set (failover=false);\nERROR: could not alter replication slot \"mysubnew1_1\": ERROR: can no\nlonger get changes from replication slot \"mysubnew1_1\"\nDETAIL: The slot became invalid because it was inactive since\n2024-09-04 08:54:20.308996+05:30, which is more than 0 seconds ago.\nHINT: You might need to increase \"replication_slot_inactive_timeout.\".\n\nI think the behavior should be same.\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 4 Sep 2024 09:17:33 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Sep 4, 2024 at 9:17 AM shveta malik <[email protected]> wrote:\n>\n> On Tue, Sep 3, 2024 at 3:01 PM shveta malik <[email protected]> wrote:\n> >\n> >\n\n\n1)\nIt is related to one of my previous comments (pt 3 in [1]) where I\nstated that inactive_since should not keep on changing once a slot is\ninvalidated.\nBelow is one side effect if inactive_since keeps on changing:\n\npostgres=# SELECT * FROM 
pg_replication_slot_advance('mysubnew1_1',\npg_current_wal_lsn());\nERROR: can no longer get changes from replication slot \"mysubnew1_1\"\nDETAIL: The slot became invalid because it was inactive since\n2024-09-04 10:03:56.68053+05:30, which is more than 10 seconds ago.\nHINT: You might need to increase \"replication_slot_inactive_timeout.\".\n\npostgres=# select now();\n now\n---------------------------------\n 2024-09-04 10:04:00.26564+05:30\n\n'DETAIL' gives wrong information, we are not past 10-seconds. This is\nbecause inactive_since got updated even in ERROR scenario.\n\n\n2)\nOne more issue in this message is, once I set\nreplication_slot_inactive_timeout to a bigger value, it becomes more\nmisleading. This is because invalidation was done in the past using\nprevious value while message starts showing new value:\n\nALTER SYSTEM SET replication_slot_inactive_timeout TO '36h';\n\n--see 129600 secs in DETAIL and the current time.\npostgres=# SELECT * FROM pg_replication_slot_advance('mysubnew1_1',\npg_current_wal_lsn());\nERROR: can no longer get changes from replication slot \"mysubnew1_1\"\nDETAIL: The slot became invalid because it was inactive since\n2024-09-04 10:06:38.980939+05:30, which is more than 129600 seconds\nago.\npostgres=# select now();\n now\n----------------------------------\n 2024-09-04 10:07:35.201894+05:30\n\nI feel we should change this message itself.\n\n~~~~~\n\nWhen invalidation is due to wal_removed, we get a way simpler message:\n\nnewdb1=# SELECT * FROM pg_replication_slot_advance('mysubnew1_2',\npg_current_wal_lsn());\nERROR: replication slot \"mysubnew1_2\" cannot be advanced\nDETAIL: This slot has never previously reserved WAL, or it has been\ninvalidated.\n\nThis message does not mention 'max_slot_wal_keep_size'. We should have\na similar message for our case. Thoughts?\n\n[1]: https://www.postgresql.org/message-id/CAJpy0uC8Dg-0JS3NRUwVUemgz5Ar2v3_EQQFXyAigWSEQ8U47Q%40mail.gmail.com\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 4 Sep 2024 14:48:51 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Sep 4, 2024 at 2:49 PM shveta malik <[email protected]> wrote:\n>\n> On Wed, Sep 4, 2024 at 9:17 AM shveta malik <[email protected]> wrote:\n> >\n> > On Tue, Sep 3, 2024 at 3:01 PM shveta malik <[email protected]> wrote:\n> > >\n> > >\n>\n>\n> 1)\n> It is related to one of my previous comments (pt 3 in [1]) where I\n> stated that inactive_since should not keep on changing once a slot is\n> invalidated.\n>\n\nAgreed. Updating the inactive_since for a slot that is already invalid\nis misleading.\n\n>\n>\n> 2)\n> One more issue in this message is, once I set\n> replication_slot_inactive_timeout to a bigger value, it becomes more\n> misleading. 
This is because invalidation was done in the past using\n> previous value while message starts showing new value:\n>\n> ALTER SYSTEM SET replication_slot_inactive_timeout TO '36h';\n>\n> --see 129600 secs in DETAIL and the current time.\n> postgres=# SELECT * FROM pg_replication_slot_advance('mysubnew1_1',\n> pg_current_wal_lsn());\n> ERROR: can no longer get changes from replication slot \"mysubnew1_1\"\n> DETAIL: The slot became invalid because it was inactive since\n> 2024-09-04 10:06:38.980939+05:30, which is more than 129600 seconds\n> ago.\n> postgres=# select now();\n> now\n> ----------------------------------\n> 2024-09-04 10:07:35.201894+05:30\n>\n> I feel we should change this message itself.\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 5 Sep 2024 09:25:51 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Sep 4, 2024 at 9:17 AM shveta malik <[email protected]> wrote:\n>\n> On Tue, Sep 3, 2024 at 3:01 PM shveta malik <[email protected]> wrote:\n> >\n> >\n> > 1)\n> > I see that ReplicationSlotAlter() will error out if the slot is\n> > invalidated due to timeout. I have not tested it myself, but do you\n> > know if slot-alter errors out for other invalidation causes as well?\n> > Just wanted to confirm that the behaviour is consistent for all\n> > invalidation causes.\n>\n> I was able to test this and as anticipated behavior is different. When\n> slot is invalidated due to say 'wal_removed', I am still able to do\n> 'alter' of that slot.\n> Please see:\n>\n> Pub:\n> slot_name | failover | synced | inactive_since |\n> invalidation_reason\n> -------------+----------+--------+----------------------------------+---------------------\n> mysubnew1_1 | t | f | 2024-09-04 08:58:12.802278+05:30 |\n> wal_removed\n>\n> Sub:\n> newdb1=# alter subscription mysubnew1_1 disable;\n> ALTER SUBSCRIPTION\n>\n> newdb1=# alter subscription mysubnew1_1 set (failover=false);\n> ALTER SUBSCRIPTION\n>\n> Pub: (failover altered)\n> slot_name | failover | synced | inactive_since |\n> invalidation_reason\n> -------------+----------+--------+----------------------------------+---------------------\n> mysubnew1_1 | f | f | 2024-09-04 08:58:47.824471+05:30 |\n> wal_removed\n>\n>\n> while when invalidation_reason is 'inactive_timeout', it fails:\n>\n> Pub:\n> slot_name | failover | synced | inactive_since |\n> invalidation_reason\n> -------------+----------+--------+----------------------------------+---------------------\n> mysubnew1_1 | t | f | 2024-09-03 14:30:57.532206+05:30 |\n> inactive_timeout\n>\n> Sub:\n> newdb1=# alter subscription mysubnew1_1 disable;\n> ALTER SUBSCRIPTION\n>\n> newdb1=# alter subscription mysubnew1_1 set (failover=false);\n> ERROR: could not alter replication slot \"mysubnew1_1\": ERROR: can no\n> longer get changes from replication slot \"mysubnew1_1\"\n> DETAIL: The slot became invalid because it was inactive since\n> 2024-09-04 08:54:20.308996+05:30, which is more than 0 seconds ago.\n> HINT: You might need to increase \"replication_slot_inactive_timeout.\".\n>\n> I think the behavior should be same.\n>\n\nWe should not allow the invalid replication slot to be altered\nirrespective of the reason unless there is any benefit.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 5 Sep 2024 09:30:16 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 
Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nThanks for reviewing.\n\nOn Mon, Sep 2, 2024 at 1:37 PM Peter Smith <[email protected]> wrote:\n>\n> Commit message.\n>\n> 1.\n> Because such synced slots are typically considered not\n> active (for them to be later considered as inactive) as they don't\n> perform logical decoding to produce the changes.\n>\n> This sentence is bad grammar. The docs have the same wording, so\n> please see my doc review comment #4 suggestion below.\n\n+1\n\n> 2.\n> + <para>\n> + Invalidates replication slots that are inactive for longer than\n> + specified amount of time. If this value is specified without units,\n> + it is taken as seconds. A value of zero (which is default) disables\n> + the timeout mechanism. This parameter can only be set in\n> + the <filename>postgresql.conf</filename> file or on the server\n> + command line.\n> + </para>\n> +\n>\n> nit - This is OK as-is, but OTOH why not make the wording consistent\n> with the previous GUC description? (e.g. see my v43 [1] #2 review\n> comment)\n\n+1.\n\n> 3.\n> + <para>\n> + This invalidation check happens either when the slot is acquired\n> + for use or during checkpoint. The time since the slot has become\n> + inactive is known from its\n> + <structfield>inactive_since</structfield> value using which the\n> + timeout is measured.\n> + </para>\n> +\n>\n> I felt this is slightly misleading because slot acquiring has nothing\n> to do with setting the slot invalidation anymore. Furthermore, the 2nd\n> sentence is bad grammar.\n>\n> nit - IMO something simple like the following rewording can address\n> both of those points:\n>\n> Slot invalidation due to inactivity timeout occurs during checkpoint.\n> The duration of slot inactivity is calculated using the slot's\n> <structfield>inactive_since</structfield> field value.\n\n+1.\n\n> 4.\n> + Because such synced slots are typically considered not active\n> + (for them to be later considered as inactive) as they don't perform\n> + logical decoding to produce the changes.\n>\n> That sentence has bad grammar.\n>\n> nit – suggest a much simpler replacement:\n> Synced slots are always considered to be inactive because they don't\n> perform logical decoding to produce changes.\n\n+1.\n\n> 5.\n> +#define IsInactiveTimeoutSlotInvalidationApplicable(s) \\\n>\n> 5a.\n> I felt this would be better implemented as an inline function. Then it\n> can be commented on properly to explain the parts of the condition.\n> e.g. the large comment currently in InvalidatePossiblyObsoleteSlot()\n> would be more appropriate in this function.\n\n+1.\n\n> 5b.\n> The name is very long. Can't it be something shorter/simpler like:\n> 'IsSlotATimeoutCandidate()'\n>\n> ~~~\n\nMissing inactive in the above suggested name. Used\nSlotInactiveTimeoutCheckAllowed, similar to XLogInsertAllowed.\n\n> 6. ReplicationSlotAcquire\n>\n> -ReplicationSlotAcquire(const char *name, bool nowait)\n> +ReplicationSlotAcquire(const char *name, bool nowait,\n> + bool check_for_invalidation)\n>\n> nit - Previously this new parameter really did mean to \"check\" for\n> [and set the slot] invalidation. But now I suggest renaming it to\n> 'error_if_invalid' to properly reflect the new usage. And also in the\n> slot.h.\n\n+1.\n\n> 7.\n> + /*\n> + * Error out if the slot has been invalidated previously. Because there's\n> + * no use in acquiring the invalidated slot.\n> + */\n>\n> nit - The comment is contrary to the code. 
If there was no reason to\n> skip this error, then you would not have the new parameter allowing\n> you to skip this error. I suggest just repeating the same comment as\n> in the function header.\n\n+1.\n\n> 8. ReportSlotInvalidation\n>\n> nit - Added some blank lines for consistency.\n\n+1.\n\n> 9. InvalidatePossiblyObsoleteSlot\n>\n> 9a.\n> Consistency is good (commit message, docs and code comments for this),\n> but the added sentence has bad grammar. Please see the docs review\n> comment #4 above for some alternate phrasing.\n\n+1.\n\n> 9b.\n> Now that this logic is moved into a macro (I suggested it should be an\n> inline function) IMO this comment does not belong here anymore because\n> it is commenting code that you cannot see. Instead, this comment (or\n> something like it) should be as comments within the new function.\n>\n> ======\n> src/include/replication/slot.h\n\n+1.\n\n> 10.\n> +extern void ReplicationSlotAcquire(const char *name, bool nowait,\n> + bool check_for_invalidation);\n>\n> Change the new param name as described in the earlier review comment.\n\n+1.\n\n> Please refer to the attached file which implements some of the nits\n> mentioned above.\n\nMerged the diff into v45. Thanks.\n\nOn Tue, Sep 3, 2024 at 12:26 PM Peter Smith <[email protected]> wrote:\n>\n> TEST CASE #1\n>\n> 1.\n> +# Wait for the inactive replication slot to be invalidated.\n>\n> Is that comment correct? IIUC the synced slot should *already* be\n> invalidated from the primary, so here we are not really \"waiting\" for\n> it to be invalidated; Instead, we are just \"confirming\" that the\n> synchronized slot is already invalidated with the correct reason as\n> expected.\n\nModified the comment.\n\n> 2.\n> +# Synced slot mustn't get invalidated on the standby even after a checkpoint,\n> +# it must sync invalidation from the primary. So, we must not see the slot's\n> +# invalidation message in server log.\n>\n> This test case seemed bogus, for a couple of reasons:\n>\n> 2a. IIUC this 'lsub1_sync_slot' is the same one that is already\n> invalid (from the primary), so nobody should be surprised that an\n> already invalid slot doesn't get flagged as invalid again. i.e.\n> Shouldn't your test scenario here be done using a valid synced slot?\n\n+1. Added another test case for checking the synced slot not getting\ninvalidated despite inactive timeout being set.\n\n> 2b. AFAICT it was only moments above this CHECKPOINT where you\n> assigned the standby inactivity timeout to 2s. So even if there was\n> some bug invalidating synced slots I don't think you gave it enough\n> time to happen -- e.g. I doubt 2s has elapsed yet.\n\nAdded sleep(timeout+1) before the checkpoint.\n\n> 3.\n> +# Stop standby to make the standby's replication slot on the primary inactive\n> +$standby1->stop;\n> +\n> +# Wait for the standby's replication slot to become inactive\n>\n> TEST CASE #2\n>\n> 4.\n> +# Stop subscriber to make the replication slot on publisher inactive\n> +$subscriber->stop;\n> +\n> +# Wait for the replication slot to become inactive and then invalidated due to\n> +# timeout.\n> +wait_for_slot_invalidation($publisher, 'lsub1_slot', $logstart,\n> + $inactive_timeout);\n>\n> IIUC, this is just like comment #3 above. 
Both these (the stop and the\n> wait) seem to belong together, so I think maybe a single bigger\n> explanatory comment covering both parts would help for understanding.\n\nDone.\n\n> 5.\n> +# Testcase end: Invalidate logical subscriber's slot due to\n> +# replication_slot_inactive_timeout.\n> +# =============================================================================\n>\n> IMO the rest of the comment after \"Testcase end\" isn't very useful.\n\nRemoved.\n\n> ======\n> sub wait_for_slot_invalidation\n>\n> 6.\n> +sub wait_for_slot_invalidation\n> +{\n>\n> An explanatory header comment for this subroutine would be helpful.\n\nDone.\n\n> 7.\n> + # Wait for the replication slot to become inactive\n> + $node->poll_query_until(\n>\n> Why are there are 2 separate poll_query_until's here? Can't those be\n> combined into just one?\n\nAh. My bad. Removed.\n\n> ~~~\n>\n> 8.\n> + # Sleep at least $inactive_timeout duration to avoid multiple checkpoints\n> + # for the slot to get invalidated.\n> + sleep($inactive_timeout);\n> +\n>\n> Maybe this special sleep to prevent too many CHECKPOINTs should be\n> moved to be inside the other subroutine, which is actually doing those\n> CHECKPOINTs.\n\nDone.\n\n> 9.\n> + # Wait for the inactive replication slot to be invalidated\n> + \"Timed out while waiting for inactive slot $slot_name to be\n> invalidated on node $name\";\n> +\n>\n> The comment seems misleading. IIUC you are not \"waiting\" for the\n> invalidation here, because it is the other subroutine doing the\n> waiting for the invalidation message in the logs. Instead, here I\n> think you are just confirming the 'invalidation_reason' got set\n> correctly. The comment should say what it is really doing.\n\nModified.\n\n> sub check_for_slot_invalidation_in_server_log\n>\n> 10.\n> +# Check for invalidation of slot in server log\n> +sub check_for_slot_invalidation_in_server_log\n> +{\n>\n> I think the main function of this subroutine is the CHECKPOINT and the\n> waiting for the server log to say invalidation happened. It is doing a\n> loop of a) CHECKPOINT then b) inspecting the server log for the slot\n> invalidation, and c) waiting for a bit. Repeat 10 times.\n>\n> A comment describing the logic for this subroutine would be helpful.\n>\n> The most important side-effect of this function is the CHECKPOINT\n> because without that nothing will ever get invalidated due to\n> inactivity, but this key point is not obvious from the subroutine\n> name.\n>\n> IMO it would be better to name this differently to reflect what it is\n> really doing:\n> e.g. \"CHECKPOINT_and_wait_for_slot_invalidation_in_server_log\"\n\nThat would be too long. Changed the function name to\ntrigger_slot_invalidation() which is appropriate.\n\nPlease find the v45 patch. 
Addressed above and Shveta's review comments [1].\n\nAmit's comments [2] and [3] are still pending.\n\n[1] https://www.postgresql.org/message-id/CAJpy0uC8Dg-0JS3NRUwVUemgz5Ar2v3_EQQFXyAigWSEQ8U47Q%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAA4eK1K7DdT_5HnOWs5tVPYC%3D-h%2Bm85wu7k-7RVJaJ7zMxprWQ%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/CAA4eK1%2Bkt-QRr1RP%3DD%3D4_tp%2BS%2BCErQ6rNe7KVYEyZ3f6PYXpvw%40mail.gmail.com\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 8 Sep 2024 17:24:47 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nThanks for reviewing.\n\nOn Tue, Sep 3, 2024 at 3:01 PM shveta malik <[email protected]> wrote:\n>\n> 1)\n> I see that ReplicationSlotAlter() will error out if the slot is\n> invalidated due to timeout. I have not tested it myself, but do you\n> know if slot-alter errors out for other invalidation causes as well?\n> Just wanted to confirm that the behaviour is consistent for all\n> invalidation causes.\n\nWill respond to Amit's comment soon.\n\n> 2)\n> When a slot is invalidated, and we try to use that slot, it gives this msg:\n>\n> ERROR: can no longer get changes from replication slot \"mysubnew1_2\"\n> DETAIL: The slot became invalid because it was inactive since\n> 2024-09-03 14:23:34.094067+05:30, which is more than 600 seconds ago.\n> HINT: You might need to increase \"replication_slot_inactive_timeout.\".\n>\n> Isn't HINT misleading? Even if we increase it now, the slot can not be\n> reused again.\n>\n> Below is one side effect if inactive_since keeps on changing:\n>\n> postgres=# SELECT * FROM pg_replication_slot_advance('mysubnew1_1',\n> pg_current_wal_lsn());\n> ERROR: can no longer get changes from replication slot \"mysubnew1_1\"\n> DETAIL: The slot became invalid because it was inactive since\n> 2024-09-04 10:03:56.68053+05:30, which is more than 10 seconds ago.\n> HINT: You might need to increase \"replication_slot_inactive_timeout.\".\n>\n> postgres=# select now();\n> now\n> ---------------------------------\n> 2024-09-04 10:04:00.26564+05:30\n>\n> 'DETAIL' gives wrong information, we are not past 10-seconds. This is\n> because inactive_since got updated even in ERROR scenario.\n>\n> ERROR: can no longer get changes from replication slot \"mysubnew1_1\"\n> DETAIL: The slot became invalid because it was inactive since\n> 2024-09-04 10:06:38.980939+05:30, which is more than 129600 seconds\n> ago.\n> postgres=# select now();\n> now\n> ----------------------------------\n> 2024-09-04 10:07:35.201894+05:30\n>\n> I feel we should change this message itself.\n\nRemoved the hint and corrected the detail message as following:\n\nerrmsg(\"can no longer get changes from replication slot \\\"%s\\\"\",\nNameStr(s->data.name)),\nerrdetail(\"This slot has been invalidated because it was inactive for\nlonger than the amount of time specified by \\\"%s\\\".\",\n\"replication_slot_inactive_timeout.\")));\n\n> 3)\n> When the slot is invalidated, the' inactive_since' still keeps on\n> changing when there is a subscriber trying to start replication\n> continuously. I think ReplicationSlotAcquire() keeps on failing and\n> thus Release keeps on setting it again and again. 
Shouldn't we stop\n> setting/chnaging 'inactive_since' once the slot is invalidated\n> already, otherwise it will be misleading.\n>\n> postgres=# select failover,synced,inactive_since,invalidation_reason\n> from pg_replication_slots;\n>\n> failover | synced | inactive_since | invalidation_reason\n> ----------+--------+----------------------------------+---------------------\n> t | f | 2024-09-03 14:23:.. | inactive_timeout\n>\n> after sometime:\n> failover | synced | inactive_since | invalidation_reason\n> ----------+--------+----------------------------------+---------------------\n> t | f | 2024-09-03 14:26:..| inactive_timeout\n\nChanged it to not update inactive_since for slots invalidated due to\ninactive timeout.\n\n> 4)\n> src/sgml/config.sgml:\n>\n> 4a)\n> + A value of zero (which is default) disables the timeout mechanism.\n>\n> Better will be:\n> A value of zero (which is default) disables the inactive timeout\n> invalidation mechanism .\n\nChanged.\n\n> 4b)\n> 'synced' and inactive_since should point to pg_replication_slots:\n>\n> example:\n> <link linkend=\"view-pg-replication-slots\">pg_replication_slots</link>.<structfield>synced</structfield>\n\nModified.\n\n> 5)\n> src/sgml/system-views.sgml:\n> + ..the slot has been inactive for longer than the duration specified\n> by replication_slot_inactive_timeout parameter.\n>\n> Better to have:\n> ..the slot has been inactive for a time longer than the duration\n> specified by the replication_slot_inactive_timeout parameter.\n\nChanged it to the following to be consistent with the config.sgml.\n\n <literal>inactive_timeout</literal> means that the slot has been\n inactive for longer than the amount of time specified by the\n <xref linkend=\"guc-replication-slot-inactive-timeout\"/> parameter.\n\nPlease find the v45 patch posted upthread at\nhttps://www.postgresql.org/message-id/CALj2ACWXQT3_HY40ceqKf1DadjLQP6b1r%3D0sZRh-xhAOd-b0pA%40mail.gmail.com\nfor the changes.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 8 Sep 2024 17:25:42 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Thu, Sep 5, 2024 at 9:30 AM Amit Kapila <[email protected]> wrote:\n>\n> On Wed, Sep 4, 2024 at 9:17 AM shveta malik <[email protected]> wrote:\n> >\n> > On Tue, Sep 3, 2024 at 3:01 PM shveta malik <[email protected]> wrote:\n> > >\n> > >\n> > > 1)\n> > > I see that ReplicationSlotAlter() will error out if the slot is\n> > > invalidated due to timeout. I have not tested it myself, but do you\n> > > know if slot-alter errors out for other invalidation causes as well?\n> > > Just wanted to confirm that the behaviour is consistent for all\n> > > invalidation causes.\n> >\n> > I was able to test this and as anticipated behavior is different. 
When\n> > slot is invalidated due to say 'wal_removed', I am still able to do\n> > 'alter' of that slot.\n> > Please see:\n> >\n> > Pub:\n> > slot_name | failover | synced | inactive_since |\n> > invalidation_reason\n> > -------------+----------+--------+----------------------------------+---------------------\n> > mysubnew1_1 | t | f | 2024-09-04 08:58:12.802278+05:30 |\n> > wal_removed\n> >\n> > Sub:\n> > newdb1=# alter subscription mysubnew1_1 disable;\n> > ALTER SUBSCRIPTION\n> >\n> > newdb1=# alter subscription mysubnew1_1 set (failover=false);\n> > ALTER SUBSCRIPTION\n> >\n> > Pub: (failover altered)\n> > slot_name | failover | synced | inactive_since |\n> > invalidation_reason\n> > -------------+----------+--------+----------------------------------+---------------------\n> > mysubnew1_1 | f | f | 2024-09-04 08:58:47.824471+05:30 |\n> > wal_removed\n> >\n> >\n> > while when invalidation_reason is 'inactive_timeout', it fails:\n> >\n> > Pub:\n> > slot_name | failover | synced | inactive_since |\n> > invalidation_reason\n> > -------------+----------+--------+----------------------------------+---------------------\n> > mysubnew1_1 | t | f | 2024-09-03 14:30:57.532206+05:30 |\n> > inactive_timeout\n> >\n> > Sub:\n> > newdb1=# alter subscription mysubnew1_1 disable;\n> > ALTER SUBSCRIPTION\n> >\n> > newdb1=# alter subscription mysubnew1_1 set (failover=false);\n> > ERROR: could not alter replication slot \"mysubnew1_1\": ERROR: can no\n> > longer get changes from replication slot \"mysubnew1_1\"\n> > DETAIL: The slot became invalid because it was inactive since\n> > 2024-09-04 08:54:20.308996+05:30, which is more than 0 seconds ago.\n> > HINT: You might need to increase \"replication_slot_inactive_timeout.\".\n> >\n> > I think the behavior should be same.\n> >\n>\n> We should not allow the invalid replication slot to be altered\n> irrespective of the reason unless there is any benefit.\n>\n\nOkay, then I think we need to change the existing behaviour of the\nother invalidation causes which still allow alter-slot.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 9 Sep 2024 09:17:30 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Mon, Sep 9, 2024 at 9:17 AM shveta malik <[email protected]> wrote:\n>\n> > We should not allow the invalid replication slot to be altered\n> > irrespective of the reason unless there is any benefit.\n>\n> Okay, then I think we need to change the existing behaviour of the\n> other invalidation causes which still allow alter-slot.\n\n+1. Perhaps, track it in a separate thread?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 9 Sep 2024 10:26:17 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Sep 9, 2024 at 10:26 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Mon, Sep 9, 2024 at 9:17 AM shveta malik <[email protected]> wrote:\n> >\n> > > We should not allow the invalid replication slot to be altered\n> > > irrespective of the reason unless there is any benefit.\n> >\n> > Okay, then I think we need to change the existing behaviour of the\n> > other invalidation causes which still allow alter-slot.\n>\n> +1. 
Perhaps, track it in a separate thread?\n\nI think so. It does not come under the scope of this thread.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 9 Sep 2024 10:28:42 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Sun, Sep 8, 2024 at 5:25 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n>\n> Please find the v45 patch. Addressed above and Shveta's review comments [1].\n>\n\nThanks for the patch. Please find my comments:\n\n1)\nsrc/sgml/config.sgml:\n\n+ Synced slots are always considered to be inactive because they\ndon't perform logical decoding to produce changes.\n\nIt is better we avoid such a statement, as internally we use logical\ndecoding to advance restart-lsn, see\n'LogicalSlotAdvanceAndCheckSnapState' called form slotsync.c.\n<Also see related comment 6 below>\n\n2)\nsrc/sgml/config.sgml:\n\n+ disables the inactive timeout invalidation mechanism\n\n+ Slot invalidation due to inactivity timeout occurs during checkpoint.\n\nEither have 'inactive' at both the places or 'inactivity'.\n\n\n3)\nslot.c:\n+static bool InvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause\ncause,\n+ ReplicationSlot *s,\n+ XLogRecPtr oldestLSN,\n+ Oid dboid,\n+ TransactionId snapshotConflictHorizon,\n+ bool *invalidated);\n+static inline bool SlotInactiveTimeoutCheckAllowed(ReplicationSlot *s);\n\nI think, we do not need above 2 declarations. The code compile fine\nwithout these as the usage is later than the definition.\n\n\n4)\n+ /*\n+ * An error is raised if error_if_invalid is true and the slot has been\n+ * invalidated previously.\n+ */\n+ if (error_if_invalid && s->data.invalidated == RS_INVAL_INACTIVE_TIMEOUT)\n\nThe comment is generic while the 'if condition' is specific to one\ninvalidation cause. Even though I feel it can be made generic test for\nall invalidation causes but that is not under scope of this thread and\nneeds more testing/analysis. For the time being, we can make comment\nspecific to the concerned invalidation cause. The header of function\nwill also need the same change.\n\n5)\nSlotInactiveTimeoutCheckAllowed():\n\n+ * Check if inactive timeout invalidation mechanism is disabled or slot is\n+ * currently being used or server is in recovery mode or slot on standby is\n+ * currently being synced from the primary.\n+ *\n\nThese comments say exact opposite of what we are checking in code.\nSince the function name has 'Allowed' in it, we should be putting\ncomments which say what allows it instead of what disallows it.\n\n\n6)\n\n+ * Synced slots are always considered to be inactive because they don't\n+ * perform logical decoding to produce changes.\n+ */\n+static inline bool\n+SlotInactiveTimeoutCheckAllowed(ReplicationSlot *s)\n\nPerhaps we should avoid mentioning logical decoding here. When slots\nare synced, they are performing decoding and their inactive_since is\nchanging continuously. A better way to make this statement will be:\n\nWe want to ensure that the slots being synchronized are not\ninvalidated, as they need to be preserved for future use when the\nstandby server is promoted to the primary. This is necessary for\nresuming logical replication from the new primary server.\n<Rephrase if needed>\n\n7)\n\nInvalidatePossiblyObsoleteSlot()\n\nwe are calling SlotInactiveTimeoutCheckAllowed() twice in this\nfunction. 
We shall optimize.\n\nAt the first usage place, shall we simply get timestamp when cause is\nRS_INVAL_INACTIVE_TIMEOUT without checking\nSlotInactiveTimeoutCheckAllowed() as IMO it does not seem a\nperformance critical section. Or if we retain check at first place,\nthen at the second place we can avoid calling it again based on\nwhether 'now' is NULL or not.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 9 Sep 2024 10:53:50 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi, here are some review comments for v45-0001 (excluding the test code)\n\n======\ndoc/src/sgml/config.sgml\n\n1.\n+ Note that the inactive timeout invalidation mechanism is not\n+ applicable for slots on the standby server that are being synced\n+ from primary server (i.e., standby slots having\n\nnit - /from primary server/from the primary server/\n\n======\nsrc/backend/replication/slot.c\n\n2. ReplicationSlotAcquire\n\n+ errmsg(\"can no longer get changes from replication slot \\\"%s\\\"\",\n+ NameStr(s->data.name)),\n+ errdetail(\"This slot has been invalidated because it was inactive\nfor longer than the amount of time specified by \\\"%s\\\".\",\n+ \"replication_slot_inactive_timeout.\")));\n\nnit - \"replication_slot_inactive_timeout.\" - should be no period\ninside that GUC name literal\n\n~~~\n\n3. ReportSlotInvalidation\n\nI didn't understand why there was a hint for:\n\"You might need to increase \\\"%s\\\".\", \"max_slot_wal_keep_size\"\n\nBut you don't have an equivalent hint for timeout invalidation:\n\"You might need to increase \\\"%s\\\".\", \"replication_slot_inactive_timeout\"\n\nWhy aren't these similar cases consistent?\n\n~~~\n\n4. RestoreSlotFromDisk\n\n+ /* Use the same inactive_since time for all the slots. */\n+ if (now == 0)\n+ now = GetCurrentTimestamp();\n+\n\nIs the deferred assignment really necessary? Why not just\nunconditionally assign the 'now' just before the for-loop? Or even at\nthe declaration? e.g. The 'replication_slot_inactive_timeout' is\nmeasured in seconds so I don't think 'inactive_since' being wrong by a\nmillisecond here will make any difference.\n\n======\nsrc/include/replication/slot.h\n\n5. ReplicationSlotSetInactiveSince\n\n+/*\n+ * Set slot's inactive_since property unless it was previously invalidated due\n+ * to inactive timeout.\n+ */\n+static inline void\n+ReplicationSlotSetInactiveSince(ReplicationSlot *s, TimestampTz *now,\n+ bool acquire_lock)\n+{\n+ if (acquire_lock)\n+ SpinLockAcquire(&s->mutex);\n+\n+ if (s->data.invalidated != RS_INVAL_INACTIVE_TIMEOUT)\n+ s->inactive_since = *now;\n+\n+ if (acquire_lock)\n+ SpinLockRelease(&s->mutex);\n+}\n\nIs the logic correct? What if the slot was already invalid due to some\nreason other than RS_INVAL_INACTIVE_TIMEOUT? 
Is an Assert needed?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 9 Sep 2024 17:40:50 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Sep 9, 2024 at 10:28 AM shveta malik <[email protected]> wrote:\n>\n> On Mon, Sep 9, 2024 at 10:26 AM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On Mon, Sep 9, 2024 at 9:17 AM shveta malik <[email protected]> wrote:\n> > >\n> > > > We should not allow the invalid replication slot to be altered\n> > > > irrespective of the reason unless there is any benefit.\n> > >\n> > > Okay, then I think we need to change the existing behaviour of the\n> > > other invalidation causes which still allow alter-slot.\n> >\n> > +1. Perhaps, track it in a separate thread?\n>\n> I think so. It does not come under the scope of this thread.\n>\n\nIt makes sense to me as well. But let's go ahead and get that sorted out first.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 9 Sep 2024 15:04:44 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nOn Mon, Sep 9, 2024 at 3:04 PM Amit Kapila <[email protected]> wrote:\n>\n> > > > > We should not allow the invalid replication slot to be altered\n> > > > > irrespective of the reason unless there is any benefit.\n> > > >\n> > > > Okay, then I think we need to change the existing behaviour of the\n> > > > other invalidation causes which still allow alter-slot.\n> > >\n> > > +1. Perhaps, track it in a separate thread?\n> >\n> > I think so. It does not come under the scope of this thread.\n>\n> It makes sense to me as well. But let's go ahead and get that sorted out first.\n\nMoved the discussion to new thread -\nhttps://www.postgresql.org/message-id/CALj2ACW4fSOMiKjQ3%3D2NVBMTZRTG8Ujg6jsK9z3EvOtvA4vzKQ%40mail.gmail.com.\nPlease have a look.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 10 Sep 2024 00:12:50 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi, here is the remainder of my v45-0001 review. These comments are\nfor the test code only.\n\n======\nTestcase #1\n\n1.\n+# Testcase start\n+#\n+# Invalidate streaming standby slot and logical failover slot on primary due to\n+# inactive timeout. 
Also, check logical failover slot synced to standby from\n+# primary doesn't invalidate on its own, but gets the invalidated\nstate from the\n+# primary.\n\nnit - s/primary/the primary/ (in a couple of places)\nnit - s/standby/the standby/\nnit - other trivial tweaks.\n\n~~~\n\n2.\n+# Create sync slot on primary\n+$primary->psql('postgres',\n+ q{SELECT pg_create_logical_replication_slot('sync_slot1',\n'test_decoding', false, false, true);}\n+);\n\nnit - s/primary/the primary/\n\n~~~\n\n3.\n+$primary->safe_psql(\n+ 'postgres', qq[\n+ SELECT pg_create_physical_replication_slot(slot_name :=\n'sb_slot1', immediately_reserve := true);\n+]);\n\nShould this have a comment?\n\n~~~\n\n4.\n+# Wait until standby has replayed enough data\n+$primary->wait_for_catchup($standby1);\n\nnit - s/standby/the standby/\n\n~~~\n\n5.\n+# Sync primary slot to standby\n+$standby1->safe_psql('postgres', \"SELECT pg_sync_replication_slots();\");\n\nnit - /Sync primary slot to standby/Sync the primary slots to the standby/\n\n~~~\n\n6.\n+# Confirm that logical failover slot is created on standby\n\nnit - s/Confirm that logical failover slot is created on\nstandby/Confirm that the logical failover slot is created on the\nstandby/\n\n~~~\n\n7.\n+is( $standby1->safe_psql(\n+ 'postgres',\n+ q{SELECT count(*) = 1 FROM pg_replication_slots\n+ WHERE slot_name = 'sync_slot1' AND synced AND NOT temporary;}\n+ ),\n+ \"t\",\n+ 'logical slot sync_slot1 has synced as true on standby');\n\nIMO here you should also be checking that the sync slot state is NOT\ninvalidated, just as a counterpoint for the test part later that\nchecks that it IS invalidated.\n\n~~~\n\n8.\n+my $inactive_timeout = 1;\n+\n+# Set timeout so that next checkpoint will invalidate inactive slot\n+$primary->safe_psql(\n+ 'postgres', qq[\n+ ALTER SYSTEM SET replication_slot_inactive_timeout TO\n'${inactive_timeout}s';\n+]);\n+$primary->reload;\n\n8a.\nnit - I think that $inactive_timeout assignment belongs below your comment.\n\n~\n\n8b.\nnit - s/Set timeout so that next checkpoint will invalidate inactive\nslot/Set timeout GUC so that the next checkpoint will invalidate\ninactive slots/\n\n~~~\n\n9.\n+# Check for logical failover slot to become inactive on primary. Note that\n+# nobody has acquired slot yet, so it must get invalidated due to\n+# inactive timeout.\n\nnit - /Check for logical failover slot to become inactive on\nprimary./Wait for logical failover slot to become inactive on the\nprimary./\nnit - /has acquired slot/has acquired the slot/\n\n~~~\n\n10.\n+# Sync primary slot to standby. Note that primary slot has already been\n+# invalidated due to inactive timeout. Standby must just sync inavalidated\n+# state.\n\nnit - minor, add \"the\". fix typo \"inavalidated\", etc. suggestion:\n\nRe-sync the primary slots to the standby. Note that the primary slot was already\ninvalidated (above) due to inactive timeout. The standby must just\nsync the invalidated\nstate.\n\n~~~\n\n11.\n+# Make standby slot on primary inactive and check for invalidation\n+$standby1->stop;\n\nnit - /standby slot/the standby slot/\nnit - /on primary/on the primary/\n\n======\nTestcase #2\n\n12.\nI'm not sure it is necessary to do all this extra work. 
IIUC, there\nwas already almost everything you needed in the previous Testcase #1.\nSo, I thought you could just combine this extra standby timeout test\nin Testcase #1.\n\nIndeed, your Testcase #1 comment still says it is doing this: (\"Also,\ncheck logical failover slot synced to standby from primary doesn't\ninvalidate on its own,...\")\n\ne.g.\n- NEW: set the GUC timeout on the standby\n- sync the sync_slot (already doing in test #1)\n- ensure the synced slot is NOT invalid (already suggested above for test #1)\n- NEW: then do a standby sleep > timeout duration\n- NEW: then do a standby CHECKPOINT...\n- NEW: then ensure the sync slot invalidation did NOT happen\n- then proceed with the rest of test #1...\n\n======\nTestcase #3\n\n13.\nnit - remove a few blank lines to group associated statements together.\n\n~~~\n\n14.\n+$publisher->safe_psql(\n+ 'postgres', qq[\n+ ALTER SYSTEM SET replication_slot_inactive_timeout TO '\n${inactive_timeout}s';\n+]);\n+$publisher->reload;\n\nnit - this deserves a comment, the same as in Testcase #1\n\n======\nsub wait_for_slot_invalidation\n\n15.\n+# Check for slot to first become inactive and then get invalidated\n+sub check_for_slot_invalidation\n\nnit - IMO the previous name was better (e.g. \"wait_for..\" instead of\n\"check_for...\") because that describes exactly what the subroutine is\ndoing.\n\nsuggestion:\n# Wait for the slot to first become inactive and then get invalidated\nsub wait_for_slot_invalidation\n\n~~~\n\n16.\n+{\n+ my ($node, $slot, $offset, $inactive_timeout) = @_;\n+ my $name = $node->name;\n\nThe variable $name seems too vague. How about $node_name?\n\n~~~\n\n17.\n+ # Wait for invalidation reason to be set\n+ $node->poll_query_until(\n+ 'postgres', qq[\n+ SELECT COUNT(slot_name) = 1 FROM pg_replication_slots\n+ WHERE slot_name = '$slot' AND\n+ invalidation_reason = 'inactive_timeout';\n+ ])\n+ or die\n+ \"Timed out while waiting for invalidation reason of slot $slot to\nbe set on node $name\";\n\n17a.\nnit - /# Wait for invalidation reason to be set/# Check that the\ninvalidation reason is 'inactive_timeout'/\n\nIIUC, the 'trigger_slot_invalidation' function has already invalidated\nthe slot at this point, so we are not really \"Waiting...\"; we are\n\"Checking...\" that the reason was correctly set.\n\n~\n\n17b.\nI think this code fragment maybe would be better put inside the\n'trigger_slot_invalidation' function. (I've done this in the nitpicks\nattachment)\n\n~~~\n\n18.\n+ # Check that invalidated slot cannot be acquired\n+ my ($result, $stdout, $stderr);\n+\n+ ($result, $stdout, $stderr) = $node->psql(\n+ 'postgres', qq[\n+ SELECT pg_replication_slot_advance('$slot', '0/1');\n+ ]);\n\n18a.\ns/Check that invalidated slot/Check that an invalidated slot/\n\n~\n\n18b.\nnit - Remove some blank lines, because the comment applies to all below it.\n\n======\nsub trigger_slot_invalidation\n\n19.\n+# Trigger slot invalidation and confirm it in server log\n+sub trigger_slot_invalidation\n\nnit - s/confirm it in server log/confirm it in the server log/\n\n~\n\n20.\n+{\n+ my ($node, $slot, $offset, $inactive_timeout) = @_;\n+ my $name = $node->name;\n+ my $invalidated = 0;\n\n(same as the other subroutine)\nnit - The variable $name seems too vague. 
How about $node_name?\n\n======\n\nPlease refer to the attached nitpicks top-up patch which implements\nmost of the above nits.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 10 Sep 2024 11:34:24 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Tue, Sep 10, 2024 at 12:13 AM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Mon, Sep 9, 2024 at 3:04 PM Amit Kapila <[email protected]> wrote:\n> >\n> > > > > > We should not allow the invalid replication slot to be altered\n> > > > > > irrespective of the reason unless there is any benefit.\n> > > > >\n> > > > > Okay, then I think we need to change the existing behaviour of the\n> > > > > other invalidation causes which still allow alter-slot.\n> > > >\n> > > > +1. Perhaps, track it in a separate thread?\n> > >\n> > > I think so. It does not come under the scope of this thread.\n> >\n> > It makes sense to me as well. But let's go ahead and get that sorted out first.\n>\n> Moved the discussion to new thread -\n> https://www.postgresql.org/message-id/CALj2ACW4fSOMiKjQ3%3D2NVBMTZRTG8Ujg6jsK9z3EvOtvA4vzKQ%40mail.gmail.com.\n> Please have a look.\n>\n\nThat is pushed now. Please send the rebased patch after addressing the\npending comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 16 Sep 2024 08:55:10 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nThanks for reviewing.\n\nOn Mon, Sep 9, 2024 at 10:54 AM shveta malik <[email protected]> wrote:\n>\n> 2)\n> src/sgml/config.sgml:\n>\n> + disables the inactive timeout invalidation mechanism\n>\n> + Slot invalidation due to inactivity timeout occurs during checkpoint.\n>\n> Either have 'inactive' at both the places or 'inactivity'.\n\nUsed \"inactive timeout\".\n\n> 3)\n> slot.c:\n> +static bool InvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause\n> cause,\n> + ReplicationSlot *s,\n> + XLogRecPtr oldestLSN,\n> + Oid dboid,\n> + TransactionId snapshotConflictHorizon,\n> + bool *invalidated);\n> +static inline bool SlotInactiveTimeoutCheckAllowed(ReplicationSlot *s);\n>\n> I think, we do not need above 2 declarations. The code compile fine\n> without these as the usage is later than the definition.\n\nHm, it's a usual practice that I follow irrespective of the placement\nof function declarations. Since it was brought up, I removed the\ndeclarations.\n\n> 4)\n> + /*\n> + * An error is raised if error_if_invalid is true and the slot has been\n> + * invalidated previously.\n> + */\n> + if (error_if_invalid && s->data.invalidated == RS_INVAL_INACTIVE_TIMEOUT)\n>\n> The comment is generic while the 'if condition' is specific to one\n> invalidation cause. Even though I feel it can be made generic test for\n> all invalidation causes but that is not under scope of this thread and\n> needs more testing/analysis.\n\nRight.\n\n> For the time being, we can make comment\n> specific to the concerned invalidation cause. The header of function\n> will also need the same change.\n\nAdjusted the comment, but left the variable name error_if_invalid as\nis. 
Didn't want to make it long, one can look at the code to\nunderstand what it is used for.\n\n> 5)\n> SlotInactiveTimeoutCheckAllowed():\n>\n> + * Check if inactive timeout invalidation mechanism is disabled or slot is\n> + * currently being used or server is in recovery mode or slot on standby is\n> + * currently being synced from the primary.\n> + *\n>\n> These comments say exact opposite of what we are checking in code.\n> Since the function name has 'Allowed' in it, we should be putting\n> comments which say what allows it instead of what disallows it.\n\nModified.\n\n> 1)\n> src/sgml/config.sgml:\n>\n> + Synced slots are always considered to be inactive because they\n> don't perform logical decoding to produce changes.\n>\n> It is better we avoid such a statement, as internally we use logical\n> decoding to advance restart-lsn, see\n> 'LogicalSlotAdvanceAndCheckSnapState' called form slotsync.c.\n> <Also see related comment 6 below>\n>\n> 6)\n>\n> + * Synced slots are always considered to be inactive because they don't\n> + * perform logical decoding to produce changes.\n> + */\n> +static inline bool\n> +SlotInactiveTimeoutCheckAllowed(ReplicationSlot *s)\n>\n> Perhaps we should avoid mentioning logical decoding here. When slots\n> are synced, they are performing decoding and their inactive_since is\n> changing continuously. A better way to make this statement will be:\n>\n> We want to ensure that the slots being synchronized are not\n> invalidated, as they need to be preserved for future use when the\n> standby server is promoted to the primary. This is necessary for\n> resuming logical replication from the new primary server.\n> <Rephrase if needed>\n\nThey are performing logical decoding, but not producing the changes\nfor the clients to consume. So, IMO, the accompanying \"to produce\nchanges\" next to the \"logical decoding\" is good here.\n\n> 7)\n>\n> InvalidatePossiblyObsoleteSlot()\n>\n> we are calling SlotInactiveTimeoutCheckAllowed() twice in this\n> function. We shall optimize.\n>\n> At the first usage place, shall we simply get timestamp when cause is\n> RS_INVAL_INACTIVE_TIMEOUT without checking\n> SlotInactiveTimeoutCheckAllowed() as IMO it does not seem a\n> performance critical section. Or if we retain check at first place,\n> then at the second place we can avoid calling it again based on\n> whether 'now' is NULL or not.\n\nGetting a current timestamp can get costlier on platforms that use\nvarious clock sources, so assigning 'now' unconditionally isn't the\nway IMO. Using the inline function in two places improves the\nreadability. Can optimize it if there's any performance impact of\ncalling the inline function in two places.\n\nWill post the new patch version soon.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 16 Sep 2024 15:17:47 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nThanks for reviewing.\n\nOn Mon, Sep 9, 2024 at 1:11 PM Peter Smith <[email protected]> wrote:\n>\n> 1.\n> + Note that the inactive timeout invalidation mechanism is not\n> + applicable for slots on the standby server that are being synced\n> + from primary server (i.e., standby slots having\n>\n> nit - /from primary server/from the primary server/\n\n+1\n\n> 2. 
ReplicationSlotAcquire\n>\n> + errmsg(\"can no longer get changes from replication slot \\\"%s\\\"\",\n> + NameStr(s->data.name)),\n> + errdetail(\"This slot has been invalidated because it was inactive\n> for longer than the amount of time specified by \\\"%s\\\".\",\n> + \"replication_slot_inactive_timeout.\")));\n>\n> nit - \"replication_slot_inactive_timeout.\" - should be no period\n> inside that GUC name literal\n\nTypo. Fixed.\n\n> 3. ReportSlotInvalidation\n>\n> I didn't understand why there was a hint for:\n> \"You might need to increase \\\"%s\\\".\", \"max_slot_wal_keep_size\"\n>\n> Why aren't these similar cases consistent?\n\nIt looks misleading and not very useful. What happens if the removed\nWAL (that's needed for the slot) is put back into pg_wal somehow (by\nmanually copying from archive or by some tool/script)? Can the slot\ninvalidated due to wal_removed start sending WAL to its clients?\n\n> But you don't have an equivalent hint for timeout invalidation:\n> \"You might need to increase \\\"%s\\\".\", \"replication_slot_inactive_timeout\"\n\nI removed this per review comments upthread.\n\n> 4. RestoreSlotFromDisk\n>\n> + /* Use the same inactive_since time for all the slots. */\n> + if (now == 0)\n> + now = GetCurrentTimestamp();\n> +\n>\n> Is the deferred assignment really necessary? Why not just\n> unconditionally assign the 'now' just before the for-loop? Or even at\n> the declaration? e.g. The 'replication_slot_inactive_timeout' is\n> measured in seconds so I don't think 'inactive_since' being wrong by a\n> millisecond here will make any difference.\n\nMoved it before the for-loop.\n\n> 5. ReplicationSlotSetInactiveSince\n>\n> +/*\n> + * Set slot's inactive_since property unless it was previously invalidated due\n> + * to inactive timeout.\n> + */\n> +static inline void\n> +ReplicationSlotSetInactiveSince(ReplicationSlot *s, TimestampTz *now,\n> + bool acquire_lock)\n> +{\n> + if (acquire_lock)\n> + SpinLockAcquire(&s->mutex);\n> +\n> + if (s->data.invalidated != RS_INVAL_INACTIVE_TIMEOUT)\n> + s->inactive_since = *now;\n> +\n> + if (acquire_lock)\n> + SpinLockRelease(&s->mutex);\n> +}\n>\n> Is the logic correct? What if the slot was already invalid due to some\n> reason other than RS_INVAL_INACTIVE_TIMEOUT? Is an Assert needed?\n\nHm. Since invalidated slots can't be acquired and made active, not\nmodifying inactive_since irrespective of invalidation reason looks\ngood to me.\n\nPlease find the attached v46 patch having changes for the above review\ncomments and your test review comments and Shveta's review comments.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 16 Sep 2024 15:31:11 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Sep 16, 2024 at 3:31 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Please find the attached v46 patch having changes for the above review\n> comments and your test review comments and Shveta's review comments.\n>\n\n-ReplicationSlotAcquire(const char *name, bool nowait)\n+ReplicationSlotAcquire(const char *name, bool nowait, bool error_if_invalid)\n {\n ReplicationSlot *s;\n int active_pid;\n@@ -615,6 +620,22 @@ retry:\n /* We made this slot active, so it's ours now. 
*/\n MyReplicationSlot = s;\n\n+ /*\n+ * An error is raised if error_if_invalid is true and the slot has been\n+ * previously invalidated due to inactive timeout.\n+ */\n+ if (error_if_invalid &&\n+ s->data.invalidated == RS_INVAL_INACTIVE_TIMEOUT)\n+ {\n+ Assert(s->inactive_since > 0);\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"can no longer get changes from replication slot \\\"%s\\\"\",\n+ NameStr(s->data.name)),\n+ errdetail(\"This slot has been invalidated because it was inactive\nfor longer than the amount of time specified by \\\"%s\\\".\",\n+ \"replication_slot_inactive_timeout\")));\n+ }\n\nWhy raise the ERROR just for timeout invalidation here and why not if\nthe slot is invalidated for other reasons? This raises the question of\nwhat happens before this patch if the invalid slot is used from places\nwhere we call ReplicationSlotAcquire(). I did a brief code analysis\nand found that for StartLogicalReplication(), even if the error won't\noccur in ReplicationSlotAcquire(), it would have been caught in\nCreateDecodingContext(). I think that is where we should also add this\nnew error. Similarly, pg_logical_slot_get_changes_guts() and other\nlogical replication functions should be calling\nCreateDecodingContext() which can raise the new ERROR. I am not sure\nabout how the invalid slots are handled during physical replication,\nplease check the behavior of that before this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 16 Sep 2024 16:54:40 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Hi,\n\nThanks for looking into this.\n\nOn Mon, Sep 16, 2024 at 4:54 PM Amit Kapila <[email protected]> wrote:\n>\n> Why raise the ERROR just for timeout invalidation here and why not if\n> the slot is invalidated for other reasons? This raises the question of\n> what happens before this patch if the invalid slot is used from places\n> where we call ReplicationSlotAcquire(). I did a brief code analysis\n> and found that for StartLogicalReplication(), even if the error won't\n> occur in ReplicationSlotAcquire(), it would have been caught in\n> CreateDecodingContext(). I think that is where we should also add this\n> new error. Similarly, pg_logical_slot_get_changes_guts() and other\n> logical replication functions should be calling\n> CreateDecodingContext() which can raise the new ERROR. I am not sure\n> about how the invalid slots are handled during physical replication,\n> please check the behavior of that before this patch.\n\nWhen physical slots are invalidated due to wal_removed reason, the failure\nhappens at a much later point for the streaming standbys while reading the\nrequested WAL files like the following:\n\n2024-09-16 16:29:52.416 UTC [876059] FATAL: could not receive data from\nWAL stream: ERROR: requested WAL segment 000000010000000000000005 has\nalready been removed\n2024-09-16 16:29:52.416 UTC [872418] LOG: waiting for WAL to become\navailable at 0/5002000\n\nAt this point, despite the slot being invalidated, its wal_status can still\ncome back to 'unreserved' even from 'lost', and the standby can catch up if\nremoved WAL files are copied either by manually or by a tool/script to the\nprimary's pg_wal directory. 
IOW, the physical slots invalidated due to\nwal_removed are *somehow* recoverable unlike the logical slots.\n\nIIUC, the invalidation of a slot implies that it is not guaranteed to hold\nany resources like WAL and XMINs. Does it also imply that the slot must be\nunusable?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\nHi,Thanks for looking into this.On Mon, Sep 16, 2024 at 4:54 PM Amit Kapila <[email protected]> wrote:>> Why raise the ERROR just for timeout invalidation here and why not if> the slot is invalidated for other reasons? This raises the question of> what happens before this patch if the invalid slot is used from places> where we call ReplicationSlotAcquire(). I did a brief code analysis> and found that for StartLogicalReplication(), even if the error won't> occur in ReplicationSlotAcquire(), it would have been caught in> CreateDecodingContext(). I think that is where we should also add this> new error. Similarly, pg_logical_slot_get_changes_guts() and other> logical replication functions should be calling> CreateDecodingContext() which can raise the new ERROR. I am not sure> about how the invalid slots are handled during physical replication,> please check the behavior of that before this patch.When physical slots are invalidated due to wal_removed reason, the failure happens at a much later point for the streaming standbys while reading the requested WAL files like the following:2024-09-16 16:29:52.416 UTC [876059] FATAL:  could not receive data from WAL stream: ERROR:  requested WAL segment 000000010000000000000005 has already been removed2024-09-16 16:29:52.416 UTC [872418] LOG:  waiting for WAL to become available at 0/5002000At this point, despite the slot being invalidated, its wal_status can still come back to 'unreserved' even from 'lost', and the standby can catch up if removed WAL files are copied either by manually or by a tool/script to the primary's pg_wal directory. IOW, the physical slots invalidated due to wal_removed are *somehow* recoverable unlike the logical slots.IIUC, the invalidation of a slot implies that it is not guaranteed to hold any resources like WAL and XMINs. Does it also imply that the slot must be unusable?-- Bharath RupireddyPostgreSQL Contributors TeamRDS Open Source DatabasesAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 16 Sep 2024 22:40:52 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "Here are a few comments for the patch v46-0001.\n\n======\nsrc/backend/replication/slot.c\n\n1. ReportSlotInvalidation\n\nOn Mon, Sep 16, 2024 at 8:01 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Mon, Sep 9, 2024 at 1:11 PM Peter Smith <[email protected]> wrote:\n> > 3. ReportSlotInvalidation\n> >\n> > I didn't understand why there was a hint for:\n> > \"You might need to increase \\\"%s\\\".\", \"max_slot_wal_keep_size\"\n> >\n> > Why aren't these similar cases consistent?\n>\n> It looks misleading and not very useful. What happens if the removed\n> WAL (that's needed for the slot) is put back into pg_wal somehow (by\n> manually copying from archive or by some tool/script)? 
Can the slot\n> invalidated due to wal_removed start sending WAL to its clients?\n>\n> > But you don't have an equivalent hint for timeout invalidation:\n> > \"You might need to increase \\\"%s\\\".\", \"replication_slot_inactive_timeout\"\n>\n> I removed this per review comments upthread.\n\nIIUC the errors are quite similar, so my previous review comment was\nmostly about the unexpected inconsistency of why one of them has a\nhint and the other one does not. I don't have a strong opinion about\nwhether they should both *have* or *not have* hints, so long as they\nare treated the same.\n\nIf you think the current code hint is not useful then maybe we need a\nnew thread to address that existing issue. For example, maybe it\nshould be removed or reworded.\n\n~~~\n\n2. InvalidatePossiblyObsoleteSlot:\n\n+ case RS_INVAL_INACTIVE_TIMEOUT:\n+\n+ if (!SlotInactiveTimeoutCheckAllowed(s))\n+ break;\n+\n+ /*\n+ * Check if the slot needs to be invalidated due to\n+ * replication_slot_inactive_timeout GUC.\n+ */\n+ if (TimestampDifferenceExceeds(s->inactive_since, now,\n+ replication_slot_inactive_timeout * 1000))\n\nnit - it might be tidier to avoid multiple breaks by just combining\nthese conditions. See the nitpick attachment.\n\n~~~\n\n3.\n * - RS_INVAL_WAL_LEVEL: is logical\n+ * - RS_INVAL_INACTIVE_TIMEOUT: inactive timeout occurs\n\nnit - use comment wording \"inactive slot timeout has occurred\", to\nmake it identical to the comment in slot.h\n\n======\nsrc/test/recovery/t/050_invalidate_slots.pl\n\n4.\n+# Despite inactive timeout being set, the synced slot won't get invalidated on\n+# its own on the standby. So, we must not see invalidation message in server\n+# log.\n+$standby1->safe_psql('postgres', \"CHECKPOINT\");\n+ok( !$standby1->log_contains(\n+ \"invalidating obsolete replication slot \\\"sync_slot1\\\"\",\n+ $logstart),\n+ 'check that synced slot sync_slot1 has not been invalidated on standby'\n+);\n+\n\nIt seems kind of brittle to check the logs for something that is NOT\nthere because any change to the message will make this accidentally\npass. Apart from that, it might anyway be more efficient just to check\nthe pg_replication_slots again to make sure the 'invalidation_reason\nremains' still NULL.\n\n======\n\nPlease see the attachment which implements some of the nit changes\nmentioned above.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 17 Sep 2024 11:27:24 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Sep 16, 2024 at 3:31 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Hi,\n>\n>\n> Please find the attached v46 patch having changes for the above review\n> comments and your test review comments and Shveta's review comments.\n>\n\nThanks for addressing comments.\n\nIs there a reason that we don't support this invalidation on hot\nstandby for non-synced slots? 
Shouldn't we support this time-based\ninvalidation there too just like other invalidations?\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 18 Sep 2024 12:21:56 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Sep 18, 2024 at 12:21 PM shveta malik <[email protected]> wrote:\n>\n> On Mon, Sep 16, 2024 at 3:31 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> >\n> > Please find the attached v46 patch having changes for the above review\n> > comments and your test review comments and Shveta's review comments.\n> >\n>\n> Thanks for addressing comments.\n>\n> Is there a reason that we don't support this invalidation on hot\n> standby for non-synced slots? Shouldn't we support this time-based\n> invalidation there too just like other invalidations?\n>\n\nNow since we are not changing inactive_since once it is invalidated,\nwe are not even initializing it during restart; and thus later when\nsomeone tries to use slot, it leads to assert in\nReplicationSlotAcquire() ( Assert(s->inactive_since > 0);\n\nSteps:\n--Disable logical subscriber and let the slot on publisher gets\ninvalidated due to inactive_timeout.\n--Enable the logical subscriber again.\n--Restart publisher.\n\na) We should initialize inactive_since when\nReplicationSlotSetInactiveSince() is called from RestoreSlotFromDisk()\neven though it is invalidated.\nb) And shall we mention in the doc of 'active_since', that once the\nslot is invalidated, this value will remain unchanged until we\nshutdown the server. On server restart, it is initialized to start\ntime. Thought?\n\n\nthanks\nShveta\n\n\n", "msg_date": "Wed, 18 Sep 2024 14:49:00 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Sep 18, 2024 at 2:49 PM shveta malik <[email protected]> wrote:\n>\n> > > Please find the attached v46 patch having changes for the above review\n> > > comments and your test review comments and Shveta's review comments.\n> > >\n\nWhen the synced slot is marked as 'inactive_timeout' invalidated on\nhot standby due to invalidation of publisher 's failover slot, the\nformer starts showing NULL' inactive_since'. Is this intentional\nbehaviour? 
I feel inactive_since should be non-NULL here too?\nThoughts?\n\nphysical standby:\npostgres=# select slot_name, inactive_since, invalidation_reason,\nfailover, synced from pg_replication_slots;\nslot_name | inactive_since |\ninvalidation_reason | failover | synced\n-------------+----------------------------------+---------------------+----------+--------\nsub2 | 2024-09-18 15:20:04.364998+05:30 | | t | t\nsub3 | 2024-09-18 15:20:04.364953+05:30 | | t | t\n\nAfter sync of invalidation_reason:\n\nslot_name | inactive_since | invalidation_reason |\nfailover | synced\n-------------+----------------------------------+---------------------+----------+--------\n sub2 | | inactive_timeout | t | t\n sub3 | | inactive_timeout | t | t\n\n\nthanks\nshveta\n\n\n", "msg_date": "Wed, 18 Sep 2024 15:31:16 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Mon, Sep 16, 2024 at 10:41 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> Thanks for looking into this.\n>\n> On Mon, Sep 16, 2024 at 4:54 PM Amit Kapila <[email protected]> wrote:\n> >\n> > Why raise the ERROR just for timeout invalidation here and why not if\n> > the slot is invalidated for other reasons? This raises the question of\n> > what happens before this patch if the invalid slot is used from places\n> > where we call ReplicationSlotAcquire(). I did a brief code analysis\n> > and found that for StartLogicalReplication(), even if the error won't\n> > occur in ReplicationSlotAcquire(), it would have been caught in\n> > CreateDecodingContext(). I think that is where we should also add this\n> > new error. Similarly, pg_logical_slot_get_changes_guts() and other\n> > logical replication functions should be calling\n> > CreateDecodingContext() which can raise the new ERROR. I am not sure\n> > about how the invalid slots are handled during physical replication,\n> > please check the behavior of that before this patch.\n>\n> When physical slots are invalidated due to wal_removed reason, the failure happens at a much later point for the streaming standbys while reading the requested WAL files like the following:\n>\n> 2024-09-16 16:29:52.416 UTC [876059] FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000000000005 has already been removed\n> 2024-09-16 16:29:52.416 UTC [872418] LOG: waiting for WAL to become available at 0/5002000\n>\n> At this point, despite the slot being invalidated, its wal_status can still come back to 'unreserved' even from 'lost', and the standby can catch up if removed WAL files are copied either by manually or by a tool/script to the primary's pg_wal directory. IOW, the physical slots invalidated due to wal_removed are *somehow* recoverable unlike the logical slots.\n>\n> IIUC, the invalidation of a slot implies that it is not guaranteed to hold any resources like WAL and XMINs. 
Does it also imply that the slot must be unusable?\n>\n\nIf we can't hold the dead rows against xmin of the invalid slot, then\nhow can we make it usable even after copying the required WAL?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 18 Sep 2024 17:40:10 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" }, { "msg_contents": "On Wed, Sep 18, 2024 at 3:31 PM shveta malik <[email protected]> wrote:\n>\n> On Wed, Sep 18, 2024 at 2:49 PM shveta malik <[email protected]> wrote:\n> >\n> > > > Please find the attached v46 patch having changes for the above review\n> > > > comments and your test review comments and Shveta's review comments.\n> > > >\n>\n\nWhen we promote hot standby with synced logical slots to become new\nprimary, the logical slots are never invalidated with\n'inactive_timeout' on new primary. It seems the check in\nSlotInactiveTimeoutCheckAllowed() is wrong. We should allow\ninvalidation of slots on primary even if they are marked as 'synced'.\nPlease see [4].\nI have raised 4 issues so far on v46, the first 3 are in [1],[2],[3].\nOnce all these are addressed, I can continue reviewing further.\n\n[1]: https://www.postgresql.org/message-id/CAJpy0uAwxc49Dz6t%3D-y_-z-MU%2BA4RWX4BR3Zri_jj2qgGMq_8g%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/CAJpy0uC6nN3SLbEuCvz7-CpaPdNdXxH%3DfeW5MhYQch-JWV0tLg%40mail.gmail.com\n[3]: https://www.postgresql.org/message-id/CAJpy0uBXXJC6f04%2BFU1axKaU%2Bp78wN0SEhUNE9XoqbjXj%3Dhhgw%40mail.gmail.com\n\n[4]:\n--------------------\npostgres=# select pg_is_in_recovery();\n--------\n f\n\npostgres=# show replication_slot_inactive_timeout;\n replication_slot_inactive_timeout\n-----------------------------------\n 10s\n\npostgres=# select slot_name, inactive_since, invalidation_reason,\nsynced from pg_replication_slots;\n slot_name | inactive_since | invalidation_reason | synced\n-------------+----------------------------------+---------------------+----------+--------\n mysubnew1_1 | 2024-09-19 09:04:09.714283+05:30 | | t\n\npostgres=# select now();\n now\n----------------------------------\n 2024-09-19 09:06:28.871354+05:30\n\npostgres=# checkpoint;\nCHECKPOINT\n\npostgres=# select slot_name, inactive_since, invalidation_reason,\nsynced from pg_replication_slots;\n slot_name | inactive_since | invalidation_reason | synced\n-------------+----------------------------------+---------------------+----------+--------\n mysubnew1_1 | 2024-09-19 09:04:09.714283+05:30 | | t\n--------------------\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 19 Sep 2024 09:40:12 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Introduce XID age and inactive timeout based replication slot\n invalidation" } ]
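The thread above discusses the proposed inactive-timeout invalidation in fragments spread across many messages. The following is a minimal SQL sketch of the flow being described, assuming a server built with the patch under discussion; the replication_slot_inactive_timeout GUC and the 'inactive_timeout' invalidation_reason value do not exist in unpatched PostgreSQL, and the slot name used here is purely illustrative.

```sql
-- Hypothetical walkthrough of the inactive-timeout invalidation flow discussed
-- above. Requires the proposed patch: the GUC and the 'inactive_timeout'
-- invalidation reason are not present in stock PostgreSQL.

-- Enable the proposed timeout (0, the default, disables the mechanism).
ALTER SYSTEM SET replication_slot_inactive_timeout = '60s';
SELECT pg_reload_conf();

-- Create a slot and leave it unused (no walsender or apply worker attached).
SELECT pg_create_logical_replication_slot('idle_slot', 'test_decoding');

-- Per the patch, invalidation happens during checkpoint, once the slot has
-- been inactive for longer than the configured timeout.
CHECKPOINT;

-- Inspect the result; with the patch, invalidation_reason is expected to show
-- 'inactive_timeout', and later attempts to use the slot should fail with
-- "can no longer get changes from replication slot ...".
SELECT slot_name, inactive_since, invalidation_reason, failover, synced
FROM pg_replication_slots
WHERE slot_name = 'idle_slot';
```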
[ { "msg_contents": "Hi Hackers,\n\nVarious sections of the code utilize the walrcv_connect() function,\nemployed by various processes such as walreceiver, logical replication\napply worker, etc., to establish connections with other hosts.\nPresently, in case of connection failures, the error message lacks\ninformation about the specific process attempting to connect and\nencountering the failure.\n\nThe provided patch enhances error messages for such connection\nfailures, offering more details on the processes that failed to\nestablish a connection.\n\n--\nThanks,\nNisha", "msg_date": "Thu, 11 Jan 2024 14:24:20 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": true, "msg_subject": "Improve the connection failure error messages" }, { "msg_contents": "Thanks for the patch! Here are a couple of review comments for it.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n1.\n@@ -742,7 +742,7 @@ CreateSubscription(ParseState *pstate,\nCreateSubscriptionStmt *stmt,\n if (!wrconn)\n ereport(ERROR,\n (errcode(ERRCODE_CONNECTION_FAILURE),\n- errmsg(\"could not connect to the publisher: %s\", err)));\n+ errmsg(\"\\\"%s\\\" could not connect to the publisher: %s\", stmt->subname, err)));\n\nIn practice, these commands give errors like:\n\ntest_sub=# create subscription sub1 connection 'dbname=bogus' publication pub1;\nERROR: could not connect to the publisher: connection to server on\nsocket \"/tmp/.s.PGSQL.5432\" failed: FATAL: database \"bogus\" does not\nexist\n\nand logs like:\n\n2024-01-12 12:45:05.177 AEDT [13108] ERROR: could not connect to the\npublisher: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed:\nFATAL: database \"bogus\" does not exist\n2024-01-12 12:45:05.177 AEDT [13108] STATEMENT: create subscription\nsub1 connection 'dbname=bogus' publication pub1;\n\nSince the subscription name is already obvious from the statement that\ncaused the error I'm not sure it benefits much to add this to the\nerror message (but maybe it is useful if the error message was somehow\nread in isolation from the statement).\n\nAnyway, I felt at least it should include the word \"subscription\" for\nbetter consistency with the other messages in this patch:\n\nSUGGESTION\nsubscription \\\"%s\\\" could not connect to the publisher: %s\n\n======\n\n2.\n+ appname = cluster_name[0] ? cluster_name : \"walreceiver\";\n+\n /* Establish the connection to the primary for XLOG streaming */\n- wrconn = walrcv_connect(conninfo, false, false,\n- cluster_name[0] ? cluster_name : \"walreceiver\",\n- &err);\n+ wrconn = walrcv_connect(conninfo, false, false, appname, &err);\n if (!wrconn)\n ereport(ERROR,\n (errcode(ERRCODE_CONNECTION_FAILURE),\n- errmsg(\"could not connect to the primary server: %s\", err)));\n+ errmsg(\"%s could not connect to the primary server: %s\", appname, err)));\n\nI think your new %s should be quoted according to the guidelines at [1].\n\n======\nsrc/test/regress/expected/subscription.out\n\n3.\nApparently, there is no existing regression test case for the ALTER\n\"could not connect\" message because if there was, it would have\nfailed. Maybe a test should be added?\n\n======\n[1] https://www.postgresql.org/docs/current/error-style-guide.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 12 Jan 2024 13:49:39 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "Thanks for the review. 
Attached v2 patch with suggested changes.\nPlease find my response inline.\n\nOn Fri, Jan 12, 2024 at 8:20 AM Peter Smith <[email protected]> wrote:\n>\n> Thanks for the patch! Here are a couple of review comments for it.\n>\n> ======\n> src/backend/commands/subscriptioncmds.c\n>\n> 1.\n> @@ -742,7 +742,7 @@ CreateSubscription(ParseState *pstate,\n> CreateSubscriptionStmt *stmt,\n> if (!wrconn)\n> ereport(ERROR,\n> (errcode(ERRCODE_CONNECTION_FAILURE),\n> - errmsg(\"could not connect to the publisher: %s\", err)));\n> + errmsg(\"\\\"%s\\\" could not connect to the publisher: %s\", stmt->subname, err)));\n>\n> In practice, these commands give errors like:\n>\n> test_sub=# create subscription sub1 connection 'dbname=bogus' publication pub1;\n> ERROR: could not connect to the publisher: connection to server on\n> socket \"/tmp/.s.PGSQL.5432\" failed: FATAL: database \"bogus\" does not\n> exist\n>\n> and logs like:\n>\n> 2024-01-12 12:45:05.177 AEDT [13108] ERROR: could not connect to the\n> publisher: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed:\n> FATAL: database \"bogus\" does not exist\n> 2024-01-12 12:45:05.177 AEDT [13108] STATEMENT: create subscription\n> sub1 connection 'dbname=bogus' publication pub1;\n>\n> Since the subscription name is already obvious from the statement that\n> caused the error I'm not sure it benefits much to add this to the\n> error message (but maybe it is useful if the error message was somehow\n> read in isolation from the statement).\n>\n> Anyway, I felt at least it should include the word \"subscription\" for\n> better consistency with the other messages in this patch:\n>\n> SUGGESTION\n> subscription \\\"%s\\\" could not connect to the publisher: %s\n\nDone.\n\n> ======\n>\n> 2.\n> + appname = cluster_name[0] ? cluster_name : \"walreceiver\";\n> +\n> /* Establish the connection to the primary for XLOG streaming */\n> - wrconn = walrcv_connect(conninfo, false, false,\n> - cluster_name[0] ? cluster_name : \"walreceiver\",\n> - &err);\n> + wrconn = walrcv_connect(conninfo, false, false, appname, &err);\n> if (!wrconn)\n> ereport(ERROR,\n> (errcode(ERRCODE_CONNECTION_FAILURE),\n> - errmsg(\"could not connect to the primary server: %s\", err)));\n> + errmsg(\"%s could not connect to the primary server: %s\", appname, err)));\n>\n> I think your new %s should be quoted according to the guidelines at [1].\n\nDone.\n\n> ======\n> src/test/regress/expected/subscription.out\n>\n> 3.\n> Apparently, there is no existing regression test case for the ALTER\n> \"could not connect\" message because if there was, it would have\n> failed. 
Maybe a test should be added?\n>\nThe ALTER SUBSCRIPTION command does not error out on the user\ninterface if updated with a bad connection string and the connection\nfailure error can only be seen in the respective log file.\nDue to this behavior, it is not possible to add a test to show the\nerror message as it is done for CREATE SUBSCRIPTION.\nLet me know if you think there is another way to add this test.\n\n--\nThanks,\nNisha", "msg_date": "Fri, 12 Jan 2024 17:37:58 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "Hi,\n\nThanks for the patch.\n\n> Due to this behavior, it is not possible to add a test to show the\n> error message as it is done for CREATE SUBSCRIPTION.\n> Let me know if you think there is another way to add this test.\n\nI believe it can be done with TAP tests, see for instance:\n\ncontrib/auto_explain/t/001_auto_explain.pl\n\nHowever I wouldn't insist on including the test in scope of this\nparticular patch. Such a test doesn't currently exist, it can be added\nas a separate patch, and whether this is actually a useful test (all\nthe tests consume time after all...) is somewhat debatable. Personally\nI agree that it would be nice to have though.\n\nThis being said...\n\n> The ALTER SUBSCRIPTION command does not error out on the user\n> interface if updated with a bad connection string and the connection\n> failure error can only be seen in the respective log file.\n\nI wonder if we should fix this. Again, not necessarily in scope of\nthis work, but if this is not a difficult task, again, it would be\nnice to have.\n\nOther than that v2 looks good.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 12 Jan 2024 16:35:52 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "On Sat, Jan 13, 2024 at 12:36 AM Aleksander Alekseev\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Thanks for the patch.\n>\n> > Due to this behavior, it is not possible to add a test to show the\n> > error message as it is done for CREATE SUBSCRIPTION.\n> > Let me know if you think there is another way to add this test.\n>\n> I believe it can be done with TAP tests, see for instance:\n>\n> contrib/auto_explain/t/001_auto_explain.pl\n>\n> However I wouldn't insist on including the test in scope of this\n> particular patch. Such a test doesn't currently exist, it can be added\n> as a separate patch, and whether this is actually a useful test (all\n> the tests consume time after all...) is somewhat debatable. Personally\n> I agree that it would be nice to have though.\n>\n> This being said...\n>\n> > The ALTER SUBSCRIPTION command does not error out on the user\n> > interface if updated with a bad connection string and the connection\n> > failure error can only be seen in the respective log file.\n>\n> I wonder if we should fix this. Again, not necessarily in scope of\n> this work, but if this is not a difficult task, again, it would be\n> nice to have.\n>\n> Other than that v2 looks good.\n>\n\nOK. I see now that any ALTER of the subscription's connection, even\nto some value that fails, will restart a new worker (like ALTER of any\nother subscription parameters). 
For a bad connection, it will continue\nto relaunch-worker/ERROR over and over.\n\ntest_sub=# \\r2024-01-17 09:34:28.665 AEDT [11274] LOG: logical\nreplication apply worker for subscription \"sub4\" has started\n2024-01-17 09:34:28.666 AEDT [11274] ERROR: could not connect to the\npublisher: invalid port number: \"-1\"\n2024-01-17 09:34:28.667 AEDT [928] LOG: background worker \"logical\nreplication apply worker\" (PID 11274) exited with exit code 1\ndRs su2024-01-17 09:34:33.669 AEDT [11391] LOG: logical replication\napply worker for subscription \"sub4\" has started\n2024-01-17 09:34:33.669 AEDT [11391] ERROR: could not connect to the\npublisher: invalid port number: \"-1\"\n2024-01-17 09:34:33.670 AEDT [928] LOG: background worker \"logical\nreplication apply worker\" (PID 11391) exited with exit code 1\nb4\n...\n\nI don't really have any opinion if that behaviour is good or bad, but\nanyway, it is deliberate, and IMO it is outside the scope of your\npatch, so v2 patch LGTM.\n\n~~\n\nBTW, while experimenting with the bad connection ALTER I also tried\nsetting 'disable_on_error' like below:\n\nALTER SUBSCRIPTION sub4 SET (disable_on_error);\nALTER SUBSCRIPTION sub4 CONNECTION 'port = -1';\n\n...but here the subscription did not become DISABLED as I expected it\nwould do on the next connection error iteration. It remains enabled\nand just continues to loop relaunch/ERROR indefinitely same as before.\n\nThat looks like it may be a bug. Thoughts?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 17 Jan 2024 10:26:01 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "Thanks for reviewing, please find my response inline.\n\nOn Wed, Jan 17, 2024 at 4:56 AM Peter Smith <[email protected]> wrote:\n>\n> On Sat, Jan 13, 2024 at 12:36 AM Aleksander Alekseev\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > Thanks for the patch.\n> >\n> > > Due to this behavior, it is not possible to add a test to show the\n> > > error message as it is done for CREATE SUBSCRIPTION.\n> > > Let me know if you think there is another way to add this test.\n> >\n> > I believe it can be done with TAP tests, see for instance:\n> >\n> > contrib/auto_explain/t/001_auto_explain.pl\n> >\n> > However I wouldn't insist on including the test in scope of this\n> > particular patch. Such a test doesn't currently exist, it can be added\n> > as a separate patch, and whether this is actually a useful test (all\n> > the tests consume time after all...) is somewhat debatable. Personally\n> > I agree that it would be nice to have though.\n> >\n> > This being said...\n> >\n> > > The ALTER SUBSCRIPTION command does not error out on the user\n> > > interface if updated with a bad connection string and the connection\n> > > failure error can only be seen in the respective log file.\n> >\n> > I wonder if we should fix this. Again, not necessarily in scope of\n> > this work, but if this is not a difficult task, again, it would be\n> > nice to have.\n> >\n> > Other than that v2 looks good.\n> >\n>\n> OK. I see now that any ALTER of the subscription's connection, even\n> to some value that fails, will restart a new worker (like ALTER of any\n> other subscription parameters). 
For a bad connection, it will continue\n> to relaunch-worker/ERROR over and over.\n>\n> test_sub=# \\r2024-01-17 09:34:28.665 AEDT [11274] LOG: logical\n> replication apply worker for subscription \"sub4\" has started\n> 2024-01-17 09:34:28.666 AEDT [11274] ERROR: could not connect to the\n> publisher: invalid port number: \"-1\"\n> 2024-01-17 09:34:28.667 AEDT [928] LOG: background worker \"logical\n> replication apply worker\" (PID 11274) exited with exit code 1\n> dRs su2024-01-17 09:34:33.669 AEDT [11391] LOG: logical replication\n> apply worker for subscription \"sub4\" has started\n> 2024-01-17 09:34:33.669 AEDT [11391] ERROR: could not connect to the\n> publisher: invalid port number: \"-1\"\n> 2024-01-17 09:34:33.670 AEDT [928] LOG: background worker \"logical\n> replication apply worker\" (PID 11391) exited with exit code 1\n> b4\n> ...\n>\n> I don't really have any opinion if that behaviour is good or bad, but\n> anyway, it is deliberate, and IMO it is outside the scope of your\n> patch, so v2 patch LGTM.\n\nUpon code review, the ALTER SUBSCRIPTION updates the connection string\nafter checking for parse and a few obvious errors and does not attempt\nto establish a connection. It is the apply worker running for the\nrespective subscription that will try to connect and fail in case of a\nbad connection string.\nTo me, it seems an intentional design behavior and I agree that\ndeciding to change or maintain this behavior is out of this patch's\nscope.\n\n--\nThanks,\nNisha\n\n\n", "msg_date": "Wed, 17 Jan 2024 10:13:17 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": ">\n> ~~\n>\n> BTW, while experimenting with the bad connection ALTER I also tried\n> setting 'disable_on_error' like below:\n>\n> ALTER SUBSCRIPTION sub4 SET (disable_on_error);\n> ALTER SUBSCRIPTION sub4 CONNECTION 'port = -1';\n>\n> ...but here the subscription did not become DISABLED as I expected it\n> would do on the next connection error iteration. It remains enabled\n> and just continues to loop relaunch/ERROR indefinitely same as before.\n>\n> That looks like it may be a bug. 
Thoughts?\n>\nIdeally, if the already running apply worker in\n\"LogicalRepApplyLoop()\" has any exception/error it will be handled and\nthe subscription will be disabled if 'disable_on_error' is set -\n\nstart_apply(XLogRecPtr origin_startpos)\n{\nPG_TRY();\n{\nLogicalRepApplyLoop(origin_startpos);\n}\nPG_CATCH();\n{\nif (MySubscription->disableonerr)\nDisableSubscriptionAndExit();\n...\n\nWhat is happening in this case is that the control reaches the function -\nrun_apply_worker() -> start_apply() -> LogicalRepApplyLoop ->\nmaybe_reread_subscription()\n...\n/*\n* Exit if any parameter that affects the remote connection was changed.\n* The launcher will start a new worker but note that the parallel apply\n* worker won't restart if the streaming option's value is changed from\n* 'parallel' to any other value or the server decides not to stream the\n* in-progress transaction.\n*/\nif (strcmp(newsub->conninfo, MySubscription->conninfo) != 0 ||\n...\n\nand it sees a change in the parameter and calls apply_worker_exit().\nThis will exit the current process, without throwing an exception to\nthe caller and the postmaster will try to restart the apply worker.\nThe new apply worker, before reaching the start_apply() [where we\nhandle exception], will hit the code to establish the connection to\nthe publisher -\n\nApplyWorkerMain() -> run_apply_worker() -\n...\nLogRepWorkerWalRcvConn = walrcv_connect(MySubscription->conninfo,\ntrue /* replication */ ,\ntrue,\nmust_use_password,\nMySubscription->name, &err);\n\nif (LogRepWorkerWalRcvConn == NULL)\n ereport(ERROR,\n (errcode(ERRCODE_CONNECTION_FAILURE),\n errmsg(\"could not connect to the publisher: %s\", err)));\n...\nand due to the bad connection string in the subscription, it will error out.\n[28680] ERROR: could not connect to the publisher: invalid port number: \"-1\"\n[3196] LOG: background worker \"logical replication apply worker\" (PID\n28680) exited with exit code 1\n\nNow, the postmaster keeps trying to restart the apply worker and it\nwill keep failing until the connection string is corrected or the\nsubscription is disabled manually.\n\nI think this is a bug that needs to be handled in run_apply_worker()\nwhen disable_on_error is set.\nIMO, this bug-fix discussion deserves a separate thread. Thoughts?\n\n\n", "msg_date": "Wed, 17 Jan 2024 13:44:53 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "On Wed, Jan 17, 2024 at 7:15 PM Nisha Moond <[email protected]> wrote:\n>\n> >\n> > ~~\n> >\n> > BTW, while experimenting with the bad connection ALTER I also tried\n> > setting 'disable_on_error' like below:\n> >\n> > ALTER SUBSCRIPTION sub4 SET (disable_on_error);\n> > ALTER SUBSCRIPTION sub4 CONNECTION 'port = -1';\n> >\n> > ...but here the subscription did not become DISABLED as I expected it\n> > would do on the next connection error iteration. It remains enabled\n> > and just continues to loop relaunch/ERROR indefinitely same as before.\n> >\n> > That looks like it may be a bug. 
Thoughts?\n> >\n> Ideally, if the already running apply worker in\n> \"LogicalRepApplyLoop()\" has any exception/error it will be handled and\n> the subscription will be disabled if 'disable_on_error' is set -\n>\n> start_apply(XLogRecPtr origin_startpos)\n> {\n> PG_TRY();\n> {\n> LogicalRepApplyLoop(origin_startpos);\n> }\n> PG_CATCH();\n> {\n> if (MySubscription->disableonerr)\n> DisableSubscriptionAndExit();\n> ...\n>\n> What is happening in this case is that the control reaches the function -\n> run_apply_worker() -> start_apply() -> LogicalRepApplyLoop ->\n> maybe_reread_subscription()\n> ...\n> /*\n> * Exit if any parameter that affects the remote connection was changed.\n> * The launcher will start a new worker but note that the parallel apply\n> * worker won't restart if the streaming option's value is changed from\n> * 'parallel' to any other value or the server decides not to stream the\n> * in-progress transaction.\n> */\n> if (strcmp(newsub->conninfo, MySubscription->conninfo) != 0 ||\n> ...\n>\n> and it sees a change in the parameter and calls apply_worker_exit().\n> This will exit the current process, without throwing an exception to\n> the caller and the postmaster will try to restart the apply worker.\n> The new apply worker, before reaching the start_apply() [where we\n> handle exception], will hit the code to establish the connection to\n> the publisher -\n>\n> ApplyWorkerMain() -> run_apply_worker() -\n> ...\n> LogRepWorkerWalRcvConn = walrcv_connect(MySubscription->conninfo,\n> true /* replication */ ,\n> true,\n> must_use_password,\n> MySubscription->name, &err);\n>\n> if (LogRepWorkerWalRcvConn == NULL)\n> ereport(ERROR,\n> (errcode(ERRCODE_CONNECTION_FAILURE),\n> errmsg(\"could not connect to the publisher: %s\", err)));\n> ...\n> and due to the bad connection string in the subscription, it will error out.\n> [28680] ERROR: could not connect to the publisher: invalid port number: \"-1\"\n> [3196] LOG: background worker \"logical replication apply worker\" (PID\n> 28680) exited with exit code 1\n>\n> Now, the postmaster keeps trying to restart the apply worker and it\n> will keep failing until the connection string is corrected or the\n> subscription is disabled manually.\n>\n> I think this is a bug that needs to be handled in run_apply_worker()\n> when disable_on_error is set.\n> IMO, this bug-fix discussion deserves a separate thread. Thoughts?\n\nHi Nisha,\n\nThanks for your analysis -- it is the same as my understanding.\n\nAs suggested, I have created a new thread for any further discussion\nrelated to this 'disable_on_error' topic [1].\n\n======\n[1] https://www.postgresql.org/message-id/flat/CAHut%2BPuEsekA3e7ThwzWr%2BUs4x%3DLzkF7DSrED1UsZTUqNrhCUQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 18 Jan 2024 10:24:41 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "On Fri, Jan 12, 2024 at 7:06 PM Aleksander Alekseev\n<[email protected]> wrote:\n>\n> Hi,\n>\n> Thanks for the patch.\n>\n> > Due to this behavior, it is not possible to add a test to show the\n> > error message as it is done for CREATE SUBSCRIPTION.\n> > Let me know if you think there is another way to add this test.\n>\n> I believe it can be done with TAP tests, see for instance:\n>\n> contrib/auto_explain/t/001_auto_explain.pl\n>\n> However I wouldn't insist on including the test in scope of this\n> particular patch. 
Such a test doesn't currently exist, it can be added\n> as a separate patch, and whether this is actually a useful test (all\n> the tests consume time after all...) is somewhat debatable. Personally\n> I agree that it would be nice to have though.\nThank you for providing this information. Yes, I can write a TAP test\nto check the log for the same error message.\nI'll attempt it and perform a time analysis. I'm unsure where to\nappropriately add this test. Any suggestions?\n\nFollowing your suggestion, I won't include the test in the scope of\nthis patch. Instead, I'll start a new thread once I have sufficient\ninformation required.\n\n--\nThanks,\nNisha\n\n\n", "msg_date": "Fri, 19 Jan 2024 11:24:37 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "AFAIK some recent commits patches (e,g [1] for the \"slot sync\"\ndevelopment) have created some more cases of \"could not connect...\"\nmessages. So, you might need to enhance your patch to deal with any\nnew ones in the latest HEAD.\n\n======\n[1] https://github.com/postgres/postgres/commit/776621a5e4796fa214b6b29a7ca134f6c138572a\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 31 Jan 2024 16:56:40 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "> AFAIK some recent commits patches (e,g [1] for the \"slot sync\"\n> development) have created some more cases of \"could not connect...\"\n> messages. So, you might need to enhance your patch to deal with any\n> new ones in the latest HEAD.\n>\n> ======\n> [1]\n> https://github.com/postgres/postgres/commit/776621a5e4796fa214b6b29a7ca134f6c138572a\n>\n> Thank you for the update.\nThe v3 patch has the changes needed as per the latest HEAD.\n\n--\nThanks,\nNisha", "msg_date": "Wed, 31 Jan 2024 16:28:28 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "On Wed, Jan 31, 2024 at 9:58 PM Nisha Moond <[email protected]> wrote:\n>\n>\n>> AFAIK some recent commits patches (e,g [1] for the \"slot sync\"\n>> development) have created some more cases of \"could not connect...\"\n>> messages. So, you might need to enhance your patch to deal with any\n>> new ones in the latest HEAD.\n>>\n>> ======\n>> [1] https://github.com/postgres/postgres/commit/776621a5e4796fa214b6b29a7ca134f6c138572a\n>>\n> Thank you for the update.\n> The v3 patch has the changes needed as per the latest HEAD.\n>\n\nHi, just going by visual inspection of the v2/v3 patch diffs, the\nlatest v3 LGTM.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 1 Feb 2024 09:12:59 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "FYI -- some more code has been pushed since this patch was last\nupdated. 
AFAICT perhaps you'll want to update this patch again for the\nfollowing new connection messages on HEAD:\n\n- slotfuncs.c [1]\n- slotsync.c [2]\n\n----------\n[1] https://github.com/postgres/postgres/blob/0b84f5c419a300dc1b1a70cf63b9907208e52643/src/backend/replication/slotfuncs.c#L989\n[2] https://github.com/postgres/postgres/blob/0b84f5c419a300dc1b1a70cf63b9907208e52643/src/backend/replication/logical/slotsync.c#L1258\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 13 Mar 2024 16:46:23 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "On Wed, Mar 13, 2024 at 11:16 AM Peter Smith <[email protected]> wrote:\n>\n> FYI -- some more code has been pushed since this patch was last\n> updated. AFAICT perhaps you'll want to update this patch again for the\n> following new connection messages on HEAD:\n>\n> - slotfuncs.c [1]\n> - slotsync.c [2]\n>\n> ----------\n> [1] https://github.com/postgres/postgres/blob/0b84f5c419a300dc1b1a70cf63b9907208e52643/src/backend/replication/slotfuncs.c#L989\n> [2] https://github.com/postgres/postgres/blob/0b84f5c419a300dc1b1a70cf63b9907208e52643/src/backend/replication/logical/slotsync.c#L1258\n>\nThanks for the update.\nHere is the v4 patch with changes required in slotfuncs.c and slotsync.c files.\n\n--\nThanks,\nNisha", "msg_date": "Fri, 22 Mar 2024 16:12:39 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "Hi, just by visual inspection of the v3/v4 patch diffs, the latest v4 LGTM.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 26 Apr 2024 12:33:20 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "> On 22 Mar 2024, at 11:42, Nisha Moond <[email protected]> wrote:\n\n> Here is the v4 patch with changes required in slotfuncs.c and slotsync.c files.\n\n-\terrmsg(\"could not connect to the primary server: %s\", err));\n+\terrmsg(\"\\\"%s\\\" could not connect to the primary server: %s\", app_name.data, err));\n\nMessages like this should perhaps have translator comments to indicate what the\nleading \"%s\" will contain?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 26 Apr 2024 09:40:27 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "On Fri, Apr 26, 2024 at 1:10 PM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 22 Mar 2024, at 11:42, Nisha Moond <[email protected]> wrote:\n>\n> > Here is the v4 patch with changes required in slotfuncs.c and slotsync.c files.\n>\n> - errmsg(\"could not connect to the primary server: %s\", err));\n> + errmsg(\"\\\"%s\\\" could not connect to the primary server: %s\", app_name.data, err));\n>\n> Messages like this should perhaps have translator comments to indicate what the\n> leading \"%s\" will contain?\n\nAttached v5 patch with the translator comments as suggested.\n\n--\nThanks,\nNisha", "msg_date": "Fri, 31 May 2024 18:15:22 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "Hi, just by visual inspection of the v4/v5 patch diffs, the latest v5 LGTM.\n\n======\nKind 
Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 3 Jun 2024 09:57:40 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "FYI - I created a CF entry [1] for this because AFAIK the patch is\njust waiting for a committer to check if it is OK to be pushed, but\nmaybe nobody has noticed it.\n\n======\n[1] https://commitfest.postgresql.org/48/5075/\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 26 Jun 2024 13:45:40 +1000", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "> On 26 Jun 2024, at 05:46, Peter Smith <[email protected]> wrote:\n> \n> FYI - I created a CF entry [1] for this because AFAIK the patch is\n> just waiting for a committer to check if it is OK to be pushed, but\n> maybe nobody has noticed it.\n\nThanks, always good to have a CF entry to track it with. It’s not forgotten though, just waiting for the tree to be branched. Right now focus is on stabilizing what will become v17.\n\n./daniel\n\n", "msg_date": "Wed, 26 Jun 2024 08:05:47 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "Nisha Moond <[email protected]> writes:\n> Attached v5 patch with the translator comments as suggested.\n\nI looked at this, and I agree with the goal, but I find just about all\nof the translator comments unnecessary. The ones that are useful are\nuseful only because the message is violating one of our message style\nguidelines [1]:\n\n\tWhen citing the name of an object, state what kind of object it is.\n\n\tRationale: Otherwise no one will know what “foo.bar.baz” refers to.\n\nSo, for example, where you have\n\n+\n+ /*\n+ * translator: first %s is the subscription name, second %s is the\n+ * error\n+ */\n+ errmsg(\"subscription \\\"%s\\\" could not connect to the publisher: %s\", stmt->subname, err)));\n\nI don't find that that translator comment is adding anything.\nBut there are a couple of places like\n\n+ /*\n+ * translator: first %s is the slotsync worker name, second %s is the\n+ * error\n+ */\n+ errmsg(\"\\\"%s\\\" could not connect to the primary server: %s\", app_name.data, err));\n\nI think that the right cure for the ambiguity here is not to add a\ntranslator comment, but to label the name properly, perhaps like\n\n errmsg(\"synchronization worker \\\"%s\\\" could not connect to the primary server: %s\",\n app_name.data, err));\n\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/error-style-guide.html#ERROR-STYLE-GUIDE-OBJECT-TYPE\n\n\n", "msg_date": "Mon, 08 Jul 2024 15:30:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "On Tue, Jul 9, 2024 at 1:00 AM Tom Lane <[email protected]> wrote:\n>\n> Nisha Moond <[email protected]> writes:\n> > Attached v5 patch with the translator comments as suggested.\n>\n> I looked at this, and I agree with the goal, but I find just about all\n> of the translator comments unnecessary. 
The ones that are useful are\n> useful only because the message is violating one of our message style\n> guidelines [1]:\n>\n> When citing the name of an object, state what kind of object it is.\n>\n> Rationale: Otherwise no one will know what “foo.bar.baz” refers to.\n>\n> So, for example, where you have\n>\n> +\n> + /*\n> + * translator: first %s is the subscription name, second %s is the\n> + * error\n> + */\n> + errmsg(\"subscription \\\"%s\\\" could not connect to the publisher: %s\", stmt->subname, err)));\n>\n> I don't find that that translator comment is adding anything.\n> But there are a couple of places like\n>\n> + /*\n> + * translator: first %s is the slotsync worker name, second %s is the\n> + * error\n> + */\n> + errmsg(\"\\\"%s\\\" could not connect to the primary server: %s\", app_name.data, err));\n>\n> I think that the right cure for the ambiguity here is not to add a\n> translator comment, but to label the name properly, perhaps like\n>\n> errmsg(\"synchronization worker \\\"%s\\\" could not connect to the primary server: %s\",\n> app_name.data, err));\n>\n>\n> regards, tom lane\n>\n> [1] https://www.postgresql.org/docs/devel/error-style-guide.html#ERROR-STYLE-GUIDE-OBJECT-TYPE\n\nThank you for the review.\n\nAttached the patch v6 with suggested improvements.\n- Removed unnecessary translator comments.\n- Added appropriate identifier names where missing.\n\n--\nThanks,\nNisha", "msg_date": "Thu, 11 Jul 2024 15:01:21 +0530", "msg_from": "Nisha Moond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improve the connection failure error messages" }, { "msg_contents": "Nisha Moond <[email protected]> writes:\n> Attached the patch v6 with suggested improvements.\n> - Removed unnecessary translator comments.\n> - Added appropriate identifier names where missing.\n\nI think this is generally OK, although one could argue that it's\nviolating our message style guideline that primary error messages\nshould be short [1]. The text itself isn't that bad, but once you\ntack on a libpq connection failure message it's hard to claim that\nthe result \"fits on one line\".\n\nAnother way we could address this that'd reduce that problem is to\nleave the primary messages as-is and add an errdetail or errcontext\nmessage to show what's throwing the error. However, I'm not convinced\nthat's better. The main argument against it is that detail/context\nlines can get lost, eg if you're running the server in terse logging\nmode. \n\nOn balance I think it's OK, so I pushed it. I did take out a couple\nof uses of \"logical replication\" that seemed unnecessary and were\nmaking the length problem worse.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/error-style-guide.html#ERROR-STYLE-GUIDE-WHAT-GOES-WHERE\n\n\n", "msg_date": "Thu, 11 Jul 2024 13:30:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improve the connection failure error messages" } ]
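The style this thread converges on (and that the final commit follows) is to name the kind of object next to its name in the primary message instead of leaning on translator comments. As a rough illustration -- wrconn and err are assumed to come from the walrcv_connect() call quoted earlier in the thread, and the committed wording may differ slightly:

------------------------------------------------------------------
/*
 * Fragment only: wrconn/err are assumed to be the result of the
 * walrcv_connect() call shown earlier; message text is illustrative.
 */
if (wrconn == NULL)
	ereport(ERROR,
			errcode(ERRCODE_CONNECTION_FAILURE),
	/* stating the object type ("subscription") plus its name avoids the
	 * ambiguity the message style guide warns about */
			errmsg("subscription \"%s\" could not connect to the publisher: %s",
				   stmt->subname, err));
------------------------------------------------------------------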
[ { "msg_contents": "Hi, hackers\n\nThere is below description in docs for stats_fetch_consistency.\n\"Changing this parameter in a transaction discards the statistics \nsnapshot.\"\n\nHowever, I wonder if changes stats_fetch_consistency in a transaction, \nstatistics is not discarded in some cases.\n\nExample:\n--\n* session 1\n=# SET stats_fetch_consistency TO snapshot;\n=# BEGIN;\n=*# SELECT wal_records, wal_fpi, wal_bytes FROM pg_stat_wal;\n wal_records | wal_fpi | wal_bytes\n-------------+---------+-----------\n 23592 | 628 | 5939027\n(1 row)\n\n* session 2\n=# CREATE TABLE test (i int); -- generate WAL records\n=# SELECT wal_records, wal_fpi, wal_bytes FROM pg_stat_wal;\n wal_records | wal_fpi | wal_bytes\n-------------+---------+-----------\n 23631 | 644 | 6023411\n(1 row)\n\n* session 1\n=*# -- snapshot is not discarded, it is right\n=*# SELECT wal_records, wal_fpi, wal_bytes FROM pg_stat_wal;\n wal_records | wal_fpi | wal_bytes\n-------------+---------+-----------\n 23592 | 628 | 5939027\n(1 row)\n\n=*# SET stats_fetch_consistency TO cache;\n\n=*# -- snapshot is not discarded, it is not right\n=*# SELECT wal_records, wal_fpi, wal_bytes FROM pg_stat_wal;\n wal_records | wal_fpi | wal_bytes\n-------------+---------+-----------\n 23592 | 628 | 5939027\n(1 row)\n--\n\nI can see similar cases in pg_stat_archiver, pg_stat_bgwriter, \npg_stat_checkpointer, pg_stat_io, and pg_stat_slru.\nIs it a bug? I fixed it, and do you think?\n\n-- \nRegards,\nShinya Kato\nNTT DATA GROUP CORPORATION", "msg_date": "Thu, 11 Jan 2024 18:18:38 +0900", "msg_from": "Shinya Kato <[email protected]>", "msg_from_op": true, "msg_subject": "Fix bugs not to discard statistics when changing\n stats_fetch_consistency" }, { "msg_contents": "On Thu, Jan 11, 2024 at 06:18:38PM +0900, Shinya Kato wrote:\n> Hi, hackers\n\n(Sorry for the delay, this thread was on my TODO list for some time.)\n\n> There is below description in docs for stats_fetch_consistency.\n> \"Changing this parameter in a transaction discards the statistics snapshot.\"\n> \n> However, I wonder if changes stats_fetch_consistency in a transaction,\n> statistics is not discarded in some cases.\n\nYep, you're right. This is inconsistent with the documentation where\nwe need to clean up the cached data when changing this GUC. I was\nconsidering a few options regarding the location of the extra\npgstat_clear_snapshot(), but at the end I see the point in doing it in\na path even if it creates a duplicate with pgstat_build_snapshot()\nwhen pgstat_fetch_consistency is using the snapshot mode. A location\nbetter than your patch is pgstat_snapshot_fixed(), though, so as new\nstats kinds will be able to get the call.\n\nI have been banging my head on my desk for a bit when thinking about a\nway to test that in a predictible way, until I remembered that these\nstats are only flushed at commit, so this requires at least two\nsessions, with one of them having a transaction opened while\nmanipulating stats_fetch_consistency. TAP would be one option, but\nI'm not really tempted about spending more cycles with a\nbackground_psql just for this case. 
If I'm missing something, feel\nfree.\n--\nMichael", "msg_date": "Thu, 1 Feb 2024 17:33:37 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix bugs not to discard statistics when changing\n stats_fetch_consistency" }, { "msg_contents": "On 2024-02-01 17:33, Michael Paquier wrote:\n> On Thu, Jan 11, 2024 at 06:18:38PM +0900, Shinya Kato wrote:\n>> Hi, hackers\n> \n> (Sorry for the delay, this thread was on my TODO list for some time.)\n>> There is below description in docs for stats_fetch_consistency.\n>> \"Changing this parameter in a transaction discards the statistics \n>> snapshot.\"\n>> \n>> However, I wonder if changes stats_fetch_consistency in a transaction,\n>> statistics is not discarded in some cases.\n> \n> Yep, you're right. This is inconsistent with the documentation where\n> we need to clean up the cached data when changing this GUC. I was\n> considering a few options regarding the location of the extra\n> pgstat_clear_snapshot(), but at the end I see the point in doing it in\n> a path even if it creates a duplicate with pgstat_build_snapshot()\n> when pgstat_fetch_consistency is using the snapshot mode. A location\n> better than your patch is pgstat_snapshot_fixed(), though, so as new\n> stats kinds will be able to get the call.\n> \n> I have been banging my head on my desk for a bit when thinking about a\n> way to test that in a predictible way, until I remembered that these\n> stats are only flushed at commit, so this requires at least two\n> sessions, with one of them having a transaction opened while\n> manipulating stats_fetch_consistency. TAP would be one option, but\n> I'm not really tempted about spending more cycles with a\n> background_psql just for this case. If I'm missing something, feel\n> free.\n\nThank you for the review and pushing!\nI think it is better and more concise than my implementation.\n\n-- \nRegards,\nShinya Kato\nNTT DATA GROUP CORPORATION\n\n\n", "msg_date": "Thu, 01 Feb 2024 21:22:48 +0900", "msg_from": "Shinya Kato <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix bugs not to discard statistics when changing\n stats_fetch_consistency" } ]
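The fix discussed in this thread is to discard any leftover snapshot for the fixed-numbered statistics kinds as well, and Michael's point is to do it centrally in pgstat_snapshot_fixed() so every stats kind gets the call. A rough fragment of that shape is below (assumed to sit inside pgstat_snapshot_fixed() in pgstat.c; the committed code may differ in detail):

------------------------------------------------------------------
/* Sketch of the extra step in pgstat_snapshot_fixed(); not the exact commit. */
if (pgstat_fetch_consistency == PGSTAT_FETCH_CONSISTENCY_NONE)
	pgstat_clear_snapshot();	/* drop data cached under a previous
								 * stats_fetch_consistency setting, so the
								 * documented "discards the snapshot"
								 * behavior also holds for fixed stats */

if (pgstat_fetch_consistency == PGSTAT_FETCH_CONSISTENCY_SNAPSHOT)
	pgstat_build_snapshot();			/* full snapshot, kept for the xact */
else
	pgstat_build_snapshot_fixed(kind);	/* cache just this fixed-numbered kind */
------------------------------------------------------------------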
[ { "msg_contents": "Good day, hackers.\n\nHere I am to suggest two small improvements to Point In Time Recovery.\n\nFirst is ability to recover recovery-target-time with timestamp stored \nin XLOG_RESTORE_POINT. Looks like historically this ability did exist \nand were removed unintentionally during refactoring at commit [1]\nc945af80 \"Refactor checking whether we've reached the recovery target.\"\n\nSecond is extending XLOG_BACKUP_END record with timestamp, therefore \nbackup will have its own timestamp as well. It is backward compatible \nchange since there were no record length check before.\n\nBoth changes slightly helps in mostly idle systems, when between several \nbackups may happens no commits at all, so there's no timestamp to \nrecover to.\n\nAttached sample patches are made in reverse order:\n- XLOG_BACKUP_END then XLOG_RESTORE_POINT.\nSecond patch made by colleague by my idea.\nPublishing for both is permitted.\n\nIf idea is accepted, patches for tests will be applied as well.\n\n[1]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=c945af80\n\n---\n\nYura Sokolov.", "msg_date": "Thu, 11 Jan 2024 19:58:50 +0300", "msg_from": "Yura Sokolov <[email protected]>", "msg_from_op": true, "msg_subject": "Suggest two small improvements for PITR." }, { "msg_contents": "11.01.2024 19:58, Yura Sokolov пишет:\n> Good day, hackers.\n> \n> Here I am to suggest two small improvements to Point In Time Recovery.\n> \n> First is ability to recover recovery-target-time with timestamp stored \n> in XLOG_RESTORE_POINT. Looks like historically this ability did exist \n> and were removed unintentionally during refactoring at commit [1]\n> c945af80 \"Refactor checking whether we've reached the recovery target.\"\n> \n> Second is extending XLOG_BACKUP_END record with timestamp, therefore \n> backup will have its own timestamp as well. It is backward compatible \n> change since there were no record length check before.\n> \n> Both changes slightly helps in mostly idle systems, when between several \n> backups may happens no commits at all, so there's no timestamp to \n> recover to.\n> \n> Attached sample patches are made in reverse order:\n> - XLOG_BACKUP_END then XLOG_RESTORE_POINT.\n> Second patch made by colleague by my idea.\n> Publishing for both is permitted.\n> \n> If idea is accepted, patches for tests will be applied as well.\n> \n> [1]\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=c945af80\n\nGood day.\n\nHere're reordered and rebased patches with tests.\nNow first patch is for XLOG_RETORE_POINT, and second one adds timestamp \nto XLOG_BACKUP_END.\n\nBtw, there's other thread by Simon Riggs with additions to \ngetRecordTimestamp:\n\nhttps://www.postgresql.org/message-id/flat/CANbhV-F%2B8%3DY%3DcfurfD2hjoWVUvTk-Ot9BJdw2Myc%3Dst3TsZy9g%40mail.gmail.com\n\nI didn't rush to adsorb it, because I'm not recoveryTargetUseOriginTime.\nThough reaction on XLOG_END_OF_RECOVERY's timestamp is easily could be \ncopied from.\n\nI believe, to react on XLOG_CHECKPOINT_ONLINE/XLOG_CHECKPOINT_SHUTDOWN \nthe CheckPoint.time field should be changed from `pg_time_t` to \n`TimestampTz` type, since otherwise it interfere hard with \n\"inclusive\"-ness of recovery_target_time.\n\n-----\n\nregards,\nYura", "msg_date": "Tue, 18 Jun 2024 17:31:15 +0300", "msg_from": "Yura Sokolov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suggest two small improvements for PITR." } ]
[ { "msg_contents": "Hi,\n\n\n>> BTW, while nosing around I found what seems like a very nasty related\n>> bug. Suppose that a catalog tuple being loaded into syscache contains\n>> some toasted fields. CatalogCacheCreateEntry will flatten the tuple,\n>> involving fetches from toast tables that will certainly cause\n>> AcceptInvalidationMessages calls. What if one of those should have\n>> invalidated this tuple? We will not notice, because it's not in\n>> the hashtable yet. When we do add it, we will mark it not-dead,\n>> meaning that the stale entry looks fine and could persist for a long\n>> while.\nI spent some time trying to understand the bug and finally, I can reproduce\nit locally with the following steps:\n\nstep1:\ncreate a function called 'test' with a long body that must be stored in a\ntoast table.\nand put it in schema 'yy' by : \"alter function test set schema yy\";\n\nstep 2:\nI added a breakpoint at 'toast_flatten_tuple' for session1 ,\n then execute the following SQL:\n----------\nset search_path='public';\nalter function test set schema xx;\n----------\nstep 3:\nwhen the session1 stops at the breakpoint, I open session2 and execute\n-----------\nset search_path = 'yy';\nalter function test set schema public;\n-----------\nstep4:\nresume the session1 , it reports the error \"ERROR: could not find a\nfunction named \"test\"\"\n\nstep 5:\ncontinue to execute \"alter function test set schema xx;\" in session1, but\nit still can not work and report the above error although the function test\nalready belongs to schema 'public'\n\nObviously, in session 1, the \"test\" proc tuple in the cache is outdated.\n\n>> The detection of \"get an invalidation\" could be refined: what I did\n>> here is to check for any advance of SharedInvalidMessageCounter,\n>> which clearly will have a significant number of false positives.\n>> However, the only way I see to make that a lot better is to\n>> temporarily create a placeholder catcache entry (probably a negative\n>> one) with the same keys, and then see if it got marked dead.\n>> This seems a little expensive, plus I'm afraid that it'd be actively\n>> wrong in the recursive-lookup cases that the existing comment in\n>> SearchCatCacheMiss is talking about (that is, the catcache entry\n>> might mislead any recursive lookup that happens).\n\nI have reviewed your patch, and it looks good. But instead of checking for\nany advance of SharedInvalidMessageCounter ( if the invalidate message is\nnot related to the current tuple, it is a little expensive) I have another\nidea: we can recheck the visibility of the tuple with CatalogSnapshot(the\nCatalogSnapthot must be refreshed if there is any SharedInvalidMessages) if\nit is not visible, we re-fetch the tuple, otherwise, we can continue to use\nit as it is not outdated.\n\nI added a commit based on your patch and attached it.", "msg_date": "Fri, 12 Jan 2024 01:52:21 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recovering from detoast-related catcache invalidations" }, { "msg_contents": "Xiaoran Wang <[email protected]> writes:\n>>> The detection of \"get an invalidation\" could be refined: what I did\n>>> here is to check for any advance of SharedInvalidMessageCounter,\n>>> which clearly will have a significant number of false positives.\n\n> I have reviewed your patch, and it looks good. 
But instead of checking for\n> any advance of SharedInvalidMessageCounter ( if the invalidate message is\n> not related to the current tuple, it is a little expensive) I have another\n> idea: we can recheck the visibility of the tuple with CatalogSnapshot(the\n> CatalogSnapthot must be refreshed if there is any SharedInvalidMessages) if\n> it is not visible, we re-fetch the tuple, otherwise, we can continue to use\n> it as it is not outdated.\n\nMaybe, but that undocumented hack in SetHintBits seems completely\nunacceptable. Isn't there a cleaner way to make this check?\n\nAlso, I'm pretty dubious that GetNonHistoricCatalogSnapshot rather\nthan GetCatalogSnapshot is the right thing, because the catcaches\nuse the latter.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Jan 2024 17:21:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recovering from detoast-related catcache invalidations" }, { "msg_contents": "> Also, I'm pretty dubious that GetNonHistoricCatalogSnapshot rather\n> than GetCatalogSnapshot is the right thing, because the catcaches\n> use the latter.\nYes, you are right, should use GetCatalogSnapshot here.\n\n> Maybe, but that undocumented hack in SetHintBits seems completely\n> unacceptable. Isn't there a cleaner way to make this check?\nMaybe we don't need to call 'HeapTupleSatisfiesVisibility' to check if the\ntuple has been deleted.\nAs the tuple's xmin must been committed, so we just need to check if its\nxmax is committed,\nlike the below:\n\n------------\n@@ -1956,9 +1956,11 @@ CatalogCacheCreateEntry(CatCache *cache, HeapTuple\nntp, Datum *arguments,\n */\n if (HeapTupleHasExternal(ntp))\n {\n+ TransactionId xmax;\n\n dtp = toast_flatten_tuple(ntp, cache->cc_tupdesc);\n- if (!HeapTupleSatisfiesVisibility(ntp,\nGetNonHistoricCatalogSnapshot(cache->cc_reloid), InvalidBuffer))\n+ xmax = HeapTupleHeaderGetUpdateXid(ntp->t_data);\n+ if (TransactionIdIsValid(xmax) &&\nTransactionIdDidCommit(xmax))\n {\n heap_freetuple(dtp);\n return NULL;\n------------\n\nI'm not quite sure the code is correct, I cannot clearly understand\n'HeapTupleHeaderGetUpdateXid', and I need more time to dive into it.\n\nAny thoughts?\n\n\nTom Lane <[email protected]> 于2024年1月12日周五 06:21写道:\n\n> Xiaoran Wang <[email protected]> writes:\n> >>> The detection of \"get an invalidation\" could be refined: what I did\n> >>> here is to check for any advance of SharedInvalidMessageCounter,\n> >>> which clearly will have a significant number of false positives.\n>\n> > I have reviewed your patch, and it looks good. But instead of checking\n> for\n> > any advance of SharedInvalidMessageCounter ( if the invalidate message is\n> > not related to the current tuple, it is a little expensive) I have\n> another\n> > idea: we can recheck the visibility of the tuple with\n> CatalogSnapshot(the\n> > CatalogSnapthot must be refreshed if there is any SharedInvalidMessages)\n> if\n> > it is not visible, we re-fetch the tuple, otherwise, we can continue to\n> use\n> > it as it is not outdated.\n>\n> Maybe, but that undocumented hack in SetHintBits seems completely\n> unacceptable. 
Isn't there a cleaner way to make this check?\n>\n> Also, I'm pretty dubious that GetNonHistoricCatalogSnapshot rather\n> than GetCatalogSnapshot is the right thing, because the catcaches\n> use the latter.\n>\n> regards, tom lane\n>\n\n> Also, I'm pretty dubious that GetNonHistoricCatalogSnapshot rather> than GetCatalogSnapshot is the right thing, because the catcaches> use the latter.Yes, you are right, should use GetCatalogSnapshot here. > Maybe, but that undocumented hack in SetHintBits seems completely> unacceptable.  Isn't there a cleaner way to make this check?Maybe we don't need to call 'HeapTupleSatisfiesVisibility' to check if the tuple has been deleted.As the tuple's xmin must been committed, so we just need to check if its xmax is committed,like the below:------------@@ -1956,9 +1956,11 @@ CatalogCacheCreateEntry(CatCache *cache, HeapTuple ntp, Datum *arguments,                 */                if (HeapTupleHasExternal(ntp))                {+                       TransactionId xmax;                        dtp = toast_flatten_tuple(ntp, cache->cc_tupdesc);-                       if (!HeapTupleSatisfiesVisibility(ntp, GetNonHistoricCatalogSnapshot(cache->cc_reloid), InvalidBuffer))+                       xmax = HeapTupleHeaderGetUpdateXid(ntp->t_data);+                       if (TransactionIdIsValid(xmax) && TransactionIdDidCommit(xmax))                        {                                heap_freetuple(dtp);                                return NULL;------------I'm not quite sure the code is correct, I cannot clearly understand 'HeapTupleHeaderGetUpdateXid', and I need more time to dive into it.Any thoughts?Tom Lane <[email protected]> 于2024年1月12日周五 06:21写道:Xiaoran Wang <[email protected]> writes:\n>>> The detection of \"get an invalidation\" could be refined: what I did\n>>> here is to check for any advance of SharedInvalidMessageCounter,\n>>> which clearly will have a significant number of false positives.\n\n> I have reviewed your patch, and it looks good.  But instead of checking for\n> any advance of SharedInvalidMessageCounter ( if the invalidate message is\n> not related to the current tuple, it is a little expensive)  I have another\n> idea:  we can recheck the visibility of the tuple with CatalogSnapshot(the\n> CatalogSnapthot must be refreshed if there is any SharedInvalidMessages) if\n> it is not visible, we re-fetch the tuple, otherwise, we can continue to use\n> it as it is not outdated.\n\nMaybe, but that undocumented hack in SetHintBits seems completely\nunacceptable.  Isn't there a cleaner way to make this check?\n\nAlso, I'm pretty dubious that GetNonHistoricCatalogSnapshot rather\nthan GetCatalogSnapshot is the right thing, because the catcaches\nuse the latter.\n\n                        regards, tom lane", "msg_date": "Fri, 12 Jan 2024 11:56:32 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recovering from detoast-related catcache invalidations" }, { "msg_contents": "Xiaoran Wang <[email protected]> writes:\n>> Maybe, but that undocumented hack in SetHintBits seems completely\n>> unacceptable. Isn't there a cleaner way to make this check?\n\n> Maybe we don't need to call 'HeapTupleSatisfiesVisibility' to check if the\n> tuple has been deleted.\n> As the tuple's xmin must been committed, so we just need to check if its\n> xmax is committed,\n\nI'm not super thrilled with that. 
Something I realized last night is\nthat your proposal only works if \"ntp\" is pointing directly into the\ncatalog's disk buffers. If something earlier than this code had made\na local-memory copy of the catalog tuple, then it's possible that its\nheader fields (particularly xmax) are out of date compared to shared\nbuffers and would fail to tell us that some other process just\ninvalidated the tuple. Now in fact, with the current implementation\nof syscache_getnext() the result is actually a live tuple and so we\ncan expect to see any relevant updates. But I think we'd better add\nsome Asserts that that's so; and that also provides us with a way to\ncall HeapTupleSatisfiesVisibility fully legally, because we can get\nthe buffer reference out of the scan descriptor too.\n\nThis is uncomfortably much in bed with the tuple table slot code,\nperhaps, but I don't see a way to do it more cleanly unless we want\nto add some new provisions to that API. Andres, do you have any\nthoughts about that?\n\nAnyway, this approach gets rid of false positives, which is great\nfor performance and bad for testing. Code coverage says that now\nwe never hit the failure paths during regression tests, which is\nunsurprising, but I'm not very comfortable with leaving those paths\nunexercised. I tried to make an isolation test to exercise them,\nbut there's no good way at the SQL level to get a session to block\nduring the detoast step. LOCK TABLE on some catalog's toast table\nwould do, but we disallow it. I thought about adding a small C\nfunction to regress.so to take out such a lock, but we have no\ninfrastructure for referencing regress.so from isolation tests.\nWhat I ended up doing is adding a random failure about 0.1% of\nthe time in USE_ASSERT_CHECKING builds --- that's intellectually\nugly for sure, but doing better seems like way more work than\nit's worth.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 12 Jan 2024 15:14:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recovering from detoast-related catcache invalidations" }, { "msg_contents": "I wrote:\n> This is uncomfortably much in bed with the tuple table slot code,\n> perhaps, but I don't see a way to do it more cleanly unless we want\n> to add some new provisions to that API. Andres, do you have any\n> thoughts about that?\n\nOh! After nosing around a bit more I remembered systable_recheck_tuple,\nwhich is meant for exactly this purpose. So v4 attached.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 12 Jan 2024 15:47:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recovering from detoast-related catcache invalidations" }, { "msg_contents": "Great! That's what exactly we need.\n\nThe patch LGTM, +1\n\n\nTom Lane <[email protected]> 于2024年1月13日周六 04:47写道:\n\n> I wrote:\n> > This is uncomfortably much in bed with the tuple table slot code,\n> > perhaps, but I don't see a way to do it more cleanly unless we want\n> > to add some new provisions to that API. Andres, do you have any\n> > thoughts about that?\n>\n> Oh! After nosing around a bit more I remembered systable_recheck_tuple,\n> which is meant for exactly this purpose. So v4 attached.\n>\n> regards, tom lane\n>\n>\n\nGreat! 
That's what exactly we need.The patch LGTM,  +1Tom Lane <[email protected]> 于2024年1月13日周六 04:47写道:I wrote:\n> This is uncomfortably much in bed with the tuple table slot code,\n> perhaps, but I don't see a way to do it more cleanly unless we want\n> to add some new provisions to that API.  Andres, do you have any\n> thoughts about that?\n\nOh!  After nosing around a bit more I remembered systable_recheck_tuple,\nwhich is meant for exactly this purpose.  So v4 attached.\n\n                        regards, tom lane", "msg_date": "Sat, 13 Jan 2024 13:16:52 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recovering from detoast-related catcache invalidations" }, { "msg_contents": "Hmm, how about first checking if any invalidated shared messages have been\naccepted, then rechecking the tuple's visibility?\nIf there is no invalidated shared message accepted during\n'toast_flatten_tuple',\nthere is no need to do then visibility check, then it can save several\nCPU cycles.\n\n----\n if (inval_count != SharedInvalidMessageCounter &&\n!systable_recheck_tuple(scandesc, ntp))\n {\n heap_freetuple(dtp);\n return NULL;\n }\n----\n\n\nXiaoran Wang <[email protected]> 于2024年1月13日周六 13:16写道:\n\n> Great! That's what exactly we need.\n>\n> The patch LGTM, +1\n>\n>\n> Tom Lane <[email protected]> 于2024年1月13日周六 04:47写道:\n>\n>> I wrote:\n>> > This is uncomfortably much in bed with the tuple table slot code,\n>> > perhaps, but I don't see a way to do it more cleanly unless we want\n>> > to add some new provisions to that API. Andres, do you have any\n>> > thoughts about that?\n>>\n>> Oh! After nosing around a bit more I remembered systable_recheck_tuple,\n>> which is meant for exactly this purpose. So v4 attached.\n>>\n>> regards, tom lane\n>>\n>>\n\nHmm, how about first checking if any invalidated shared messages have been accepted, then rechecking the tuple's visibility?If there is no invalidated shared message accepted during 'toast_flatten_tuple', there is no need to do then visibility check, then it can save severalCPU cycles.----   if (inval_count != SharedInvalidMessageCounter && !systable_recheck_tuple(scandesc, ntp))   {              heap_freetuple(dtp);              return NULL;    }----Xiaoran Wang <[email protected]> 于2024年1月13日周六 13:16写道:Great! That's what exactly we need.The patch LGTM,  +1Tom Lane <[email protected]> 于2024年1月13日周六 04:47写道:I wrote:\n> This is uncomfortably much in bed with the tuple table slot code,\n> perhaps, but I don't see a way to do it more cleanly unless we want\n> to add some new provisions to that API.  Andres, do you have any\n> thoughts about that?\n\nOh!  After nosing around a bit more I remembered systable_recheck_tuple,\nwhich is meant for exactly this purpose.  
So v4 attached.\n\n                        regards, tom lane", "msg_date": "Sat, 13 Jan 2024 17:02:15 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recovering from detoast-related catcache invalidations" }, { "msg_contents": "Xiaoran Wang <[email protected]> writes:\n> Hmm, how about first checking if any invalidated shared messages have been\n> accepted, then rechecking the tuple's visibility?\n> If there is no invalidated shared message accepted during\n> 'toast_flatten_tuple',\n> there is no need to do then visibility check, then it can save several\n> CPU cycles.\n\nMeh, I'd just as soon not add the additional dependency/risk of bugs.\nThis is an expensive and seldom-taken code path, so I don't think\nshaving a few cycles is really important.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Jan 2024 12:18:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recovering from detoast-related catcache invalidations" }, { "msg_contents": "I wrote:\n> Xiaoran Wang <[email protected]> writes:\n>> Hmm, how about first checking if any invalidated shared messages have been\n>> accepted, then rechecking the tuple's visibility?\n>> If there is no invalidated shared message accepted during\n>> 'toast_flatten_tuple',\n>> there is no need to do then visibility check, then it can save several\n>> CPU cycles.\n\n> Meh, I'd just as soon not add the additional dependency/risk of bugs.\n> This is an expensive and seldom-taken code path, so I don't think\n> shaving a few cycles is really important.\n\nIt occurred to me that this idea might be more interesting if we\ncould encapsulate it right into systable_recheck_tuple: something\nlike having systable_beginscan capture the current\nSharedInvalidMessageCounter and save it in the SysScanDesc struct,\nthen compare in systable_recheck_tuple to possibly short-circuit\nthat work. This'd eliminate one of the main bug hazards in the\nidea, namely that you might capture SharedInvalidMessageCounter too\nlate, after something's already happened. However, the whole idea\nonly works for catalogs that have catcaches, and the other users of\nsystable_recheck_tuple are interested in pg_depend which doesn't.\nSo that put a damper on my enthusiasm for the idea.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Jan 2024 14:12:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recovering from detoast-related catcache invalidations" }, { "msg_contents": "On Fri, Jan 12, 2024 at 03:47:13PM -0500, Tom Lane wrote:\n> I wrote:\n> > This is uncomfortably much in bed with the tuple table slot code,\n> > perhaps, but I don't see a way to do it more cleanly unless we want\n> > to add some new provisions to that API. Andres, do you have any\n> > thoughts about that?\n> \n> Oh! After nosing around a bit more I remembered systable_recheck_tuple,\n> which is meant for exactly this purpose. So v4 attached.\n\nsystable_recheck_tuple() is blind to heap_inplace_update(), so it's not a\ngeneral proxy for invalidation messages. 
The commit for $SUBJECT (ad98fb1)\ndoesn't create any new malfunctions, but I expect the systable_recheck_tuple()\npart will change again before the heap_inplace_update() story is over\n(https://postgr.es/m/flat/CAMp+ueZQz3yDk7qg42hk6-9gxniYbp-=bG2mgqecErqR5gGGOA@mail.gmail.com).\n\n\n", "msg_date": "Sun, 14 Jan 2024 12:14:11 -0800", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recovering from detoast-related catcache invalidations" }, { "msg_contents": "This is an interesting idea.\n Although some catalog tables are not in catcaches,\nsuch as pg_depend, when scanning them, if there is any\nSharedInvalidationMessage, the CatalogSnapshot\nwill be invalidated and recreated (\"RelationInvalidatesSnapshotsOnly\"\nin syscache.c)\nMaybe during the system_scan, it receives the SharedInvalidationMessages\nand returns the tuples which\nare out of date. systable_recheck_tuple is used in dependency.c for such\ncase.\n\n\n\nTom Lane <[email protected]> 于2024年1月14日周日 03:12写道:\n\n> I wrote:\n> > Xiaoran Wang <[email protected]> writes:\n> >> Hmm, how about first checking if any invalidated shared messages have\n> been\n> >> accepted, then rechecking the tuple's visibility?\n> >> If there is no invalidated shared message accepted during\n> >> 'toast_flatten_tuple',\n> >> there is no need to do then visibility check, then it can save several\n> >> CPU cycles.\n>\n> > Meh, I'd just as soon not add the additional dependency/risk of bugs.\n> > This is an expensive and seldom-taken code path, so I don't think\n> > shaving a few cycles is really important.\n>\n> It occurred to me that this idea might be more interesting if we\n> could encapsulate it right into systable_recheck_tuple: something\n> like having systable_beginscan capture the current\n> SharedInvalidMessageCounter and save it in the SysScanDesc struct,\n> then compare in systable_recheck_tuple to possibly short-circuit\n> that work. This'd eliminate one of the main bug hazards in the\n> idea, namely that you might capture SharedInvalidMessageCounter too\n> late, after something's already happened. However, the whole idea\n> only works for catalogs that have catcaches, and the other users of\n> systable_recheck_tuple are interested in pg_depend which doesn't.\n> So that put a damper on my enthusiasm for the idea.\n>\n> regards, tom lane\n>\n\nThis is an interesting idea. Although some catalog tables are not in catcaches,such as pg_depend, when scanning them, if there is any SharedInvalidationMessage, the CatalogSnapshotwill be invalidated and recreated (\"RelationInvalidatesSnapshotsOnly\" in  syscache.c)Maybe during the system_scan, it receives the SharedInvalidationMessages and returns the tuples whichare out of date. 
systable_recheck_tuple is used in dependency.c for such case.Tom Lane <[email protected]> 于2024年1月14日周日 03:12写道:I wrote:\n> Xiaoran Wang <[email protected]> writes:\n>> Hmm, how about first checking if any invalidated shared messages have been\n>> accepted, then rechecking the tuple's visibility?\n>> If there is no invalidated shared message accepted during\n>> 'toast_flatten_tuple',\n>> there is no need to do then visibility check, then it can save several\n>> CPU cycles.\n\n> Meh, I'd just as soon not add the additional dependency/risk of bugs.\n> This is an expensive and seldom-taken code path, so I don't think\n> shaving a few cycles is really important.\n\nIt occurred to me that this idea might be more interesting if we\ncould encapsulate it right into systable_recheck_tuple: something\nlike having systable_beginscan capture the current\nSharedInvalidMessageCounter and save it in the SysScanDesc struct,\nthen compare in systable_recheck_tuple to possibly short-circuit\nthat work.  This'd eliminate one of the main bug hazards in the\nidea, namely that you might capture SharedInvalidMessageCounter too\nlate, after something's already happened.  However, the whole idea\nonly works for catalogs that have catcaches, and the other users of\nsystable_recheck_tuple are interested in pg_depend which doesn't.\nSo that put a damper on my enthusiasm for the idea.\n\n                        regards, tom lane", "msg_date": "Mon, 15 Jan 2024 11:28:23 +0800", "msg_from": "Xiaoran Wang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recovering from detoast-related catcache invalidations" }, { "msg_contents": "On Sun, Jan 14, 2024 at 12:14:11PM -0800, Noah Misch wrote:\n> On Fri, Jan 12, 2024 at 03:47:13PM -0500, Tom Lane wrote:\n> > Oh! After nosing around a bit more I remembered systable_recheck_tuple,\n> > which is meant for exactly this purpose. So v4 attached.\n> \n> systable_recheck_tuple() is blind to heap_inplace_update(), so it's not a\n> general proxy for invalidation messages. The commit for $SUBJECT (ad98fb1)\n> doesn't create any new malfunctions, but I expect the systable_recheck_tuple()\n> part will change again before the heap_inplace_update() story is over\n> (https://postgr.es/m/flat/CAMp+ueZQz3yDk7qg42hk6-9gxniYbp-=bG2mgqecErqR5gGGOA@mail.gmail.com).\n\nCommit f9f47f0 (2024-06-27) addressed inplace updates here.\n\n\n", "msg_date": "Tue, 24 Sep 2024 14:20:36 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recovering from detoast-related catcache invalidations" } ]
[ { "msg_contents": "Add new pg_walsummary tool.\n\nThis can dump the contents of the WAL summary files found in\npg_wal/summaries. Normally, this shouldn't really be something anyone\nneeds to do, but it may be needed for debugging problems with\nincremental backup, or could possibly be useful to external tools.\n\nDiscussion: http://postgr.es/m/CA+Tgmobvqqj-DW9F7uUzT-cQqs6wcVb-Xhs=w=hzJnXSE-kRGw@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/ee1bfd168390bc843c6704d16e909692c0a79f27\n\nModified Files\n--------------\ndoc/src/sgml/ref/allfiles.sgml | 1 +\ndoc/src/sgml/ref/pg_walsummary.sgml | 122 +++++++++++++++\ndoc/src/sgml/reference.sgml | 1 +\nsrc/bin/Makefile | 1 +\nsrc/bin/meson.build | 1 +\nsrc/bin/pg_walsummary/.gitignore | 1 +\nsrc/bin/pg_walsummary/Makefile | 48 ++++++\nsrc/bin/pg_walsummary/meson.build | 30 ++++\nsrc/bin/pg_walsummary/nls.mk | 6 +\nsrc/bin/pg_walsummary/pg_walsummary.c | 280 ++++++++++++++++++++++++++++++++++\nsrc/bin/pg_walsummary/t/001_basic.pl | 19 +++\nsrc/bin/pg_walsummary/t/002_blocks.pl | 88 +++++++++++\nsrc/tools/pgindent/typedefs.list | 2 +\n13 files changed, 600 insertions(+)", "msg_date": "Thu, 11 Jan 2024 17:56:36 +0000", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql: Add new pg_walsummary tool." }, { "msg_contents": "On Thu, Jan 11, 2024 at 12:56 PM Robert Haas <[email protected]> wrote:\n> Add new pg_walsummary tool.\n\nculicidae is unhappy with this, but I don't yet understand why. The output is:\n\n# Failed test 'stdout shows block 0 modified'\n# at t/002_blocks.pl line 85.\n# 'TS 1663, DB 5, REL 16384, FORK main: blocks 0..1'\n# doesn't match '(?^m:FORK main: block 0$)'\n\nThe test is expecting block 0 to be modified, but block 1 to be\nunmodified, but here, both blocks are modified. That would maybe make\nsense if this machine had a really big block size, but that doesn't\nseem to be the case. Or, maybe the test has erred in failing to\ndisable autovacuum -- though it does take other precautions to try to\nprevent that from interfering.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Jan 2024 13:49:12 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new pg_walsummary tool." }, { "msg_contents": "On Thu, Jan 11, 2024 at 1:49 PM Robert Haas <[email protected]> wrote:\n> On Thu, Jan 11, 2024 at 12:56 PM Robert Haas <[email protected]> wrote:\n> > Add new pg_walsummary tool.\n>\n> culicidae is unhappy with this, but I don't yet understand why. The output is:\n>\n> # Failed test 'stdout shows block 0 modified'\n> # at t/002_blocks.pl line 85.\n> # 'TS 1663, DB 5, REL 16384, FORK main: blocks 0..1'\n> # doesn't match '(?^m:FORK main: block 0$)'\n>\n> The test is expecting block 0 to be modified, but block 1 to be\n> unmodified, but here, both blocks are modified. That would maybe make\n> sense if this machine had a really big block size, but that doesn't\n> seem to be the case. Or, maybe the test has erred in failing to\n> disable autovacuum -- though it does take other precautions to try to\n> prevent that from interfering.\n\nIt's not autovacuum, the test is flaky. 
I ran it in a loop locally\nuntil it failed, and then ran pg_waldump, finding this:\n\nrmgr: Heap len (rec/tot): 73/ 8249, tx: 738, lsn:\n0/0158AEE8, prev 0/01588EB8, desc: UPDATE old_xmax: 738, old_off: 2,\nold_infobits: [], flags: 0x03, new_xmax: 0, new_off: 76, blkref #0:\nrel 1663/5/16384 blk 1 FPW, blkref #1: rel 1663/5/16384 blk 0\n\nI'm slightly puzzled, here. I would have expected that if I inserted a\nbunch of records into the table and then updated one of them, the new\nrecord would have gone into a new page at the end of the table, and\nalso that even if it didn't extend the relation, it would go into the\nsame page every time the test was run. But here the behavior seems to\nbe nondeterministic.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Jan 2024 13:58:18 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new pg_walsummary tool." } ]
[ { "msg_contents": "I tried to post this elsewhere but it got moderated, so retrying:\n\nOn Thu, Jan 11, 2024 at 1:49 PM Robert Haas <[email protected]> wrote:\n> On Thu, Jan 11, 2024 at 12:56 PM Robert Haas <[email protected]> wrote:\n> > Add new pg_walsummary tool.\n>\n> culicidae is unhappy with this, but I don't yet understand why. The output is:\n>\n> # Failed test 'stdout shows block 0 modified'\n> # at t/002_blocks.pl line 85.\n> # 'TS 1663, DB 5, REL 16384, FORK main: blocks 0..1'\n> # doesn't match '(?^m:FORK main: block 0$)'\n>\n> The test is expecting block 0 to be modified, but block 1 to be\n> unmodified, but here, both blocks are modified. That would maybe make\n> sense if this machine had a really big block size, but that doesn't\n> seem to be the case. Or, maybe the test has erred in failing to\n> disable autovacuum -- though it does take other precautions to try to\n> prevent that from interfering.\n\nIt's not autovacuum, the test is flaky. I ran it in a loop locally\nuntil it failed, and then ran pg_waldump, finding this:\n\nrmgr: Heap len (rec/tot): 73/ 8249, tx: 738, lsn:\n0/0158AEE8, prev 0/01588EB8, desc: UPDATE old_xmax: 738, old_off: 2,\nold_infobits: [], flags: 0x03, new_xmax: 0, new_off: 76, blkref #0:\nrel 1663/5/16384 blk 1 FPW, blkref #1: rel 1663/5/16384 blk 0\n\nI'm slightly puzzled, here. I would have expected that if I inserted a\nbunch of records into the table and then updated one of them, the new\nrecord would have gone into a new page at the end of the table, and\nalso that even if it didn't extend the relation, it would go into the\nsame page every time the test was run. But here the behavior seems to\nbe nondeterministic.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Jan 2024 14:12:20 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "buildfarm failures in pg_walsummary checks" } ]
[ { "msg_contents": "I recently began trying to write documentation for the dynamic shared\nmemory registry feature [0], and I noticed that the \"Shared Memory and\nLWLocks\" section of the documentation might need some improvement. At\nleast, I felt that it would be hard to add any new content to this section\nwithout making it very difficult to follow.\n\nConcretely, I am proposing breaking it into two sections: one for shared\nmemory and one for LWLocks. Furthermore, the LWLocks section will be split\ninto two: one for requesting locks at server startup and one for requesting\nlocks after server startup. I intend to also split the shared memory\nsection into at-startup and after-startup sections if/when the dynamic\nshared memory registry feature is committed.\n\nBesides this restructuring, I felt that certain parts of this documentation\ncould benefit from rephrasing and/or additional detail.\n\nThoughts?\n\n[0] https://postgr.es/m/20231205034647.GA2705267%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 11 Jan 2024 22:14:30 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "reorganize \"Shared Memory and LWLocks\" section of docs" }, { "msg_contents": "Hi,\n\n> I recently began trying to write documentation for the dynamic shared\n> memory registry feature [0], and I noticed that the \"Shared Memory and\n> LWLocks\" section of the documentation might need some improvement.\n\nI know that feeling.\n\n> Thoughts?\n\n\"\"\"\nAny registered shmem_startup_hook will be executed shortly after each\nbackend attaches to shared memory.\n\"\"\"\n\nIMO the word \"each\" here can give the wrong impression as if there are\ncertain guarantees about synchronization between backends. Maybe we\nshould change this to simply \"... will be executed shortly after\n[the?] backend attaches...\"\n\n\"\"\"\nshould ensure that only one process allocates a new tranche_id\n(LWLockNewTrancheId) and initializes each new LWLock\n(LWLockInitialize).\n\"\"\"\n\nPersonally I think that reminding the corresponding function name here\nis redundant and complicates reading just a bit. But maybe it's just\nme.\n\nExcept for these nitpicks the patch looks good.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 12 Jan 2024 17:12:28 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reorganize \"Shared Memory and LWLocks\" section of docs" }, { "msg_contents": "Thanks for reviewing.\n\nOn Fri, Jan 12, 2024 at 05:12:28PM +0300, Aleksander Alekseev wrote:\n> \"\"\"\n> Any registered shmem_startup_hook will be executed shortly after each\n> backend attaches to shared memory.\n> \"\"\"\n> \n> IMO the word \"each\" here can give the wrong impression as if there are\n> certain guarantees about synchronization between backends. Maybe we\n> should change this to simply \"... will be executed shortly after\n> [the?] backend attaches...\"\n\nI see what you mean, but I don't think the problem is the word \"each.\" I\nthink the problem is the use of passive voice. 
What do you think about\nsomething like\n\n\tEach backend will execute the registered shmem_startup_hook shortly\n\tafter it attaches to shared memory.\n\n> \"\"\"\n> should ensure that only one process allocates a new tranche_id\n> (LWLockNewTrancheId) and initializes each new LWLock\n> (LWLockInitialize).\n> \"\"\"\n> \n> Personally I think that reminding the corresponding function name here\n> is redundant and complicates reading just a bit. But maybe it's just\n> me.\n\nYeah, I waffled on this one. I don't mind removing it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 12 Jan 2024 09:46:50 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reorganize \"Shared Memory and LWLocks\" section of docs" }, { "msg_contents": "On Fri, Jan 12, 2024 at 09:46:50AM -0600, Nathan Bossart wrote:\n> On Fri, Jan 12, 2024 at 05:12:28PM +0300, Aleksander Alekseev wrote:\n>> \"\"\"\n>> Any registered shmem_startup_hook will be executed shortly after each\n>> backend attaches to shared memory.\n>> \"\"\"\n>> \n>> IMO the word \"each\" here can give the wrong impression as if there are\n>> certain guarantees about synchronization between backends. Maybe we\n>> should change this to simply \"... will be executed shortly after\n>> [the?] backend attaches...\"\n> \n> I see what you mean, but I don't think the problem is the word \"each.\" I\n> think the problem is the use of passive voice. What do you think about\n> something like\n> \n> \tEach backend will execute the registered shmem_startup_hook shortly\n> \tafter it attaches to shared memory.\n> \n>> \"\"\"\n>> should ensure that only one process allocates a new tranche_id\n>> (LWLockNewTrancheId) and initializes each new LWLock\n>> (LWLockInitialize).\n>> \"\"\"\n>> \n>> Personally I think that reminding the corresponding function name here\n>> is redundant and complicates reading just a bit. But maybe it's just\n>> me.\n> \n> Yeah, I waffled on this one. I don't mind removing it.\n\nHere is a new version of the patch with these changes.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 12 Jan 2024 11:23:50 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reorganize \"Shared Memory and LWLocks\" section of docs" }, { "msg_contents": "Hi,\n\nThanks for the updated patch.\n\n> > I see what you mean, but I don't think the problem is the word \"each.\" I\n> > think the problem is the use of passive voice. What do you think about\n> > something like\n> >\n> > Each backend will execute the registered shmem_startup_hook shortly\n> > after it attaches to shared memory.\n\nThat's much better, thanks.\n\nI think the patch could use another pair of eyes, ideally from a\nnative English speaker. But if no one will express any objections for\na while I suggest merging it.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Sat, 13 Jan 2024 13:49:08 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reorganize \"Shared Memory and LWLocks\" section of docs" }, { "msg_contents": "On Sat, Jan 13, 2024 at 01:49:08PM +0300, Aleksander Alekseev wrote:\n> That's much better, thanks.\n> \n> I think the patch could use another pair of eyes, ideally from a\n> native English speaker. But if no one will express any objections for\n> a while I suggest merging it.\n\nGreat. 
I've attached a v3 with a couple of fixes suggested in the other\nthread [0]. I'll wait a little while longer in case anyone else wants to\ntake a look.\n\n[0] https://postgr.es/m/ZaF6UpYImGqVIhVp%40toroid.org\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 13 Jan 2024 15:28:15 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reorganize \"Shared Memory and LWLocks\" section of docs" }, { "msg_contents": "On Sun, Jan 14, 2024 at 2:58 AM Nathan Bossart <[email protected]> wrote:\n>\n> Great. I've attached a v3 with a couple of fixes suggested in the other\n> thread [0]. I'll wait a little while longer in case anyone else wants to\n> take a look.\n\nThe v3 patch looks good to me except for a nitpick: the input\nparameter for RequestAddinShmemSpace is 'Size' not 'int'\n\n <programlisting>\n void RequestAddinShmemSpace(int size)\n </programlisting>\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 16 Jan 2024 10:02:15 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reorganize \"Shared Memory and LWLocks\" section of docs" }, { "msg_contents": "On Tue, Jan 16, 2024 at 10:02:15AM +0530, Bharath Rupireddy wrote:\n> The v3 patch looks good to me except for a nitpick: the input\n> parameter for RequestAddinShmemSpace is 'Size' not 'int'\n> \n> <programlisting>\n> void RequestAddinShmemSpace(int size)\n> </programlisting>\n\nHah, I think this mistake is nearly old enough to vote (e0dece1, 5f78aa5).\nGood catch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 16 Jan 2024 08:20:19 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reorganize \"Shared Memory and LWLocks\" section of docs" }, { "msg_contents": "On Tue, Jan 16, 2024 at 08:20:19AM -0600, Nathan Bossart wrote:\n> On Tue, Jan 16, 2024 at 10:02:15AM +0530, Bharath Rupireddy wrote:\n>> The v3 patch looks good to me except for a nitpick: the input\n>> parameter for RequestAddinShmemSpace is 'Size' not 'int'\n>> \n>> <programlisting>\n>> void RequestAddinShmemSpace(int size)\n>> </programlisting>\n> \n> Hah, I think this mistake is nearly old enough to vote (e0dece1, 5f78aa5).\n> Good catch.\n\nI fixed this in v4.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 16 Jan 2024 09:52:52 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reorganize \"Shared Memory and LWLocks\" section of docs" }, { "msg_contents": "On Tue, Jan 16, 2024 at 9:22 PM Nathan Bossart <[email protected]> wrote:\n>\n> I fixed this in v4.\n\nLGTM.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 17 Jan 2024 06:48:37 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reorganize \"Shared Memory and LWLocks\" section of docs" }, { "msg_contents": "On Wed, Jan 17, 2024 at 06:48:37AM +0530, Bharath Rupireddy wrote:\n> LGTM.\n\nCommitted. 
Thanks for reviewing!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 19 Jan 2024 11:22:06 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reorganize \"Shared Memory and LWLocks\" section of docs" } ]
[ { "msg_contents": "Hi,\n\nRETURNING is usually tagged with appropriate tags, such as <LITERAL>, \nbut not in the 'query' section of COPY.\n\nhttps://www.postgresql.org/docs/devel/sql-copy.html\n\nWould it be better to put <LITERAL> here as well?\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation", "msg_date": "Fri, 12 Jan 2024 14:56:45 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "doc: add LITERAL tag to RETURNING" }, { "msg_contents": "On Fri, Jan 12, 2024 at 11:27 AM torikoshia <[email protected]> wrote:\n>\n> Hi,\n>\n> RETURNING is usually tagged with appropriate tags, such as <LITERAL>,\n> but not in the 'query' section of COPY.\n\nI have the same observation.\n\n>\n> https://www.postgresql.org/docs/devel/sql-copy.html\n>\n> Would it be better to put <LITERAL> here as well?\n>\n\nThe patch looks good.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 12 Jan 2024 11:52:26 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc: add LITERAL tag to RETURNING" }, { "msg_contents": "On 2024-Jan-12, Ashutosh Bapat wrote:\n\n> On Fri, Jan 12, 2024 at 11:27 AM torikoshia <[email protected]> wrote:\n> >\n> > RETURNING is usually tagged with appropriate tags, such as <LITERAL>,\n> > but not in the 'query' section of COPY.\n\n> The patch looks good.\n\nGood catch, pushed. It has user-visible effect, so I backpatched it.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"People get annoyed when you try to debug them.\" (Larry Wall)\n\n\n", "msg_date": "Fri, 12 Jan 2024 12:56:35 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doc: add LITERAL tag to RETURNING" }, { "msg_contents": "On 2024-01-12 20:56, Alvaro Herrera wrote:\n> On 2024-Jan-12, Ashutosh Bapat wrote:\n> \n>> On Fri, Jan 12, 2024 at 11:27 AM torikoshia \n>> <[email protected]> wrote:\n>> >\n>> > RETURNING is usually tagged with appropriate tags, such as <LITERAL>,\n>> > but not in the 'query' section of COPY.\n> \n>> The patch looks good.\n> \n> Good catch, pushed. It has user-visible effect, so I backpatched it.\n\nThanks for your review and push.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n", "msg_date": "Mon, 15 Jan 2024 10:48:51 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: doc: add LITERAL tag to RETURNING" } ]
[ { "msg_contents": "Hi\n\nI have reported very memory expensive pattern:\n\nCREATE OR REPLACE FUNCTION public.fx(iter integer)\n RETURNS void\n LANGUAGE plpgsql\nAS $function$\ndeclare\n c cursor(m bigint) for select distinct i from generate_series(1, m) g(i);\n t bigint;\n s bigint;\nbegin\n for i in 1..iter\n loop\n open c(m := i * 10000);\n s := 0;\n loop\n fetch c into t;\n exit when not found;\n s := s + t;\n end loop;\n close c; raise notice '%=%', i, s;\n end loop;\nend;\n$function$\n;\n\nThis script takes for 100 iterations 100MB\n\nbut rewritten\n\nCREATE OR REPLACE FUNCTION public.fx(iter integer)\n RETURNS void\n LANGUAGE plpgsql\nAS $function$\ndeclare\n t bigint;\n s bigint;\nbegin\n for i in 1..iter\n loop\n s := 0;\n for t in select ic from generate_series(1, i * 10000) g(ic)\n loop\n s := s + t;\n end loop;\n raise notice '%=%', i, s;\n end loop;\nend;\n$function$\n\ntakes lot of megabytes of memory too.\n\nRegards\n\nPavel\n\nHiI have reported very memory expensive pattern:CREATE OR REPLACE FUNCTION public.fx(iter integer) RETURNS void LANGUAGE plpgsqlAS $function$declare  c cursor(m bigint) for select distinct i from generate_series(1, m) g(i);  t bigint;  s bigint;begin  for i in 1..iter  loop    open c(m := i * 10000);    s := 0;    loop      fetch c into t;      exit when not found;      s := s + t;    end loop;    close c; raise notice '%=%', i, s;  end loop;end;$function$;This script takes for 100 iterations 100MB but rewrittenCREATE OR REPLACE FUNCTION public.fx(iter integer) RETURNS void LANGUAGE plpgsqlAS $function$declare  t bigint;  s bigint;begin  for i in 1..iter  loop    s := 0;    for t in select  ic from generate_series(1, i * 10000) g(ic)    loop      s := s + t;    end loop;    raise notice '%=%', i, s;  end loop;end;$function$takes lot of megabytes of memory too.RegardsPavel", "msg_date": "Fri, 12 Jan 2024 10:27:25 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "plpgsql memory leaks" }, { "msg_contents": "pá 12. 1. 2024 v 10:27 odesílatel Pavel Stehule <[email protected]>\nnapsal:\n\n> Hi\n>\n> I have reported very memory expensive pattern:\n>\n> CREATE OR REPLACE FUNCTION public.fx(iter integer)\n> RETURNS void\n> LANGUAGE plpgsql\n> AS $function$\n> declare\n> c cursor(m bigint) for select distinct i from generate_series(1, m) g(i);\n> t bigint;\n> s bigint;\n> begin\n> for i in 1..iter\n> loop\n> open c(m := i * 10000);\n> s := 0;\n> loop\n> fetch c into t;\n> exit when not found;\n> s := s + t;\n> end loop;\n> close c; raise notice '%=%', i, s;\n> end loop;\n> end;\n> $function$\n> ;\n>\n> This script takes for 100 iterations 100MB\n>\n> but rewritten\n>\n> CREATE OR REPLACE FUNCTION public.fx(iter integer)\n> RETURNS void\n> LANGUAGE plpgsql\n> AS $function$\n> declare\n> t bigint;\n> s bigint;\n> begin\n> for i in 1..iter\n> loop\n> s := 0;\n> for t in select ic from generate_series(1, i * 10000) g(ic)\n> loop\n> s := s + t;\n> end loop;\n> raise notice '%=%', i, s;\n> end loop;\n> end;\n> $function$\n>\n> takes lot of megabytes of memory too.\n>\n\nThe megabytes leaks are related to JIT. With JIT off the memory consumption\nis significantly less although there are some others probably.\n\nregards\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n\npá 12. 1. 
2024 v 10:27 odesílatel Pavel Stehule <[email protected]> napsal:HiI have reported very memory expensive pattern:CREATE OR REPLACE FUNCTION public.fx(iter integer) RETURNS void LANGUAGE plpgsqlAS $function$declare  c cursor(m bigint) for select distinct i from generate_series(1, m) g(i);  t bigint;  s bigint;begin  for i in 1..iter  loop    open c(m := i * 10000);    s := 0;    loop      fetch c into t;      exit when not found;      s := s + t;    end loop;    close c; raise notice '%=%', i, s;  end loop;end;$function$;This script takes for 100 iterations 100MB but rewrittenCREATE OR REPLACE FUNCTION public.fx(iter integer) RETURNS void LANGUAGE plpgsqlAS $function$declare  t bigint;  s bigint;begin  for i in 1..iter  loop    s := 0;    for t in select  ic from generate_series(1, i * 10000) g(ic)    loop      s := s + t;    end loop;    raise notice '%=%', i, s;  end loop;end;$function$takes lot of megabytes of memory too.The megabytes leaks are related to JIT. With JIT off the memory consumption is significantly less  although there are some others probably.regardsPavelRegardsPavel", "msg_date": "Fri, 12 Jan 2024 11:02:14 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql memory leaks" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 12, 2024 at 11:02:14AM +0100, Pavel Stehule wrote:\n> p� 12. 1. 2024 v 10:27 odes�latel Pavel Stehule <[email protected]>\n> napsal:\n> \n> > Hi\n> >\n> > I have reported very memory expensive pattern:\n\n[...]\n\n> > takes lot of megabytes of memory too.\n> \n> The megabytes leaks are related to JIT. With JIT off the memory consumption\n> is significantly less although there are some others probably.\n\nI cannot readily reproduce this.\n\nWhich version of Postgres is this and on which platform/distribution?\n\nDid you try keep jit on but set jit_inline_above_cost to 0?\n\nThe back-branches have a fix for the above case, i.e. llvmjit memleaks\nthat can be worked-around by setting jit_inline_above_cost=0.\n\n\nMichael\n\n\n", "msg_date": "Fri, 12 Jan 2024 11:54:29 +0100", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql memory leaks" }, { "msg_contents": "pá 12. 1. 2024 v 11:54 odesílatel Michael Banck <[email protected]> napsal:\n\n> Hi,\n>\n> On Fri, Jan 12, 2024 at 11:02:14AM +0100, Pavel Stehule wrote:\n> > pá 12. 1. 2024 v 10:27 odesílatel Pavel Stehule <[email protected]\n> >\n> > napsal:\n> >\n> > > Hi\n> > >\n> > > I have reported very memory expensive pattern:\n>\n> [...]\n>\n> > > takes lot of megabytes of memory too.\n> >\n> > The megabytes leaks are related to JIT. With JIT off the memory\n> consumption\n> > is significantly less although there are some others probably.\n>\n> I cannot readily reproduce this.\n>\n> Which version of Postgres is this and on which platform/distribution?\n>\n\nIt was tested on master branch (pg 17) on Fedora 39\n\n>\n> Did you try keep jit on but set jit_inline_above_cost to 0?\n>\n> The back-branches have a fix for the above case, i.e. llvmjit memleaks\n> that can be worked-around by setting jit_inline_above_cost=0.\n>\n\nI'll do recheck\n\nPavel\n\n\n\n>\n>\n> Michael\n>\n\npá 12. 1. 2024 v 11:54 odesílatel Michael Banck <[email protected]> napsal:Hi,\n\nOn Fri, Jan 12, 2024 at 11:02:14AM +0100, Pavel Stehule wrote:\n> pá 12. 1. 
2024 v 10:27 odesílatel Pavel Stehule <[email protected]>\n> napsal:\n> \n> > Hi\n> >\n> > I have reported very memory expensive pattern:\n\n[...]\n\n> > takes lot of megabytes of memory too.\n> \n> The megabytes leaks are related to JIT. With JIT off the memory consumption\n> is significantly less  although there are some others probably.\n\nI cannot readily reproduce this.\n\nWhich version of Postgres is this and on which platform/distribution?It was tested on master branch (pg 17) on Fedora 39 \n\nDid you try keep jit on but set jit_inline_above_cost to 0?\n\nThe back-branches have a fix for the above case, i.e. llvmjit memleaks\nthat can be worked-around by setting jit_inline_above_cost=0.I'll do recheckPavel \n\n\nMichael", "msg_date": "Fri, 12 Jan 2024 13:35:24 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql memory leaks" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 12, 2024 at 01:35:24PM +0100, Pavel Stehule wrote:\n> p� 12. 1. 2024 v 11:54 odes�latel Michael Banck <[email protected]> napsal:\n> > Which version of Postgres is this and on which platform/distribution?\n> \n> It was tested on master branch (pg 17) on Fedora 39\n> \n> > Did you try keep jit on but set jit_inline_above_cost to 0?\n> >\n> > The back-branches have a fix for the above case, i.e. llvmjit memleaks\n> > that can be worked-around by setting jit_inline_above_cost=0.\n\nI got that wrong, it needs to be -1 to disable it.\n\nBut if you are already running the master branch, it is probably a\nseparate issue.\n\n\nMichael\n\n\n", "msg_date": "Fri, 12 Jan 2024 14:53:45 +0100", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql memory leaks" }, { "msg_contents": "pá 12. 1. 2024 v 14:53 odesílatel Michael Banck <[email protected]> napsal:\n\n> Hi,\n>\n> On Fri, Jan 12, 2024 at 01:35:24PM +0100, Pavel Stehule wrote:\n> > pá 12. 1. 2024 v 11:54 odesílatel Michael Banck <[email protected]> napsal:\n> > > Which version of Postgres is this and on which platform/distribution?\n> >\n> > It was tested on master branch (pg 17) on Fedora 39\n> >\n> > > Did you try keep jit on but set jit_inline_above_cost to 0?\n> > >\n> > > The back-branches have a fix for the above case, i.e. llvmjit memleaks\n> > > that can be worked-around by setting jit_inline_above_cost=0.\n>\n> I got that wrong, it needs to be -1 to disable it.\n>\n> But if you are already running the master branch, it is probably a\n> separate issue.\n>\n\nI tested code\n\nCREATE OR REPLACE FUNCTION public.fx(iter integer)\n RETURNS void\n LANGUAGE plpgsql\nAS $function$\ndeclare\n c cursor(m bigint) for select distinct i from generate_series(1, m) g(i);\n t bigint;\n s bigint;\nbegin\n for i in 1..iter\n loop\n s := 0;\n for r in c(i*10000)\n loop\n s := s + r.i;\n end loop;\n raise notice '%=%', i, s;\n end loop;\nend;\n$function$\n\ndefault master branch - res 190MB ram\njit_inline_above_cost = -1 doesn't helps\ndisabling JIT doesn't helps too,\n\nso it looks like the wrong hypothesis , and the problem is maybe somewhere\nelse :-/\n\nRegards\n\nPavel\n\n\n\n>\n>\n> Michael\n>\n\npá 12. 1. 2024 v 14:53 odesílatel Michael Banck <[email protected]> napsal:Hi,\n\nOn Fri, Jan 12, 2024 at 01:35:24PM +0100, Pavel Stehule wrote:\n> pá 12. 1. 
2024 v 11:54 odesílatel Michael Banck <[email protected]> napsal:\n> > Which version of Postgres is this and on which platform/distribution?\n> \n> It was tested on master branch (pg 17) on Fedora 39\n> \n> > Did you try keep jit on but set jit_inline_above_cost to 0?\n> >\n> > The back-branches have a fix for the above case, i.e. llvmjit memleaks\n> > that can be worked-around by setting jit_inline_above_cost=0.\n\nI got that wrong, it needs to be -1 to disable it.\n\nBut if you are already running the master branch, it is probably a\nseparate issue.I tested codeCREATE OR REPLACE FUNCTION public.fx(iter integer) RETURNS void LANGUAGE plpgsqlAS $function$declare  c cursor(m bigint) for select distinct i from generate_series(1, m) g(i);  t bigint;  s bigint;begin  for i in 1..iter  loop    s := 0;     for r in c(i*10000)    loop      s := s + r.i;    end loop;    raise notice '%=%', i, s;  end loop;end;$function$default master branch - res 190MB ramjit_inline_above_cost = -1 doesn't helpsdisabling JIT doesn't helps too,so it looks like the wrong hypothesis , and the problem is maybe somewhere else :-/RegardsPavel \n\n\nMichael", "msg_date": "Fri, 12 Jan 2024 20:09:14 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql memory leaks" }, { "msg_contents": "Pavel Stehule <[email protected]> writes:\n> default master branch - res 190MB ram\n> jit_inline_above_cost = -1 doesn't helps\n> disabling JIT doesn't helps too,\n\n> so it looks like the wrong hypothesis , and the problem is maybe somewhere\n> else :-/\n\nI see no leak with these examples on HEAD, either with or without\n--enable-llvm --- the process size stays quite stable according\nto \"top\". I wonder if you are using some extension that's\ncontributing to the problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Jan 2024 16:25:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plpgsql memory leaks" }, { "msg_contents": "Hi\n\npá 12. 1. 2024 v 22:25 odesílatel Tom Lane <[email protected]> napsal:\n\n> Pavel Stehule <[email protected]> writes:\n> > default master branch - res 190MB ram\n> > jit_inline_above_cost = -1 doesn't helps\n> > disabling JIT doesn't helps too,\n>\n> > so it looks like the wrong hypothesis , and the problem is maybe\n> somewhere\n> > else :-/\n>\n> I see no leak with these examples on HEAD, either with or without\n> --enable-llvm --- the process size stays quite stable according\n> to \"top\". I wonder if you are using some extension that's\n> contributing to the problem.\n>\n\nmemory info after DO $$ BEGIN END $$;\n\n(2024-01-13 05:36:46) postgres=# do $$ begin end $$;\nDO\n(2024-01-13 05:37:16) postgres=# select meminfo();\nNOTICE: Total non-mmapped bytes (arena): 1114112\nNOTICE: # of free chunks (ordblks): 11\nNOTICE: # of free fastbin blocks (smblks): 0\nNOTICE: # of mapped regions (hblks): 2\nNOTICE: Bytes in mapped regions (hblkhd): 401408\nNOTICE: Max. total allocated space (usmblks): 0\nNOTICE: Free bytes held in fastbins (fsmblks): 0\nNOTICE: Total allocated space (uordblks): 1039216\nNOTICE: Total free space (fordblks): 74896\nNOTICE: Topmost releasable block (keepcost): 67360\n\nafter script execution\n\nNOTICE: (\"1165 kB\",\"1603 kB\",\"438 kB\")\nNOTICE: Total non-mmapped bytes (arena): 22548480\nNOTICE: # of free chunks (ordblks): 25\nNOTICE: # of free fastbin blocks (smblks): 0\nNOTICE: # of mapped regions (hblks): 2\nNOTICE: Bytes in mapped regions (hblkhd): 401408\nNOTICE: Max. 
total allocated space (usmblks): 0\nNOTICE: Free bytes held in fastbins (fsmblks): 0\nNOTICE: Total allocated space (uordblks): 1400224\nNOTICE: Total free space (fordblks): 21148256\nNOTICE: Topmost releasable block (keepcost): 20908384\n\nso attached memory is 20MB - but is almost free. The sum of memory context\nis very stable without leaks (used 1165kB).\n\nbut when I modify the script to\n\nCREATE OR REPLACE FUNCTION public.fx(iter integer)\n RETURNS void\n LANGUAGE plpgsql\nAS $function$\ndeclare\n c cursor(m bigint) for select distinct i from generate_series(1, m) g(i);\n t bigint;\n s bigint;\nbegin\n for i in 1..iter\n loop\n open c(m := i * 10000);\n s := 0;\n loop\n fetch c into t;\n exit when not found;\n s := s + t;\n end loop;\n raise notice '===========before close';\n raise notice '%', (select (pg_size_pretty(sum(used_bytes)),\npg_size_pretty(sum(total_bytes)), pg_size_pretty(sum(free_bytes))) from\npg_get_backend_memory_contexts());\n --perform meminfo();\n raise notice '-----------after close';\n close c;\n raise notice '%=%', i, s;\n raise notice '%', (select (pg_size_pretty(sum(used_bytes)),\npg_size_pretty(sum(total_bytes)), pg_size_pretty(sum(free_bytes))) from\npg_get_backend_memory_contexts());\n --perform meminfo();\n end loop;\nend;\n$function$\n\nmeminfo is simple extension - see the attachment, I got interesting things\n\nNOTICE: ===========before close\nNOTICE: (\"149 MB\",\"154 MB\",\"5586 kB\")\nNOTICE: Total non-mmapped bytes (arena): 132960256\nNOTICE: # of free chunks (ordblks): 49\nNOTICE: # of free fastbin blocks (smblks): 0\nNOTICE: # of mapped regions (hblks): 4\nNOTICE: Bytes in mapped regions (hblkhd): 51265536\nNOTICE: Max. total allocated space (usmblks): 0\nNOTICE: Free bytes held in fastbins (fsmblks): 0\nNOTICE: Total allocated space (uordblks): 110730576\nNOTICE: Total free space (fordblks): 22229680\nNOTICE: Topmost releasable block (keepcost): 133008\n\nso this script really used mbytes memory, but it is related to query\n`select distinct i from generate_series(1, m) g(i);`\n\nThis maybe is in correlation to my default work mem 64MB - when I set work\nmem to 10MB, then it consumes only 15MB\n\nSo I was confused because it uses only about 3x work_mem, which is not too\nbad.\n\nRegards\n\nPavel\n\n\n\n\n>\n> regards, tom lane\n>\n\nHipá 12. 1. 2024 v 22:25 odesílatel Tom Lane <[email protected]> napsal:Pavel Stehule <[email protected]> writes:\n> default master branch - res 190MB ram\n> jit_inline_above_cost = -1 doesn't helps\n> disabling JIT doesn't helps too,\n\n> so it looks like the wrong hypothesis , and the problem is maybe somewhere\n> else :-/\n\nI see no leak with these examples on HEAD, either with or without\n--enable-llvm --- the process size stays quite stable according\nto \"top\".  I wonder if you are using some extension that's\ncontributing to the problem.memory info after DO $$ BEGIN END $$;(2024-01-13 05:36:46) postgres=# do $$ begin end $$;DO(2024-01-13 05:37:16) postgres=# select meminfo();NOTICE:  Total non-mmapped bytes (arena):       1114112NOTICE:  # of free chunks (ordblks):            11NOTICE:  # of free fastbin blocks (smblks):     0NOTICE:  # of mapped regions (hblks):           2NOTICE:  Bytes in mapped regions (hblkhd):      401408NOTICE:  Max. 
total allocated space (usmblks):  0NOTICE:  Free bytes held in fastbins (fsmblks): 0NOTICE:  Total allocated space (uordblks):      1039216NOTICE:  Total free space (fordblks):           74896NOTICE:  Topmost releasable block (keepcost):   67360after script executionNOTICE:  (\"1165 kB\",\"1603 kB\",\"438 kB\")NOTICE:  Total non-mmapped bytes (arena):       22548480NOTICE:  # of free chunks (ordblks):            25NOTICE:  # of free fastbin blocks (smblks):     0NOTICE:  # of mapped regions (hblks):           2NOTICE:  Bytes in mapped regions (hblkhd):      401408NOTICE:  Max. total allocated space (usmblks):  0NOTICE:  Free bytes held in fastbins (fsmblks): 0NOTICE:  Total allocated space (uordblks):      1400224NOTICE:  Total free space (fordblks):           21148256NOTICE:  Topmost releasable block (keepcost):   20908384so attached memory is 20MB -  but is almost free. The sum of memory context is very stable without leaks (used 1165kB).but when I modify the script toCREATE OR REPLACE FUNCTION public.fx(iter integer) RETURNS void LANGUAGE plpgsqlAS $function$declare  c cursor(m bigint) for select distinct i from generate_series(1, m) g(i);  t bigint;  s bigint;begin  for i in 1..iter  loop    open c(m := i * 10000);    s := 0;    loop      fetch c into t;      exit when not found;      s := s + t;    end loop;    raise notice '===========before close';    raise notice '%', (select (pg_size_pretty(sum(used_bytes)), pg_size_pretty(sum(total_bytes)), pg_size_pretty(sum(free_bytes))) from pg_get_backend_memory_contexts());    --perform meminfo();    raise notice '-----------after close';    close c;    raise notice '%=%', i, s;    raise notice '%', (select (pg_size_pretty(sum(used_bytes)), pg_size_pretty(sum(total_bytes)), pg_size_pretty(sum(free_bytes))) from pg_get_backend_memory_contexts());    --perform meminfo();  end loop;end;$function$meminfo is simple extension - see the attachment, I got interesting things NOTICE:  ===========before closeNOTICE:  (\"149 MB\",\"154 MB\",\"5586 kB\")NOTICE:  Total non-mmapped bytes (arena):       132960256NOTICE:  # of free chunks (ordblks):            49NOTICE:  # of free fastbin blocks (smblks):     0NOTICE:  # of mapped regions (hblks):           4NOTICE:  Bytes in mapped regions (hblkhd):      51265536NOTICE:  Max. total allocated space (usmblks):  0NOTICE:  Free bytes held in fastbins (fsmblks): 0NOTICE:  Total allocated space (uordblks):      110730576NOTICE:  Total free space (fordblks):           22229680NOTICE:  Topmost releasable block (keepcost):   133008so this script really used mbytes memory, but it is related to query `select distinct i from generate_series(1, m) g(i);`This maybe is in correlation to my default work mem 64MB - when I set work mem to 10MB, then it consumes only 15MBSo I was confused because it uses only about 3x work_mem, which is not too bad.RegardsPavel \n\n                        regards, tom lane", "msg_date": "Sat, 13 Jan 2024 06:25:04 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plpgsql memory leaks" } ]
[ { "msg_contents": "Perl 5.38 has landed in Debian unstable, and plperl doesn't like it:\n\ndiff -U3 /home/myon/projects/postgresql/pg/postgresql/src/pl/plperl/expected/plperl_elog_1.out /home/myon/projects/postgresql/pg/postgresql/build/testrun/plperl/regress/results/plperl_elog.out\n--- /home/myon/projects/postgresql/pg/postgresql/src/pl/plperl/expected/plperl_elog_1.out\t2023-07-24 12:47:52.124583553 +0000\n+++ /home/myon/projects/postgresql/pg/postgresql/build/testrun/plperl/regress/results/plperl_elog.out\t2024-01-12 10:09:51.065265341 +0000\n@@ -76,6 +76,7 @@\n RETURN 1;\n END;\n $$;\n+WARNING: could not determine encoding for locale \"C.utf8\": codeset is \"ANSI_X3.4-1968\"\n select die_caller();\n NOTICE: caught die\n die_caller\ndiff -U3 /home/myon/projects/postgresql/pg/postgresql/src/pl/plperl/expected/plperl_call.out /home/myon/projects/postgresql/pg/postgresql/build/testrun/plperl/regress/results/plperl_call.out\n--- /home/myon/projects/postgresql/pg/postgresql/src/pl/plperl/expected/plperl_call.out\t2023-10-17 09:40:01.365865484 +0000\n+++ /home/myon/projects/postgresql/pg/postgresql/build/testrun/plperl/regress/results/plperl_call.out\t2024-01-12 10:09:51.413278511 +0000\n@@ -64,6 +64,7 @@\n RAISE NOTICE '_a: %, _b: %', _a, _b;\n END\n $$;\n+WARNING: could not determine encoding for locale \"C.utf8\": codeset is \"ANSI_X3.4-1968\"\n NOTICE: a: 10, b:\n NOTICE: _a: 10, _b: 20\n DROP PROCEDURE test_proc1;\n\nSame problem in 17devel and 16. (Did not try the older branches yet.)\n\nChristoph\n\n\n", "msg_date": "Fri, 12 Jan 2024 11:14:28 +0100", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": true, "msg_subject": "plperl and perl 5.38" }, { "msg_contents": "\nOn 2024-01-12 Fr 05:14, Christoph Berg wrote:\n> Perl 5.38 has landed in Debian unstable, and plperl doesn't like it:\n>\n> diff -U3 /home/myon/projects/postgresql/pg/postgresql/src/pl/plperl/expected/plperl_elog_1.out /home/myon/projects/postgresql/pg/postgresql/build/testrun/plperl/regress/results/plperl_elog.out\n> --- /home/myon/projects/postgresql/pg/postgresql/src/pl/plperl/expected/plperl_elog_1.out\t2023-07-24 12:47:52.124583553 +0000\n> +++ /home/myon/projects/postgresql/pg/postgresql/build/testrun/plperl/regress/results/plperl_elog.out\t2024-01-12 10:09:51.065265341 +0000\n> @@ -76,6 +76,7 @@\n> RETURN 1;\n> END;\n> $$;\n> +WARNING: could not determine encoding for locale \"C.utf8\": codeset is \"ANSI_X3.4-1968\"\n> select die_caller();\n> NOTICE: caught die\n> die_caller\n> diff -U3 /home/myon/projects/postgresql/pg/postgresql/src/pl/plperl/expected/plperl_call.out /home/myon/projects/postgresql/pg/postgresql/build/testrun/plperl/regress/results/plperl_call.out\n> --- /home/myon/projects/postgresql/pg/postgresql/src/pl/plperl/expected/plperl_call.out\t2023-10-17 09:40:01.365865484 +0000\n> +++ /home/myon/projects/postgresql/pg/postgresql/build/testrun/plperl/regress/results/plperl_call.out\t2024-01-12 10:09:51.413278511 +0000\n> @@ -64,6 +64,7 @@\n> RAISE NOTICE '_a: %, _b: %', _a, _b;\n> END\n> $$;\n> +WARNING: could not determine encoding for locale \"C.utf8\": codeset is \"ANSI_X3.4-1968\"\n> NOTICE: a: 10, b:\n> NOTICE: _a: 10, _b: 20\n> DROP PROCEDURE test_proc1;\n>\n> Same problem in 17devel and 16. (Did not try the older branches yet.)\n>\n\nI can't reproduce this on my Ubuntu 22.04 ARM64 instance with perl \n5.38.2 installed via perlbrew, nor on a fresh Debian unstable with it's \nperl 5.38.2. 
In both instances my LANG is set to en_US.UTF-8.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 14 Jan 2024 09:02:33 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plperl and perl 5.38" }, { "msg_contents": "Re: Andrew Dunstan\n> > +WARNING: could not determine encoding for locale \"C.utf8\": codeset is \"ANSI_X3.4-1968\"\n> \n> I can't reproduce this on my Ubuntu 22.04 ARM64 instance with perl 5.38.2\n> installed via perlbrew, nor on a fresh Debian unstable with it's perl\n> 5.38.2. In both instances my LANG is set to en_US.UTF-8.\n\nIt was a problem on the perl side:\n\nhttps://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1060456\n\n(Fixed on Jan 12th)\n\nThanks for trying,\nChristoph\n\n\n", "msg_date": "Sun, 14 Jan 2024 22:00:42 +0100", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plperl and perl 5.38" } ]
[ { "msg_contents": "Hi,\n\nI propose to add a new predefined role to Postgres,\npg_manage_extensions. The idea is that it allows Superusers to delegate\nthe rights to create, update or delete extensions to other roles, even\nif those extensions are not trusted or those users are not the database\nowner.\n\nI have attached a WIP patch for this.\n\n\nThoughts?\n\nMichael", "msg_date": "Fri, 12 Jan 2024 15:53:01 +0100", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] New predefined role pg_manage_extensions" }, { "msg_contents": "On Fri, 12 Jan 2024 at 15:53, Michael Banck <[email protected]> wrote:\n> I propose to add a new predefined role to Postgres,\n> pg_manage_extensions. The idea is that it allows Superusers to delegate\n> the rights to create, update or delete extensions to other roles, even\n> if those extensions are not trusted or those users are not the database\n> owner.\n\nI agree that extension creation is one of the main reasons people\nrequire superuser access, and I think it would be beneficial to try to\nreduce that. But I'm not sure that such a pg_manage_extensions role\nwould have any fewer permissions than superuser in practice. Afaik\nmany extensions that are not marked as trusted, are not trusted\nbecause they would allow fairly trivial privilege escalation to\nsuperuser if they were.\n\n\n", "msg_date": "Fri, 12 Jan 2024 16:13:27 +0100", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New predefined role pg_manage_extensions" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 12, 2024 at 04:13:27PM +0100, Jelte Fennema-Nio wrote:\n> But I'm not sure that such a pg_manage_extensions role would have any\n> fewer permissions than superuser in practice. \n\nNote that just being able to create an extension does not give blanket\npermission to use it. I did a few checks with things I thought might be\nproblematic like adminpack or plpython3u, and a pg_manage_extensions\nuser is not allowed to call those functions or use the untrusted\nlanguage.\n\n> Afaik many extensions that are not marked as trusted, are not trusted\n> because they would allow fairly trivial privilege escalation to\n> superuser if they were.\n\nWhile that might be true (or we err on the side of caution), I thought\nthe rationale was more that they either disclose more information about\nthe database server than we want to disclose to ordinary users, or that\nthey allow access to the file system etc.\n\nI think if we have extensions in contrib that trivially allow\nnon-superusers to become superusers just by being installed, that should\nbe a bug and be fixed by making it impossible for ordinary users to\nuse those extensions without being granted some access to them in\naddition.\n\nAfter all, socially engineering a DBA into installing an extension due\nto user demand would be a thing anyway (even if most DBAs might reject\nit) and at least DBAs should be aware of the specific risks of a\nparticular extension probably?\n\n\nMichael\n\n\n", "msg_date": "Sat, 13 Jan 2024 09:20:40 +0100", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New predefined role pg_manage_extensions" } ]
[ { "msg_contents": "It looks like every recent cfbot run has failed in the\nFreeBSD-13-Meson build, even if it worked in other ones.\nThe symptoms are failures in the TAP tests that try to\nuse interactive_psql:\n\nCan't call method \"slave\" on an undefined value at /usr/local/lib/perl5/site_perl/IPC/Run.pm line 2889.\n\nI suspect that we are looking at some bug in IPC::Run that exists in\nthe version that that FreeBSD release has (but not, seemingly,\nelsewhere in the community), and that was mostly harmless until\nc53859295 made all Perl warnings fatal. Not sure what we want\nto do about this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Jan 2024 15:32:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "cfbot is failing all tests on FreeBSD/Meson builds" }, { "msg_contents": "On Sat, Jan 13, 2024 at 9:32 AM Tom Lane <[email protected]> wrote:\n> It looks like every recent cfbot run has failed in the\n> FreeBSD-13-Meson build, even if it worked in other ones.\n> The symptoms are failures in the TAP tests that try to\n> use interactive_psql:\n>\n> Can't call method \"slave\" on an undefined value at /usr/local/lib/perl5/site_perl/IPC/Run.pm line 2889.\n>\n> I suspect that we are looking at some bug in IPC::Run that exists in\n> the version that that FreeBSD release has (but not, seemingly,\n> elsewhere in the community), and that was mostly harmless until\n> c53859295 made all Perl warnings fatal. Not sure what we want\n> to do about this.\n\nRight, I see this locally on my FreeBSD box. Reverting the fatal\nwarnings thing doesn't help, it's more broken than that. I tried to\nunderstand\n\nhttps://github.com/cpan-authors/IPC-Run/blob/master/lib/IPC/Run.pm\nhttps://github.com/cpan-authors/IO-Tty/blob/master/Pty.pm\n\nbut I am not good at perl. I think the error means that in\n\n ## Close all those temporary filehandles that the kids needed.\n for my $pty ( values %{ $self->{PTYS} } ) {\n close $pty->slave;\n }\n\nthe variable $pty holds undef, so then we have to find out where undef\nwas put into $self->{PTYS}. There is a place that does:\n\n ## Just flag the pyt's existence for now. It'll be\n ## converted to a real IO::Pty by _open_pipes.\n $self->{PTYS}->{$pty_id} = undef;\n\nWe can see that this started a couple of days ago:\n\nhttps://github.com/postgres/postgres/commits/master/\n\nBut at older commits I see the problem locally too, so, yeah, it must\nbe coming from where else. Looking at the relevant packages\np5-IPC-Run and p5-IO-Tty I see that they recently moved to 20231003.0\nand 1.18, respectively. Downgrading p5-IPC-Run to\np5-IPC-Run-20220807.0.pkg was not enough. Downgrading p5-IO-Tty to\np5-IO-Tty-1.17.pkg allowed psql/010_tab_completion to pass with either\nversion of p5-IPC-Run.\n\nThe answer may lie in the commits between those versions here:\n\nhttps://github.com/cpan-authors/IO-Tty/commits/master/\n\nI see that Debian Bookwork is still using 1.17, which fits with your\nguess that FreeBSD is just breaking first because it is using a newer\nversion.\n\n\n", "msg_date": "Sat, 13 Jan 2024 13:39:28 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cfbot is failing all tests on FreeBSD/Meson builds" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> Looking at the relevant packages\n> p5-IPC-Run and p5-IO-Tty I see that they recently moved to 20231003.0\n> and 1.18, respectively. Downgrading p5-IPC-Run to\n> p5-IPC-Run-20220807.0.pkg was not enough. 
Downgrading p5-IO-Tty to\n> p5-IO-Tty-1.17.pkg allowed psql/010_tab_completion to pass with either\n> version of p5-IPC-Run.\n\n> The answer may lie in the commits between those versions here:\n\n> https://github.com/cpan-authors/IO-Tty/commits/master/\n\n> I see that Debian Bookwork is still using 1.17, which fits with your\n> guess that FreeBSD is just breaking first because it is using a newer\n> version.\n\nYeah, I have nothing newer than 1.17 here either.\n\nTime for a bug report to IO::Tty's authors, I guess.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Jan 2024 19:51:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cfbot is failing all tests on FreeBSD/Meson builds" }, { "msg_contents": "On Sat, Jan 13, 2024 at 1:51 PM Tom Lane <[email protected]> wrote:\n> Time for a bug report to IO::Tty's authors, I guess.\n\nAhh, there is one: https://github.com/cpan-authors/IO-Tty/issues/38\n\nIn the meantime, will look into whether I can pin that package to 1.17\nsomewhere in the pipeline, hopefully later today...\n\n\n", "msg_date": "Sat, 13 Jan 2024 13:57:54 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cfbot is failing all tests on FreeBSD/Meson builds" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Sat, Jan 13, 2024 at 1:51 PM Tom Lane <[email protected]> wrote:\n>> Time for a bug report to IO::Tty's authors, I guess.\n\n> Ahh, there is one: https://github.com/cpan-authors/IO-Tty/issues/38\n\nJust for the archives' sake: I hit this today on a fresh install\nof FreeBSD 14.0, which has pulled in p5-IO-Tty-1.18. Annoying...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Jan 2024 23:06:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cfbot is failing all tests on FreeBSD/Meson builds" }, { "msg_contents": "On Tue, Jan 30, 2024 at 5:06 PM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > On Sat, Jan 13, 2024 at 1:51 PM Tom Lane <[email protected]> wrote:\n> >> Time for a bug report to IO::Tty's authors, I guess.\n>\n> > Ahh, there is one: https://github.com/cpan-authors/IO-Tty/issues/38\n>\n> Just for the archives' sake: I hit this today on a fresh install\n> of FreeBSD 14.0, which has pulled in p5-IO-Tty-1.18. Annoying...\n\nFWIW here's what I did to downgrade:\n\n # remove the problematic version (also removes p5-IPC-Run)\n pkg remove -y p5-IO-Tty\n\n # fetch the known good 1.17 package and install it\n curl -O\n\"https://pkg.freebsd.org/freebsd:14:x86:64/release_0/All/p5-IO-Tty-1.17.pkg\"\n pkg install -y p5-IO-Tty-1.17.pkg\n\n # put back p5-IPC-Run\n pkg install -y p5-IPC-Run\n\n # temporarily prevent future \"pkg upgrade\" from upgrading p5-IO-Tty\n pkg lock -y p5-IO-Tty\n\n\n", "msg_date": "Thu, 8 Feb 2024 14:27:59 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cfbot is failing all tests on FreeBSD/Meson builds" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Tue, Jan 30, 2024 at 5:06 PM Tom Lane <[email protected]> wrote:\n>> Thomas Munro <[email protected]> writes:\n>>> Ahh, there is one: https://github.com/cpan-authors/IO-Tty/issues/38\n\n>> Just for the archives' sake: I hit this today on a fresh install\n>> of FreeBSD 14.0, which has pulled in p5-IO-Tty-1.18. 
Annoying...\n\n> FWIW here's what I did to downgrade:\n\nThanks for the recipe --- this worked for me, although I noticed\nit insisted on installing perl5.34-5.34.3_3 alongside 5.36.\nDoesn't seem to be a problem though --- the main perl installation\nis still 5.36.\n\nFWIW, I spent some time yesterday staring at IPC/Run.pm and\ncame to the (unsurprising) conclusion that there's little we can\ndo to work around the bug. None of the moving parts are exposed\nto callers.\n\nAlso, I'm not entirely convinced that the above-cited issue report is\ncomplaining about the same thing that's biting us. The reported error\nmessages are completely different.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 07 Feb 2024 21:53:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cfbot is failing all tests on FreeBSD/Meson builds" }, { "msg_contents": "On Thu, Feb 8, 2024 at 3:53 PM Tom Lane <[email protected]> wrote:\n> Thomas Munro <[email protected]> writes:\n> > On Tue, Jan 30, 2024 at 5:06 PM Tom Lane <[email protected]> wrote:\n> >> Thomas Munro <[email protected]> writes:\n> >>> Ahh, there is one: https://github.com/cpan-authors/IO-Tty/issues/38\n>\n> >> Just for the archives' sake: I hit this today on a fresh install\n> >> of FreeBSD 14.0, which has pulled in p5-IO-Tty-1.18. Annoying...\n>\n> > FWIW here's what I did to downgrade:\n>\n> Thanks for the recipe --- this worked for me, although I noticed\n> it insisted on installing perl5.34-5.34.3_3 alongside 5.36.\n> Doesn't seem to be a problem though --- the main perl installation\n> is still 5.36.\n\nLooks like CI is broken in this way again, as of ~13 hours ago.\nLooking into that...\n\n> Also, I'm not entirely convinced that the above-cited issue report is\n> complaining about the same thing that's biting us. The reported error\n> messages are completely different.\n\nYou could be right about that. It seems there was a clash between an\nupstream commit and a patch in FBSD's port tree, which has just been\nremoved:\n\nhttps://bugs.freebsd.org/bugzilla/show_bug.cgi?id=276535\n\nSo perhaps it's time for me to undo what I did before... looking now.\n\n\n", "msg_date": "Fri, 19 Apr 2024 10:36:22 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cfbot is failing all tests on FreeBSD/Meson builds" }, { "msg_contents": "On Fri, Apr 19, 2024 at 10:36 AM Thomas Munro <[email protected]> wrote:\n> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=276535\n>\n> So perhaps it's time for me to undo what I did before... looking now.\n\nIt turned out that I still needed the previous work-around, but I was\ntoo clever for my own boots last time. For the record, here is the\nnew change to the image building script:\n\nhttps://github.com/anarazel/pg-vm-images/commit/faff91cd40d6af0cbc658f5c11da47e2aa88d332\n\nI should have listened to Bilal's prediction[1] about this the first\ntime. But this time, I know that the real fix is coming in the next\npackage very soon, per bugzilla link above.\n\nOne thing I noticed is that 010_tab_completion is actually being\nskipped, with that fix. It used to run, I'm sure. Not sure why, but\nI'll look more seriously when the 1.21 or whatever shows up. 
At least\nwe should soon have green CI again in the meantime.\n\n[1] https://github.com/anarazel/pg-vm-images/pull/92\n\n\n", "msg_date": "Fri, 19 Apr 2024 17:00:29 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cfbot is failing all tests on FreeBSD/Meson builds" } ]
[ { "msg_contents": "Introduced in commit c3afe8cf5a.\n\nSomeone issuing repeated \"CREATE SUBSCRIPTION\" commands where the\nconnection has no password and must_have_password is true will leak\nmalloc'd memory in the error path. Minor issue in practice, because I\nsuspect that a user privileged enough to create a subscription could\ncause bigger problems.\n\nIt makes me wonder if we should use the resowner mechanism to track\npointers to malloc'd memory. Then we could use a standard pattern for\nthese kinds of cases, and it would also catch more remote issues, like\nif a pstrdup() fails in an error path (which can happen a few lines up\nif the parse fails).\n\nPatch attached; intended for 16 and 17.\n\nRegards,\n\tJeff Davis", "msg_date": "Fri, 12 Jan 2024 15:06:26 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Fix minor memory leak in connection string validation" }, { "msg_contents": "On Fri, Jan 12, 2024 at 03:06:26PM -0800, Jeff Davis wrote:\n> It makes me wonder if we should use the resowner mechanism to track\n> pointers to malloc'd memory. Then we could use a standard pattern for\n> these kinds of cases, and it would also catch more remote issues, like\n> if a pstrdup() fails in an error path (which can happen a few lines up\n> if the parse fails).\n\nThat seems worth exploring.\n\n> \t\tif (!uses_password)\n> +\t\t{\n> +\t\t\t/* malloc'd, so we must free it explicitly */\n> +\t\t\tPQconninfoFree(opts);\n> +\n> \t\t\tereport(ERROR,\n> \t\t\t\t\t(errcode(ERRCODE_S_R_E_PROHIBITED_SQL_STATEMENT_ATTEMPTED),\n> \t\t\t\t\t errmsg(\"password is required\"),\n> \t\t\t\t\t errdetail(\"Non-superusers must provide a password in the connection string.\")));\n> +\t\t}\n> \t}\n> \n> \tPQconninfoFree(opts);\n\nAnother option could be to surround this with PG_TRY/PG_FINALLY, but your\npatch seems sufficient, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 12 Jan 2024 20:37:05 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix minor memory leak in connection string validation" }, { "msg_contents": "Nathan Bossart <[email protected]> writes:\n> On Fri, Jan 12, 2024 at 03:06:26PM -0800, Jeff Davis wrote:\n>> It makes me wonder if we should use the resowner mechanism to track\n>> pointers to malloc'd memory. Then we could use a standard pattern for\n>> these kinds of cases, and it would also catch more remote issues, like\n>> if a pstrdup() fails in an error path (which can happen a few lines up\n>> if the parse fails).\n\n> That seems worth exploring.\n\nI'm pretty dubious about adding overhead for that, mainly because\nmost of the direct callers of malloc in a backend are going to be\ncode that's not under our control. Modifying the callers that we\ndo control is not going to give a full solution, and could well be\noutright misleading.\n\n> Another option could be to surround this with PG_TRY/PG_FINALLY, but your\n> patch seems sufficient, too.\n\nYeah, seems fine for now. If that function grows any more complexity\nthen we could think about using PG_TRY.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Jan 2024 22:18:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix minor memory leak in connection string validation" } ]
[ { "msg_contents": "Hi,\n\nA friend of mine complained about strange behavior of `postgres`. When\nexecuted without any arguments the following error is shown:\n\n```\n$ postgres\npostgres does not know where to find the server configuration file.\nYou must specify the --config-file or -D invocation option or set the\nPGDATA environment variable.\n```\n\nHowever --config-file is not listed in --help output. Apparently\nthat's because it's not a regular option but a GUС. It is in fact\nsupported:\n\n```\n$ postgres --config-file=/tmp/fake.txt\npostgres: could not access the server configuration file\n\"/tmp/fake.txt\": No such file or directory\n```\n\nAdditionally --help says:\n\n```\n[...]\nPlease read the documentation for the complete list of run-time\nconfiguration settings and how to set them on the command line or in\nthe configuration file\n```\n\n... which personally I don't find extremely useful to be honest.\n\nOK, let's check section \"20.1.4. Parameter Interaction via the Shell\"\n[1] of the documentation. Currently it doesn't tell anything about the\nability to specify GUCs --like-this, unless I missed something.\n\nShould we remove --config-file from the error message to avoid any\nconfusion? Should we correct --help output? Should we update the\ndocumentation?\n\n[1]: https://www.postgresql.org/docs/current/config-setting.html#CONFIG-SETTING-SHELL\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Sat, 13 Jan 2024 13:39:50 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres and --config-file option" }, { "msg_contents": "On Sat, Jan 13, 2024 at 01:39:50PM +0300, Aleksander Alekseev wrote:\n> OK, let's check section \"20.1.4. Parameter Interaction via the Shell\"\n> [1] of the documentation. Currently it doesn't tell anything about the\n> ability to specify GUCs --like-this, unless I missed something.\n\nIt appears to be documented for 'postgres' as follows [0]:\n\n\t--name=value\n\t\tSets a named run-time parameter; a shorter form of -c.\n\nand similarly within the --help output:\n\n\t--NAME=VALUE set run-time parameter\n\nIts documentation also describes this method of specifying parameters in\nthe 'Examples' section. The section you refer to calls out \"-c\", so it is\nsort-of indirectly mentioned, but that might be a bit of a generous\nassessment.\n\n> Should we remove --config-file from the error message to avoid any\n> confusion? Should we correct --help output? Should we update the\n> documentation?\n\nIt might be worthwhile to update the documentation if it would've helped\nprevent confusion here.\n\nSeparately, I noticed that this is implemented in postmaster.c by looking\nfor the '-' option character returned by getopt(), and I'm wondering why\nthis doesn't use getopt_long() instead. AFAICT this dates back to the\nintroduction of GUCs in 6a68f426 (May 2000).\n\n[0] https://www.postgresql.org/docs/devel/app-postgres.html\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 13 Jan 2024 16:38:00 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "On Saturday, January 13, 2024, Nathan Bossart <[email protected]>\nwrote:\n\n> On Sat, Jan 13, 2024 at 01:39:50PM +0300, Aleksander Alekseev wrote:\n>\n> > Should we remove --config-file from the error message to avoid any\n> > confusion? Should we correct --help output? 
Should we update the\n> > documentation?\n>\n> It might be worthwhile to update the documentation if it would've helped\n> prevent confusion here.\n>\n\nPointing out the long form in the -c definition makes sense.\n\nAs for the help message, I’d minimally add:\n\n“You must specify the --config-file (or equivalent -c) or -D invocation …”\n\nI’m fine with the status quo regarding the overview documentation\nmentioning both forms. I also haven’t tested whether PGOPTIONS accepts\nboth forms or only the -c form as presently documented. Presently the\n—name=value form seems discouraged in favor of -c which I’m ok with and\ntrying to mention both everywhere seems needlessly verbose. But I’d be\ninterested in reviewing against an informed patch improving this area more\nbroadly than dealing with this single deviant usage. I do like this\nspecific usage of the long-form option.\n\nDavid J.\n\nOn Saturday, January 13, 2024, Nathan Bossart <[email protected]> wrote:On Sat, Jan 13, 2024 at 01:39:50PM +0300, Aleksander Alekseev wrote:\n\n> Should we remove --config-file from the error message to avoid any\n> confusion? Should we correct --help output? Should we update the\n> documentation?\n\nIt might be worthwhile to update the documentation if it would've helped\nprevent confusion here.\nPointing out the long form in the -c definition makes sense.As for the help message, I’d minimally add:“You must specify the --config-file (or equivalent -c) or -D invocation …”I’m fine with the status quo regarding the overview documentation mentioning both forms.  I also haven’t tested whether PGOPTIONS accepts both forms or only the -c form as presently documented.  Presently the —name=value form seems discouraged in favor of -c which I’m ok with and trying to mention both everywhere seems needlessly verbose.   But I’d be interested in reviewing against an informed patch improving this area more broadly than dealing with this single deviant usage.  I do like this specific usage of the long-form option.David J.", "msg_date": "Sat, 13 Jan 2024 17:36:41 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Postgres and --config-file option" }, { "msg_contents": "Hi,\n\n> It might be worthwhile to update the documentation if it would've helped\n> prevent confusion here.\n\n> Its documentation also describes this method of specifying parameters in\n> the 'Examples' section.\n\nI believe the documentation for 'postgres' already does a decent job\nin describing what --NAME=VALUE means, and gives an example. IMO the\nactual problem is with --help message and the specific error message.\n\n> Please read the documentation for the complete list of run-time\n> configuration settings and how to set them on the command line or in\n> the configuration file\n\nAdditionally --help message doesn't tell which part of the\ndocumentation should be read specifically. This being said, personally\nI don't think that providing specific URLs in the --help message would\nbe a good idea. This would indicate that the --help message is just\nwritten poorly.\n\n> As for the help message, I’d minimally add:\n>\n> “You must specify the --config-file (or equivalent -c) or -D invocation …”\n\nGood idea.\n\nPFA the patch. 
It's short but I think it mitigates the problem.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 15 Jan 2024 14:35:27 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "On Mon, Jan 15, 2024 at 4:35 AM Aleksander Alekseev <\[email protected]> wrote:\n\n> PFA the patch. It's short but I think it mitigates the problem.\n>\n>\nI took a look at where these options are discussed in the documentation and\nnow feel that we should make these options clear more broadly (config and\nlibpq, plus pointing to --name from -c in a couple of places). It doesn't\nadd much verbosity and, frankly, if I was to pick one \"--name=value\" would\nwin and so I'd rather document it, leaving -c alone for historical reasons.\n\nI've attached a replacement patch with the additional changes.\n\nDavid J.", "msg_date": "Fri, 2 Feb 2024 14:23:23 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "On Fri, Feb 2, 2024 at 2:23 PM David G. Johnston <[email protected]>\nwrote:\n\n> On Mon, Jan 15, 2024 at 4:35 AM Aleksander Alekseev <\n> [email protected]> wrote:\n>\n>> PFA the patch. It's short but I think it mitigates the problem.\n>>\n>>\n> I took a look at where these options are discussed in the documentation\n> and now feel that we should make these options clear more broadly (config\n> and libpq, plus pointing to --name from -c in a couple of places). It\n> doesn't add much verbosity and, frankly, if I was to pick one\n> \"--name=value\" would win and so I'd rather document it, leaving -c alone\n> for historical reasons.\n>\n> I've attached a replacement patch with the additional changes.\n>\n>\nAnd I just saw one more apparently undocumented requirement (or a typo)\n\nYou must specify the --config-file\n\nThe actual parameter is \"config_file\", so apparently we are supposed to\neither convert underscores to hyphens or we have a typo.\n\nDavid J.\n\nOn Fri, Feb 2, 2024 at 2:23 PM David G. Johnston <[email protected]> wrote:On Mon, Jan 15, 2024 at 4:35 AM Aleksander Alekseev <[email protected]> wrote:PFA the patch. It's short but I think it mitigates the problem.I took a look at where these options are discussed in the documentation and now feel that we should make these options clear more broadly (config and libpq, plus pointing to --name from -c in a couple of places).  It doesn't add much verbosity and, frankly, if I was to pick one \"--name=value\" would win and so I'd rather document it, leaving -c alone for historical reasons.I've attached a replacement patch with the additional changes.And I just saw one more apparently undocumented requirement (or a typo)You must specify the --config-fileThe actual parameter is \"config_file\", so apparently we are supposed to either convert underscores to hyphens or we have a typo.David J.", "msg_date": "Fri, 2 Feb 2024 14:27:54 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "On 02.02.24 22:27, David G. Johnston wrote:\n> On Fri, Feb 2, 2024 at 2:23 PM David G. Johnston \n> <[email protected] <mailto:[email protected]>> wrote:\n> \n> On Mon, Jan 15, 2024 at 4:35 AM Aleksander Alekseev\n> <[email protected] <mailto:[email protected]>> wrote:\n> \n> PFA the patch. 
It's short but I think it mitigates the problem.\n> \n> \n> I took a look at where these options are discussed in the\n> documentation and now feel that we should make these options clear\n> more broadly (config and libpq, plus pointing to --name from -c in a\n> couple of places).  It doesn't add much verbosity and, frankly, if I\n> was to pick one \"--name=value\" would win and so I'd rather document\n> it, leaving -c alone for historical reasons.\n> \n> I've attached a replacement patch with the additional changes.\n> \n> \n> And I just saw one more apparently undocumented requirement (or a typo)\n> \n> You must specify the --config-file\n> \n> The actual parameter is \"config_file\", so apparently we are supposed to \n> either convert underscores to hyphens or we have a typo.\n\nWe convert '-' to '_' when parsing long options (see ParseLongOption() \nin guc.c). So writing the long options with hyphens should generally be \npreferred in documentation.\n\n\n\n", "msg_date": "Wed, 7 Feb 2024 10:58:40 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "David, Peter,\n\n> > The actual parameter is \"config_file\", so apparently we are supposed to\n> > either convert underscores to hyphens or we have a typo.\n>\n> We convert '-' to '_' when parsing long options (see ParseLongOption()\n> in guc.c). So writing the long options with hyphens should generally be\n> preferred in documentation.\n\nThanks for all your great input. Here is the updated patch.\n\n\n--\nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 14 May 2024 14:18:33 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "Hi,\n\n> Thanks for all your great input. Here is the updated patch.\n\nHere is the patch v4 with fixed typo (\"geoq\"). Per off-list feedback\nfrom Alvaro - thanks!\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 14 May 2024 15:03:58 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "On Tue, May 14, 2024 at 03:03:58PM +0300, Aleksander Alekseev wrote:\n> Here is the patch v4 with fixed typo (\"geoq\"). Per off-list feedback\n> from Alvaro - thanks!\n\n+ <option>-c name=value</option> command-line parameter, or its equivalent\n+ <option>--name=value</option> variation. For example,\n <programlisting>\n-postgres -c log_connections=yes -c log_destination='syslog'\n+postgres -c log_connections=yes --log-destination='syslog'\n\nWow. I've used -c many times, and never noticed that this was a\nsupported option switch. There's always something to learn around\nhere..\n\n- printf(_(\" -c NAME=VALUE set run-time parameter\\n\"));\n+ printf(_(\" -c NAME=VALUE set run-time parameter (see also --NAME)\\n\"));\n[...]\n- printf(_(\" --NAME=VALUE set run-time parameter\\n\"));\n+ printf(_(\" --NAME=VALUE set run-time parameter, a shorter form of -c\\n\"));\n[...]\n- to set multiple parameters.\n+ to set multiple parameters. See the <option>--name</option>\n+ option below for an alternate syntax.\n[...]\n- Sets a named run-time parameter; a shorter form of\n- <option>-c</option>.\n+ Sets the named run-time parameter; a shorter form of\n+ <option>-c</option>. 
See <xref linkend=\"runtime-config\"/>\n+ for a listing of parameters.\n\nNot sure that these additions in --help or the docs are necessary.\nThe rest looks OK.\n\n- \"You must specify the --config-file or -D invocation \"\n+ \"You must specify the --config-file (or equivalent -c) or -D invocation \"\n\nHow about \"You must specify the --config-file, -c\n\\\"config_file=VALUE\\\" or -D invocation\"? There is some practice for\n--opt=VALUE in .po files.\n--\nMichael", "msg_date": "Wed, 15 May 2024 11:07:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "On 15.05.24 04:07, Michael Paquier wrote:\n> Not sure that these additions in --help or the docs are necessary.\n> The rest looks OK.\n> \n> - \"You must specify the --config-file or -D invocation \"\n> + \"You must specify the --config-file (or equivalent -c) or -D invocation \"\n> \n> How about \"You must specify the --config-file, -c\n> \\\"config_file=VALUE\\\" or -D invocation\"? There is some practice for\n> --opt=VALUE in .po files.\n\nYeah, some of this is becoming quite unwieldy, if we document and \nmention each spelling variant of each option everywhere.\n\nMaybe if the original problem is that the option --config-file is not \nexplicitly in the --help output, let's add it to the --help output?\n\n\n\n", "msg_date": "Wed, 15 May 2024 11:49:22 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "On Wed, May 15, 2024 at 2:49 AM Peter Eisentraut <[email protected]>\nwrote:\n\n> On 15.05.24 04:07, Michael Paquier wrote:\n> > Not sure that these additions in --help or the docs are necessary.\n> > The rest looks OK.\n> >\n> > - \"You must specify the --config-file or -D invocation \"\n> > + \"You must specify the --config-file (or equivalent -c) or -D\n> invocation \"\n> >\n> > How about \"You must specify the --config-file, -c\n> > \\\"config_file=VALUE\\\" or -D invocation\"? There is some practice for\n> > --opt=VALUE in .po files.\n>\n> Yeah, some of this is becoming quite unwieldy, if we document and\n> mention each spelling variant of each option everywhere.\n>\n\nWhere else would this need to be added that was missed? Largely we don't\ndiscuss how to bring a setting into effect - rather there is a single\nreference area that discusses how, and everywhere else just assumes you\nhave read it and goes on to name the setting. On this grounds the\nproper fix here is probably to not put the how into the message:\n\n\"You must specify the config_file option, the -D argument, or the PGDATA\nenvironment variable.\"\n\nAnd this is only unwieldy because while -D and --config-file both can get\nto the same result they are not substitutes for each other. Namely if the\nconfiguration file is not in the data directory, as is the case on Debian,\nthe choice to use -D is not going to work.\n\nThis isn't an error message, I'm not all that worried if we output a wall\nof text in lieu of pointing the user to the reference page.\n\n\n> Maybe if the original problem is that the option --config-file is not\n> explicitly in the --help output, let's add it to the --help output?\n>\n>\nI'm not opposed to this. 
Though maybe it is sufficient to do:\n\n--NAME=VALUE (e.g., --config-file='...')\n\nI would do this in addition to removing the explicit how of setting\nconfig_file above.\n\nWe also don't mention environment variables in the help but that message\nrefers to PGDATA...so the complaint and fix if done on that basis seems a\nbit selective.\n\nDavid J.\n\nOn Wed, May 15, 2024 at 2:49 AM Peter Eisentraut <[email protected]> wrote:On 15.05.24 04:07, Michael Paquier wrote:\n> Not sure that these additions in --help or the docs are necessary.\n> The rest looks OK.\n> \n> -    \"You must specify the --config-file or -D invocation \"\n> +    \"You must specify the --config-file (or equivalent -c) or -D invocation \"\n> \n> How about \"You must specify the --config-file, -c\n> \\\"config_file=VALUE\\\" or -D invocation\"?  There is some practice for\n> --opt=VALUE in .po files.\n\nYeah, some of this is becoming quite unwieldy, if we document and \nmention each spelling variant of each option everywhere.Where else would this need to be added that was missed?  Largely we don't discuss how to bring a setting into effect - rather there is a single reference area that discusses how, and everywhere else just assumes you have read it and goes on to name the setting.  On this grounds the proper fix here is probably to not put the how into the message:\"You must specify the config_file option, the -D argument, or the PGDATA environment variable.\"And this is only unwieldy because while -D and --config-file both can get to the same result they are not substitutes for each other.  Namely if the configuration file is not in the data directory, as is the case on Debian, the choice to use -D is not going to work.This isn't an error message, I'm not all that worried if we output a wall of text in lieu of pointing the user to the reference page.\n\nMaybe if the original problem is that the option --config-file is not \nexplicitly in the --help output, let's add it to the --help output?\nI'm not opposed to this.  Though maybe it is sufficient to do:--NAME=VALUE (e.g., --config-file='...')I would do this in addition to removing the explicit how of setting config_file above.We also don't mention environment variables in the help but that message refers to PGDATA...so the complaint and fix if done on that basis seems a bit selective.David J.", "msg_date": "Wed, 15 May 2024 06:35:31 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "On Wed, 15 May 2024 at 11:49, Peter Eisentraut <[email protected]> wrote:\n> Yeah, some of this is becoming quite unwieldy, if we document and\n> mention each spelling variant of each option everywhere.\n>\n> Maybe if the original problem is that the option --config-file is not\n> explicitly in the --help output, let's add it to the --help output?\n\nI definitely think it would be useful to list this --config variant in\nmore places, imho it's nicer than the -c variant. Especially in the\nPGOPTIONS docs it would be useful. 
People are already using it in the\nwild and I regressed on support for that in PgBouncer by accident:\nhttps://github.com/pgbouncer/pgbouncer/pull/1064\n\n\n", "msg_date": "Wed, 15 May 2024 16:47:27 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "On Wed, May 15, 2024 at 04:47:27PM +0200, Jelte Fennema-Nio wrote:\n> I definitely think it would be useful to list this --config variant in\n> more places, imho it's nicer than the -c variant. Especially in the\n> PGOPTIONS docs it would be useful. People are already using it in the\n> wild and I regressed on support for that in PgBouncer by accident:\n> https://github.com/pgbouncer/pgbouncer/pull/1064\n\nAgreed that mentioning the --name variant is useful. I'm not really\non board with having one option refer to the other on the pages where\nboth are described, like on --help or the doc page for \"postgres\".\n\nFor now, I've applied a patch for the libpq.sgml and config.sgml bits\nwhich are improvements of their own.\n--\nMichael", "msg_date": "Thu, 16 May 2024 09:17:47 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "Hi,\n\n> Agreed that mentioning the --name variant is useful. I'm not really\n> on board with having one option refer to the other on the pages where\n> both are described, like on --help or the doc page for \"postgres\".\n>\n> For now, I've applied a patch for the libpq.sgml and config.sgml bits\n> which are improvements of their own.\n\nThanks, Michael.\n\nI propose my original v1 patch for correcting the --help output of\n'postgres' too. I agree with the above comments that corresponding\nchanges in v4 became somewhat unwieldy.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 16 May 2024 11:57:10 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "On Thu, May 16, 2024 at 11:57:10AM +0300, Aleksander Alekseev wrote:\n> I propose my original v1 patch for correcting the --help output of\n> 'postgres' too. I agree with the above comments that corresponding\n> changes in v4 became somewhat unwieldy.\n\nThanks for compiling the rest.\n\n- printf(_(\" --NAME=VALUE set run-time parameter\\n\"));\n+ printf(_(\" --NAME=VALUE set run-time parameter, a shorter form of -c\\n\"));\n\nThis part with cross-references in the output is still meh to me, for\nsame reason as for the doc changes I've argued to discard upthread.\n\n write_stderr(\"%s does not know where to find the server configuration file.\\n\"\n- \"You must specify the --config-file or -D invocation \"\n+ \"You must specify the --config-file (or equivalent -c) or -D invocation \"\n\nI can fall behind changing this one, still I'm not sure if this\nproposal is the optimal choice. Adding this option to --help makes\nsense when applied to this error message, but that's incomplete in\nregard with the other GUCs where this concept applies. 
A different\napproach would be to do nothing in --help and change the reference of\n--config-file to -c config_name=VALUE, which would be in line with\n--help.\n--\nMichael", "msg_date": "Fri, 17 May 2024 08:11:17 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "On Thu, May 16, 2024 at 4:11 PM Michael Paquier <[email protected]> wrote:\n\n> On Thu, May 16, 2024 at 11:57:10AM +0300, Aleksander Alekseev wrote:\n> > I propose my original v1 patch for correcting the --help output of\n> > 'postgres' too. I agree with the above comments that corresponding\n> > changes in v4 became somewhat unwieldy.\n>\n> Thanks for compiling the rest.\n>\n> - printf(_(\" --NAME=VALUE set run-time parameter\\n\"));\n> + printf(_(\" --NAME=VALUE set run-time parameter, a shorter form\n> of -c\\n\"));\n>\n> This part with cross-references in the output is still meh to me, for\n> same reason as for the doc changes I've argued to discard upthread.\n>\n\nI'm fine with leaving these alone. If we did change this I'd want to\nchange both, not just --NAME.\n\n\n> write_stderr(\"%s does not know where to find the server\n> configuration file.\\n\"\n> - \"You must specify the --config-file or -D invocation\n> \"\n> + \"You must specify the --config-file (or equivalent\n> -c) or -D invocation \"\n>\n>\nI would rather just do this:\n\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex 3fb6803998..f827086489 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -1828,8 +1828,8 @@ SelectConfigFiles(const char *userDoption, const char\n*progname)\n else\n {\n write_stderr(\"%s does not know where to find the server\nconfiguration file.\\n\"\n- \"You must specify the\n--config-file or -D invocation \"\n- \"option or set the PGDATA\nenvironment variable.\\n\",\n+ \"You must specify either the\nconfig_file run-time parameter or \"\n+ \"provide the database directory\\n\",\n progname);\n return false;\n }\n\nBoth \"run-time parameter\" and \"database directory\" are words present in the\nhelp and the user can find the correct argument if that is the option they\nwant to use. The removal of the mention of the PGDATA environment variable\ndoesn't seem to be a great loss here. This error message doesn't seem to\nbe the correct place to teach the user about all of their options so long\nas they choose to read the documentation they learn about them there; and\nwe need not prescribe the specific means by which they supply either of\nthose pieces of information - which is the norm. If someone simply runs\n\"postgres\" at the command line this message and --help gives them\nsufficient information to proceed.\n\nDavid J.\n\nOn Thu, May 16, 2024 at 4:11 PM Michael Paquier <[email protected]> wrote:On Thu, May 16, 2024 at 11:57:10AM +0300, Aleksander Alekseev wrote:\n> I propose my original v1 patch for correcting the --help output of\n> 'postgres' too. I agree with the above comments that corresponding\n> changes in v4 became somewhat unwieldy.\n\nThanks for compiling the rest.\n\n-    printf(_(\"  --NAME=VALUE       set run-time parameter\\n\"));\n+    printf(_(\"  --NAME=VALUE       set run-time parameter, a shorter form of -c\\n\"));\n\nThis part with cross-references in the output is still meh to me, for\nsame reason as for the doc changes I've argued to discard upthread.I'm fine with leaving these alone.  
If we did change this I'd want to change both, not just --NAME.\n\n         write_stderr(\"%s does not know where to find the server configuration file.\\n\"\n-                     \"You must specify the --config-file or -D invocation \"\n+                     \"You must specify the --config-file (or equivalent -c) or -D invocation \"\nI would rather just do this:diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.cindex 3fb6803998..f827086489 100644--- a/src/backend/utils/misc/guc.c+++ b/src/backend/utils/misc/guc.c@@ -1828,8 +1828,8 @@ SelectConfigFiles(const char *userDoption, const char *progname)        else        {                write_stderr(\"%s does not know where to find the server configuration file.\\n\"-                                        \"You must specify the --config-file or -D invocation \"-                                        \"option or set the PGDATA environment variable.\\n\",+                                        \"You must specify either the config_file run-time parameter or \"+                                        \"provide the database directory\\n\",                                         progname);                return false;        }Both \"run-time parameter\" and \"database directory\" are words present in the help and the user can find the correct argument if that is the option they want to use.  The removal of the mention of the PGDATA environment variable doesn't seem to be a great loss here.  This error message doesn't seem to be the correct place to teach the user about all of their options so long as they choose to read the documentation they learn about them there; and we need not prescribe the specific means by which they supply either of those pieces of information - which is the norm.  If someone simply runs \"postgres\" at the command line this message and --help gives them sufficient information to proceed.David J.", "msg_date": "Thu, 16 May 2024 16:38:28 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "On 2024-May-17, Michael Paquier wrote:\n\n> On Thu, May 16, 2024 at 11:57:10AM +0300, Aleksander Alekseev wrote:\n> > I propose my original v1 patch for correcting the --help output of\n> > 'postgres' too. I agree with the above comments that corresponding\n> > changes in v4 became somewhat unwieldy.\n> \n> Thanks for compiling the rest.\n> \n> - printf(_(\" --NAME=VALUE set run-time parameter\\n\"));\n> + printf(_(\" --NAME=VALUE set run-time parameter, a shorter form of -c\\n\"));\n> \n> This part with cross-references in the output is still meh to me, for\n> same reason as for the doc changes I've argued to discard upthread.\n\nWas the idea considered of moving the --NAME=VALUE line to appear\ntogether with -c? 
We already do that with \"-?, --help\" and \"-V, --version\",\nso I think it's pretty reasonable:\n\nOptions:\n -B NBUFFERS number of shared buffers\n -c NAME=VALUE, --NAME=VALUE\n set run-time parameter\n -C NAME print value of run-time parameter, then exit\n[...]\n\n\n> write_stderr(\"%s does not know where to find the server configuration file.\\n\"\n> - \"You must specify the --config-file or -D invocation \"\n> + \"You must specify the --config-file (or equivalent -c) or -D invocation \"\n\nI'd rather change the --help and leave this one alone.\n\n\nAbout the final paragraph\n\n\tPlease read the documentation for the complete list of run-time\n\tconfiguration settings and how to set them on the command line or in\n\tthe configuration file.\n\nI was thinking we could mention that using --describe-config here could\nhelp, but the literal output from that is quite ugly and unwieldy, more\nsuitable for machine consumption than humans. Would it be useful to add\nanother output format? Say, a --describe-config=man prints a\nmanpage-style table of options with their descriptions and links to the\nonline manual.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No nos atrevemos a muchas cosas porque son difíciles,\npero son difíciles porque no nos atrevemos a hacerlas\" (Séneca)\n\n\n", "msg_date": "Fri, 17 May 2024 14:02:19 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "On Thu, May 16, 2024 at 4:57 AM Aleksander Alekseev\n<[email protected]> wrote:\n> I propose my original v1 patch for correcting the --help output of\n> 'postgres' too. I agree with the above comments that corresponding\n> changes in v4 became somewhat unwieldy.\n\nSo, who is it exactly that will be confused by the status quo? I mean,\nlet's say you get this error:\n\npostgres does not know where to find the server configuration file.\nYou must specify the --config-file or -D invocation option or set the\nPGDATA environment variable.\n\nAs I see it, either you know how it works and just made a mistake this\ntime, or you are a beginner. If it's the former, the fact that the\nerror message doesn't mention every possible way of solving the\nproblem does not matter, because you already know how to fix your\nmistake. If it's the latter, you don't need to know *every* way to fix\nthe problem. You just need to know *one* way to fix the problem. 
I\ndon't really understand why somebody would look at the existing\nmessage and say \"gosh, it didn't tell me that I could also use -c!\".\nIf you already know that, don't you just ignore the hint and get busy\nwith fixing the problem?\n\nIf the reason that somebody is upset is because it's not technically\ntrue to say that you *must* do one of those things, we could fix that\nwith \"You must\" -> \"You can\" or with \"You must specify\" -> \"Specify\".\nThe patch you propose is also not terrible or anything, but it goes in\nthe direction of listing every alternative, which will become\nunpalatable as soon as somebody adds one more way to do it, or maybe\nit's unpalatable already.\n\nEven if we don't do that, I don't see that there's a huge problem here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 14:04:12 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> If the reason that somebody is upset is because it's not technically\n> true to say that you *must* do one of those things, we could fix that\n> with \"You must\" -> \"You can\" or with \"You must specify\" -> \"Specify\".\n> The patch you propose is also not terrible or anything, but it goes in\n> the direction of listing every alternative, which will become\n> unpalatable as soon as somebody adds one more way to do it, or maybe\n> it's unpalatable already.\n\nI agree that it's not necessary or particularly useful for this hint\nto be exhaustive. I could get behind your suggestion of\ns/You must specify/Specify/, but I also think it's fine to do nothing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 May 2024 14:11:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "Hi,\n\n> Robert Haas <[email protected]> writes:\n> > If the reason that somebody is upset is because it's not technically\n> > true to say that you *must* do one of those things, we could fix that\n> > with \"You must\" -> \"You can\" or with \"You must specify\" -> \"Specify\".\n> > The patch you propose is also not terrible or anything, but it goes in\n> > the direction of listing every alternative, which will become\n> > unpalatable as soon as somebody adds one more way to do it, or maybe\n> > it's unpalatable already.\n>\n> I agree that it's not necessary or particularly useful for this hint\n> to be exhaustive. I could get behind your suggestion of\n> s/You must specify/Specify/, but I also think it's fine to do nothing.\n\nFair enough, let's leave the help message as is then. I closed the\ncorresponding CF entry.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 20 May 2024 12:20:02 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres and --config-file option" }, { "msg_contents": "On Mon, May 20, 2024 at 12:20:02PM +0300, Aleksander Alekseev wrote:\n> Robert Haas <[email protected]> writes:\n>> I agree that it's not necessary or particularly useful for this hint\n>> to be exhaustive. I could get behind your suggestion of\n>> s/You must specify/Specify/, but I also think it's fine to do nothing.\n> \n> Fair enough, let's leave the help message as is then. 
I closed the\n> corresponding CF entry.\n\nI'm OK to leave this be, as well.\n--\nMichael", "msg_date": "Tue, 21 May 2024 13:43:43 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres and --config-file option" } ]
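As a small standalone illustration of the hyphen-to-underscore folding Peter describes in the thread above (the real logic lives in ParseLongOption() in guc.c), the sketch below splits a long option such as --config-file=/path at the '=' and folds '-' to '_' in the name, which is why --config-file and -c config_file=... end up setting the same run-time parameter. The helper is simplified and written only for this example; it is not the server's code.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /*
     * Simplified sketch (not the guc.c implementation): split "name=value" as
     * it appears after a leading "--" and fold '-' to '_' in the name part.
     */
    static void
    split_long_option(const char *arg, char **name, char **value)
    {
        const char *eq = strchr(arg, '=');

        if (eq != NULL)
        {
            size_t      len = (size_t) (eq - arg);

            *name = malloc(len + 1);
            memcpy(*name, arg, len);
            (*name)[len] = '\0';
            *value = strdup(eq + 1);
        }
        else
        {
            *name = strdup(arg);
            *value = NULL;
        }

        /* --config-file therefore resolves to the GUC named config_file */
        for (char *p = *name; *p; p++)
            if (*p == '-')
                *p = '_';
    }

    int
    main(void)
    {
        char       *name;
        char       *value;

        split_long_option("config-file=/etc/postgresql/postgresql.conf",
                          &name, &value);
        printf("GUC \"%s\" = \"%s\"\n", name, value);
        free(name);
        free(value);
        return 0;
    }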
[ { "msg_contents": "Hello all,\n\nCurrently PostgreSQL doesn't support data change delta tables. For example, it doesn't support this type of query:\n\nSELECT * FROM NEW TABLE (\n     INSERT INTO phone_book\n     VALUES ( 'Peter Doe', '555-2323' )\n) AS t\n\nPostgreSQL has RETURNING that provides only a subset of this functionality.\n\nSo I suggest to add support for data change delta tables. Because this feature is more powerful and it is included\nin the SQL Standard.\n\nBest regards, Pavel\n\n\n", "msg_date": "Sat, 13 Jan 2024 13:41:09 +0200", "msg_from": "PavelTurk <[email protected]>", "msg_from_op": true, "msg_subject": "Add support for data change delta tables" }, { "msg_contents": "On 1/13/24 12:41, PavelTurk wrote:\n> Hello all,\n\nHi Pavel!\n\n> Currently PostgreSQL doesn't support data change delta tables. For \n> example, it doesn't support this type of query:\n> \n> SELECT * FROM NEW TABLE (\n>     INSERT INTO phone_book\n>     VALUES ( 'Peter Doe', '555-2323' )\n> ) AS t\n\n\nCorrect. We do not yet support that.\n\n\n> PostgreSQL has RETURNING that provides only a subset of this functionality.\n\n\nI think that because of the way postgres is designed, it will only ever \nprovide a subset of that functionality anyway. Other people know more \nof the internals than I do, but I don't think we can easily distinguish \nbetween NEW TABLE and FINAL TABLE.\n\nUnfortunately, your example does not show how postgres is inadequate.\n\nFor example,\n\n INSERT INTO t1 (c1)\n SELECT c2\n FROM OLD TABLE (\n DELETE FROM t2\n WHERE ...\n ) AS t\n\ncan be written as\n\n WITH\n old_table (c2) AS (\n DELETE FROM t2\n WHERE ...\n RETURNING c2\n )\n INSERT INTO t1 (c1) TABLE old_table\n\n\n> So I suggest to add support for data change delta tables. Because this \n> feature is more powerful and it is included\n> in the SQL Standard.\n\n\nIt is indeed included in the SQL Standard, but is it really more powerful?\n\nConsider this example which is currently not implemented but could be, \nand compare it to the standard where such a query could not be possible \nat all:\n\n UPDATE t\n SET a = ...\n WHERE ...\n RETURNING OLD.a, NEW.a, FINAL.a\n\n\nAll this being said, I would love to see data change delta tables \nimplemented per-spec in PostgreSQL; in addition to improving the \nRETURNING clause. I believe I have heard that we can't just do a \nsyntactic transformation because the trigger execution order is not the \nsame, but I would have to research that.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Tue, 23 Jan 2024 04:29:12 +0100", "msg_from": "Vik Fearing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add support for data change delta tables" } ]
[ { "msg_contents": "Hi.\n\nWhile there are plans to remove the sockets functions (Windows) [1], I\nbelieve it is worth fixing possible current bugs.\n\nIn the pgwin32_socket function (src/backend/port/win32/socket.c), there is\na possible socket leak if the socket cannot be made non-blocking.\n\nTrivial patch attached.\n\nBest regards,\nRanier Vilela\n\n[1] Re: Windows sockets (select missing events?)\n<https://www.postgresql.org/message-id/CA%2BhUKGKSLgxFhSP8%2BdqQqHsuZvrRRU3wZ6ytLOcno-mdGvckHg%40mail.gmail.com>", "msg_date": "Sat, 13 Jan 2024 18:38:13 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Fix a possible socket leak at Windows\n (src/backend/port/win32/socket.c)" }, { "msg_contents": "> On 13 Jan 2024, at 22:38, Ranier Vilela <[email protected]> wrote:\n\n> In the pgwin32_socket function (src/backend/port/win32/socket.c), there is a possible socket leak if the socket cannot be made non-blocking.\n\nI don't know Windows well enough to comment on the implications of not calling\nclosesocket here, but it definitely seems like a prudent thing to do\nbackpatched down to 12. Unless objections I'll do that.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 15 Jan 2024 13:43:03 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix a possible socket leak at Windows\n (src/backend/port/win32/socket.c)" }, { "msg_contents": "Em seg., 15 de jan. de 2024 às 09:43, Daniel Gustafsson <[email protected]>\nescreveu:\n\n> > On 13 Jan 2024, at 22:38, Ranier Vilela <[email protected]> wrote:\n>\n> > In the pgwin32_socket function (src/backend/port/win32/socket.c), there\n> is a possible socket leak if the socket cannot be made non-blocking.\n>\n> I don't know Windows well enough to comment on the implications of not\n> calling\n> closesocket here, but it definitely seems like a prudent thing to do\n> backpatched down to 12. Unless objections I'll do that.\n>\nThanks for taking care of this.\nDo you have plans or should I register for a commitfest?\n\nBest regards,\nRanier Vilela\n\nEm seg., 15 de jan. de 2024 às 09:43, Daniel Gustafsson <[email protected]> escreveu:> On 13 Jan 2024, at 22:38, Ranier Vilela <[email protected]> wrote:\n\n> In the pgwin32_socket function (src/backend/port/win32/socket.c), there is a possible socket leak if the socket cannot be made non-blocking.\n\nI don't know Windows well enough to comment on the implications of not calling\nclosesocket here, but it definitely seems like a prudent thing to do\nbackpatched down to 12. Unless objections I'll do that.Thanks for taking care of this.Do you have plans or should I register for a commitfest? Best regards,Ranier Vilela", "msg_date": "Tue, 16 Jan 2024 17:25:39 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix a possible socket leak at Windows\n (src/backend/port/win32/socket.c)" }, { "msg_contents": "On Tue, Jan 16, 2024 at 05:25:39PM -0300, Ranier Vilela wrote:\n> Thanks for taking care of this.\n\nYeah, that's a good catch.\n\n> Do you have plans or should I register for a commitfest?\n\nDaniel has stated that he would take care of it, so why not letting\nhim a few days? 
I don't think that a CF entry is necessary.\n--\nMichael", "msg_date": "Wed, 17 Jan 2024 15:26:59 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix a possible socket leak at Windows\n (src/backend/port/win32/socket.c)" }, { "msg_contents": "> On 17 Jan 2024, at 07:26, Michael Paquier <[email protected]> wrote:\n> On Tue, Jan 16, 2024 at 05:25:39PM -0300, Ranier Vilela wrote:\n\n>> Do you have plans or should I register for a commitfest?\n> \n> Daniel has stated that he would take care of it, so why not letting\n> him a few days? I don't think that a CF entry is necessary.\n\nIt isn't, I've now committed it backpatched down to 12.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 17 Jan 2024 13:53:59 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix a possible socket leak at Windows\n (src/backend/port/win32/socket.c)" }, { "msg_contents": "Em qua., 17 de jan. de 2024 09:54, Daniel Gustafsson <[email protected]>\nescreveu:\n\n> > On 17 Jan 2024, at 07:26, Michael Paquier <[email protected]> wrote:\n> > On Tue, Jan 16, 2024 at 05:25:39PM -0300, Ranier Vilela wrote:\n>\n> >> Do you have plans or should I register for a commitfest?\n> >\n> > Daniel has stated that he would take care of it, so why not letting\n> > him a few days? I don't think that a CF entry is necessary.\n>\n> It isn't, I've now committed it backpatched down to 12.\n>\nThanks for the commit, Daniel.\n\nBest regards,\nRanier Vilela\n\nEm qua., 17 de jan. de 2024 09:54, Daniel Gustafsson <[email protected]> escreveu:> On 17 Jan 2024, at 07:26, Michael Paquier <[email protected]> wrote:\n> On Tue, Jan 16, 2024 at 05:25:39PM -0300, Ranier Vilela wrote:\n\n>> Do you have plans or should I register for a commitfest?\n> \n> Daniel has stated that he would take care of it, so why not letting\n> him a few days?  I don't think that a CF entry is necessary.\n\nIt isn't, I've now committed it backpatched down to 12.Thanks for the commit, Daniel.Best regards,Ranier Vilela", "msg_date": "Wed, 17 Jan 2024 12:28:28 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix a possible socket leak at Windows\n (src/backend/port/win32/socket.c)" } ]
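The committed change discussed above boils down to closing the freshly created socket before bailing out of the error path. As a standalone Winsock sketch (assuming WSAStartup() has already been called and ws2_32 is linked; this is illustrative, not the exact code in src/backend/port/win32/socket.c), the pattern is:

    #include <winsock2.h>

    /*
     * Illustrative sketch of the fixed pattern: if the socket cannot be made
     * non-blocking, close it before returning so the handle is not leaked.
     */
    static SOCKET
    create_nonblocking_socket(int af, int type, int protocol)
    {
        SOCKET      s;
        u_long      on = 1;

        s = WSASocket(af, type, protocol, NULL, 0, WSA_FLAG_OVERLAPPED);
        if (s == INVALID_SOCKET)
            return INVALID_SOCKET;

        if (ioctlsocket(s, FIONBIO, &on) != 0)
        {
            /* without this closesocket(), the half-set-up socket would leak */
            closesocket(s);
            return INVALID_SOCKET;
        }

        return s;
    }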
[ { "msg_contents": "Hi,\n\nHere's a quick status report after the second week:\nStatus summary:\nstatus | w1 | w2\n-------------------------+-----------+-----\nNeeds review: | 238 | 213\nWaiting on Author: | 44 | 46\nReady for Committer: | 27 | 27\nCommitted: | 36 | 46\nMoved to next CF | 1 | 3\nWithdrawn: | 2 | 4\nReturned with Feedback: | 3 | 12\nRejected: | 1 | 1\nTotal: | 352 | 352\n\nIf you have submitted a patch and it's in \"Waiting for author\" state,\nplease aim to get it to \"Needs review\" state soon if you can, as\nthat's where people are most likely to be looking for things to\nreview.\nI have pinged most threads that are in \"Needs review\" state and don't\napply, compile warning-free, or pass check-world. I'll do some more\nof that sort of thing. I have also returned few patches as the threads\nhave been inactive for a long time. I'll continue on this if the\nthread is still inactive.\nI have sent a private mail through commitfest to patch owners who have\nsubmitted one or more patches but have not picked any of the patches\nfor review.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 15 Jan 2024 11:35:25 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Commitfest 2024-01 second week update" } ]
[ { "msg_contents": "Daniel,\n You have a commit [1] that MIGHT fix this.\nI have a script that recreates the problem, using random data in pg_temp.\nAnd a nested cursor.\n\n It took me a few days to reduce this from actual code that was\nexperiencing this. If I turn off JIT, the problem goes away. (if I don't\nFETCH the first row, the memory loss does not happen. Maybe because\nopening a cursor is more decoration/prepare)\n\n I don't have an easy way to test this script right now against the commit.\nI am hopeful that your fix fixes this.\n\n This was my first OOM issue in PG in 3yrs of working with it.\n\n The problem goes away if the TABLE is analyzed, or JIT is disabled.\n\n The current script, if run, will consume about 25% of my system memory\n(10GB).\nJust call the function below until it dies if that's what you need. The\nonly way to get the memory back down is to close the connection.\n\nSELECT pg_temp.fx(497);\n\nSurprisingly, to me, the report from pg_get_backend_memory_contexts()\ndoesn't really show \"missing memory\", which I thought it would. (FWIW, we\ncaught this with multiple rounds of testing our code, slowing down, then\ncrashing... Is there ANY way to interrogate that we are above X% of system\nmemory so we know to let this backend go?)\n\nIt takes about 18 minutes to run on my 4 CPU VM.\n\nFor now, we are going to add some ANALYZE statements to our code.\nWe will consider disabling JIT.\n\nThanks,\nKirk\n[1] = 2cf50585e54a7b0c6bc62a087c69043ae57e4252\n<https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=2cf50585e54a7b0c6bc62a087c69043ae57e4252>", "msg_date": "Mon, 15 Jan 2024 01:24:31 -0500", "msg_from": "Kirk Wolak <[email protected]>", "msg_from_op": true, "msg_subject": "Oom on temp (un-analyzed table caused by JIT) V16.1" }, { "msg_contents": "Hi\n\npo 15. 1. 2024 v 7:24 odesílatel Kirk Wolak <[email protected]> napsal:\n\n> Daniel,\n> You have a commit [1] that MIGHT fix this.\n> I have a script that recreates the problem, using random data in pg_temp.\n> And a nested cursor.\n>\n> It took me a few days to reduce this from actual code that was\n> experiencing this. If I turn off JIT, the problem goes away. (if I don't\n> FETCH the first row, the memory loss does not happen. Maybe because\n> opening a cursor is more decoration/prepare)\n>\n> I don't have an easy way to test this script right now against the\n> commit.\n> I am hopeful that your fix fixes this.\n>\n> This was my first OOM issue in PG in 3yrs of working with it.\n>\n> The problem goes away if the TABLE is analyzed, or JIT is disabled.\n>\n> The current script, if run, will consume about 25% of my system memory\n> (10GB).\n> Just call the function below until it dies if that's what you need. The\n> only way to get the memory back down is to close the connection.\n>\n> SELECT pg_temp.fx(497);\n>\n> Surprisingly, to me, the report from pg_get_backend_memory_contexts()\n> doesn't really show \"missing memory\", which I thought it would. (FWIW, we\n> caught this with multiple rounds of testing our code, slowing down, then\n> crashing... 
Is there ANY way to interrogate that we are above X% of system\n> memory so we know to let this backend go?)\n>\n\nI wrote simple extension that can show memory allocation from system\nperspective\n\nhttps://github.com/okbob/pgmeminfo\n\n\n\n>\n> It takes about 18 minutes to run on my 4 CPU VM.\n>\n> For now, we are going to add some ANALYZE statements to our code.\n>\n\nremember - don't run anything without VACUUM ANALYZE.\n\nWithout it, the queries can be slow - ANALYZE sets stats, VACUUM prepare\nvisibility maps - without visibility maps index only scan cannot be used\n\nautovacuum doesn't see into opened transactions, and autovacuum is executed\nin 1minute cycles. Autovacuum doesn't see temporary tables too. Temporary\ntables (data) are visible only from owner process.\n\n\n\n\n> We will consider disabling JIT.\n>\n\nHas sense only for bigger analytics queries.\n\nRegards\n\nPavel\n\n\n>\n> Thanks,\n> Kirk\n> [1] = 2cf50585e54a7b0c6bc62a087c69043ae57e4252\n> <https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=2cf50585e54a7b0c6bc62a087c69043ae57e4252>\n>\n>\n>\n>\n>\n\nHipo 15. 1. 2024 v 7:24 odesílatel Kirk Wolak <[email protected]> napsal:Daniel,  You have a commit [1] that MIGHT fix this.I have a script that recreates the problem, using random data in pg_temp.And a nested cursor.  It took me a few days to reduce this from actual code that was experiencing this.  If I turn off JIT, the problem goes away.  (if I don't FETCH the first row, the memory loss does not happen.  Maybe because opening a cursor is more decoration/prepare)  I don't have an easy way to test this script right now against the commit.I am hopeful that your fix fixes this.  This was my first OOM issue in PG in 3yrs of working with it.  The problem goes away if the TABLE is analyzed, or JIT is disabled.  The current script, if run, will consume about 25% of my system memory (10GB).Just call the function below until it dies if that's what you need.  The only way to get the memory back down is to close the connection.SELECT pg_temp.fx(497);Surprisingly, to me, the report from pg_get_backend_memory_contexts() doesn't really show \"missing memory\", which  I thought it would.  (FWIW, we caught this with multiple rounds of testing our code, slowing down, then crashing...  Is there ANY way to interrogate that we are above X% of system memory so we know to let this backend go?)I wrote simple extension that can show memory allocation from system perspectivehttps://github.com/okbob/pgmeminfo It takes about 18 minutes to run on my 4 CPU VM.For now, we are going to add some ANALYZE statements to our code.remember - don't run anything without VACUUM ANALYZE.Without it, the queries can be slow - ANALYZE sets stats, VACUUM prepare visibility maps - without visibility maps index only scan cannot be usedautovacuum doesn't see into opened transactions, and autovacuum is executed in 1minute cycles. Autovacuum doesn't see temporary tables too. Temporary tables (data) are visible only from owner process. 
We will consider disabling JIT.Has sense only for bigger analytics queries.RegardsPavel Thanks,Kirk[1] = 2cf50585e54a7b0c6bc62a087c69043ae57e4252", "msg_date": "Mon, 15 Jan 2024 07:50:55 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1" }, { "msg_contents": "> On 15 Jan 2024, at 07:24, Kirk Wolak <[email protected]> wrote:\n\n> You have a commit [1] that MIGHT fix this.\n> I have a script that recreates the problem, using random data in pg_temp.\n> And a nested cursor.\n\nRunning your reproducer script in a tight loop for a fair bit of time on the\nv16 HEAD I cannot see any memory growth, so it's plausible that the upcoming\n16.2 will work better in your environment.\n\n> It took me a few days to reduce this from actual code that was experiencing this. If I turn off JIT, the problem goes away. (if I don't FETCH the first row, the memory loss does not happen. Maybe because opening a cursor is more decoration/prepare)\n> \n> I don't have an easy way to test this script right now against the commit.\n\nThere are up to date snapshots of the upcoming v16 minor release which might\nmake testing easier than building postgres from source?\n\n https://www.postgresql.org/download/snapshots/\n\nAdmittedly I don't know whether those are built with LLVM support but I think\nthey might be.\n\n> I am hopeful that your fix fixes this.\n\nIt seems likely, so it would be very valuable if you could try running the pre\n-release version of v16 before 16.2 ships to verify this.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 15 Jan 2024 15:03:36 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1" }, { "msg_contents": "On Mon, Jan 15, 2024 at 9:03 AM Daniel Gustafsson <[email protected]> wrote:\n\n> > On 15 Jan 2024, at 07:24, Kirk Wolak <[email protected]> wrote:\n>\n> > You have a commit [1] that MIGHT fix this.\n> > I have a script that recreates the problem, using random data in pg_temp.\n> > And a nested cursor.\n>\n> Running your reproducer script in a tight loop for a fair bit of time on\n> the\n> v16 HEAD I cannot see any memory growth, so it's plausible that the\n> upcoming\n> 16.2 will work better in your environment.\n>\n\nThe script starts by creating 90 Million rows... In my world that part of\nthe script, plus the indexes, etc. Takes about 8-9 minutes.\nAnd has no memory loss.\n\nI used the memory reported by HTOP to track the problem. I Forgot to\nmention this.\nI am curious what you used? (Because it doesn't display symptoms [running\ndog slow] until the backend has about 85% of the machines memory)\n\n\n> There are up to date snapshots of the upcoming v16 minor release which\n> might\n> make testing easier than building postgres from source?\n>\n> https://www.postgresql.org/download/snapshots/\n\n\nThank you. 
I have assigned that task to the guy who maintains our\nVMs/Environments.\nI will report back to you.\n\nKirk\n\nOn Mon, Jan 15, 2024 at 9:03 AM Daniel Gustafsson <[email protected]> wrote:> On 15 Jan 2024, at 07:24, Kirk Wolak <[email protected]> wrote:\n\n>   You have a commit [1] that MIGHT fix this.\n> I have a script that recreates the problem, using random data in pg_temp.\n> And a nested cursor.\n\nRunning your reproducer script in a tight loop for a fair bit of time on the\nv16 HEAD I cannot see any memory growth, so it's plausible that the upcoming\n16.2 will work better in your environment.The script starts by creating 90 Million rows...  In my world that part of the script, plus the indexes, etc.  Takes about 8-9 minutes.And has no memory loss. I used the memory reported by HTOP to track the problem.  I Forgot to mention this.I am curious what you used?  (Because it doesn't display symptoms [running dog slow] until the backend has about 85% of the machines memory) There are up to date snapshots of the upcoming v16 minor release which might\nmake testing easier than building postgres from source?\n\n    https://www.postgresql.org/download/snapshots/Thank you.  I have assigned that task to the guy who maintains our VMs/Environments. I will report back to you.Kirk", "msg_date": "Mon, 15 Jan 2024 10:49:19 -0500", "msg_from": "Kirk Wolak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1" }, { "msg_contents": "po 15. 1. 2024 v 15:03 odesílatel Daniel Gustafsson <[email protected]>\nnapsal:\n\n> > On 15 Jan 2024, at 07:24, Kirk Wolak <[email protected]> wrote:\n>\n> > You have a commit [1] that MIGHT fix this.\n> > I have a script that recreates the problem, using random data in pg_temp.\n> > And a nested cursor.\n>\n> Running your reproducer script in a tight loop for a fair bit of time on\n> the\n> v16 HEAD I cannot see any memory growth, so it's plausible that the\n> upcoming\n> 16.2 will work better in your environment.\n>\n\nyes\n\n\n\n>\n> > It took me a few days to reduce this from actual code that was\n> experiencing this. If I turn off JIT, the problem goes away. (if I don't\n> FETCH the first row, the memory loss does not happen. Maybe because\n> opening a cursor is more decoration/prepare)\n> >\n> > I don't have an easy way to test this script right now against the\n> commit.\n>\n> There are up to date snapshots of the upcoming v16 minor release which\n> might\n> make testing easier than building postgres from source?\n>\n> https://www.postgresql.org/download/snapshots/\n>\n> Admittedly I don't know whether those are built with LLVM support but I\n> think\n> they might be.\n>\n> > I am hopeful that your fix fixes this.\n>\n> It seems likely, so it would be very valuable if you could try running the\n> pre\n> -release version of v16 before 16.2 ships to verify this.\n>\n> --\n> Daniel Gustafsson\n>\n>\n\npo 15. 1. 2024 v 15:03 odesílatel Daniel Gustafsson <[email protected]> napsal:> On 15 Jan 2024, at 07:24, Kirk Wolak <[email protected]> wrote:\n\n>   You have a commit [1] that MIGHT fix this.\n> I have a script that recreates the problem, using random data in pg_temp.\n> And a nested cursor.\n\nRunning your reproducer script in a tight loop for a fair bit of time on the\nv16 HEAD I cannot see any memory growth, so it's plausible that the upcoming\n16.2 will work better in your environment.yes \n\n>   It took me a few days to reduce this from actual code that was experiencing this.  
If I turn off JIT, the problem goes away.  (if I don't FETCH the first row, the memory loss does not happen.  Maybe because opening a cursor is more decoration/prepare)\n> \n>   I don't have an easy way to test this script right now against the commit.\n\nThere are up to date snapshots of the upcoming v16 minor release which might\nmake testing easier than building postgres from source?\n\n    https://www.postgresql.org/download/snapshots/\n\nAdmittedly I don't know whether those are built with LLVM support but I think\nthey might be.\n\n> I am hopeful that your fix fixes this.\n\nIt seems likely, so it would be very valuable if you could try running the pre\n-release version of v16 before 16.2 ships to verify this.\n\n--\nDaniel Gustafsson", "msg_date": "Mon, 15 Jan 2024 16:56:20 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1" }, { "msg_contents": "> On 15 Jan 2024, at 16:49, Kirk Wolak <[email protected]> wrote:\n> On Mon, Jan 15, 2024 at 9:03 AM Daniel Gustafsson <[email protected] <mailto:[email protected]>> wrote:\n\n> The script starts by creating 90 Million rows... In my world that part of the script, plus the indexes, etc. Takes about 8-9 minutes.\n> And has no memory loss. \n\nThat's expected, the memory leak did not affect those operations.\n\n> I used the memory reported by HTOP to track the problem. I Forgot to mention this.\n> I am curious what you used? (Because it doesn't display symptoms [running dog slow] until the backend has about 85% of the machines memory)\n\nI use a combination of tools, in thise case I analyzed a build with Instruments on macOS.\n\n> There are up to date snapshots of the upcoming v16 minor release which might\n> make testing easier than building postgres from source?\n> \n> https://www.postgresql.org/download/snapshots/ <https://www.postgresql.org/download/snapshots/>\n> \n> Thank you. I have assigned that task to the guy who maintains our VMs/Environments. \n> I will report back to you.\n\nGreat!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 15 Jan 2024 18:47:58 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1" }, { "msg_contents": "On Mon, Jan 15, 2024 at 9:03 AM Daniel Gustafsson <[email protected]> wrote:\n\n> > On 15 Jan 2024, at 07:24, Kirk Wolak <[email protected]> wrote:\n>\n> > You have a commit [1] that MIGHT fix this.\n> > I have a script that recreates the problem, using random data in pg_temp.\n> > And a nested cursor.\n>\n> Running your reproducer script in a tight loop for a fair bit of time on\n> the\n> v16 HEAD I cannot see any memory growth, so it's plausible that the\n> upcoming\n> 16.2 will work better in your environment.\n>\n\nOkay, I took the latest source off of git (17devel) and got it to work\nthere in a VM.\n\nIt appears this issue is fixed. 
It must have been related to the issue\noriginally tagged.\n\nThanks!\n\nOn Mon, Jan 15, 2024 at 9:03 AM Daniel Gustafsson <[email protected]> wrote:> On 15 Jan 2024, at 07:24, Kirk Wolak <[email protected]> wrote:\n\n>   You have a commit [1] that MIGHT fix this.\n> I have a script that recreates the problem, using random data in pg_temp.\n> And a nested cursor.\n\nRunning your reproducer script in a tight loop for a fair bit of time on the\nv16 HEAD I cannot see any memory growth, so it's plausible that the upcoming\n16.2 will work better in your environment.Okay, I took the latest source off of git (17devel) and got it to work there in a VM.It appears this issue is fixed.  It must have been related to the issue originally tagged. Thanks!", "msg_date": "Mon, 15 Jan 2024 20:53:55 -0500", "msg_from": "Kirk Wolak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1 [Fixed\n Already]" }, { "msg_contents": "> On 16 Jan 2024, at 02:53, Kirk Wolak <[email protected]> wrote:\n> \n> On Mon, Jan 15, 2024 at 9:03 AM Daniel Gustafsson <[email protected] <mailto:[email protected]>> wrote:\n> > On 15 Jan 2024, at 07:24, Kirk Wolak <[email protected] <mailto:[email protected]>> wrote:\n> \n> > You have a commit [1] that MIGHT fix this.\n> > I have a script that recreates the problem, using random data in pg_temp.\n> > And a nested cursor.\n> \n> Running your reproducer script in a tight loop for a fair bit of time on the\n> v16 HEAD I cannot see any memory growth, so it's plausible that the upcoming\n> 16.2 will work better in your environment.\n> \n> Okay, I took the latest source off of git (17devel) and got it to work there in a VM.\n> \n> It appears this issue is fixed. It must have been related to the issue originally tagged. \n\nThanks for testing and confirming! Testing pre-release builds on real life\nworkloads is invaluable for the development of Postgres so thank you taking the\ntime.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 16 Jan 2024 09:43:50 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1 [Fixed\n Already]" }, { "msg_contents": "On Tue, Jan 16, 2024 at 3:43 AM Daniel Gustafsson <[email protected]> wrote:\n\n> > On 16 Jan 2024, at 02:53, Kirk Wolak <[email protected]> wrote:\n> >\n> > On Mon, Jan 15, 2024 at 9:03 AM Daniel Gustafsson <[email protected]\n> <mailto:[email protected]>> wrote:\n> > > On 15 Jan 2024, at 07:24, Kirk Wolak <[email protected] <mailto:\n> [email protected]>> wrote:\n> >...\n> > Okay, I took the latest source off of git (17devel) and got it to work\n> there in a VM.\n> >\n> > It appears this issue is fixed. It must have been related to the issue\n> originally tagged.\n>\n> Thanks for testing and confirming! Testing pre-release builds on real life\n> workloads is invaluable for the development of Postgres so thank you\n> taking the\n> time.\n\nDaniel,\n I did a little more checking and the reason I did not see the link MIGHT\nbe because EXPLAIN did not show a JIT attempt.\nI tried to use settings that FORCE a JIT... But to no avail.\n\n I am now concerned that the problem is more hidden in my use case.\nMeaning I CANNOT conclude it is fixed.\nBut I know of NO WAY to force a JIT (I lowered costs to 1, etc. 
).\n\n You don't know a way to force at least the JIT analysis to happen?\n(because I already knew if JIT was off, the leak wouldn't happen).\n\nThanks,\n\nKirk Out!\nPS: I assume there is no pg_jit(1) function I can call. LOL\n\nOn Tue, Jan 16, 2024 at 3:43 AM Daniel Gustafsson <[email protected]> wrote:> On 16 Jan 2024, at 02:53, Kirk Wolak <[email protected]> wrote:\n> \n> On Mon, Jan 15, 2024 at 9:03 AM Daniel Gustafsson <[email protected] <mailto:[email protected]>> wrote:\n> > On 15 Jan 2024, at 07:24, Kirk Wolak <[email protected] <mailto:[email protected]>> wrote:\n>...> Okay, I took the latest source off of git (17devel) and got it to work there in a VM.\n> \n> It appears this issue is fixed.  It must have been related to the issue originally tagged. \n\nThanks for testing and confirming!  Testing pre-release builds on real life\nworkloads is invaluable for the development of Postgres so thank you taking the\ntime.Daniel,  I did a little more checking and the reason I did not see the link MIGHT be because EXPLAIN did not show a JIT attempt.I tried to use settings that FORCE a JIT...  But to no avail.  I am now concerned that the problem is more hidden in my use case.  Meaning I CANNOT conclude it is fixed.But I know of NO WAY to force a JIT (I lowered costs to 1, etc.  ).  You don't know a way to force at least the JIT analysis to happen?  (because I already knew if JIT was off, the leak wouldn't happen). Thanks,Kirk Out! PS: I assume there is no pg_jit(1) function I can call. LOL", "msg_date": "Thu, 18 Jan 2024 19:50:27 -0500", "msg_from": "Kirk Wolak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1 [Fixed\n Already]" }, { "msg_contents": "On Thu, 2024-01-18 at 19:50 -0500, Kirk Wolak wrote:\n>   I did a little more checking and the reason I did not see the link MIGHT be because EXPLAIN did not show a JIT attempt.\n> I tried to use settings that FORCE a JIT...  But to no avail.\n> \n>   I am now concerned that the problem is more hidden in my use case.  Meaning I CANNOT conclude it is fixed.\n> But I know of NO WAY to force a JIT (I lowered costs to 1, etc.  ).\n> \n>   You don't know a way to force at least the JIT analysis to happen?  (because I already knew if JIT was off, the leak wouldn't happen). 
\n\nIf you set the limits to 0, you can trigger it easily:\n\nSET jit = on;\nSET jit_above_cost = 0;\nSET jit_inline_above_cost = 0;\nSET jit_optimize_above_cost = 0;\n\nEXPLAIN (ANALYZE) SELECT count(*) FROM foo;\n QUERY PLAN \n══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════\n Finalize Aggregate (cost=58889.84..58889.85 rows=1 width=8) (actual time=400.462..418.214 rows=1 loops=1)\n -> Gather (cost=58889.62..58889.83 rows=2 width=8) (actual time=400.300..418.190 rows=3 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Partial Aggregate (cost=57889.62..57889.64 rows=1 width=8) (actual time=384.876..384.878 rows=1 loops=3)\n -> Parallel Seq Scan on foo (cost=0.00..52681.30 rows=2083330 width=0) (actual time=0.028..168.510 rows=1666667 loops=3)\n Planning Time: 0.133 ms\n JIT:\n Functions: 8\n Options: Inlining true, Optimization true, Expressions true, Deforming true\n Timing: Generation 1.038 ms, Inlining 279.779 ms, Optimization 38.395 ms, Emission 73.105 ms, Total 392.316 ms\n Execution Time: 478.257 ms\n\nYours,\nLaurenz Albe\n", "msg_date": "Fri, 19 Jan 2024 10:20:50 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1 [Fixed\n Already]" }, { "msg_contents": "> On 19 Jan 2024, at 01:50, Kirk Wolak <[email protected]> wrote:\n\n> I did a little more checking and the reason I did not see the link MIGHT be because EXPLAIN did not show a JIT attempt.\n> I tried to use settings that FORCE a JIT... But to no avail.\n\nAre you sure you are running a JIT enabled server? Did you compile it yourself\nor install a snapshot?\n\n> You don't know a way to force at least the JIT analysis to happen? (because I already knew if JIT was off, the leak wouldn't happen). \n\nIf you set jit_above_cost=0 then postgres will compile a JIT enabled execution\ntree. This does bring up an interesting point, I don't think there is a way\nfor a user to know whether the server is jit enabled or not (apart from\nexplaining a query with costs adjusted but that's not all that userfriendly).\nMaybe we need a way to reliably tell if JIT is active or not.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 19 Jan 2024 10:48:12 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1 [Fixed\n Already]" }, { "msg_contents": "Hi,\n\nOn Fri, Jan 19, 2024 at 10:48:12AM +0100, Daniel Gustafsson wrote:\n> This does bring up an interesting point, I don't think there is a way\n> for a user to know whether the server is jit enabled or not (apart\n> from explaining a query with costs adjusted but that's not all that\n> userfriendly). 
Maybe we need a way to reliably tell if JIT is active\n> or not.\n\nI thought this is what pg_jit_available() is for?\n\npostgres=> SHOW jit;\n jit \n-----\n on\n(1 Zeile)\n\npostgres=> SELECT pg_jit_available();\n pg_jit_available \n------------------\n f\n(1 Zeile)\n\n\nMichael\n\n\n", "msg_date": "Fri, 19 Jan 2024 11:04:49 +0100", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1 [Fixed\n Already]" }, { "msg_contents": "> On 19 Jan 2024, at 11:04, Michael Banck <[email protected]> wrote:\n> \n> Hi,\n> \n> On Fri, Jan 19, 2024 at 10:48:12AM +0100, Daniel Gustafsson wrote:\n>> This does bring up an interesting point, I don't think there is a way\n>> for a user to know whether the server is jit enabled or not (apart\n>> from explaining a query with costs adjusted but that's not all that\n>> userfriendly). Maybe we need a way to reliably tell if JIT is active\n>> or not.\n> \n> I thought this is what pg_jit_available() is for?\n\nAh, it is, I completely forgot we had that one. Thanks!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 19 Jan 2024 11:06:42 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1 [Fixed\n Already]" }, { "msg_contents": "On Fri, Jan 19, 2024 at 4:20 AM Laurenz Albe <[email protected]>\nwrote:\n\n> On Thu, 2024-01-18 at 19:50 -0500, Kirk Wolak wrote:\n> > I did a little more checking and the reason I did not see the link\n> MIGHT be because EXPLAIN did not show a JIT attempt.\n> > I tried to use settings that FORCE a JIT... But to no avail.\n> >\n> > I am now concerned that the problem is more hidden in my use case.\n> Meaning I CANNOT conclude it is fixed.\n> > But I know of NO WAY to force a JIT (I lowered costs to 1, etc. ).\n> >\n> > You don't know a way to force at least the JIT analysis to happen?\n> (because I already knew if JIT was off, the leak wouldn't happen).\n>\n> If you set the limits to 0, you can trigger it easily:\n>\n> SET jit = on;\n> SET jit_above_cost = 0;\n> SET jit_inline_above_cost = 0;\n> SET jit_optimize_above_cost = 0;\n>\n\nOkay,\n I Did exactly this (just now). 
But the EXPLAIN does not contain the JIT?\n\n-------------------------------------------------------------------------------\n Sort (cost=5458075.88..5477861.00 rows=7914047 width=12)\n Sort Key: seid\n -> HashAggregate (cost=3910139.62..4280784.00 rows=7914047 width=12)\n Group Key: seid, fr_field_name, st_field_name\n Planned Partitions: 128\n -> Seq Scan on parts (cost=0.00..1923249.00 rows=29850000\nwidth=12)\n Filter: ((seid <> 497) AND ((partnum)::text >= '1'::text))\n(7 rows)\n\n From a FUTURE email, I noticed pg_jit_available()\n\nand it's set to f??\n\nOkay, so does this require a special BUILD command?\n\npostgres=# select pg_jit_available();\n pg_jit_available\n------------------\n f\n(1 row)\n\npostgres=# \\dconfig *jit*\n List of configuration parameters\n Parameter | Value\n-------------------------+---------\n jit | on\n jit_above_cost | 100000\n jit_debugging_support | off\n jit_dump_bitcode | off\n jit_expressions | on\n jit_inline_above_cost | 500000\n jit_optimize_above_cost | 500000\n jit_profiling_support | off\n jit_provider | llvmjit\n jit_tuple_deforming | on\n(10 rows)\n\nOn Fri, Jan 19, 2024 at 4:20 AM Laurenz Albe <[email protected]> wrote:On Thu, 2024-01-18 at 19:50 -0500, Kirk Wolak wrote:\n>   I did a little more checking and the reason I did not see the link MIGHT be because EXPLAIN did not show a JIT attempt.\n> I tried to use settings that FORCE a JIT...  But to no avail.\n> \n>   I am now concerned that the problem is more hidden in my use case.  Meaning I CANNOT conclude it is fixed.\n> But I know of NO WAY to force a JIT (I lowered costs to 1, etc.  ).\n> \n>   You don't know a way to force at least the JIT analysis to happen?  (because I already knew if JIT was off, the leak wouldn't happen). \n\nIf you set the limits to 0, you can trigger it easily:\n\nSET jit = on;\nSET jit_above_cost = 0;\nSET jit_inline_above_cost = 0;\nSET jit_optimize_above_cost = 0;Okay,  I Did exactly this (just now).  
But the EXPLAIN does not contain the JIT?------------------------------------------------------------------------------- Sort  (cost=5458075.88..5477861.00 rows=7914047 width=12)   Sort Key: seid   ->  HashAggregate  (cost=3910139.62..4280784.00 rows=7914047 width=12)         Group Key: seid, fr_field_name, st_field_name         Planned Partitions: 128         ->  Seq Scan on parts  (cost=0.00..1923249.00 rows=29850000 width=12)               Filter: ((seid <> 497) AND ((partnum)::text >= '1'::text))(7 rows) From a FUTURE email, I noticed pg_jit_available()and it's set to f??Okay, so does this require a special BUILD command?postgres=# select pg_jit_available(); pg_jit_available ------------------ f(1 row)postgres=# \\dconfig *jit* List of configuration parameters        Parameter        |  Value  -------------------------+--------- jit                     | on jit_above_cost          | 100000 jit_debugging_support   | off jit_dump_bitcode        | off jit_expressions         | on jit_inline_above_cost   | 500000 jit_optimize_above_cost | 500000 jit_profiling_support   | off jit_provider            | llvmjit jit_tuple_deforming     | on(10 rows)", "msg_date": "Fri, 19 Jan 2024 17:09:02 -0500", "msg_from": "Kirk Wolak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1 [Fixed\n Already]" }, { "msg_contents": "> On 19 Jan 2024, at 23:09, Kirk Wolak <[email protected]> wrote:\n\n> From a FUTURE email, I noticed pg_jit_available() and it's set to f??\n\nRight, then this installation does not contain the necessary library to JIT\ncompile the query.\n\n> Okay, so does this require a special BUILD command?\n\nYes, it requires that you compile with --with-llvm. If you don't have llvm in\nthe PATH you might need to set LLVM_CONFIG to point to your llvm-config binary.\nWith autotools that would be something like:\n\n ./configure <other params> --with-llvm LLVM_CONFIG=/path/to/llvm-config\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sat, 20 Jan 2024 01:03:24 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1 [Fixed\n Already]" }, { "msg_contents": "On Fri, Jan 19, 2024 at 7:03 PM Daniel Gustafsson <[email protected]> wrote:\n\n> > On 19 Jan 2024, at 23:09, Kirk Wolak <[email protected]> wrote:\n>\n> > From a FUTURE email, I noticed pg_jit_available() and it's set to f??\n>\n> Right, then this installation does not contain the necessary library to JIT\n> compile the query.\n>\n> > Okay, so does this require a special BUILD command?\n>\n> Yes, it requires that you compile with --with-llvm. If you don't have\n> llvm in\n> the PATH you might need to set LLVM_CONFIG to point to your llvm-config\n> binary.\n> With autotools that would be something like:\n>\n> ./configure <other params> --with-llvm LLVM_CONFIG=/path/to/llvm-config\n>\n> --\n> Daniel Gustafsson\n>\n\nThank you, that made it possible to build and run...\nUNFORTUNATELY this has a CLEAR memory leak (visible in htop)\nI am watching it already consuming 6% of my system memory.\n\nI am re-attaching my script. WHICH includes the settings to FORCE JIT.\nIt also does an EXPLAIN so you can verify that JIT is on (this is what I\nadded/noticed!)\nAnd it takes over 20 minutes to get this far. It's still running.\nI am re-attaching the script. 
(as I tweaked it).\n\nThis creates 90 million rows of data, so it takes a while.\nI BELIEVE that it consumes far less memory if you do not fetch any rows (I\nhad problems reproducing it if no rows were fetched).\nSo, this may be beyond the planning stages.\n\nThanks,\n\nKirk Out!", "msg_date": "Mon, 22 Jan 2024 01:30:13 -0500", "msg_from": "Kirk Wolak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1 [Fixed\n Already]" }, { "msg_contents": "On Mon, Jan 22, 2024 at 1:30 AM Kirk Wolak <[email protected]> wrote:\n\n> On Fri, Jan 19, 2024 at 7:03 PM Daniel Gustafsson <[email protected]> wrote:\n>\n>> > On 19 Jan 2024, at 23:09, Kirk Wolak <[email protected]> wrote:\n>>\n>> ...\n>> ./configure <other params> --with-llvm\n>> LLVM_CONFIG=/path/to/llvm-config\n>>\n>> --\n>> Daniel Gustafsson\n>>\n>\n> Thank you, that made it possible to build and run...\n> UNFORTUNATELY this has a CLEAR memory leak (visible in htop)\n> I am watching it already consuming 6% of my system memory.\n>\n>\nDaniel,\n In the previous email, I made note that once the JIT was enabled, the\nproblem exists in 17Devel.\nI re-included my script, which forced the JIT to be used...\n\n I attached an updated script that forced the settings.\nBut this is still leaking memory (outside of the\npg_backend_memory_context() calls).\nProbably because it's at the LLVM level? And it does NOT happen from\nplanning/opening the query. It appears I have to fetch the rows to see the\nproblem.\n\nThanks in advance. Let me know if I should be doing something different?\n\nKirk Out!\nPS: I was wondering if we had a function that showed total memory of the\nbackend. For helping to determine if we might have a 3rd party leak?\n[increase in total memory consumed not noticed by\npg_backend_memory_contexts)\n\n#include \"postgres.h\"\n#include <sys/resource.h>\n\nPG_MODULE_MAGIC;\n\nPG_FUNCTION_INFO_V1(pg_backend_memory_footprint);\n\nDatum pg_backend_memory_footprint(PG_FUNCTION_ARGS) {\n long memory_usage_bytes = 0;\n struct rusage usage;\n\n getrusage(RUSAGE_SELF, &usage);\n memory_usage_bytes = usage.ru_maxrss * 1024;\n\n PG_RETURN_INT64(memory_usage_bytes);\n}\n\nOn Mon, Jan 22, 2024 at 1:30 AM Kirk Wolak <[email protected]> wrote:On Fri, Jan 19, 2024 at 7:03 PM Daniel Gustafsson <[email protected]> wrote:> On 19 Jan 2024, at 23:09, Kirk Wolak <[email protected]> wrote:\n...\n    ./configure <other params> --with-llvm LLVM_CONFIG=/path/to/llvm-config\n\n--\nDaniel GustafssonThank you, that made it possible to build and run...UNFORTUNATELY this has a CLEAR memory leak (visible in htop)I am watching it already consuming 6% of my system memory.Daniel,  In the previous email, I made note that once the JIT was enabled, the problem exists in 17Devel.I re-included my script, which forced the JIT to be used...  I attached an updated script that forced the settings.But this is still leaking memory (outside of the pg_backend_memory_context() calls).Probably because it's at the LLVM level?  And it does NOT happen from planning/opening the query.  It appears I have to fetch the rows to see the problem.Thanks in advance.  Let me know if I should be doing something different?Kirk Out!PS: I was wondering if we had a function that showed total memory of the backend.  For helping to determine if we might have a 3rd party leak? 
[increase in total memory consumed not noticed by pg_backend_memory_contexts)#include \"postgres.h\"#include <sys/resource.h>PG_MODULE_MAGIC;PG_FUNCTION_INFO_V1(pg_backend_memory_footprint);Datum pg_backend_memory_footprint(PG_FUNCTION_ARGS) {    long memory_usage_bytes = 0;    struct rusage usage;    getrusage(RUSAGE_SELF, &usage);    memory_usage_bytes = usage.ru_maxrss * 1024;    PG_RETURN_INT64(memory_usage_bytes);}", "msg_date": "Wed, 24 Jan 2024 14:50:52 -0500", "msg_from": "Kirk Wolak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1 [ NOT Fixed ]" }, { "msg_contents": "On Thu, Jan 25, 2024 at 8:51 AM Kirk Wolak <[email protected]> wrote:\n> getrusage(RUSAGE_SELF, &usage);\n> memory_usage_bytes = usage.ru_maxrss * 1024;\n\nFWIW log_statement_stats = on shows that in the logs. See ShowUsage()\nin postgres.c.\n\n\n", "msg_date": "Thu, 25 Jan 2024 10:16:02 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1 [ NOT Fixed ]" }, { "msg_contents": "On Wed, Jan 24, 2024 at 4:16 PM Thomas Munro <[email protected]> wrote:\n\n> On Thu, Jan 25, 2024 at 8:51 AM Kirk Wolak <[email protected]> wrote:\n> > getrusage(RUSAGE_SELF, &usage);\n> > memory_usage_bytes = usage.ru_maxrss * 1024;\n>\n> FWIW log_statement_stats = on shows that in the logs. See ShowUsage()\n> in postgres.c.\n>\n\nThank you for this, here is the *TERMINAL *(Below is the tail of the log).\nNotice that the pg_backend_memory_contexts does NOT show the memory\nconsumed.\nBut your logging sure did! (I wonder if I enable logging during planning,\nbut there is like 82,000 cursors being opened... (This removed the FETCH\nand still leaks)\n\n\n7:01:08 kwolak@postgres= # *select pg_temp.fx(497);*\nNOTICE: (\"9848 kB\",\"10 MB\",\"638 kB\")\nNOTICE: -----------after close, Count a: 82636, count b: 82636\nNOTICE: (\"9997 kB\",\"10 MB\",\"648 kB\")\n fx\n----\n\n(1 row)\n\nTime: 525870.117 ms (08:45.870)\n\n\n*Tail*:\n\n024-01-24 17:01:08.752 EST [28804] DETAIL: ! system usage stats:\n ! 0.001792 s user, 0.000000 s system, 0.005349 s elapsed\n ! [560.535969 s user, 31.441656 s system total]\n ! 185300 kB max resident size\n ! 232/0 [29219648/54937864] filesystem blocks in/out\n ! 0/25 [0/1016519] page faults/reclaims, 0 [0] swaps\n ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n ! 10/1 [62671/9660] voluntary/involuntary context switches\n2024-01-24 17:01:08.752 EST [28804] STATEMENT: explain SELECT DISTINCT\nseid, fr_field_name, st_field_name\n FROM pg_temp.parts\n WHERE seid <> 497 AND partnum >= '1'\n ORDER BY seid;\n2024-01-24 17:01:08.759 EST [28804] LOG: QUERY STATISTICS\n2024-01-24 17:01:08.759 EST [28804] DETAIL: ! system usage stats:\n ! 0.006207 s user, 0.000092 s system, 0.006306 s elapsed\n ! [560.542262 s user, 31.441748 s system total]\n !* 185300 kB max resident size*\n ! 0/0 [29219648/54937864] filesystem blocks in/out\n ! 0/4 [0/1016523] page faults/reclaims, 0 [0] swaps\n ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n ! 
0/1 [62672/9661] voluntary/involuntary context switches\n2024-01-24 17:01:08.759 EST [28804] STATEMENT: SELECT 'pg_temp.fx(497); --\nNot run, do \\dt+ parts';\n2024-01-24 17:04:30.844 EST [28746] LOG: checkpoint starting: time\n2024-01-24 17:04:32.931 EST [28746] LOG: checkpoint complete: wrote 21\nbuffers (0.1%); 0 WAL file(s) added, 0 removed, 0 recycled; write=2.008 s,\nsync=0.006 s, total=2.087 s; sync files=15, longest=0.001 s, average=0.001\ns; distance=98 kB, estimate=134 kB; lsn=0/16304D8, redo lsn=0/1630480\n2024-01-24 17:11:06.350 EST [28804] LOG: QUERY STATISTICS\n2024-01-24 17:11:06.350 EST [28804] DETAIL: ! system usage stats:\n ! 515.952870 s user, 6.688389 s system, 525.869933 s elapsed\n ! [1076.495280 s user, 38.130145 s system total]\n !* 708104 kB max resident size*\n ! 370000/3840 [29589648/54941712] filesystem blocks in/out\n ! 0/327338 [0/1343861] page faults/reclaims, 0 [0] swaps\n ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n ! 22001/5216 [84675/14878] voluntary/involuntary context\nswitches\n2024-01-24 17:11:06.350 EST [28804] STATEMENT: * select pg_temp.fx(497);*\n2024-01-24 17:12:16.162 EST [28804] LOG: QUERY STATISTICS\n2024-01-24 17:12:16.162 EST [28804] DETAIL: ! system usage stats:\n ! 1.130029 s user, 0.007727 s system, 1.157486 s elapsed\n ! [1077.625396 s user, 38.137921 s system total]\n ! *708104 kB max resident size*\n ! 992/0 [29590640/54941720] filesystem blocks in/out\n ! 3/41 [3/1343902] page faults/reclaims, 0 [0] swaps\n ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n ! 9/68 [84685/14946] voluntary/involuntary context switches\n2024-01-24 17:12:16.162 EST [28804] STATEMENT: select now();\n2024-01-24 17:12:30.944 EST [28804] LOG: QUERY STATISTICS\n2024-01-24 17:12:30.944 EST [28804] DETAIL: ! system usage stats:\n ! 0.004561 s user, 0.000019 s system, 0.004580 s elapsed\n ! [1077.630064 s user, 38.137944 s system total]\n ! *708104 kB max resident size*\n ! 0/0 [29590640/54941728] filesystem blocks in/out\n ! 0/4 [3/1343906] page faults/reclaims, 0 [0] swaps\n ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n ! 0/0 [84686/14947] voluntary/involuntary context switches\n2024-01-24 17:12:30.944 EST [28804] STATEMENT: select now();\n\nOn Wed, Jan 24, 2024 at 4:16 PM Thomas Munro <[email protected]> wrote:On Thu, Jan 25, 2024 at 8:51 AM Kirk Wolak <[email protected]> wrote:\n>     getrusage(RUSAGE_SELF, &usage);\n>     memory_usage_bytes = usage.ru_maxrss * 1024;\n\nFWIW log_statement_stats = on shows that in the logs.  See ShowUsage()\nin postgres.c.Thank you for this, here is the TERMINAL (Below is the tail of the log).  Notice that the pg_backend_memory_contexts does NOT show the memory consumed.But your logging sure did!  (I wonder if I enable logging during planning, but there is like 82,000 cursors being opened... (This removed the FETCH and still leaks)7:01:08 kwolak@postgres= # select pg_temp.fx(497);NOTICE:  (\"9848 kB\",\"10 MB\",\"638 kB\")NOTICE:  -----------after close, Count a: 82636, count b: 82636NOTICE:  (\"9997 kB\",\"10 MB\",\"648 kB\") fx ---- (1 row)Time: 525870.117 ms (08:45.870)Tail:024-01-24 17:01:08.752 EST [28804] DETAIL:  ! system usage stats:        !       0.001792 s user, 0.000000 s system, 0.005349 s elapsed        !       [560.535969 s user, 31.441656 s system total]        !       185300 kB max resident size        !       232/0 [29219648/54937864] filesystem blocks in/out        !       0/25 [0/1016519] page faults/reclaims, 0 [0] swaps        !       
0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent        !       10/1 [62671/9660] voluntary/involuntary context switches2024-01-24 17:01:08.752 EST [28804] STATEMENT:  explain SELECT DISTINCT seid, fr_field_name, st_field_name                  FROM pg_temp.parts                 WHERE seid <> 497 AND partnum >= '1'                 ORDER BY seid;2024-01-24 17:01:08.759 EST [28804] LOG:  QUERY STATISTICS2024-01-24 17:01:08.759 EST [28804] DETAIL:  ! system usage stats:        !       0.006207 s user, 0.000092 s system, 0.006306 s elapsed        !       [560.542262 s user, 31.441748 s system total]        !       185300 kB max resident size        !       0/0 [29219648/54937864] filesystem blocks in/out        !       0/4 [0/1016523] page faults/reclaims, 0 [0] swaps        !       0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent        !       0/1 [62672/9661] voluntary/involuntary context switches2024-01-24 17:01:08.759 EST [28804] STATEMENT:  SELECT 'pg_temp.fx(497); -- Not run, do \\dt+ parts';2024-01-24 17:04:30.844 EST [28746] LOG:  checkpoint starting: time2024-01-24 17:04:32.931 EST [28746] LOG:  checkpoint complete: wrote 21 buffers (0.1%); 0 WAL file(s) added, 0 removed, 0 recycled; write=2.008 s, sync=0.006 s, total=2.087 s; sync files=15, longest=0.001 s, average=0.001 s; distance=98 kB, estimate=134 kB; lsn=0/16304D8, redo lsn=0/16304802024-01-24 17:11:06.350 EST [28804] LOG:  QUERY STATISTICS2024-01-24 17:11:06.350 EST [28804] DETAIL:  ! system usage stats:        !       515.952870 s user, 6.688389 s system, 525.869933 s elapsed        !       [1076.495280 s user, 38.130145 s system total]        !       708104 kB max resident size        !       370000/3840 [29589648/54941712] filesystem blocks in/out        !       0/327338 [0/1343861] page faults/reclaims, 0 [0] swaps        !       0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent        !       22001/5216 [84675/14878] voluntary/involuntary context switches2024-01-24 17:11:06.350 EST [28804] STATEMENT:  select pg_temp.fx(497);2024-01-24 17:12:16.162 EST [28804] LOG:  QUERY STATISTICS2024-01-24 17:12:16.162 EST [28804] DETAIL:  ! system usage stats:        !       1.130029 s user, 0.007727 s system, 1.157486 s elapsed        !       [1077.625396 s user, 38.137921 s system total]        !       708104 kB max resident size        !       992/0 [29590640/54941720] filesystem blocks in/out        !       3/41 [3/1343902] page faults/reclaims, 0 [0] swaps        !       0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent        !       9/68 [84685/14946] voluntary/involuntary context switches2024-01-24 17:12:16.162 EST [28804] STATEMENT:  select now();2024-01-24 17:12:30.944 EST [28804] LOG:  QUERY STATISTICS2024-01-24 17:12:30.944 EST [28804] DETAIL:  ! system usage stats:        !       0.004561 s user, 0.000019 s system, 0.004580 s elapsed        !       [1077.630064 s user, 38.137944 s system total]        !       708104 kB max resident size        !       0/0 [29590640/54941728] filesystem blocks in/out        !       0/4 [3/1343906] page faults/reclaims, 0 [0] swaps        !       0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent        !       
0/0 [84686/14947] voluntary/involuntary context switches2024-01-24 17:12:30.944 EST [28804] STATEMENT:  select now();", "msg_date": "Wed, 24 Jan 2024 17:26:03 -0500", "msg_from": "Kirk Wolak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1 [ NOT Fixed ]" }, { "msg_contents": "Hi,\n\nOn Wed, Jan 24, 2024 at 02:50:52PM -0500, Kirk Wolak wrote:\n> On Mon, Jan 22, 2024 at 1:30 AM Kirk Wolak <[email protected]> wrote:\n> > On Fri, Jan 19, 2024 at 7:03 PM Daniel Gustafsson <[email protected]> wrote:\n> >> > On 19 Jan 2024, at 23:09, Kirk Wolak <[email protected]> wrote:\n> > Thank you, that made it possible to build and run...\n> > UNFORTUNATELY this has a CLEAR memory leak (visible in htop)\n> > I am watching it already consuming 6% of my system memory.\n> >\n> Daniel,\n> In the previous email, I made note that once the JIT was enabled, the\n> problem exists in 17Devel.\n> I re-included my script, which forced the JIT to be used...\n> \n> I attached an updated script that forced the settings.\n> But this is still leaking memory (outside of the\n> pg_backend_memory_context() calls).\n> Probably because it's at the LLVM level? And it does NOT happen from\n> planning/opening the query. It appears I have to fetch the rows to\n> see the problem.\n\nI had a look at this (and blogged about it here[1]) and was also\nwondering what was going on with 17devel and the recent back-branch\nreleases, cause I could also reproduce those continuing memory leaks.\n\nAdding some debug logging when llvm_inline_reset_caches() is called\nsolves the mystery: as you are calling a function, the fix that is in\n17devel and the back-branch releases is not applicable and only after\nthe function returns llvm_inline_reset_caches() is being called (as\nllvm_jit_context_in_use_count is greater than zero, presumably, so it\nnever reaches the call-site of llvm_inline_reset_caches()).\n\nIf you instead run your SQL in a DO-loop (as in the blog post) and not\nas a PL/PgSQL function, you should see that it no longer leaks. 
This\nmight be obvious to some (and Andres mentioned it in \nhttps://www.postgresql.org/message-id/20210421002056.gjd6rpe6toumiqd6%40alap3.anarazel.de)\nbut it took me a while to figure out/find.\n\n\nMichael\n\n[1] https://www.credativ.de/en/blog/postgresql-en/quick-benchmark-postgresql-2024q1-release-performance-improvements/\n\n\n", "msg_date": "Thu, 22 Feb 2024 22:49:20 +0100", "msg_from": "Michael Banck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1 [ NOT Fixed ]" }, { "msg_contents": "On Thu, Feb 22, 2024 at 4:49 PM Michael Banck <[email protected]> wrote:\n\n> Hi,\n>\n> On Wed, Jan 24, 2024 at 02:50:52PM -0500, Kirk Wolak wrote:\n> > On Mon, Jan 22, 2024 at 1:30 AM Kirk Wolak <[email protected]> wrote:\n> > > On Fri, Jan 19, 2024 at 7:03 PM Daniel Gustafsson <[email protected]>\n> wrote:\n> > >> > On 19 Jan 2024, at 23:09, Kirk Wolak <[email protected]> wrote:\n> > > Thank you, that made it possible to build and run...\n> > > UNFORTUNATELY this has a CLEAR memory leak (visible in htop)\n> > > I am watching it already consuming 6% of my system memory.\n> ...\n> I had a look at this (and blogged about it here[1]) and was also\n> wondering what was going on with 17devel and the recent back-branch\n> releases, cause I could also reproduce those continuing memory leaks.\n>\n> Adding some debug logging when llvm_inline_reset_caches() is called\n> solves the mystery: as you are calling a function, the fix that is in\n> 17devel and the back-branch releases is not applicable and only after\n> the function returns llvm_inline_reset_caches() is being called (as\n> llvm_jit_context_in_use_count is greater than zero, presumably, so it\n> never reaches the call-site of llvm_inline_reset_caches()).\n>\n> If you instead run your SQL in a DO-loop (as in the blog post) and not\n> as a PL/PgSQL function, you should see that it no longer leaks. This\n> might be obvious to some (and Andres mentioned it in\n>\n> https://www.postgresql.org/message-id/20210421002056.gjd6rpe6toumiqd6%40alap3.anarazel.de\n> )\n> but it took me a while to figure out/find.\n>\n> Thanks for confirming. Inside a routine is required. But disabling JIT\nwas good enough for us.\n\nCurious if there was another way to end up calling the cleanup? 
But it had\nthat \"feel\" that the context of the cleanup was\nbeing lost between iterations of the loop?\n\nOn Thu, Feb 22, 2024 at 4:49 PM Michael Banck <[email protected]> wrote:Hi,\n\nOn Wed, Jan 24, 2024 at 02:50:52PM -0500, Kirk Wolak wrote:\n> On Mon, Jan 22, 2024 at 1:30 AM Kirk Wolak <[email protected]> wrote:\n> > On Fri, Jan 19, 2024 at 7:03 PM Daniel Gustafsson <[email protected]> wrote:\n> >> > On 19 Jan 2024, at 23:09, Kirk Wolak <[email protected]> wrote:\n> > Thank you, that made it possible to build and run...\n> > UNFORTUNATELY this has a CLEAR memory leak (visible in htop)\n> > I am watching it already consuming 6% of my system memory....\nI had a look at this (and blogged about it here[1]) and was also\nwondering what was going on with 17devel and the recent back-branch\nreleases, cause I could also reproduce those continuing memory leaks.\n\nAdding some debug logging when llvm_inline_reset_caches() is called\nsolves the mystery: as you are calling a function, the fix that is in\n17devel and the back-branch releases is not applicable and only after\nthe function returns llvm_inline_reset_caches() is being called (as\nllvm_jit_context_in_use_count is greater than zero, presumably, so it\nnever reaches the call-site of llvm_inline_reset_caches()).\n\nIf you instead run your SQL in a DO-loop (as in the blog post) and not\nas a PL/PgSQL function, you should see that it no longer leaks. This\nmight be obvious to some (and Andres mentioned it in \nhttps://www.postgresql.org/message-id/20210421002056.gjd6rpe6toumiqd6%40alap3.anarazel.de)\nbut it took me a while to figure out/find.Thanks for confirming.  Inside a routine is required.  But disabling JIT was good enough for us.Curious if there was another way to end up calling the cleanup?  But it had that \"feel\" that the context of the cleanup wasbeing lost between iterations of the loop?", "msg_date": "Thu, 18 Apr 2024 14:28:08 -0400", "msg_from": "Kirk Wolak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Oom on temp (un-analyzed table caused by JIT) V16.1 [ NOT Fixed ]" } ]
[ { "msg_contents": "Hi hackers,\n\nWhile working on [1], we discovered (thanks Alexander for the testing) that an\nconflicting active logical slot on a standby could be \"terminated\" without\nleading to an \"obsolete\" message (see [2]).\n\nIndeed, in case of an active slot we proceed in 2 steps in\nInvalidatePossiblyObsoleteSlot():\n\n- terminate the backend holding the slot\n- report the slot as obsolete\n\nThis is racy because between the two we release the mutex on the slot, which\nmeans that the slot's effective_xmin and effective_catalog_xmin could advance\nduring that time (leading to exit the loop).\n\nI think that holding the mutex longer is not an option (given what we'd to do\nwhile holding it) so the attached proposal is to record the effective_xmin and\neffective_catalog_xmin instead that was used during the backend termination.\n\n[1]: https://www.postgresql.org/message-id/flat/bf67e076-b163-9ba3-4ade-b9fc51a3a8f6%40gmail.com\n[2]: https://www.postgresql.org/message-id/7c025095-5763-17a6-8c80-899b35bd0459%40gmail.com\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 15 Jan 2024 07:48:43 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "On Mon, Jan 15, 2024 at 1:18 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi hackers,\n>\n> While working on [1], we discovered (thanks Alexander for the testing) that an\n> conflicting active logical slot on a standby could be \"terminated\" without\n> leading to an \"obsolete\" message (see [2]).\n>\n> Indeed, in case of an active slot we proceed in 2 steps in\n> InvalidatePossiblyObsoleteSlot():\n>\n> - terminate the backend holding the slot\n> - report the slot as obsolete\n>\n> This is racy because between the two we release the mutex on the slot, which\n> means that the slot's effective_xmin and effective_catalog_xmin could advance\n> during that time (leading to exit the loop).\n>\n> I think that holding the mutex longer is not an option (given what we'd to do\n> while holding it) so the attached proposal is to record the effective_xmin and\n> effective_catalog_xmin instead that was used during the backend termination.\n>\n> [1]: https://www.postgresql.org/message-id/flat/bf67e076-b163-9ba3-4ade-b9fc51a3a8f6%40gmail.com\n> [2]: https://www.postgresql.org/message-id/7c025095-5763-17a6-8c80-899b35bd0459%40gmail.com\n>\n> Looking forward to your feedback,\n\nIIUC, the issue is that we terminate the process holding the\nreplication slot, and the conflict cause that we recorded before\nterminating the process changes in the next iteration due to the\nadvancement in effective_xmin and/or effective_catalog_xmin.\n\nFWIW, a test code something like [1], can help detect above race issues, right?\n\nSome comments on the patch:\n\n1.\n last_signaled_pid = active_pid;\n+ terminated = true;\n }\n\nWhy is a separate variable needed? Can't last_signaled_pid != 0 enough\nto determine that a process was terminated earlier?\n\n2. If my understanding of the racy behavior is right, can the same\nissue happen due to the advancement in restart_lsn? I'm not sure if it\ncan happen at all, but I think we can rely on previous conflict reason\ninstead of previous effective_xmin/effective_catalog_xmin/restart_lsn.\n\n3. 
Is there a way to reproduce this racy issue, may be by adding some\nsleeps or something? If yes, can you share it here, just for the\nrecords and verify the whatever fix provided actually works?\n\n[1]\ndiff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c\nindex 52da694c79..d020b038bc 100644\n--- a/src/backend/replication/slot.c\n+++ b/src/backend/replication/slot.c\n@@ -1352,6 +1352,7 @@\nInvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,\n {\n int last_signaled_pid = 0;\n bool released_lock = false;\n+ ReplicationSlotInvalidationCause conflict_prev = RS_INVAL_NONE;\n\n for (;;)\n {\n@@ -1417,6 +1418,18 @@\nInvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,\n }\n }\n\n+ /*\n+ * Check if the conflict cause recorded previously\nbefore we terminate\n+ * the process changed now for any reason.\n+ */\n+ if (conflict_prev != RS_INVAL_NONE &&\n+ last_signaled_pid != 0 &&\n+ conflict_prev != conflict)\n+ elog(PANIC, \"conflict cause recorded before\nterminating process %d has been changed; previous cause: %d, current\ncause: %d\",\n+ last_signaled_pid,\n+ conflict_prev,\n+ conflict);\n+\n /* if there's no conflict, we're done */\n if (conflict == RS_INVAL_NONE)\n {\n@@ -1499,6 +1512,7 @@\nInvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,\n (void) kill(active_pid, SIGTERM);\n\n last_signaled_pid = active_pid;\n+ conflict_prev = conflict;\n }\n\n /* Wait until the slot is released. */\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 18 Jan 2024 16:59:39 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "Hi,\n\nOn Thu, Jan 18, 2024 at 04:59:39PM +0530, Bharath Rupireddy wrote:\n> IIUC, the issue is that we terminate the process holding the\n> replication slot, and the conflict cause that we recorded before\n> terminating the process changes in the next iteration due to the\n> advancement in effective_xmin and/or effective_catalog_xmin.\n\nThanks for looking at it!\n\nYeah, and that could lead to no conflict detected anymore (like in the\ncase [2] reported up-thread).\n\n> FWIW, a test code something like [1], can help detect above race issues, right?\n\nI think so and I've added it in v2 attached (except that it uses the new\n\"terminated\" variable, see below), thanks!\n\n> Some comments on the patch:\n> \n> 1.\n> last_signaled_pid = active_pid;\n> + terminated = true;\n> }\n> \n> Why is a separate variable needed? Can't last_signaled_pid != 0 enough\n> to determine that a process was terminated earlier?\n\nYeah probably, I thought about it but preferred to add a new variable for this \npurpose for clarity and avoid race conditions (in case futur patches \"touch\" the\nlast_signaled_pid anyhow). I was thinking that this part of the code is already\nnot that easy.\n\n> 2. If my understanding of the racy behavior is right, can the same\n> issue happen due to the advancement in restart_lsn?\n\nI'm not sure as I never saw it but it should not hurt to also consider this\n\"potential\" case so it's done in v2 attached.\n\n> I'm not sure if it\n> can happen at all, but I think we can rely on previous conflict reason\n> instead of previous effective_xmin/effective_catalog_xmin/restart_lsn.\n\nI'm not sure what you mean here. 
I think we should still keep the \"initial\" LSN\nso that the next check done with it still makes sense. The previous conflict\nreason as you're proposing also makes sense to me but for another reason: PANIC\nin case the issue still happen (for cases we did not think about, means not\ncovered by what the added previous LSNs are covering).\n\n> 3. Is there a way to reproduce this racy issue, may be by adding some\n> sleeps or something? If yes, can you share it here, just for the\n> records and verify the whatever fix provided actually works?\n\nAlexander was able to reproduce it on a slow machine and the issue was not there\nanymore with v1 in place. I think it's tricky to reproduce as it would need the\nslot to advance between the 2 checks.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 18 Jan 2024 14:20:28 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "On Thu, Jan 18, 2024 at 02:20:28PM +0000, Bertrand Drouvot wrote:\n> On Thu, Jan 18, 2024 at 04:59:39PM +0530, Bharath Rupireddy wrote:\n>> I'm not sure if it\n>> can happen at all, but I think we can rely on previous conflict reason\n>> instead of previous effective_xmin/effective_catalog_xmin/restart_lsn.\n> \n> I'm not sure what you mean here. I think we should still keep the \"initial\" LSN\n> so that the next check done with it still makes sense. The previous conflict\n> reason as you're proposing also makes sense to me but for another reason: PANIC\n> in case the issue still happen (for cases we did not think about, means not\n> covered by what the added previous LSNs are covering).\n\nUsing a PANIC seems like an overreaction to me for this path. Sure, \nwe should not record twice a conflict reason, but wouldn't an\nassertion be more adapted enough to check that a termination is not\nlogged twice for this code path run in the checkpointer?\n\n+ if (!terminated)\n+ {\n+ initial_restart_lsn = s->data.restart_lsn;\n+ initial_effective_xmin = s->effective_xmin;\n+ initial_catalog_effective_xmin = s->effective_catalog_xmin;\n+ }\n\nIt seems important to document why we are saving this data here; while\nwe hold the LWLock for repslots, before we perform any termination,\nand we want to protect conflict reports with incorrect data if the\nslot data got changed while the lwlock is temporarily released while\nthere's a termination. No need to mention the pesky standby snapshot\nrecords, I guess, as there could be different reasons why these xmins\nadvance.\n\n>> 3. Is there a way to reproduce this racy issue, may be by adding some\n>> sleeps or something? If yes, can you share it here, just for the\n>> records and verify the whatever fix provided actually works?\n> \n> Alexander was able to reproduce it on a slow machine and the issue was not there\n> anymore with v1 in place. I think it's tricky to reproduce as it would need the\n> slot to advance between the 2 checks.\n\nI'd really want a test for that in the long term. And an injection\npoint stuck between the point where ReplicationSlotControlLock is\nreleased then re-acquired when there is an active PID should be\nenough, isn't it? For example just after the kill()? It seems to me\nthat we should stuck the checkpointer, perform a forced upgrade of the\nxmins, release the checkpointer and see the effects of the change in\nthe second loop. 
Now, modules/injection_points/ does not allow that,\nyet, but I've posted a patch among these lines that can stop a process\nand release it using a condition variable (should be 0006 on [1]). I\nwas planning to start a new thread with a patch posted for the next CF\nto add this kind of facility with a regression test for an old bug,\nthe patch needs a few hours of love, first. I should be able to post\nthat next week.\n\n[1]: https://www.postgresql.org/message-id/CAExHW5uwP9RmCt9bA91bUfKNQeUrosAWtMens4Ah2PuYZv2NRA@mail.gmail.com\n--\nMichael", "msg_date": "Thu, 15 Feb 2024 14:09:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "Hi,\n\nOn Thu, Feb 15, 2024 at 02:09:45PM +0900, Michael Paquier wrote:\n> On Thu, Jan 18, 2024 at 02:20:28PM +0000, Bertrand Drouvot wrote:\n> > On Thu, Jan 18, 2024 at 04:59:39PM +0530, Bharath Rupireddy wrote:\n> >> I'm not sure if it\n> >> can happen at all, but I think we can rely on previous conflict reason\n> >> instead of previous effective_xmin/effective_catalog_xmin/restart_lsn.\n> > \n> > I'm not sure what you mean here. I think we should still keep the \"initial\" LSN\n> > so that the next check done with it still makes sense. The previous conflict\n> > reason as you're proposing also makes sense to me but for another reason: PANIC\n> > in case the issue still happen (for cases we did not think about, means not\n> > covered by what the added previous LSNs are covering).\n> \n> Using a PANIC seems like an overreaction to me for this path. Sure, \n> we should not record twice a conflict reason, but wouldn't an\n> assertion be more adapted enough to check that a termination is not\n> logged twice for this code path run in the checkpointer?\n\nThanks for looking at it!\n\nAgree, using an assertion instead in v3 attached.\n\n> \n> + if (!terminated)\n> + {\n> + initial_restart_lsn = s->data.restart_lsn;\n> + initial_effective_xmin = s->effective_xmin;\n> + initial_catalog_effective_xmin = s->effective_catalog_xmin;\n> + }\n> \n> It seems important to document why we are saving this data here; while\n> we hold the LWLock for repslots, before we perform any termination,\n> and we want to protect conflict reports with incorrect data if the\n> slot data got changed while the lwlock is temporarily released while\n> there's a termination.\n\nYeah, comments added in v3.\n\n> >> 3. Is there a way to reproduce this racy issue, may be by adding some\n> >> sleeps or something? If yes, can you share it here, just for the\n> >> records and verify the whatever fix provided actually works?\n> > \n> > Alexander was able to reproduce it on a slow machine and the issue was not there\n> > anymore with v1 in place. I think it's tricky to reproduce as it would need the\n> > slot to advance between the 2 checks.\n> \n> I'd really want a test for that in the long term. And an injection\n> point stuck between the point where ReplicationSlotControlLock is\n> released then re-acquired when there is an active PID should be\n> enough, isn't it? \n\nYeah, that should be enough.\n\n> For example just after the kill()? It seems to me\n> that we should stuck the checkpointer, perform a forced upgrade of the\n> xmins, release the checkpointer and see the effects of the change in\n> the second loop. 
Now, modules/injection_points/ does not allow that,\n> yet, but I've posted a patch among these lines that can stop a process\n> and release it using a condition variable (should be 0006 on [1]). I\n> was planning to start a new thread with a patch posted for the next CF\n> to add this kind of facility with a regression test for an old bug,\n> the patch needs a few hours of love, first. I should be able to post\n> that next week.\n\nGreat, that looks like a good idea!\n\nAre we going to fix this on master and 16 stable first and then later on add a\ntest on master with the injection points?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 15 Feb 2024 11:24:59 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "On Thu, Feb 15, 2024 at 11:24:59AM +0000, Bertrand Drouvot wrote:\n> Thanks for looking at it!\n> \n> Agree, using an assertion instead in v3 attached.\n\nAnd you did not forget the PG_USED_FOR_ASSERTS_ONLY.\n\n> > It seems important to document why we are saving this data here; while\n> > we hold the LWLock for repslots, before we perform any termination,\n> > and we want to protect conflict reports with incorrect data if the\n> > slot data got changed while the lwlock is temporarily released while\n> > there's a termination.\n> \n> Yeah, comments added in v3.\n\nThe contents look rather OK, I may do some word-smithing for both.\n\n>> For example just after the kill()? It seems to me\n>> that we should stuck the checkpointer, perform a forced upgrade of the\n>> xmins, release the checkpointer and see the effects of the change in\n>> the second loop. Now, modules/injection_points/ does not allow that,\n>> yet, but I've posted a patch among these lines that can stop a process\n>> and release it using a condition variable (should be 0006 on [1]). I\n>> was planning to start a new thread with a patch posted for the next CF\n>> to add this kind of facility with a regression test for an old bug,\n>> the patch needs a few hours of love, first. I should be able to post\n>> that next week.\n> \n> Great, that looks like a good idea!\n\nI've been finally able to spend some time on what I had in mind and\nposted it here for the next CF:\nhttps://www.postgresql.org/message-id/[email protected]\n\nYou should be able to reuse that the same way I do in 0002 posted on\nthe thread, where I'm causing the checkpointer to wait, then wake it\nup.\n\n> Are we going to fix this on master and 16 stable first and then later on add a\n> test on master with the injection points?\n\nStill, the other patch is likely going to take a couple of weeks\nbefore getting into the tree, so I have no objection to fix the bug\nfirst and backpatch, then introduce a test. 
Things have proved that\nfailures could show up in the buildfarm in this area, so more time\nrunning this code across two branches is not a bad concept, either.\n--\nMichael", "msg_date": "Mon, 19 Feb 2024 15:14:07 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "Hi,\n\nOn Mon, Feb 19, 2024 at 03:14:07PM +0900, Michael Paquier wrote:\n> On Thu, Feb 15, 2024 at 11:24:59AM +0000, Bertrand Drouvot wrote:\n> > Thanks for looking at it!\n> > \n> > Agree, using an assertion instead in v3 attached.\n> \n> And you did not forget the PG_USED_FOR_ASSERTS_ONLY.\n\nThanks to the \"CompilerWarnings\" cirrus test ;-)\n\n> \n> > > It seems important to document why we are saving this data here; while\n> > > we hold the LWLock for repslots, before we perform any termination,\n> > > and we want to protect conflict reports with incorrect data if the\n> > > slot data got changed while the lwlock is temporarily released while\n> > > there's a termination.\n> > \n> > Yeah, comments added in v3.\n> \n> The contents look rather OK, I may do some word-smithing for both.\n\nThanks!\n\n> >> For example just after the kill()? It seems to me\n> >> that we should stuck the checkpointer, perform a forced upgrade of the\n> >> xmins, release the checkpointer and see the effects of the change in\n> >> the second loop. Now, modules/injection_points/ does not allow that,\n> >> yet, but I've posted a patch among these lines that can stop a process\n> >> and release it using a condition variable (should be 0006 on [1]). I\n> >> was planning to start a new thread with a patch posted for the next CF\n> >> to add this kind of facility with a regression test for an old bug,\n> >> the patch needs a few hours of love, first. I should be able to post\n> >> that next week.\n> > \n> > Great, that looks like a good idea!\n> \n> I've been finally able to spend some time on what I had in mind and\n> posted it here for the next CF:\n> https://www.postgresql.org/message-id/[email protected]\n> \n> You should be able to reuse that the same way I do in 0002 posted on\n> the thread, where I'm causing the checkpointer to wait, then wake it\n> up.\n\nThanks! I'll look at it.\n\n> > Are we going to fix this on master and 16 stable first and then later on add a\n> > test on master with the injection points?\n> \n> Still, the other patch is likely going to take a couple of weeks\n> before getting into the tree, so I have no objection to fix the bug\n> first and backpatch, then introduce a test. 
Things have proved that\n> failures could show up in the buildfarm in this area, so more time\n> running this code across two branches is not a bad concept, either.\n\nFully agree.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Feb 2024 07:47:23 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "On Mon, Feb 19, 2024 at 11:44 AM Michael Paquier <[email protected]> wrote:\n>\n> > Yeah, comments added in v3.\n>\n> The contents look rather OK, I may do some word-smithing for both.\n\nHere are some comments on v3:\n\n1.\n+ XLogRecPtr initial_effective_xmin = InvalidXLogRecPtr;\n+ XLogRecPtr initial_catalog_effective_xmin = InvalidXLogRecPtr;\n+ XLogRecPtr initial_restart_lsn = InvalidXLogRecPtr;\n\nPrefix 'initial_' makes the variable names a bit longer, I think we\ncan just use effective_xmin, catalog_effective_xmin and restart_lsn,\nthe code updating then only when if (!terminated) tells one that they\naren't updated every time.\n\n2.\n+ /*\n+ * We'll release the slot's mutex soon, so it's possible that\n+ * those values change since the process holding the slot has been\n+ * terminated (if any), so record them here to ensure we would\n+ * report the slot as obsolete correctly.\n+ */\n\nThis needs a bit more info as to why and how effective_xmin,\ncatalog_effective_xmin and restart_lsn can move ahead after signaling\na backend and before the signalled backend reports back.\n\n3.\n+ /*\n+ * Assert that the conflict cause recorded previously before we\n+ * terminate the process did not change now for any reason.\n+ */\n+ Assert(!(conflict_prev != RS_INVAL_NONE && terminated &&\n+ conflict_prev != conflict));\n\nIt took a while for me to understand the above condition, can we\nsimplify it like below using De Morgan's laws for better readability?\n\nAssert((conflict_prev == RS_INVAL_NONE) || !terminated ||\n(conflict_prev == conflict));\n\n> > Are we going to fix this on master and 16 stable first and then later on add a\n> > test on master with the injection points?\n>\n> Still, the other patch is likely going to take a couple of weeks\n> before getting into the tree, so I have no objection to fix the bug\n> first and backpatch, then introduce a test. 
Things have proved that\n> failures could show up in the buildfarm in this area, so more time\n> running this code across two branches is not a bad concept, either.\n\nWhile I couldn't agree more on getting this fix in, it's worth pulling\nin the required injection points patch here and writing the test to\nreproduce this race issue.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Feb 2024 13:45:16 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "Hi,\n\nOn Mon, Feb 19, 2024 at 01:45:16PM +0530, Bharath Rupireddy wrote:\n> On Mon, Feb 19, 2024 at 11:44 AM Michael Paquier <[email protected]> wrote:\n> >\n> > > Yeah, comments added in v3.\n> >\n> > The contents look rather OK, I may do some word-smithing for both.\n> \n> Here are some comments on v3:\n\nThanks for looing at it!\n\n> 1.\n> + XLogRecPtr initial_effective_xmin = InvalidXLogRecPtr;\n> + XLogRecPtr initial_catalog_effective_xmin = InvalidXLogRecPtr;\n> + XLogRecPtr initial_restart_lsn = InvalidXLogRecPtr;\n> \n> Prefix 'initial_' makes the variable names a bit longer, I think we\n> can just use effective_xmin, catalog_effective_xmin and restart_lsn,\n> the code updating then only when if (!terminated) tells one that they\n> aren't updated every time.\n\nI'm not sure about that. I prefer to see meaningfull names instead of having\nto read the code where they are used.\n\n> 2.\n> + /*\n> + * We'll release the slot's mutex soon, so it's possible that\n> + * those values change since the process holding the slot has been\n> + * terminated (if any), so record them here to ensure we would\n> + * report the slot as obsolete correctly.\n> + */\n> \n> This needs a bit more info as to why and how effective_xmin,\n> catalog_effective_xmin and restart_lsn can move ahead after signaling\n> a backend and before the signalled backend reports back.\n\nI'm not sure of the added value of such extra details in this comment and if\nthe comment would be easy to maintain. I've the feeling that knowing it's possible\nis enough here. 
Happy to hear what others think about it too.\n\n> 3.\n> + /*\n> + * Assert that the conflict cause recorded previously before we\n> + * terminate the process did not change now for any reason.\n> + */\n> + Assert(!(conflict_prev != RS_INVAL_NONE && terminated &&\n> + conflict_prev != conflict));\n> \n> It took a while for me to understand the above condition, can we\n> simplify it like below using De Morgan's laws for better readability?\n> \n> Assert((conflict_prev == RS_INVAL_NONE) || !terminated ||\n> (conflict_prev == conflict));\n\nI don't have a strong opinon on this, looks like a matter of taste.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Feb 2024 09:49:24 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "On Mon, Feb 19, 2024 at 09:49:24AM +0000, Bertrand Drouvot wrote:\n> On Mon, Feb 19, 2024 at 01:45:16PM +0530, Bharath Rupireddy wrote:\n>> Prefix 'initial_' makes the variable names a bit longer, I think we\n>> can just use effective_xmin, catalog_effective_xmin and restart_lsn,\n>> the code updating then only when if (!terminated) tells one that they\n>> aren't updated every time.\n> \n> I'm not sure about that. I prefer to see meaningfull names instead of having\n> to read the code where they are used.\n\nPrefixing these with \"initial_\" is fine, IMO. That shows the\nintention that these come from the slot's data before doing the\ntermination. So I'm OK with what's been proposed in v3.\n\n>> This needs a bit more info as to why and how effective_xmin,\n>> catalog_effective_xmin and restart_lsn can move ahead after signaling\n>> a backend and before the signalled backend reports back.\n> \n> I'm not sure of the added value of such extra details in this comment and if\n> the comment would be easy to maintain. I've the feeling that knowing it's possible\n> is enough here. Happy to hear what others think about it too.\n\nDocumenting that the risk exists rather than all the possible reasons\nwhy this could happen is surely more maintainable. In short, I'm OK\nwith what the patch does, just telling that it is possible.\n\n>> + Assert(!(conflict_prev != RS_INVAL_NONE && terminated &&\n>> + conflict_prev != conflict));\n>> \n>> It took a while for me to understand the above condition, can we\n>> simplify it like below using De Morgan's laws for better readability?\n>> \n>> Assert((conflict_prev == RS_INVAL_NONE) || !terminated ||\n>> (conflict_prev == conflict));\n> \n> I don't have a strong opinon on this, looks like a matter of taste.\n\nBoth are the same to me, so I have no extra opinion to offer. ;)\n--\nMichael", "msg_date": "Tue, 20 Feb 2024 08:51:17 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "On Tue, Feb 20, 2024 at 08:51:17AM +0900, Michael Paquier wrote:\n> Prefixing these with \"initial_\" is fine, IMO. That shows the\n> intention that these come from the slot's data before doing the\n> termination. So I'm OK with what's been proposed in v3.\n\nI was looking at that a second time, and just concluded that this is\nOK, so I've applied that down to 16, wordsmithing a bit the comments.\n\n> Both are the same to me, so I have no extra opinion to offer. 
;)\n\nI've kept this one as-is, though.\n--\nMichael", "msg_date": "Tue, 20 Feb 2024 14:33:44 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "Hi,\n\nOn Tue, Feb 20, 2024 at 02:33:44PM +0900, Michael Paquier wrote:\n> On Tue, Feb 20, 2024 at 08:51:17AM +0900, Michael Paquier wrote:\n> > Prefixing these with \"initial_\" is fine, IMO. That shows the\n> > intention that these come from the slot's data before doing the\n> > termination. So I'm OK with what's been proposed in v3.\n> \n> I was looking at that a second time, and just concluded that this is\n> OK, so I've applied that down to 16, wordsmithing a bit the comments.\n\nThanks!\nFWIW, I've started to write a POC regarding the test we mentioned up-thread.\n\nThe POC test is based on what has been submitted by Michael in [1]. The POC test\nseems to work fine and it seems that nothing more is needed in [1] (at some point\nI thought I would need the ability to wake up multiple \"wait\" injection points\nin sequence but that was not necessary).\n\nI'll polish and propose my POC test once [1] is pushed (unless you're curious\nabout it before).\n\n[1]: https://www.postgresql.org/message-id/flat/ZdLuxBk5hGpol91B%40paquier.xyz\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 20 Feb 2024 16:03:53 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "Hi,\n\nOn Tue, Feb 20, 2024 at 04:03:53PM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Tue, Feb 20, 2024 at 02:33:44PM +0900, Michael Paquier wrote:\n> > On Tue, Feb 20, 2024 at 08:51:17AM +0900, Michael Paquier wrote:\n> > > Prefixing these with \"initial_\" is fine, IMO. That shows the\n> > > intention that these come from the slot's data before doing the\n> > > termination. So I'm OK with what's been proposed in v3.\n> > \n> > I was looking at that a second time, and just concluded that this is\n> > OK, so I've applied that down to 16, wordsmithing a bit the comments.\n> \n> Thanks!\n> FWIW, I've started to write a POC regarding the test we mentioned up-thread.\n> \n> The POC test is based on what has been submitted by Michael in [1]. The POC test\n> seems to work fine and it seems that nothing more is needed in [1] (at some point\n> I thought I would need the ability to wake up multiple \"wait\" injection points\n> in sequence but that was not necessary).\n> \n> I'll polish and propose my POC test once [1] is pushed (unless you're curious\n> about it before).\n\nThough [1] mentioned up-thread is not pushed yet, I'm Sharing the POC patch now\n(see the attached).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 26 Feb 2024 14:01:45 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "On Mon, Feb 26, 2024 at 02:01:45PM +0000, Bertrand Drouvot wrote:\n> Though [1] mentioned up-thread is not pushed yet, I'm Sharing the POC patch now\n> (see the attached).\n\nI have looked at what you have here.\n\nFirst, in a build where 818fefd8fd is included, this makes the test\nscript a lot slower. 
Most of the logic is quick, but we're spending\n10s or so checking that catalog_xmin has advanced. Could it be\npossible to make that faster?\n\nA second issue is the failure mode when 818fefd8fd is reverted. The\ntest is getting stuck when we are waiting on the standby to catch up,\nuntil a timeout decides to kick in to fail the test, and all the\nprevious tests pass. Could it be possible to make that more\nresponsive? I assume that in the failure mode we would get an\nincorrect conflict_reason for injection_inactiveslot, succeeding in\nchecking the failure.\n\n+ my $terminated = 0;\n+ for (my $i = 0; $i < 10 * $PostgreSQL::Test::Utils::timeout_default; $i++)\n+ {\n+ if ($node_standby->log_contains(\n+ 'terminating process .* to release replication slot \\\"injection_activeslot\\\"', $logstart))\n+ {\n+ $terminated = 1;\n+ last;\n+ }\n+ usleep(100_000);\n+ }\n+ ok($terminated, 'terminating process holding the active slot is logged with injection point');\n\nThe LOG exists when we are sure that the startup process is waiting\nin the injection point, so this loop could be replaced with something\nlike:\n+ $node_standby->wait_for_event('startup', 'TerminateProcessHoldingSlot');\n+ ok( $node_standby->log_contains('terminating process .* .. ', 'termin .. ';)\n\nNit: the name of the injection point should be\nterminate-process-holding-slot rather than\nTerminateProcessHoldingSlot, to be consistent with the other ones. \n--\nMichael", "msg_date": "Tue, 5 Mar 2024 09:42:20 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 05, 2024 at 09:42:20AM +0900, Michael Paquier wrote:\n> On Mon, Feb 26, 2024 at 02:01:45PM +0000, Bertrand Drouvot wrote:\n> > Though [1] mentioned up-thread is not pushed yet, I'm Sharing the POC patch now\n> > (see the attached).\n> \n> I have looked at what you have here.\n\nThanks!\n\n> First, in a build where 818fefd8fd is included, this makes the test\n> script a lot slower. Most of the logic is quick, but we're spending\n> 10s or so checking that catalog_xmin has advanced. Could it be\n> possible to make that faster?\n\nYeah, v2 attached changes this. It moves the injection point after the process\nhas been killed so that another process can decode at wish (without the need\nto wait for a walsender timeout) to reach LogicalConfirmReceivedLocation().\n\n> A second issue is the failure mode when 818fefd8fd is reverted. The\n> test is getting stuck when we are waiting on the standby to catch up,\n> until a timeout decides to kick in to fail the test, and all the\n> previous tests pass. Could it be possible to make that more\n> responsive? I assume that in the failure mode we would get an\n> incorrect conflict_reason for injection_inactiveslot, succeeding in\n> checking the failure.\n\nI try to simulate a revert of 818fefd8fd (replacing \"!terminated\" by \"1 == 1\"\nbefore the initial_* assignements). The issue is that then the new ASSERT is\ntriggered leading to the standby shutdown. 
So, I'm not sure how to improve this\ncase.\n\n> + my $terminated = 0;\n> + for (my $i = 0; $i < 10 * $PostgreSQL::Test::Utils::timeout_default; $i++)\n> + {\n> + if ($node_standby->log_contains(\n> + 'terminating process .* to release replication slot \\\"injection_activeslot\\\"', $logstart))\n> + {\n> + $terminated = 1;\n> + last;\n> + }\n> + usleep(100_000);\n> + }\n> + ok($terminated, 'terminating process holding the active slot is logged with injection point');\n> \n> The LOG exists when we are sure that the startup process is waiting\n> in the injection point, so this loop could be replaced with something\n> like:\n> + $node_standby->wait_for_event('startup', 'TerminateProcessHoldingSlot');\n> + ok( $node_standby->log_contains('terminating process .* .. ', 'termin .. ';)\n> \n\nYeah, now that wait_for_event() is there, let's use it: done in v2.\n\n> Nit: the name of the injection point should be\n> terminate-process-holding-slot rather than\n> TerminateProcessHoldingSlot, to be consistent with the other ones. \n\nDone in v2.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 5 Mar 2024 10:17:03 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "On Tue, Mar 05, 2024 at 10:17:03AM +0000, Bertrand Drouvot wrote:\n> On Tue, Mar 05, 2024 at 09:42:20AM +0900, Michael Paquier wrote:\n>> First, in a build where 818fefd8fd is included, this makes the test\n>> script a lot slower. Most of the logic is quick, but we're spending\n>> 10s or so checking that catalog_xmin has advanced. Could it be\n>> possible to make that faster?\n> \n> Yeah, v2 attached changes this. It moves the injection point after the process\n> has been killed so that another process can decode at wish (without the need\n> to wait for a walsender timeout) to reach LogicalConfirmReceivedLocation().\n\nAh, OK. Indeed that's much faster this way.\n\n> I try to simulate a revert of 818fefd8fd (replacing \"!terminated\" by \"1 == 1\"\n> before the initial_* assignements).\n\nYeah. I can see how this messes up with the calculation of the\nconditions, which is enough from my perspective, even if we don't have\nany sanity checks in 818fefd8fd originally.\n\n> The issue is that then the new ASSERT is\n> triggered leading to the standby shutdown. So, I'm not sure how to improve this\n> case.\n\nIt's been mentioned recently that we are not good at detecting crashes\nin the TAP tests. I am wondering if we should check the status of the\nnode in the most popular routines of Cluster.pm and die hard, as one\nway to make the tests more responsive.. A topic for a different\nthread, for sure.\n\n> Done in v2.\n\nReworded a few things and applied this version.\n--\nMichael", "msg_date": "Wed, 6 Mar 2024 14:47:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 06, 2024 at 02:47:55PM +0900, Michael Paquier wrote:\n> On Tue, Mar 05, 2024 at 10:17:03AM +0000, Bertrand Drouvot wrote:\n> > On Tue, Mar 05, 2024 at 09:42:20AM +0900, Michael Paquier wrote:\n> > The issue is that then the new ASSERT is\n> > triggered leading to the standby shutdown. 
So, I'm not sure how to improve this\n> > case.\n> \n> It's been mentioned recently that we are not good at detecting crashes\n> in the TAP tests. I am wondering if we should check the status of the\n> node in the most popular routines of Cluster.pm and die hard, as one\n> way to make the tests more responsive.. A topic for a different\n> thread, for sure.\n\nRight, somehow out of context here.\n\n> > Done in v2.\n> \n> Reworded a few things and applied this version.\n\nThanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 6 Mar 2024 09:17:58 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "On Wed, Mar 06, 2024 at 09:17:58AM +0000, Bertrand Drouvot wrote:\n> Right, somehow out of context here.\n\nWe're not yet in the green yet, one of my animals has complained:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hachi&dt=2024-03-06%2010%3A10%3A03\n\nThis is telling us that the xmin horizon is unchanged, and the test\ncannot move on with the injection point wake up that would trigger the\nfollowing logs:\n2024-03-06 20:12:59.039 JST [21143] LOG: invalidating obsolete replication slot \"injection_activeslot\"\n2024-03-06 20:12:59.039 JST [21143] DETAIL: The slot conflicted with xid horizon 770.\n\nNot sure what to think about that yet.\n--\nMichael", "msg_date": "Wed, 6 Mar 2024 20:21:14 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "On Wed, Mar 6, 2024 at 4:51 PM Michael Paquier <[email protected]> wrote:\n>\n> On Wed, Mar 06, 2024 at 09:17:58AM +0000, Bertrand Drouvot wrote:\n> > Right, somehow out of context here.\n>\n> We're not yet in the green yet, one of my animals has complained:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hachi&dt=2024-03-06%2010%3A10%3A03\n>\n> This is telling us that the xmin horizon is unchanged, and the test\n> cannot move on with the injection point wake up that would trigger the\n> following logs:\n> 2024-03-06 20:12:59.039 JST [21143] LOG: invalidating obsolete replication slot \"injection_activeslot\"\n> 2024-03-06 20:12:59.039 JST [21143] DETAIL: The slot conflicted with xid horizon 770.\n>\n> Not sure what to think about that yet.\n\nWindows - Server 2019, VS 2019 - Meson & ninja on my CI setup isn't\nhappy about that as well [1]. 
It looks like the slot's catalog_xmin on\nthe standby isn't moving forward.\n\n[1]\nhttps://cirrus-ci.com/task/5132148995260416\n\n[09:11:17.851] 285/285 postgresql:recovery /\nrecovery/035_standby_logical_decoding ERROR\n553.48s (exit status 255 or signal 127 SIGinvalid)\n[09:11:17.855] >>>\nINITDB_TEMPLATE=C:/cirrus/build/tmp_install/initdb-template\nenable_injection_points=yes\nPG_REGRESS=C:\\cirrus\\build\\src/test\\regress\\pg_regress.exe\nMALLOC_PERTURB_=172\nREGRESS_SHLIB=C:\\cirrus\\build\\src/test\\regress\\regress.dll\nPATH=C:/cirrus/build/tmp_install/usr/local/pgsql/bin;C:\\cirrus\\build\\src/test\\recovery;C:/cirrus/build/src/test/recovery/test;C:\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX64\\x64;C:\\BuildTools\\MSBuild\\Current\\bin\\Roslyn;C:\\Program\nFiles (x86)\\Windows Kits\\10\\bin\\10.0.20348.0\\x64;C:\\Program Files\n(x86)\\Windows Kits\\10\\bin\\x64;C:\\BuildTools\\\\MSBuild\\Current\\Bin;C:\\Windows\\Microsoft.NET\\Framework64\\v4.0.30319;C:\\BuildTools\\Common7\\IDE\\;C:\\BuildTools\\Common7\\Tools\\;C:\\BuildTools\\VC\\Auxiliary\\Build;C:\\zstd\\zstd-v1.5.2-win64;C:\\zlib;C:\\lz4;C:\\icu;C:\\winflexbison;C:\\strawberry\\5.26.3.1\\perl\\bin;C:\\python\\Scripts\\;C:\\python\\;C:\\Windows\nKits\\10\\Debuggers\\x64;C:\\Program\nFiles\\Git\\usr\\bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\;C:\\Windows\\System32\\OpenSSH\\;C:\\ProgramData\\GooGet;C:\\Program\nFiles\\Google\\Compute Engine\\metadata_scripts;C:\\Program Files\n(x86)\\Google\\Cloud SDK\\google-cloud-sdk\\bin;C:\\Program\nFiles\\PowerShell\\7\\;C:\\Program Files\\Google\\Compute\nEngine\\sysprep;C:\\ProgramData\\chocolatey\\bin;C:\\Program\nFiles\\Git\\cmd;C:\\Program Files\\Git\\mingw64\\bin;C:\\Program\nFiles\\Git\\usr\\bin;C:\\Windows\\system32\\config\\systemprofile\\AppData\\Local\\Microsoft\\WindowsApps\nC:\\python\\python3.EXE C:\\cirrus\\build\\..\\src/tools/testwrap --basedir\nC:\\cirrus\\build --srcdir C:\\cirrus\\src/test\\recovery --testgroup\nrecovery --testname 035_standby_logical_decoding --\nC:\\strawberry\\5.26.3.1\\perl\\bin\\perl.EXE -I C:/cirrus/src/test/perl -I\nC:\\cirrus\\src/test\\recovery\nC:/cirrus/src/test/recovery/t/035_standby_logical_decoding.pl\n[09:11:17.855] ------------------------------------- 8<\n-------------------------------------\n[09:11:17.855] stderr:\n[09:11:17.855] # poll_query_until timed out executing this query:\n\n[09:11:17.855] # SELECT (SELECT catalog_xmin::text::int - 770 from\npg_catalog.pg_replication_slots where slot_name =\n'injection_activeslot') > 0\n\n[09:11:17.855] # expecting this output:\n\n[09:11:17.855] # t\n\n[09:11:17.855] # last actual query output:\n\n[09:11:17.855] # f\n\n[09:11:17.855] # with stderr:\n\n[09:11:17.855] # Tests were run but no plan was declared and\ndone_testing() was not seen.\n\n[09:11:17.855] # Looks like your test exited with 255 just after 57.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 6 Mar 2024 17:45:56 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 06, 2024 at 05:45:56PM +0530, Bharath Rupireddy wrote:\n> On Wed, Mar 6, 2024 at 4:51 PM Michael Paquier <[email protected]> wrote:\n> >\n> > On Wed, Mar 06, 2024 at 09:17:58AM +0000, Bertrand Drouvot wrote:\n> > > Right, somehow out 
of context here.\n> >\n> > We're not yet in the green yet, one of my animals has complained:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hachi&dt=2024-03-06%2010%3A10%3A03\n> >\n> > This is telling us that the xmin horizon is unchanged, and the test\n> > cannot move on with the injection point wake up that would trigger the\n> > following logs:\n> > 2024-03-06 20:12:59.039 JST [21143] LOG: invalidating obsolete replication slot \"injection_activeslot\"\n> > 2024-03-06 20:12:59.039 JST [21143] DETAIL: The slot conflicted with xid horizon 770.\n> >\n> > Not sure what to think about that yet.\n> \n> Windows - Server 2019, VS 2019 - Meson & ninja on my CI setup isn't\n> happy about that as well [1]. It looks like the slot's catalog_xmin on\n> the standby isn't moving forward.\n> \n\nThank you both for the report! I did a few test manually and can see the issue\nfrom times to times. When the issue occurs, the logical decoding was able to\ngo through the place where LogicalConfirmReceivedLocation() updates the\nslot's catalog_xmin before being killed. In that case I can see that the\ncatalog_xmin is updated to the xid horizon.\n\nMeans in a failed test we have something like:\n\nslot's catalog_xmin: 839 and \"The slot conflicted with xid horizon 839.\" \n\nWhile when the test is ok you'll see something like:\n\nslot's catalog_xmin: 841 and \"The slot conflicted with xid horizon 842.\"\n\nIn the failing test the call to SELECT pg_logical_slot_get_changes() does\nnot advance the slot's catalog xmin anymore.\n\nTo fix this, I think we need a new transacion to decode from the primary before\nexecuting pg_logical_slot_get_changes(). But this transaction has to be replayed\non the standby first by the startup process. Which means we need to wakeup\n\"terminate-process-holding-slot\" and that we probably need another injection\npoint somewehere in this test.\n\nI'll look at it unless you've another idea?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 6 Mar 2024 17:45:56 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" }, { "msg_contents": "On Wed, Mar 06, 2024 at 05:45:56PM +0000, Bertrand Drouvot wrote:\n> Thank you both for the report! I did a few test manually and can see the issue\n> from times to times. When the issue occurs, the logical decoding was able to\n> go through the place where LogicalConfirmReceivedLocation() updates the\n> slot's catalog_xmin before being killed. In that case I can see that the\n> catalog_xmin is updated to the xid horizon.\n\nTwo buildfarm machines have complained here, and one of them twice in\na row. That's quite amazing, because a couple of dozen runs done in a\nrow on the same host as these animals all pass. The CI did not\ncomplain either (did 2~3 runs there yesterday).\n\n> Means in a failed test we have something like:\n> \n> slot's catalog_xmin: 839 and \"The slot conflicted with xid horizon 839.\"\n>\n> While when the test is ok you'll see something like:\n> \n> slot's catalog_xmin: 841 and \"The slot conflicted with xid horizon 842.\"\n\nPerhaps we should also make the test report the catalog_xmin of the\nslot. 
That may make debugging a bit easier.\n\n> In the failing test the call to SELECT pg_logical_slot_get_changes() does\n> not advance the slot's catalog xmin anymore.\n\nIs that something that we could enforce in the test in a stronger way,\ncross-checking the xmin horizon before and after the call?\n\n> To fix this, I think we need a new transacion to decode from the primary before\n> executing pg_logical_slot_get_changes(). But this transaction has to be replayed\n> on the standby first by the startup process. Which means we need to wakeup\n> \"terminate-process-holding-slot\" and that we probably need another injection\n> point somewehere in this test.\n>\n> I'll look at it unless you've another idea?\n\nI am wondering if there is something else lurking here, actually, so\nfor now I am going to revert the change as it is annoying to get\nsporadic failures in the CF bot at this time of the year and there are\na lot of patches under discussion. Let's give it more time and more\nthoughts, without pressure.\n--\nMichael", "msg_date": "Thu, 7 Mar 2024 09:54:24 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix race condition in InvalidatePossiblyObsoleteSlot()" } ]
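A quick sanity check of the De Morgan rewrite discussed in the review exchange above: the two assertion forms quoted in that thread accept exactly the same set of states. The standalone C program below verifies that exhaustively. The RS_INVAL_* values here are stand-ins picked only for the illustration, not the invalidation-cause enum from the server headers, and the program is an editorial sketch rather than part of the exchanged patches.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/*
 * Stand-in values for the invalidation causes named in the thread; the
 * exact numeric values are irrelevant to the logical equivalence being
 * checked, so these are illustrative only.
 */
enum
{
	RS_INVAL_NONE = 0,
	RS_INVAL_WAL_REMOVED = 1,
	RS_INVAL_HORIZON = 2
};

int
main(void)
{
	int		causes[] = {RS_INVAL_NONE, RS_INVAL_WAL_REMOVED, RS_INVAL_HORIZON};

	for (int i = 0; i < 3; i++)
		for (int j = 0; j < 3; j++)
			for (int t = 0; t <= 1; t++)
			{
				int		conflict_prev = causes[i];
				int		conflict = causes[j];
				bool	terminated = (t == 1);

				/* spelling kept in the patch as applied */
				bool	original = !(conflict_prev != RS_INVAL_NONE &&
									 terminated &&
									 conflict_prev != conflict);

				/* De Morgan rewrite suggested during review */
				bool	rewritten = (conflict_prev == RS_INVAL_NONE) ||
					!terminated ||
					(conflict_prev == conflict);

				assert(original == rewritten);
			}

	printf("both assertion forms accept exactly the same states\n");
	return 0;
}

Either spelling therefore encodes the same invariant that the thread describes: once a conflict cause has been recorded and the process holding the slot has been terminated for it, the cause must not change afterwards.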
[ { "msg_contents": "Pursuant to a comment I made a few months ago[1], I propose the attached\nchanges to replication slots documentation. In essence, I want to\nexplain that replication slots are good, and the max_size GUC, before\nmoving on to explain that the other methods are worse.\n\nThanks\n\n[1] https://postgr.es/m/[email protected]\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Entristecido, Wutra (canción de Las Barreras)\necha a Freyr a rodar\ny a nosotros al mar\"", "msg_date": "Mon, 15 Jan 2024 16:37:49 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": true, "msg_subject": "minor replication slot docs edits" }, { "msg_contents": "On Mon, Jan 15, 2024 at 9:08 PM Alvaro Herrera <[email protected]> wrote:\n>\n> Pursuant to a comment I made a few months ago[1], I propose the attached\n> changes to replication slots documentation. In essence, I want to\n> explain that replication slots are good, and the max_size GUC, before\n> moving on to explain that the other methods are worse.\n>\n> [1] https://postgr.es/m/[email protected]\n\nThanks for the patch. The wording looks good to me. However, I have\nsome comments on the placement of the note:\n\n1. How about bundling this in a <note> </note> or <caution> </caution>?\n\n+ <para>\n+ Beware that replication slots can retain so many WAL segments that they\n+ fill up the space allocated for <literal>pg_wal</literal>.\n+ <xref linkend=\"guc-max-slot-wal-keep-size\"/> can be used to limit the size\n+ of WAL files retained by replication slots.\n+ </para>\n\n2. I think the better place for this note is at the end after the\n\"Similarly, <xref linkend=\"guc-hot-standby-feedback\"/> on its own,\nwithout\" paragraph. It will then be like we introduce what replication\nslot is and why it is better over other mechanisms to retain WAL and\nthen caution the users of it retaining WAL.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 16 Jan 2024 11:38:43 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: minor replication slot docs edits" }, { "msg_contents": "On 2024-Jan-16, Bharath Rupireddy wrote:\n\n> Thanks for the patch. The wording looks good to me. However, I have\n> some comments on the placement of the note:\n> \n> 1. How about bundling this in a <note> </note> or <caution> </caution>?\n\nYeah, I considered this too, but I discarded the idea because my\nimpression of <caution> and <note> was that they attract too much\nattention off the main text; it should be the other way around. But\nthat's not really something for this patch to solve, and we use\n<caution> boxes in many other places and nobody complains about this.\nSo I made it a <caution>.\n\n> 2. I think the better place for this note is at the end after the\n> \"Similarly, <xref linkend=\"guc-hot-standby-feedback\"/> on its own,\n> without\" paragraph. It will then be like we introduce what replication\n> slot is and why it is better over other mechanisms to retain WAL and\n> then caution the users of it retaining WAL.\n\nMakes sense.\n\nI have pushed it. I made one other terminology change from \"primary\" to\n\"primary server\", but only in that subsection. 
We use \"primary\" as a\nstandalone term extensively in other sections of this chapter, and I\ndon't like it very much, but I didn't want to make this more invasive.\n\nAnother thing I noticed is that we could change all (or most of) the\n<varname> tags to <xref linkend=\"guc-...\"/>, but it's also a much larger\nchange. Having (some of?) these variable names be links would be useful\nIMO.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 18 Jan 2024 11:49:18 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: minor replication slot docs edits" }, { "msg_contents": "On Thu, Jan 18, 2024 at 9:49 PM Alvaro Herrera <[email protected]> wrote:\n...\n>\n> Another thing I noticed is that we could change all (or most of) the\n> <varname> tags to <xref linkend=\"guc-...\"/>, but it's also a much larger\n> change. Having (some of?) these variable names be links would be useful\n> IMO.\n>\n\n+1 to do this.\n\nIMO these should all be coded like <link\nlinkend=\"guc-XXX\"><varname>XXX</varname></link>, because the resulting\nrendering looks much better with the GUC name using a varname font\ninstead of just plain text that <xref> gives.\n\nI am happy to take on the task if nobody else wants to.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 19 Jan 2024 09:56:53 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: minor replication slot docs edits" } ]
[ { "msg_contents": "d=# create type varbitrange as range (subtype = varbit);\nCREATE TYPE\nd=# \\dT+\n List of data types\n Schema | Name | Internal name | Size | Elements | Owner | Access privileges | Description \n--------+------------------+------------------+------+----------+----------+-------------------+-------------\n public | varbitmultirange | varbitmultirange | var | | postgres | | \n public | varbitrange | varbitrange | var | | postgres | | \n(2 rows)\n\nd=# create user joe;\nCREATE ROLE\nd=# alter type varbitrange owner to joe;\nALTER TYPE\nd=# \\dT+\n List of data types\n Schema | Name | Internal name | Size | Elements | Owner | Access privileges | Description \n--------+------------------+------------------+------+----------+----------+-------------------+-------------\n public | varbitmultirange | varbitmultirange | var | | postgres | | \n public | varbitrange | varbitrange | var | | joe | | \n(2 rows)\n\nThat's pretty broken, isn't it? joe would own the multirange if he'd\ncreated the range to start with. Even if you think the ownerships\nideally should be separable, this behavior causes existing pg_dump\nfiles to restore incorrectly, because pg_dump assumes it need not emit\nany commands about the multirange.\n\nA related issue is that you can manually alter the multirange's\nownership:\n\nd=# alter type varbitmultirange owner to joe;\nALTER TYPE\n\nwhich while it has some value in allowing recovery from this bug,\nis inconsistent with our handling of other dependent types such\nas arrays:\n\nd=# alter type _varbitrange owner to joe;\nERROR: cannot alter array type varbitrange[]\nHINT: You can alter type varbitrange, which will alter the array type as well.\n\nPossibly the thing to do about that is to forbid it in HEAD\nfor consistency, while still allowing it in back branches\nso that people can clean up inconsistent ownership if needed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Jan 2024 13:27:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "ALTER TYPE OWNER fails to recurse to multirange" }, { "msg_contents": "On Mon, Jan 15, 2024 at 1:27 PM Tom Lane <[email protected]> wrote:\n> That's pretty broken, isn't it? joe would own the multirange if he'd\n> created the range to start with. Even if you think the ownerships\n> ideally should be separable, this behavior causes existing pg_dump\n> files to restore incorrectly, because pg_dump assumes it need not emit\n> any commands about the multirange.\n\nI agree that pg_dump doing the wrong thing is bad, but the SQL example\ndoesn't look broken if you ignore pg_dump. I have a feeling that the\nsource of the awkwardness here is that one SQL command is creating two\nobjects, and unlike the case of a table and a TOAST table, one is not\nan implementation detail of the other or clearly subordinate to the\nother. But how does that prevent us from making pg_dump restore the\nownership and permissions on each separately? If ownership is a\nproblem, aren't permissions also?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Jan 2024 14:17:08 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ALTER TYPE OWNER fails to recurse to multirange" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, Jan 15, 2024 at 1:27 PM Tom Lane <[email protected]> wrote:\n>> That's pretty broken, isn't it? joe would own the multirange if he'd\n>> created the range to start with. 
Even if you think the ownerships\n>> ideally should be separable, this behavior causes existing pg_dump\n>> files to restore incorrectly, because pg_dump assumes it need not emit\n>> any commands about the multirange.\n\n> I agree that pg_dump doing the wrong thing is bad, but the SQL example\n> doesn't look broken if you ignore pg_dump.\n\nI'm reasoning by analogy to array types, which are automatically\ncreated and automatically updated to keep the same ownership\netc. properties as their base type. To the extent that multirange\ntypes don't act exactly like that, I say it's a bug/oversight in the\nmultirange patch. So I think this is a backend bug, not a pg_dump\nbug.\n\n> I have a feeling that the\n> source of the awkwardness here is that one SQL command is creating two\n> objects, and unlike the case of a table and a TOAST table, one is not\n> an implementation detail of the other or clearly subordinate to the\n> other.\n\nHow is a multirange not subordinate to the underlying range type?\nIt can't exist without it, and we automatically create it without\nany further information when you make the range type. That smells\na lot like the way we handle array types. The array behavior is of\nvery long standing and surprises nobody.\n\n> But how does that prevent us from making pg_dump restore the\n> ownership and permissions on each separately? If ownership is a\n> problem, aren't permissions also?\n\nProbably, and I wouldn't be surprised if we've also failed to make\nmultiranges follow arrays in the permissions department. An\narray type can't have an ACL of its own, IIRC.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Jan 2024 14:28:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ALTER TYPE OWNER fails to recurse to multirange" }, { "msg_contents": "On Mon, Jan 15, 2024 at 2:28 PM Tom Lane <[email protected]> wrote:\n> I'm reasoning by analogy to array types, which are automatically\n> created and automatically updated to keep the same ownership\n> etc. properties as their base type. To the extent that multirange\n> types don't act exactly like that, I say it's a bug/oversight in the\n> multirange patch. So I think this is a backend bug, not a pg_dump\n> bug.\n\nOh...\n\nWell, I guess maybe I'm just clueless. I thought that the range and\nmultirange were two essentially independent objects being created by\nthe same command. But I haven't studied the implementation so maybe\nI'm completely wrong.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Jan 2024 15:01:25 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ALTER TYPE OWNER fails to recurse to multirange" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, Jan 15, 2024 at 2:28 PM Tom Lane <[email protected]> wrote:\n>> I'm reasoning by analogy to array types, which are automatically\n>> created and automatically updated to keep the same ownership\n>> etc. properties as their base type. To the extent that multirange\n>> types don't act exactly like that, I say it's a bug/oversight in the\n>> multirange patch. So I think this is a backend bug, not a pg_dump\n>> bug.\n\n> Well, I guess maybe I'm just clueless. I thought that the range and\n> multirange were two essentially independent objects being created by\n> the same command. But I haven't studied the implementation so maybe\n> I'm completely wrong.\n\nThey're by no means independent. 
What would it mean to have a\nmultirange without the underlying range type? Also, we already\ntreat the multirange as dependent for some things:\n\nd=# create type varbitrange as range (subtype = varbit);\nCREATE TYPE\nd=# \\dT\n List of data types\n Schema | Name | Description \n--------+------------------+-------------\n public | varbitmultirange | \n public | varbitrange | \n(2 rows)\n\nd=# drop type varbitmultirange;\nERROR: cannot drop type varbitmultirange because type varbitrange requires it\nHINT: You can drop type varbitrange instead.\nd=# drop type varbitrange restrict;\nDROP TYPE\nd=# \\dT\n List of data types\n Schema | Name | Description \n--------+------+-------------\n(0 rows)\n\nSo I think we're looking at a half-baked dependency design,\nnot two independent objects.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 Jan 2024 11:46:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ALTER TYPE OWNER fails to recurse to multirange" }, { "msg_contents": "On Tue, Jan 16, 2024 at 11:46 AM Tom Lane <[email protected]> wrote:\n> They're by no means independent. What would it mean to have a\n> multirange without the underlying range type?\n\nIt would mean just that - no more, and no less. If it's possible to\nimagine a data type that stores pairs of values from the underlying\ndata type with the constraint that the first is less than the second,\nplus the ability to specify inclusive or exclusive bounds and the\nability to have infinite bounds, then it's equally possible to imagine\na data type that represents a set of such ranges such that no two\nranges in the set overlap. And you need not imagine that the former\ndata type must exist in order for the latter to exist. Theoretically,\nthey're just two different data types that somebody could decide to\ncreate.\n\n> Also, we already\n> treat the multirange as dependent for some things:\n\nBut this seems like an entirely valid point.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Jan 2024 12:06:43 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ALTER TYPE OWNER fails to recurse to multirange" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Jan 16, 2024 at 11:46 AM Tom Lane <[email protected]> wrote:\n>> Also, we already\n>> treat the multirange as dependent for some things:\n\n> But this seems like an entirely valid point.\n\nYeah, it's a bit of a muddle. But there is no syntax for making\na standalone multirange type, so it seems to me that we've mostly\ndetermined that multiranges are dependent types. There are just\na few places that didn't get the word.\n\nAttached is a proposed patch to enforce that ownership and permissions\nof a multirange are those of the underlying range type, in ways\nparallel to how we treat array types. This is all that I found by\nlooking for calls to IsTrueArrayType(). It's possible that there's\nsome dependent-type behavior somewhere that isn't conditioned on that,\nbut I can't think of a good way to search.\n\nIf we don't do this, then we need some hacking in pg_dump to get it\nto save and restore these properties of multiranges, so it's not like\nthe status quo is acceptable.\n\nI'd initially thought that perhaps we could back-patch parts of this,\nbut now I'm not sure; it seems like these behaviors are a bit\nintertwined. 
Given the lack of field complaints I'm inclined to leave\nthings alone in the back branches.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 12 Feb 2024 17:55:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ALTER TYPE OWNER fails to recurse to multirange" } ]
[ { "msg_contents": "While working on [1], I noticed some strange code in\nDiscreteKnapsack() which seems to be aiming to copy the Bitmapset.\n\nIt's not that obvious at a casual glance, but:\n\nsets[j] = bms_del_members(sets[j], sets[j]);\n\nthis is aiming to zero all the words in the set by passing the same\nset in both parameters.\n\nNow that 00b41463c changed Bitmapset to have NULL be the only valid\nrepresentation of an empty set, this code no longer makes sense. We\nmay as well just bms_free() the original set and bms_copy() in the new\nset as the bms_del_members() call will always pfree the set anyway.\n\nI've done that in the attached.\n\nI did consider if we might want bms_merge_members() or\nbms_exchange_members() or some other function suitably named function\nto perform a del/add operation, but given the lack of complaints about\nany performance regressions here, I think it's not worthwhile.\n\nThe code could also be adjusted to:\n\nsets[j] = bms_add_members(sets[j], sets[ow]);\nsets[j] = bms_del_members(sets[j], sets[j]);\nsets[j] = bms_add_members(sets[j], sets[ow]); // re-add any deletions\n\nso that the set never becomes fully empty... but ... that's pretty horrid.\n\n00b41463c is in PG16, but I'm not proposing to backpatch this. The\nmisleading comment does not seem critical enough and the resulting\nbehaviour isn't changing, just the performance characteristics.\n\nUnless there's some objection, I plan to push this in the next day or two.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvoXPoaDYEjMj9e1ihZZZynCtGqdAppWgPZMaMQ222NAkw@mail.gmail.com", "msg_date": "Tue, 16 Jan 2024 16:32:31 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Strange Bitmapset manipulation in DiscreteKnapsack()" }, { "msg_contents": "On Tue, Jan 16, 2024 at 11:32 AM David Rowley <[email protected]> wrote:\n\n> While working on [1], I noticed some strange code in\n> DiscreteKnapsack() which seems to be aiming to copy the Bitmapset.\n>\n> It's not that obvious at a casual glance, but:\n>\n> sets[j] = bms_del_members(sets[j], sets[j]);\n>\n> this is aiming to zero all the words in the set by passing the same\n> set in both parameters.\n>\n> Now that 00b41463c changed Bitmapset to have NULL be the only valid\n> representation of an empty set, this code no longer makes sense. We\n> may as well just bms_free() the original set and bms_copy() in the new\n> set as the bms_del_members() call will always pfree the set anyway.\n>\n> I've done that in the attached.\n\n\n+1. This is actually what happens with the original code, i.e.\nbms_del_members() will pfree sets[j] and bms_add_members() will bms_copy\nsets[ow] to sets[j]. But the new code looks more natural.\n\nI also checked other callers of bms_del_members() and did not find\nanother case that passes the same set in both parameters.\n\nThanks\nRichard\n\nOn Tue, Jan 16, 2024 at 11:32 AM David Rowley <[email protected]> wrote:While working on [1], I noticed some strange code in\nDiscreteKnapsack() which seems to be aiming to copy the Bitmapset.\n\nIt's not that obvious at a casual glance, but:\n\nsets[j] = bms_del_members(sets[j], sets[j]);\n\nthis is aiming to zero all the words in the set by passing the same\nset in both parameters.\n\nNow that 00b41463c changed Bitmapset to have NULL be the only valid\nrepresentation of an empty set, this code no longer makes sense.  
We\nmay as well just bms_free() the original set and bms_copy() in the new\nset as the bms_del_members() call will always pfree the set anyway.\n\nI've done that in the attached.+1.  This is actually what happens with the original code, i.e.bms_del_members() will pfree sets[j] and bms_add_members() will bms_copysets[ow] to sets[j].  But the new code looks more natural.I also checked other callers of bms_del_members() and did not findanother case that passes the same set in both parameters.ThanksRichard", "msg_date": "Tue, 16 Jan 2024 17:18:24 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange Bitmapset manipulation in DiscreteKnapsack()" }, { "msg_contents": "On Tue, 16 Jan 2024 at 16:32, David Rowley <[email protected]> wrote:\n>\n> While working on [1], I noticed some strange code in\n> DiscreteKnapsack() which seems to be aiming to copy the Bitmapset.\n>\n> It's not that obvious at a casual glance, but:\n>\n> sets[j] = bms_del_members(sets[j], sets[j]);\n>\n> this is aiming to zero all the words in the set by passing the same\n> set in both parameters.\n>\n> Now that 00b41463c changed Bitmapset to have NULL be the only valid\n> representation of an empty set, this code no longer makes sense. We\n> may as well just bms_free() the original set and bms_copy() in the new\n> set as the bms_del_members() call will always pfree the set anyway.\n>\n> I've done that in the attached.\n>\n> I did consider if we might want bms_merge_members() or\n> bms_exchange_members() or some other function suitably named function\n> to perform a del/add operation, but given the lack of complaints about\n> any performance regressions here, I think it's not worthwhile.\n\nAfter looking at this again and reading more code, I see that\nDiscreteKnapsack() goes to some efforts to minimise memory\nallocations.\n\nThe functions's header comment mentions \"The bitmapsets are all\npre-initialized with an unused high bit so that memory allocation is\ndone only once.\".\n\nI tried adding some debugging output to track how many additional\nallocations we're now causing as a result of 00b41463c. Previously\nthere'd have just been max_weight allocations, but now there's those\nplus the number that's mentioned for \"frees\" below.\n\nNOTICE: DiscreteKnapsack: frees = 373, max_weight = 140, extra = 266.43%\nNOTICE: DiscreteKnapsack: frees = 373, max_weight = 140, extra = 266.43%\nNOTICE: DiscreteKnapsack: frees = 267, max_weight = 100, extra = 267.00%\nNOTICE: DiscreteKnapsack: frees = 267, max_weight = 100, extra = 267.00%\nNOTICE: DiscreteKnapsack: frees = 200, max_weight = 140, extra = 142.86%\nNOTICE: DiscreteKnapsack: frees = 200, max_weight = 140, extra = 142.86%\nNOTICE: DiscreteKnapsack: frees = 30, max_weight = 40, extra = 75.00%\nNOTICE: DiscreteKnapsack: frees = 110, max_weight = 60, extra = 183.33%\nNOTICE: DiscreteKnapsack: frees = 110, max_weight = 60, extra = 183.33%\nNOTICE: DiscreteKnapsack: frees = 110, max_weight = 60, extra = 183.33%\nNOTICE: DiscreteKnapsack: frees = 110, max_weight = 60, extra = 183.33%\n\nand by the looks of the code, the worst case is much worse.\n\nGiven that the code original code was written in a very deliberate way\nto avoid reallocations, I now think that we should maintain that.\n\nI've attached a patch which adds bms_replace_members(). It's basically\nlike bms_copy() but attempts to reuse the member from another set. 
I\nconsidered if the new function should be called bms_copy_inplace(),\nbut left it as bms_replace_members() for now.\n\nNow I wonder if this should be backpatched to PG16.\n\nDavid", "msg_date": "Thu, 18 Jan 2024 13:34:46 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange Bitmapset manipulation in DiscreteKnapsack()" }, { "msg_contents": "\nHi,\n\nDavid Rowley <[email protected]> writes:\n\n> On Tue, 16 Jan 2024 at 16:32, David Rowley <[email protected]> wrote:\n>>\n>>\n>> Now that 00b41463c changed Bitmapset to have NULL be the only valid\n>> representation of an empty set, this code no longer makes sense. We\n>> may as well just bms_free() the original set and bms_copy() in the new\n>> set as the bms_del_members() call will always pfree the set anyway.\n\nI want to know if \"user just want to zero out the flags in bitmapset\nbut keeping the memory allocation\" is a valid requirement. Commit\n00b41463c makes it is hard IIUC. The user case I have is I want to\nkeep the detoast datum in slot->tts_values[1] so that any further\naccess doesn't need to detoast it again, I used a 'Bitmapset' in\nTupleTableSlot which shows which attributes is detoast. all of the\ndetoast values should be pfree-d in ExecClearTuple. However if a\nbms_free the bitmapset everytime in ExecClearTuple and allocate the\nmemory again later makes some noticable performance regression (5%\ndifference in my workload). That is still a open items for that patch. \n\n> ...\n\n> The functions's header comment mentions \"The bitmapsets are all\n> pre-initialized with an unused high bit so that memory allocation is\n> done only once.\".\n> NOTICE: DiscreteKnapsack: frees = 110, max_weight = 60, extra = 183.33%\n> NOTICE: DiscreteKnapsack: frees = 110, max_weight = 60, extra = 183.33%\n>\n> and by the looks of the code, the worst case is much worse.\n>\n\nLooks like this is another user case of \"user just wants to zero out the\nflags in bitmapset but keeping the memory allocation\".\n\n[1] https://www.postgresql.org/message-id/flat/[email protected]\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Thu, 18 Jan 2024 09:41:41 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange Bitmapset manipulation in DiscreteKnapsack()" }, { "msg_contents": "On Thu, Jan 18, 2024 at 8:35 AM David Rowley <[email protected]> wrote:\n\n> The functions's header comment mentions \"The bitmapsets are all\n> pre-initialized with an unused high bit so that memory allocation is\n> done only once.\".\n\n\nAh, I neglected to notice this when reviewing the v1 patch. I guess\nit's implemented this way due to performance considerations, right?\n\n\n> I've attached a patch which adds bms_replace_members(). It's basically\n> like bms_copy() but attempts to reuse the member from another set. 
I\n> considered if the new function should be called bms_copy_inplace(),\n> but left it as bms_replace_members() for now.\n\n\nDo you think we can use 'memcpy(a, b, BITMAPSET_SIZE(b->nwords))'\ndirectly in the new bms_replace_members() instead of copying the\nbitmapwords one by one, like:\n\n- i = 0;\n- do\n- {\n- a->words[i] = b->words[i];\n- } while (++i < b->nwords);\n-\n- a->nwords = b->nwords;\n+ memcpy(a, b, BITMAPSET_SIZE(b->nwords));\n\nBut I'm not sure if this is an improvement or not.\n\nThanks\nRichard\n\nOn Thu, Jan 18, 2024 at 8:35 AM David Rowley <[email protected]> wrote:\nThe functions's header comment mentions \"The bitmapsets are all\npre-initialized with an unused high bit so that memory allocation is\ndone only once.\".Ah, I neglected to notice this when reviewing the v1 patch.  I guessit's implemented this way due to performance considerations, right? \nI've attached a patch which adds bms_replace_members(). It's basically\nlike bms_copy() but attempts to reuse the member from another set. I\nconsidered if the new function should be called bms_copy_inplace(),\nbut left it as bms_replace_members() for now.Do you think we can use 'memcpy(a, b, BITMAPSET_SIZE(b->nwords))'directly in the new bms_replace_members() instead of copying thebitmapwords one by one, like:-   i = 0;-   do-   {-       a->words[i] = b->words[i];-   } while (++i < b->nwords);--   a->nwords = b->nwords;+   memcpy(a, b, BITMAPSET_SIZE(b->nwords));But I'm not sure if this is an improvement or not.ThanksRichard", "msg_date": "Thu, 18 Jan 2024 10:22:24 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange Bitmapset manipulation in DiscreteKnapsack()" }, { "msg_contents": "Thanks for having a look at this again.\n\nOn Thu, 18 Jan 2024 at 15:22, Richard Guo <[email protected]> wrote:\n> Do you think we can use 'memcpy(a, b, BITMAPSET_SIZE(b->nwords))'\n> directly in the new bms_replace_members() instead of copying the\n> bitmapwords one by one, like:\n>\n> - i = 0;\n> - do\n> - {\n> - a->words[i] = b->words[i];\n> - } while (++i < b->nwords);\n> -\n> - a->nwords = b->nwords;\n> + memcpy(a, b, BITMAPSET_SIZE(b->nwords));\n>\n> But I'm not sure if this is an improvement or not.\n\nI considered this earlier but felt it was going against the method\nused in other places in the file. However, on relooking I do see\nbms_copy() does a memcpy().\n\nI'm still in favour of keeping it the way the v2 patch does it for 2 reasons:\n\n1) Ignoring bms_copy(), we use do/while in all other functions where\nwe operate on all words in the set.\n2) memcpy isn't that fast for small numbers of bytes when that number\nof bytes isn't known at compile-time.\n\nThe do/while method can take advantage of knowing that the Bitmapset\nwill have at least 1 word allowing a single loop check when the set\nonly has a single word, which I expect most Bitmapsets do.\n\nOf course, memcpy() is fewer lines of code and this likely isn't that\nperformance critical, so there's certainly arguments for memcpy().\nHowever, it isn't quite as few lines as the patch you pasted. 
We\nstill need to overwrite a->nwords to ensure we grow the set or shrink\nit to trim off any trailing zero words (which I didn't feel any need\nto actually set to 0).\n\nDavid\n\n\n", "msg_date": "Thu, 18 Jan 2024 16:24:51 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange Bitmapset manipulation in DiscreteKnapsack()" }, { "msg_contents": "On Thu, 18 Jan 2024 at 14:58, Andy Fan <[email protected]> wrote:\n> I want to know if \"user just want to zero out the flags in bitmapset\n> but keeping the memory allocation\" is a valid requirement. Commit\n> 00b41463c makes it is hard IIUC.\n\nLooking at your patch, I see:\n\n+/*\n+ * does this break commit 00b41463c21615f9bf3927f207e37f9e215d32e6?\n+ * but I just found alloc memory and free the memory is too bad\n+ * for this current feature. So let see ...;\n+ */\n+void\n+bms_zero(Bitmapset *a)\n\nI understand the requirement here, but, to answer the question in the\ncomment -- Yes, that does violate the requirements for how an empty\nset is represented and as of c7e5e994b and possibly earlier, any\nsubsequent Bitmapset operations will cause an Assert failure due to\nthe trailing zero word(s) left by bms_zero().\n\n> Looks like this is another user case of \"user just wants to zero out the\n> flags in bitmapset but keeping the memory allocation\".\n\nI don't really see a way to have it work the way you want without\nviolating the new representation of an empty Bitmapset. Per [1],\nthere's quite a performance advantage to 00b41463c plus the additional\ndon't allow trailing empty words rule done in a8c09daa8. So I don't\nwish the rules were more lax.\n\nMaybe Bitmapsets aren't the best fit for your need. Maybe it would be\nbetter to have a more simple fixed size bitset that you could allocate\nin the same allocation as the TupleTableSlot's tts_null and tts_values\narrays. The slot's TupleDesc should know how many bits are needed.\n\nDavid\n\n[1] https://postgr.es/m/CAJ2pMkYcKHFBD_OMUSVyhYSQU0-j9T6NZ0pL6pwbZsUCohWc7Q%40mail.gmail.com\n\n\n", "msg_date": "Thu, 18 Jan 2024 17:08:33 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange Bitmapset manipulation in DiscreteKnapsack()" }, { "msg_contents": "\nHi, \nDavid Rowley <[email protected]> writes:\n\n> On Thu, 18 Jan 2024 at 14:58, Andy Fan <[email protected]> wrote:\n>> I want to know if \"user just want to zero out the flags in bitmapset\n>> but keeping the memory allocation\" is a valid requirement. Commit\n>> 00b41463c makes it is hard IIUC.\n>\n> Looking at your patch, I see:\n>\n> +/*\n> + * does this break commit 00b41463c21615f9bf3927f207e37f9e215d32e6?\n> + * but I just found alloc memory and free the memory is too bad\n> + * for this current feature. So let see ...;\n> + */\n> +void\n> +bms_zero(Bitmapset *a)\n>\n> I understand the requirement here, but, to answer the question in the\n> comment -- Yes, that does violate the requirements for how an empty\n> set is represented and as of c7e5e994b and possibly earlier, any\n> subsequent Bitmapset operations will cause an Assert failure due to\n> the trailing zero word(s) left by bms_zero().\n\nThanks for the understanding and confirmation.\n\n>> Looks like this is another user case of \"user just wants to zero out the\n>> flags in bitmapset but keeping the memory allocation\".\n>\n> I don't really see a way to have it work the way you want without\n> violating the new representation of an empty Bitmapset. 
Per [1],\n> there's quite a performance advantage to 00b41463c plus the additional\n> don't allow trailing empty words rule done in a8c09daa8. So I don't\n> wish the rules were more lax.\n\nI agree with this.\n\n>\n> Maybe Bitmapsets aren't the best fit for your need. Maybe it would be\n> better to have a more simple fixed size bitset that you could allocate\n> in the same allocation as the TupleTableSlot's tts_null and tts_values\n> arrays. The slot's TupleDesc should know how many bits are needed.\n\nI think this is the direction we have to go. There is no bitset struct\nyet, so I prefer to design it as below, The APIs are pretty similar with\nBitmapset. \n\ntypdef char Bitset;\n\nBitset *bitset_init(int size);\nvoid bitset_add_member(Bitset *bitset, int x);\nvoid bitset_del_member(Bitset *bitset, int x);\nBitset *bitset_add_members(Bitset *bitset1, Bitset *bitset2);\nbool bitset_is_member(int x);\nint bitset_next_member(Bitset *bitset, int i);\nBitset *bitset_clear();\n\nAfter this, we may use it for DiscreteKnapsack as well since the\nnum_items is fixed as well and this user case wants the memory allocation \nis done only once.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Thu, 18 Jan 2024 13:10:57 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange Bitmapset manipulation in DiscreteKnapsack()" }, { "msg_contents": "\nHi,\n\nDavid Rowley <[email protected]> writes:\n\n> Given that the code original code was written in a very deliberate way\n> to avoid reallocations, I now think that we should maintain that.\n>\n> I've attached a patch which adds bms_replace_members(). It's basically\n> like bms_copy() but attempts to reuse the member from another set. I\n> considered if the new function should be called bms_copy_inplace(),\n> but left it as bms_replace_members() for now.\n\nI find the following code in DiscreteKnapsack is weird.\n\n\n\tfor (i = 0; i <= max_weight; ++i)\n\t{\n\t\tvalues[i] = 0;\n\n** memory allocation here, and the num_items bit is removed later **\n\t\n\t\tsets[i] = bms_make_singleton(num_items);\n\t}\n\n\n ** num_items bit is removed here **\n\tresult = bms_del_member(bms_copy(sets[max_weight]), num_items);\n\nI can't access the github.com now so I can't test my idea, but basiclly\nI think we may need some improvement here. like 'sets[i] = NULL;' at the\nfirst and remove the bms_del_member later.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Thu, 18 Jan 2024 20:00:54 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange Bitmapset manipulation in DiscreteKnapsack()" }, { "msg_contents": "On Fri, 19 Jan 2024 at 01:07, Andy Fan <[email protected]> wrote:\n> I find the following code in DiscreteKnapsack is weird.\n>\n>\n> for (i = 0; i <= max_weight; ++i)\n> {\n> values[i] = 0;\n>\n> ** memory allocation here, and the num_items bit is removed later **\n>\n> sets[i] = bms_make_singleton(num_items);\n> }\n>\n>\n> ** num_items bit is removed here **\n> result = bms_del_member(bms_copy(sets[max_weight]), num_items);\n\nIt does not seem weird to me. 
If the set is going to have multiple\nwords then adding a member 1 higher than the highest we'll ever add\nensures the set has enough words and we don't need to repalloc to grow\nthe set when we bms_add_member().\n\nDavid\n\n\n", "msg_date": "Fri, 19 Jan 2024 10:31:33 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange Bitmapset manipulation in DiscreteKnapsack()" }, { "msg_contents": "On Thu, 18 Jan 2024 at 16:24, David Rowley <[email protected]> wrote:\n> On Thu, 18 Jan 2024 at 15:22, Richard Guo <[email protected]> wrote:\n> > Do you think we can use 'memcpy(a, b, BITMAPSET_SIZE(b->nwords))'\n> > directly in the new bms_replace_members() instead of copying the\n> > bitmapwords one by one, like:\n> >\n> > - i = 0;\n> > - do\n> > - {\n> > - a->words[i] = b->words[i];\n> > - } while (++i < b->nwords);\n> > -\n> > - a->nwords = b->nwords;\n> > + memcpy(a, b, BITMAPSET_SIZE(b->nwords));\n> >\n> > But I'm not sure if this is an improvement or not.\n>\n> I considered this earlier but felt it was going against the method\n> used in other places in the file. However, on relooking I do see\n> bms_copy() does a memcpy().\n\nI feel it's not worth debating the memcpy thing any further, so I've\npushed the v2 patch.\n\nThanks for reviewing.\n\nDavid\n\n\n", "msg_date": "Fri, 19 Jan 2024 10:46:51 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange Bitmapset manipulation in DiscreteKnapsack()" }, { "msg_contents": "\nDavid Rowley <[email protected]> writes:\n\n> On Fri, 19 Jan 2024 at 01:07, Andy Fan <[email protected]> wrote:\n>> I find the following code in DiscreteKnapsack is weird.\n>>\n>>\n>> for (i = 0; i <= max_weight; ++i)\n>> {\n>> values[i] = 0;\n>>\n>> ** memory allocation here, and the num_items bit is removed later **\n>>\n>> sets[i] = bms_make_singleton(num_items);\n>> }\n>>\n>>\n>> ** num_items bit is removed here **\n>> result = bms_del_member(bms_copy(sets[max_weight]), num_items);\n>\n> It does not seem weird to me. If the set is going to have multiple\n> words then adding a member 1 higher than the highest we'll ever add\n> ensures the set has enough words and we don't need to repalloc to grow\n> the set when we bms_add_member().\n\nHmm, I missed this part, thanks for the explaination. If bitset feature\ncan get in someday, the future user case like this can use bitset\ndirectly to avoid this trick method. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Fri, 19 Jan 2024 11:01:03 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange Bitmapset manipulation in DiscreteKnapsack()" } ]
[ { "msg_contents": "\nHi, hackers\n\nRecently, I'm trying to implement a new TAM for PostgreSQL, I find there is no\nAPI for handling table's option. For example:\n\n CREATE TABLE t (...) USING new_am WITH (...);\n\nIs it possible add a new API to handle table's option in TableAmRoutine?\n\n--\nRegrads,\nJapin Li.\n\n\n", "msg_date": "Tue, 16 Jan 2024 15:44:09 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Introduce a new API for TableAmRoutine" } ]
[ { "msg_contents": "hi.\nhttps://www.postgresql.org/docs/current/sql-merge.html\n\nCompatibility section:\n\"WITH clause\"\nshould be\n\n<literal>WITH</literal> clause\n\n\n", "msg_date": "Tue, 16 Jan 2024 18:30:14 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "sql-merge.html Compatibility section, typo." }, { "msg_contents": "> On 16 Jan 2024, at 11:30, jian he <[email protected]> wrote:\n> \n> hi.\n> https://www.postgresql.org/docs/current/sql-merge.html\n> \n> Compatibility section:\n> \"WITH clause\"\n> should be\n> \n> <literal>WITH</literal> clause\n\nAgreed, the rest of the page marks up \"WITH clause\" like that so I'll go ahead\nand backpatch that down to v15.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 16 Jan 2024 11:37:47 +0100", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql-merge.html Compatibility section, typo." } ]
[ { "msg_contents": "Dear PostgreSQL Team,\n\n\n\nI am writing to inform you that our PostgreSQL database service is\ncurrently down. We are experiencing an unexpected interruption, and we are\nseeking your expertise to help us resolve this issue promptly.\n\nWe would greatly appreciate your immediate attention to this matter. If\nthere are specific steps we should follow or additional information you\nrequire, please let us know as soon as possible.\n\nYour assistance in resolving this issue is crucial, and we are confident in\nyour expertise to help us bring the PostgreSQL database back online.\n\n\n\nHere are some details about the current situation:\n\n*1) checking the status:-*\n\n/apps/postgresdb/pgsql/bin/pg_ctl status -D /apps/postgresdb/pgsql/data\npg_ctl: no server running\n\n\n*2) Starting the server*\n\n/apps/postgresdb/pgsql/bin/pg_ctl start -D /apps/postgresdb/pgsql/data\nwaiting for server to start....2024-01-15 11:15:08.010 GMT [] LOG:\n listening on IPv4 address \"0.0.0.0\", port\nLOG: listening on IPv6 address \"::\", port\nLOG: listening on Unix socket \"/tmp/.s.PGSQL.\"\nLOG: database system was interrupted while in recovery at 2024-01-15\n10:51:44 GMT\nHINT: This probably means that some data is corrupted and you will have to\nuse the last backup for recovery.\nFATAL: the database system is starting up\nLOG: database system was not properly shut down; automatic recovery in\nprogress\nLOG: redo starts at 0/\nFATAL: could not access status of transaction\nDETAIL: Could not read from file \"pg_xact/0001\" at offset 204800: Success.\nCONTEXT: WAL redo at 0/7A845458 for Transaction/COMMIT: 2023-12-30\n23:26:16.017062+00\nLOG: startup process (PID 2731458) exited with exit code 1\nLOG: aborting startup due to startup process failure\nLOG: database system is shut down\n stopped waiting\npg_ctl: could not start server\nExamine the log output.\n\n\nThanks\nBablu Nayak\n\nDear PostgreSQL Team, I am writing to inform you that our PostgreSQL database service is currently down. We are experiencing an unexpected interruption, and we are seeking your expertise to help us resolve this issue promptly.We would greatly appreciate your immediate attention to this matter. If there are specific steps we should follow or additional information you require, please let us know as soon as possible.Your assistance in resolving this issue is crucial, and we are confident in your expertise to help us bring the PostgreSQL database back online. 
Here are some details about the current situation:1) checking the status:-/apps/postgresdb/pgsql/bin/pg_ctl status -D /apps/postgresdb/pgsql/datapg_ctl: no server running2) Starting the server/apps/postgresdb/pgsql/bin/pg_ctl start -D /apps/postgresdb/pgsql/datawaiting for server to start....2024-01-15 11:15:08.010 GMT [] LOG:  listening on IPv4 address \"0.0.0.0\", port LOG:  listening on IPv6 address \"::\", port LOG:  listening on Unix socket \"/tmp/.s.PGSQL.\"LOG:  database system was interrupted while in recovery at 2024-01-15 10:51:44 GMTHINT:  This probably means that some data is corrupted and you will have to use the last backup for recovery.FATAL:  the database system is starting upLOG:  database system was not properly shut down; automatic recovery in progressLOG:  redo starts at 0/FATAL:  could not access status of transactionDETAIL:  Could not read from file \"pg_xact/0001\" at offset 204800: Success.CONTEXT:  WAL redo at 0/7A845458 for Transaction/COMMIT: 2023-12-30 23:26:16.017062+00LOG:  startup process (PID 2731458) exited with exit code 1LOG:  aborting startup due to startup process failureLOG:  database system is shut down stopped waitingpg_ctl: could not start serverExamine the log output.Thanks Bablu Nayak", "msg_date": "Tue, 16 Jan 2024 16:52:20 +0530", "msg_from": "Bablu Kumar Nayak <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres Database Service Interruption" }, { "msg_contents": "On Tuesday, January 16, 2024, Bablu Kumar Nayak <\[email protected]> wrote:\n\n> Dear PostgreSQL Team,\n>\n>\n>\n> I am writing to inform you that our PostgreSQL database service is\n> currently down. We are experiencing an unexpected interruption, and we are\n> seeking your expertise to help us resolve this issue promptly.\n>\n> We would greatly appreciate your immediate attention to this matter. If\n> there are specific steps we should follow or additional information you\n> require, please let us know as soon as possible.\n>\n\nThis list is here to discuss patch development for the project. As an open\nsource project there isn’t really a formal support list, though requests\nfor help are expected to be sent to the -general list, or maybe -admin in\nthis case.\n\nIf urgency is indeed a thing in your world you might consider paid support\nwhich comes with an expectation of timeliness for help.\n\nYou might need to just restore from a backup as the log file says.\n\nDavid J.\n\nOn Tuesday, January 16, 2024, Bablu Kumar Nayak <[email protected]> wrote:Dear PostgreSQL Team, I am writing to inform you that our PostgreSQL database service is currently down. We are experiencing an unexpected interruption, and we are seeking your expertise to help us resolve this issue promptly.We would greatly appreciate your immediate attention to this matter. If there are specific steps we should follow or additional information you require, please let us know as soon as possible.This list is here to discuss patch development for the project.  As an open source project there isn’t really a formal support list, though requests for help are expected to be sent to the -general list, or maybe -admin in this case.If urgency is indeed a thing in your world you might consider paid support which comes with an expectation of timeliness for help.You might need to just restore from a backup as the log file says.David J.", "msg_date": "Tue, 16 Jan 2024 06:21:12 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Database Service Interruption" } ]
[ { "msg_contents": "This issue has been reported in the <pgsql-bugs> list at the link below, but has not received a reply.\nhttps://www.postgresql.org/message-id/18286-f6273332500c2a62%40postgresql.org\nHopefully to get some response from kernel hackers, thanks!\n\nHi,\nWhen reindex the partitioned table's index and the drop index are executed concurrently, we may encounter the error \"could not open relation with OID”.\n\nThe reconstruction of the partitioned table's index is completed in multiple transactions and can be simply summarized into the following steps:\n1. Obtain the oids of all partition indexes in the ReindexPartitions function, and then commit the transaction to release all locks.\n2. Reindex each index in turn\n 2.1 Start a new transaction\n 2.2 Check whether the index still exists\n 2.3 Call the reindex_index function to complete the index rebuilding work\n 2.4 Submit transaction\n\nThere is no lock between steps 2.2 and 2.3 to protect the heap table and index from being deleted, so whether the heap table still exists is determined in the reindex_index function, but the index is not checked.\n\nOne fix I can think of is: after successfully opening the heap table in reindex_index, check again whether the index still exists, Something like this:\ndiff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c\nindex 143fae01eb..21777ec98c 100644\n--- a/src/backend/catalog/index.c\n+++ b/src/backend/catalog/index.c\n@@ -3594,6 +3594,17 @@ reindex_index(Oid indexId, bool skip_constraint_checks, char persistence,\n if (!heapRelation)\n return;\n \n+ /*\n+ * Before opening the index, check if the index relation still exists.\n+ * If index relation is gone, leave.\n+ */\n+ if (params->options & REINDEXOPT_MISSING_OK != 0 &&\n+ !SearchSysCacheExists1(RELOID, ObjectIdGetDatum(indexId)))\n+ {\n+ table_close(heapRelation, NoLock);\n+ return;\n+ }\n+\n /*\n * Switch to the table owner's userid, so that any index functions are run\n * as that user. Also lock down security-restricted operations and\n\nThe above analysis is based on the latest master branch.\n\nI'm not sure if my idea is reasonable, I hope you can give me some suggestions. Thanks.\n\nBest Regards,\n\n\nFei Changhong\nAlibaba Cloud Computing Ltd.\n\n\nThis issue has been reported in the <pgsql-bugs> list at the link below, but has not received a reply.https://www.postgresql.org/message-id/18286-f6273332500c2a62%40postgresql.orgHopefully to get some response from kernel hackers, thanks!Hi,When reindex the partitioned table's index and the drop index are executed concurrently, we may encounter the error \"could not open relation with OID”.The reconstruction of the partitioned table's index is completed in multiple transactions and can be simply summarized into the following steps:1. Obtain the oids of all partition indexes in the ReindexPartitions function, and then commit the transaction to release all locks.2. 
Reindex each index in turn   2.1 Start a new transaction   2.2 Check whether the index still exists   2.3 Call the reindex_index function to complete the index rebuilding work   2.4 Submit transactionThere is no lock between steps 2.2 and 2.3 to protect the heap table and index from being deleted, so whether the heap table still exists is determined in the reindex_index function, but the index is not checked.One fix I can think of is: after successfully opening the heap table in reindex_index, check again whether the index still exists, Something like this:diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.cindex 143fae01eb..21777ec98c 100644--- a/src/backend/catalog/index.c+++ b/src/backend/catalog/index.c@@ -3594,6 +3594,17 @@ reindex_index(Oid indexId, bool skip_constraint_checks, char persistence,        if (!heapRelation)                return; +       /*+        * Before opening the index, check if the index relation still exists.+        * If index relation is gone, leave.+        */+       if (params->options & REINDEXOPT_MISSING_OK != 0 &&+               !SearchSysCacheExists1(RELOID, ObjectIdGetDatum(indexId)))+       {+               table_close(heapRelation, NoLock);+               return;+       }+        /*         * Switch to the table owner's userid, so that any index functions are run         * as that user.  Also lock down security-restricted operations andThe above analysis is based on the latest master branch.I'm not sure if my idea is reasonable, I hope you can give me some suggestions. Thanks.Best Regards,\nFei ChanghongAlibaba Cloud Computing Ltd.", "msg_date": "Tue, 16 Jan 2024 19:51:47 +0800", "msg_from": "feichanghong <[email protected]>", "msg_from_op": true, "msg_subject": "\"ERROR: could not open relation with OID 16391\" error was encountered\n when reindexing" }, { "msg_contents": "Hi,\n\n> This issue has been reported in the <pgsql-bugs> list at the link below, but has not received a reply.\n> https://www.postgresql.org/message-id/18286-f6273332500c2a62%40postgresql.org\n> Hopefully to get some response from kernel hackers, thanks!\n>\n> Hi,\n> When reindex the partitioned table's index and the drop index are executed concurrently, we may encounter the error \"could not open relation with OID”.\n>\n> The reconstruction of the partitioned table's index is completed in multiple transactions and can be simply summarized into the following steps:\n> 1. Obtain the oids of all partition indexes in the ReindexPartitions function, and then commit the transaction to release all locks.\n> 2. 
Reindex each index in turn\n> 2.1 Start a new transaction\n> 2.2 Check whether the index still exists\n> 2.3 Call the reindex_index function to complete the index rebuilding work\n> 2.4 Submit transaction\n>\n> There is no lock between steps 2.2 and 2.3 to protect the heap table and index from being deleted, so whether the heap table still exists is determined in the reindex_index function, but the index is not checked.\n>\n> One fix I can think of is: after successfully opening the heap table in reindex_index, check again whether the index still exists, Something like this:\n> diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c\n> index 143fae01eb..21777ec98c 100644\n> --- a/src/backend/catalog/index.c\n> +++ b/src/backend/catalog/index.c\n> @@ -3594,6 +3594,17 @@ reindex_index(Oid indexId, bool skip_constraint_checks, char persistence,\n> if (!heapRelation)\n> return;\n>\n> + /*\n> + * Before opening the index, check if the index relation still exists.\n> + * If index relation is gone, leave.\n> + */\n> + if (params->options & REINDEXOPT_MISSING_OK != 0 &&\n> + !SearchSysCacheExists1(RELOID, ObjectIdGetDatum(indexId)))\n> + {\n> + table_close(heapRelation, NoLock);\n> + return;\n> + }\n> +\n> /*\n> * Switch to the table owner's userid, so that any index functions are run\n> * as that user. Also lock down security-restricted operations and\n>\n> The above analysis is based on the latest master branch.\n>\n> I'm not sure if my idea is reasonable, I hope you can give me some suggestions. Thanks.\n\nAny chance you could provide minimal steps to reproduce the issue on\nan empty PG instance, ideally as a script? That's going to be helpful\nto reproduce / investigate the issue and also make sure that it's\nfixed.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 16 Jan 2024 15:06:34 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"ERROR: could not open relation with OID 16391\" error was\n encountered when reindexing" }, { "msg_contents": "Thank you for your attention.\n\n> \n> Any chance you could provide minimal steps to reproduce the issue on\n> an empty PG instance, ideally as a script? That's going to be helpful\n> to reproduce / investigate the issue and also make sure that it's\n> fixed.\n\n\nI have provided a python script in the attachment to minimize the reproduction of the issue.\n\nThe specific reproduction steps are as follows:\n1. Initialize the data\n```\nDROP TABLE IF EXISTS tbl_part;\nCREATE TABLE tbl_part (a integer) PARTITION BY RANGE (a);\nCREATE TABLE tbl_part_p1 PARTITION OF tbl_part FOR VALUES FROM (0) TO (10);\nCREATE INDEX ON tbl_part(a);\n```\n2. session1 reindex and gdb break at index.c:3585\n```\nREINDEX INDEX tbl_part_a_idx;\n```\n3. session2 drop index succeed\n\n```\nDROP INDEX tbl_part_a_idx;\n```\n4. session1 gdb continue\n\n\nBest Regards,\nFei Changhong\nAlibaba Cloud Computing Ltd.", "msg_date": "Tue, 16 Jan 2024 22:26:52 +0800", "msg_from": "feichanghong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"ERROR: could not open relation with OID 16391\" error was\n encountered when reindexing" }, { "msg_contents": "I have provided a python script in the attachment to minimize the reproduction of the issue.\r\n\r\nI'm sorry that I lost the attached script in my last reply, but I've added it in this reply.\r\n\r\n\r\nYou can also use psql to reproduce it with the following steps:\r\n1. 
Initialize the data\r\n```\r\nDROP TABLE IF EXISTS tbl_part;\r\nCREATE TABLE tbl_part (a integer) PARTITION BY RANGE (a);\r\nCREATE TABLE tbl_part_p1 PARTITION OF tbl_part FOR VALUES FROM (0) TO (10);\r\nCREATE INDEX ON tbl_part(a);\r\n```\r\n2. session1 reindex and gdb break at index.c:3585\r\n```\r\nREINDEX INDEX tbl_part_a_idx;\r\n```\r\n3. session2 drop index succeed\r\n\r\n\r\n```\r\nDROP INDEX tbl_part_a_idx;\r\n```\r\n4. session1 gdb continue\r\n\r\n\r\n\r\nBest Regards,\r\nFei Changhong\r\nAlibaba Cloud Computing Ltd.", "msg_date": "Wed, 17 Jan 2024 00:21:11 +0800", "msg_from": "\"=?ISO-8859-1?B?ZmVpY2hhbmdob25n?=\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"ERROR: could not open relation with OID 16391\" error was\n encountered when reindexing" }, { "msg_contents": "\"=?ISO-8859-1?B?ZmVpY2hhbmdob25n?=\" <[email protected]> writes:\n> 2. session1 reindex and gdb break at index.c:3585\n\nThis is extremely nonspecific, as line numbers in our code change\nconstantly. Please quote a chunk of code surrounding that\nand indicate which line you are trying to stop at.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 Jan 2024 11:34:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"ERROR: could not open relation with OID 16391\" error was\n encountered when reindexing" }, { "msg_contents": "> This is extremely nonspecific, as line numbers in our code change\n> constantly. Please quote a chunk of code surrounding that\n> and indicate which line you are trying to stop at.\n\nThanks for the suggestion, I've refined the steps below to reproduce:\n1. Initialize the data\n```\nDROP TABLE IF EXISTS tbl_part;\nCREATE TABLE tbl_part (a integer) PARTITION BY RANGE (a);\nCREATE TABLE tbl_part_p1 PARTITION OF tbl_part FOR VALUES FROM (0) TO (10);\nCREATE INDEX ON tbl_part(a);\n```\n2. session1 reindex and the gdb break after the reindex_index function successfully obtains the heapId, as noted in the code chunk below:\n\nreindex_index(Oid indexId, bool skip_constraint_checks, char persistence,\n\t\t\t const ReindexParams *params)\n{\n\t......\n\t/*\n\t * Open and lock the parent heap relation. ShareLock is sufficient since\n\t * we only need to be sure no schema or data changes are going on.\n\t */\n\theapId = IndexGetRelation(indexId,\n\t\t\t\t\t\t\t (params->options & REINDEXOPT_MISSING_OK) != 0);\n\t====> gdb break at here\n\t/* if relation is missing, leave */\n\tif (!OidIsValid(heapId))\n\t\treturn;\n```\nREINDEX INDEX tbl_part_a_idx;\n```\n3. session2 drop index succeed\n\n```\nDROP INDEX tbl_part_a_idx;\n```\n4. session1 gdb continue\n\n\nBest Regards,\nFei Changhong\nAlibaba Cloud Computing Ltd.\n\n\nThis is extremely nonspecific, as line numbers in our code changeconstantly.  Please quote a chunk of code surrounding thatand indicate which line you are trying to stop at.Thanks for the suggestion, I've refined the steps below to reproduce:1. Initialize the data```DROP TABLE IF EXISTS tbl_part;CREATE TABLE tbl_part (a integer) PARTITION BY RANGE (a);CREATE TABLE tbl_part_p1 PARTITION OF tbl_part FOR VALUES FROM (0) TO (10);CREATE INDEX ON tbl_part(a);```2. session1 reindex and the gdb break after the reindex_index function successfully obtains the heapId, as noted in the code chunk below:reindex_index(Oid indexId, bool skip_constraint_checks, char persistence,  const ReindexParams *params){ ...... /* * Open and lock the parent heap relation.  
ShareLock is sufficient since * we only need to be sure no schema or data changes are going on. */ heapId = IndexGetRelation(indexId,  (params->options & REINDEXOPT_MISSING_OK) != 0); ====> gdb break at here /* if relation is missing, leave */ if (!OidIsValid(heapId)) return;```REINDEX INDEX tbl_part_a_idx;```3. session2 drop index succeed```DROP INDEX tbl_part_a_idx;```4. session1 gdb continue\nBest Regards,Fei ChanghongAlibaba Cloud Computing Ltd.", "msg_date": "Wed, 17 Jan 2024 00:54:26 +0800", "msg_from": "feichanghong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"ERROR: could not open relation with OID 16391\" error was\n encountered when reindexing" }, { "msg_contents": "On Wed, Jan 17, 2024 at 12:54:26AM +0800, feichanghong wrote:\n>> This is extremely nonspecific, as line numbers in our code change\n>> constantly. Please quote a chunk of code surrounding that\n>> and indicate which line you are trying to stop at.\n> \n> Thanks for the suggestion, I've refined the steps below to reproduce:\n\nYeah, thanks for the steps. I am not surprised that there are still a\nfew holes in this area. CONCURRENTLY can behave differently depending\non the step where the old index is getting opened.\n\nFor this specific job, I have always wanted a try_index_open() that\nwould attempt to open the index with a relkind check, perhaps we could\nintroduce one and reuse it here?\n--\nMichael", "msg_date": "Wed, 17 Jan 2024 15:44:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"ERROR: could not open relation with OID 16391\" error was\n encountered when reindexing" }, { "msg_contents": "> \n> For this specific job, I have always wanted a try_index_open() that\n> would attempt to open the index with a relkind check, perhaps we could\n> introduce one and reuse it here?\n\n\nYes, replacing index_open with try_index_open solves the problem. The\nidea is similar to my initial report of \"after successfully opening the heap\ntable in reindex_index, check again whether the index still exists”.\nBut it should be better to introduce try_index_open.\n\n\nBest Regards,\nFei Changhong\nAlibaba Cloud Computing Ltd.", "msg_date": "Wed, 17 Jan 2024 15:56:14 +0800", "msg_from": "feichanghong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"ERROR: could not open relation with OID 16391\" error was\n encountered when reindexing" }, { "msg_contents": "It has been verified that the patch in the attachment can solve the\r\nabove problems. I sincerely look forward to your suggestions!", "msg_date": "Wed, 17 Jan 2024 16:03:46 +0800", "msg_from": "\"=?ISO-8859-1?B?ZmVpY2hhbmdob25n?=\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"ERROR: could not open relation with OID 16391\" error was\n encountered when reindexing" }, { "msg_contents": "On Wed, Jan 17, 2024 at 04:03:46PM +0800, feichanghong wrote:\n> It has been verified that the patch in the attachment can solve the\n> above problems. I sincerely look forward to your suggestions!\n\nThanks for the patch. I have completely forgotten to update this\nthread. 
Except for a few comments and names that I've tweaked, this\nwas OK, so applied and backpatched after splitting things into two:\n- One commit for try_index_open().\n- Second commit for the fix in reindex_index().\n\nI've looked at the concurrent paths as well, and even if these involve\nmore relations opened we maintain a session lock on the parent\nrelations that we manipulate, so I could not see a pattern where the\nindex would be dropped and where we'd try to open it. Now, there are\ncases where it is possible to deadlock for the concurrent paths, but\nthat's not new: schema or database level reindexes can also hit that.\n\nThis is one of these areas where tests are hard to write now because\nwe want to stop operations at specific points but we cannot. \n--\nMichael", "msg_date": "Fri, 19 Jan 2024 14:26:54 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"ERROR: could not open relation with OID 16391\" error was\n encountered when reindexing" } ]
[ { "msg_contents": "Hi developers,\r\n\r\nI was working on loans and bank financing, specifically focusing on Amortization Systems. I had the need to reverse the counter for the total number of installments or for a specific set of installments. This \"reversal\" is essentially a reverse \"row_number\" function. I realized that it is to \"hard work\" to write PL/foo functions for this or even to implement it in just SQL using little code.\r\n\r\nTo streamline the daily process, I conducted a laboratory (prototype, test) using the PostgreSQL 14.3 version doing a small customization. I implemented the window function \"row_number_desc,\" as detailed below.\r\n\r\nI would like to assess the feasibility of incorporating this into a future version of Postgres, given its significant utility and practicality in handling bank contract installments in many fields of Finacial Math, because to do use \"row_number_desc() over()\" is most easy that write a PL/foo or a big lenght SQL string that to do the \"descendent case\".\r\n\r\nWhat is your opinion regarding this suggestion?\r\nIs it possible to make this a 'feature patch' candidate to PostgreSQL 17?\r\n\r\nSUMMARY (IMPLEMENTATION and RESULT):\r\n-------------------------------------------------------------------------------------\r\n/home/postgresql-14.3-custom/src/backend/utils/adt/windowfuncs.c\r\n\r\n/*\r\n * row_number_desc\r\n * Performs the inverse of row_number function, is a descendent result.\r\n */\r\nDatum\r\nwindow_row_number_desc(PG_FUNCTION_ARGS)\r\n{\r\n WindowObject winobj = PG_WINDOW_OBJECT();\r\n    int64 totalrows = WinGetPartitionRowCount(winobj);\r\n int64 curpos = WinGetCurrentPosition(winobj);\r\n            \r\n WinSetMarkPosition(winobj, curpos);\r\n PG_RETURN_INT64(totalrows - curpos);\r\n}\r\n-------------------------------------------------------------------------------------\r\n/home/postgresql-14.3-custom/src/include/catalog/pg_proc.dat\r\n\r\n{ oid => '13882', descr => 'row number descendent within partition',\r\n proname => 'row_number_desc', prokind => 'w', proisstrict => 'f',\r\n prorettype => 'int8', proargtypes => '', prosrc => 'window_row_number_desc' },\r\n\r\nNote: In this step, I know that I'll need to use an unused OID returned by the 'src/include/catalog/unused_oids' script.\r\n-------------------------------------------------------------------------------------\r\n/home/postgresql-14.3-custom/src/backend/catalog/postgres.bki\r\ninsert ( 13882 row_number_desc 11 10 12 1 0 0 0 w f f f f i s 0 0 20 '' _null_ _null_ _null_ _null_ _null_ window_row_number_desc _null_ _null_ _null_ _null_ )\r\n\r\nNote: In this step, I know that I'll need to use an unused OID returned by the 'src/include/catalog/unused_oids' script.\r\n-------------------------------------------------------------------------------------\r\nperl -I /home/postgresql-14.3-custom/src/backend/catalog Gen_fmgrtab.pl --include-path / /home/postgresql-14.3-custom/src/include/catalog/pg_proc.dat --output /home\r\n\r\nApplying the \"row_number() over() DESC\" function (basic example):\r\n[cid:7f9bab27-09e3-4a6c-b47c-6cd7983d79a4]\r\n\r\nTks,\r\nMaiquel Orestes Grassi.", "msg_date": "Tue, 16 Jan 2024 12:54:16 +0000", "msg_from": "Maiquel Grassi <[email protected]>", "msg_from_op": true, "msg_subject": "New Window Function: ROW_NUMBER_DESC() OVER() ?" 
}, { "msg_contents": "On Tuesday, January 16, 2024, Maiquel Grassi <[email protected]> wrote:\n\n> Hi developers,\n>\n> I was working on loans and bank financing, specifically focusing on\n> Amortization Systems. I had the need to reverse the counter for the total\n> number of installments or for a specific set of installments. This\n> \"reversal\" is essentially a reverse \"row_number\" function. I realized that\n> it is to \"hard work\" to write PL/foo functions for this or even to\n> implement it in just SQL using little code.\n>\n\nI think “row_number() over (order by … desc)” is a sufficient way to get\nthis behavior and this isn’t something useful enough to warrant being the\nfirst ordering-specific function in the system.\n\nDavid J.\n\nOn Tuesday, January 16, 2024, Maiquel Grassi <[email protected]> wrote:\n\n\nHi developers,\n\nI was working on loans and bank financing, specifically focusing on Amortization Systems. I had the need to reverse the counter for the total number of installments or for a specific set of installments. This \"reversal\" is essentially a reverse \"row_number\"\n function. I realized that it is to \"hard work\" to write PL/foo functions for this or even to implement it in just SQL using little code.I think “row_number() over (order by … desc)”  is a sufficient way to get this behavior and this isn’t something useful enough to warrant being the first ordering-specific function in the system.David J.", "msg_date": "Tue, 16 Jan 2024 06:30:34 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "Hello David, how are you?\n\nFirstly, I apologize if I wasn't clear in what I intended to propose. I used a very specific example here, and it wasn't very clear what I really wanted to bring up for discussion.\n\nI understand that it's possible to order the \"returned dataset\" using \"order by ... desc.\" However, I would like someone to help me think about the following scenario:\n\nImagine I have a dataset that is returned to my front-end, and I want to reverse enumerate them (exactly the concept of Math enumerating integers). The row_number does the ascending enumeration, but I need the descending enumeration. I don't have a base column to use for \"order by,\" and I also can't use CTID column.\n\nFurthermore, imagine that I have a list of hashes, and I would use \"order by\" on this column or another column to do the reverse enumeration. This wouldn't work because I wouldn't have the correct reverse enumeration, meaning the reversal of the data would not be original.\n\nIt's not about reverse ordering; it's about reverse enumeration.\n\nI apologize again for not being clear in the first interaction.\n\nHow can I do this without using my reversed enumeration \"row_number desc\" function?\n\nRegards,\nMaiquel O. Grassi.\n________________________________\nDe: David G. Johnston <[email protected]>\nEnviado: terça-feira, 16 de janeiro de 2024 11:30\nPara: Maiquel Grassi <[email protected]>\nCc: [email protected] <[email protected]>\nAssunto: Re: New Window Function: ROW_NUMBER_DESC() OVER() ?\n\nOn Tuesday, January 16, 2024, Maiquel Grassi <[email protected]<mailto:[email protected]>> wrote:\nHi developers,\n\nI was working on loans and bank financing, specifically focusing on Amortization Systems. I had the need to reverse the counter for the total number of installments or for a specific set of installments. 
This \"reversal\" is essentially a reverse \"row_number\" function. I realized that it is to \"hard work\" to write PL/foo functions for this or even to implement it in just SQL using little code.\n\nI think “row_number() over (order by … desc)” is a sufficient way to get this behavior and this isn’t something useful enough to warrant being the first ordering-specific function in the system.\n\nDavid J.\n\n\n\n\n\n\n\n\nHello David, how are you?\n\nFirstly, I apologize if I wasn't clear in what I intended to propose. I used a very specific example here, and it wasn't very clear what I really wanted to bring up for discussion.\n\n\nI understand that it's possible to order the \"returned dataset\" using \"order by ... desc.\" However, I would like someone to help me think about the following\n scenario:\n\n\nImagine I have a dataset that is returned to my front-end, and I want to reverse enumerate them (exactly the concept of Math enumerating integers). The row_number\n does the ascending enumeration, but I need the descending enumeration. I don't have a base column to use for \"order by,\" and I also can't use CTID column.\n\n\nFurthermore, imagine that I have a list of hashes, and I would use \"order by\" on this column or another column to do the reverse enumeration. This wouldn't\n work because I wouldn't have the correct reverse enumeration, meaning the reversal of the data would not be original.\n\n\nIt's not about reverse ordering; it's about reverse enumeration.\n\n\nI apologize again for not being clear in the first interaction.\n\n\nHow can I do this without using my reversed enumeration \"row_number desc\" function?\n\nRegards,\nMaiquel O. Grassi.\n\n\nDe: David G. Johnston <[email protected]>\nEnviado: terça-feira, 16 de janeiro de 2024 11:30\nPara: Maiquel Grassi <[email protected]>\nCc: [email protected] <[email protected]>\nAssunto: Re: New Window Function: ROW_NUMBER_DESC() OVER() ?\n \n\nOn Tuesday, January 16, 2024, Maiquel Grassi <[email protected]> wrote:\n\n\n\nHi developers,\n\nI was working on loans and bank financing, specifically focusing on Amortization Systems. I had the need to reverse the counter for the total number of installments or for a specific set of installments. This \"reversal\" is essentially a reverse \"row_number\"\n function. I realized that it is to \"hard work\" to write PL/foo functions for this or even to implement it in just SQL using little code.\n\n\n\n\n\n\nI think “row_number() over (order by … desc)”  is a sufficient way to get this behavior and this isn’t something useful enough to warrant being the first ordering-specific function in the system.\n\n\n\nDavid J.", "msg_date": "Tue, 16 Jan 2024 15:51:17 +0000", "msg_from": "Maiquel Grassi <[email protected]>", "msg_from_op": true, "msg_subject": "RE: New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "On Tuesday, January 16, 2024, Maiquel Grassi <[email protected]> wrote:\n\n> Hello David, how are you?\n>\n> Firstly, I apologize if I wasn't clear in what I intended to propose. I\n> used a very specific example here, and it wasn't very clear what I really\n> wanted to bring up for discussion.\n>\n> I understand that it's possible to order the \"returned dataset\" using\n> \"order by ... desc.\"\n>\n>\nIt is, but it is also possible to order a window frame/partition by\nspecifying order by in the over clause. Which is what I showed, and what\nyou should try to use. 
That orders the enumeration, you can still order,\nor not, the output dataset.\n\n\n\n> I don't have a base column to use for \"order by,\" and I also can't use\n> CTID column.\n>\n\nThen you really don’t have an ordering in the data itself. This is unusual\nand not really worth adding a new function to deal with.\n\n\n>\n> How can I do this without using my reversed enumeration \"row_number desc\"\n> function?\n>\n\nCount() over() - row_number() over()\n\n Please don’t top-post replies, in-line and trim like I’m doing.\n\nDavid J.\n\nP.s. if you really don’t care about logical order you probably should just\nlet your front-end deal with it.\n\nOn Tuesday, January 16, 2024, Maiquel Grassi <[email protected]> wrote:\n\nHello David, how are you?\n\nFirstly, I apologize if I wasn't clear in what I intended to propose. I used a very specific example here, and it wasn't very clear what I really wanted to bring up for discussion.\n\n\nI understand that it's possible to order the \"returned dataset\" using \"order by ... desc.\"It is, but it is also possible to order a window frame/partition by specifying order by in the over clause.  Which is what I showed, and what you should try to use.  That orders the enumeration, you can still order, or not, the output dataset. I don't have a base column to use for \"order by,\" and I also can't use CTID column.Then you really don’t have an ordering in the data itself.  This is unusual and not really worth adding a new function to deal with. \n\n\nHow can I do this without using my reversed enumeration \"row_number desc\" function?Count() over() - row_number() over() Please don’t top-post replies, in-line and trim like I’m doing.David J.P.s. if you really don’t care about logical order you probably should just let your front-end deal with it.", "msg_date": "Tue, 16 Jan 2024 09:24:30 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "Hi,\n\nCount() over() - row_number() over()\n\n But if my dataset is significantly large? Wouldn't calling two window functions instead of one be much slower?\n Is count() over() - row_number() over() faster than row_number_desc() over()?\n\nMaiquel.\n\n\n\n\n\n\n\nHi,\n\nCount() over() - row_number() over()\n   \n   But if my dataset is significantly large? Wouldn't calling two window functions instead of one be much slower?\n   Is count() over() - row_number() over() faster than row_number_desc() over()?\n\nMaiquel.", "msg_date": "Tue, 16 Jan 2024 17:08:24 +0000", "msg_from": "Maiquel Grassi <[email protected]>", "msg_from_op": true, "msg_subject": "RE: New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "On Tuesday, January 16, 2024, Maiquel Grassi <[email protected]> wrote:\n\n> Hi,\n>\n> Count() over() - row_number() over()\n>\n> But if my dataset is significantly large? Wouldn't calling two window\n> functions instead of one be much slower?\n> Is *count() over() - row_number() over()* faster than *row_number_desc()\n> over()*?\n>\n>\nI doubt it is materially different, you need that count regardless so the\neffort is expended no matter if you put it in an SQL expression or build it\ninto the window function. 
But as you are the one arguing for the new\nfeature demonstrating that the status quo is deficient is your job.\n\nDavid J.\n\nOn Tuesday, January 16, 2024, Maiquel Grassi <[email protected]> wrote:\n\nHi,\n\nCount() over() - row_number() over()\n   \n   But if my dataset is significantly large? Wouldn't calling two window functions instead of one be much slower?\n   Is count() over() - row_number() over() faster than row_number_desc() over()?I doubt it is materially different, you need that count regardless so the effort is expended no matter if you put it in an SQL expression or build it into the window function.  But as you are the one arguing for the new feature demonstrating that the status quo is deficient is your job.David J.", "msg_date": "Tue, 16 Jan 2024 10:52:42 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "I doubt it is materially different, you need that count regardless so the effort is expended no matter if you put it in an SQL expression or build it into the window function. But as you are the one arguing for the new feature demonstrating that the status quo is deficient is your job.\r\n\r\n---//---\r\n\r\nOk, I'll run the tests to validate these performances and draw some conclusions.\r\n\r\nHowever, initially, I have one more obstacle in your feedback. If I use count(*) over() - row_number() over(), it gives me an offset of one unit. To resolve this, I need to add 1. This way, simulating a reverse row_number() becomes even more laborious.\r\n\r\nSELECT\r\n      row_number() over()\r\n      , row_number_desc() over()\r\n      , count(*) over() - row_number() over() as FROM pg_catalog.pg_database;\r\n row_number | row_number_desc | count_minus_row_number\r\n------------+-----------------+------------------------\r\n 1 | 3 | 2\r\n 2 | 2 | 1\r\n 3 | 1 | 0\r\n(3 rows)\r\n\r\npostgres=# SELECT row_number() over(), row_number_desc() over(), count(*) over() - row_number() over() as count_minus_row_number, count(*) over() - row_number() over() + 1 AS count_minus_row_number_plus_one FROM pg_catalog.pg_database;\r\n row_number | row_number_desc | count_minus_row_number | count_minus_row_number_plus_one\r\n------------+-----------------+------------------------+---------------------------------\r\n 1 | 3 | 2 | 3\r\n 2 | 2 | 1 | 2\r\n 3 | 1 | 0 | 1\r\n(3 rows)\r\n\r\nTks,\r\nMaiquel.\r\n\n\n\n\n\n\n\nI doubt it is materially different, you need that count regardless so the effort is expended no matter if you put it in an SQL expression or build it into the window\r\n function.  But as you are the one arguing for the new feature demonstrating that the status quo is deficient is your job.\n\r\n---//---\n\r\nOk, I'll run the tests to validate these performances and draw some conclusions.\n\n\nHowever, initially, I have one more obstacle in your feedback. If I use count(*) over() - row_number() over(), it gives me an offset\r\n of one unit. To resolve this, I need to add 1. 
This way, simulating a reverse row_number() becomes even more laborious.\n\r\nSELECT\r\n      row_number() over()\n      , row_number_desc() over()\n      , count(*) over() - row_number() over() as FROM pg_catalog.pg_database;\n row_number | row_number_desc | count_minus_row_number\n------------+-----------------+------------------------\n          1 |               3 |                      2\n          2 |               2 |                      1\n          3 |               1 |                      0\n(3 rows)\n\n\npostgres=# SELECT row_number() over(), row_number_desc() over(), count(*) over() - row_number() over() as count_minus_row_number, count(*) over() - row_number()\r\n over() + 1 AS count_minus_row_number_plus_one FROM pg_catalog.pg_database;\n row_number | row_number_desc | count_minus_row_number | count_minus_row_number_plus_one\n------------+-----------------+------------------------+---------------------------------\n          1 |               3 |                      2 |                               3\n          2 |               2 |                      1 |                               2\n          3 |               1 |                      0 |                               1\n(3 rows)\n\n\nTks,\r\nMaiquel.", "msg_date": "Tue, 16 Jan 2024 19:46:05 +0000", "msg_from": "Maiquel Grassi <[email protected]>", "msg_from_op": true, "msg_subject": "RE: New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "> On 16 Jan 2024, at 16:51, Maiquel Grassi <[email protected]> wrote:\n> \n> \n> Imagine I have a dataset that is returned to my front-end, and I want to reverse enumerate them (exactly the concept of Math enumerating integers). The row_number does the ascending enumeration, but I need the descending enumeration.\n\nYou can do:\n\n-(ROW_NUMBER() OVER ()) AS descending\n\n(note “-“ in front)\n\n> I don't have a base column to use for \"order by,\"\n\nI think that’s the main issue: what (semantically) does row_number() mean in that case? You could equally well generate random numbers?\n\n\n— \nMichal\n\n\nOn 16 Jan 2024, at 16:51, Maiquel Grassi <[email protected]> wrote:Imagine I have a dataset that is returned to my front-end, and I want to reverse enumerate them (exactly the concept of Math enumerating integers). The row_number does the ascending enumeration, but I need the descending enumeration. You can do:-(ROW_NUMBER() OVER ()) AS descending(note “-“ in front)I don't have a base column to use for \"order by,\"I think that’s the main issue: what (semantically) does row_number() mean in that case? You could equally well generate random numbers?— Michal", "msg_date": "Tue, 16 Jan 2024 20:50:57 +0100", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "On Tuesday, January 16, 2024, Maiquel Grassi <[email protected]> wrote:\n>\n>\n> However, initially, I have one more obstacle in your feedback. If I use\n> count(*) over() - row_number() over(), it gives me an offset of one unit.\n> To resolve this, I need to add 1.\n>\n> This way, simulating a reverse row_number() becomes even more laborious.\n>\n\nI don’t really understand why you think this reverse inserted counting is\neven a good idea so I don’t really care how laborious it is to implement\nwith existing off-the-shelf tools. A window function named “descending” is\nnon-standard and seemingly non-sensical and should not be added. 
You can\nspecify order by in the over clause and that is what you should be doing.\nMortgage payments are usually monthly, so order by date.\n\nDavid J.\n\nOn Tuesday, January 16, 2024, Maiquel Grassi <[email protected]> wrote:\n\nHowever, initially, I have one more obstacle in your feedback. If I use count(*) over() - row_number() over(), it gives me an offset\n of one unit. To resolve this, I need to add 1. This way, simulating a reverse row_number() becomes even more laborious.I don’t really understand why you think this reverse inserted counting is even a good idea so I don’t really care how laborious it is to implement with existing off-the-shelf tools.  A window function named “descending” is non-standard and seemingly non-sensical and should not be added.  You can specify order by in the over clause and that is what you should be doing.  Mortgage payments are usually monthly, so order by date.David J.", "msg_date": "Tue, 16 Jan 2024 12:55:23 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "You can do:\n\n-(ROW_NUMBER() OVER ()) AS descending\n\n(note “-“ in front)\n\nI don't have a base column to use for \"order by,\"\n\nI think that’s the main issue: what (semantically) does row_number() mean in that case? You could equally well generate random numbers?\n\n\n--//--\n\nWhat I want to do is inverse the enumeration using a simple solution. I want to look at the enumeration of the dataset list from bottom to top, not from top to bottom. I don't want to reverse the sign of the integers. The generated integers in output remain positive.The returned dataset can be from any query. What I need is exactly the opposite of row_number().\n\ncount(*) over() - row_number() + 1 works.\n\nBut I think for a large volume of data, its performance will be inferior to the suggested row_number_desc() over(). I may be very wrong, so I will test it.\n\nMaiquel.\n\n\n\n\n\n\n\n\n\n\nYou can do:\n\n\n\n-(ROW_NUMBER() OVER ()) AS descending\n\n\n(note “-“ in front)\n\n\n\nI don't have a base column to use for \"order by,\"\n\n\n\nI think that’s the main issue: what (semantically) does row_number() mean in that case? You could equally well generate random numbers?\n\n\n--//--\n\nWhat I want to do is inverse the enumeration using a simple solution. I want to look at the enumeration of the dataset list from bottom to top, not from top to bottom. I don't want to reverse the sign of the integers. The generated integers in output remain\n positive.The returned dataset can be from any query. What I need is exactly the opposite of row_number().\n\n\ncount(*) over() - row_number() + 1 works.\n\n\nBut I think for a large volume of data, its performance will be inferior to the suggested row_number_desc() over(). I may be very wrong,\n so I will test it.\n\nMaiquel.", "msg_date": "Tue, 16 Jan 2024 20:11:04 +0000", "msg_from": "Maiquel Grassi <[email protected]>", "msg_from_op": true, "msg_subject": "RE: New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "However, initially, I have one more obstacle in your feedback. If I use count(*) over() - row_number() over(), it gives me an offset of one unit. 
To resolve this, I need to add 1.\n\nThis way, simulating a reverse row_number() becomes even more laborious.\n\nI don’t really understand why you think this reverse inserted counting is even a good idea so I don’t really care how laborious it is to implement with existing off-the-shelf tools. A window function named “descending” is non-standard and seemingly non-sensical and should not be added. You can specify order by in the over clause and that is what you should be doing. Mortgage payments are usually monthly, so order by date.\n\nDavid J.\n\n--//--\n\nWe are just raising hypotheses and discussing healthy possibilities here. This is a suggestion for knowledge and community growth. Note that this is not about a new \"feature patch.\" I am asking for the community's opinion in general. Your responses are largely appearing aggressive and depreciative. Kindly request you to be more welcoming in your answers and not oppressive. This way, the community progresses more rapidly.\n\nMaiquel.\n\n\n\n\n\n\n\n\n\nHowever, initially, I have one more obstacle in your feedback. If I use count(*) over() - row_number() over(), it gives me an offset of one unit. To resolve this, I need to add 1. \n\n\n\nThis way, simulating a reverse row_number() becomes even more laborious.\n\n\n\nI don’t really understand why you think this reverse inserted counting is even a good idea so I don’t really care how laborious it is to implement with existing off-the-shelf tools.  A window function named “descending” is non-standard and seemingly non-sensical\n and should not be added.  You can specify order by in the over clause and that is what you should be doing.  Mortgage payments are usually monthly, so order by date.\n\n\nDavid J.\n\n--//--\n\n\nWe are just raising hypotheses and discussing healthy possibilities here. This is a suggestion for knowledge and community growth. Note\n that this is not about a new \"feature patch.\" I am asking for the community's opinion in general. Your responses are largely appearing aggressive and depreciative. Kindly request you to be more welcoming in your answers and not oppressive. This way, the community\n progresses more rapidly.\n\nMaiquel.", "msg_date": "Tue, 16 Jan 2024 20:27:10 +0000", "msg_from": "Maiquel Grassi <[email protected]>", "msg_from_op": true, "msg_subject": "RE: New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "On Tuesday, January 16, 2024, Maiquel Grassi <[email protected]> wrote:\n\n> However, initially, I have one more obstacle in your feedback. If I use\n> count(*) over() - row_number() over(), it gives me an offset of one unit.\n> To resolve this, I need to add 1.\n>\n>\n> This way, simulating a reverse row_number() becomes even more laborious.\n>\n>\n> I don’t really understand why you think this reverse inserted counting is\n> even a good idea so I don’t really care how laborious it is to implement\n> with existing off-the-shelf tools. A window function named “descending” is\n> non-standard and seemingly non-sensical and should not be added. You can\n> specify order by in the over clause and that is what you should be doing.\n> Mortgage payments are usually monthly, so order by date.\n>\n> David J.\n>\n> --//--\n>\n> We are just raising hypotheses and discussing healthy possibilities here.\n> This is a suggestion for knowledge and community growth. Note that this is\n> not about a new \"feature patch.\n>\n>\nThat is not how your initial post here came across. 
It seemed quite\nconcrete in goal and use case motivating that goal.\n\n\n> I am asking for the community's opinion in general. Your responses are\n> largely appearing aggressive and depreciative. Kindly request you to be\n> more welcoming in your answers and not oppressive. This way, the community\n> progresses more rapidly..\n>\n>\nThe people in this community are quite capable and willing to write a\ncontrary opinion to mine. Not sure how to make “this new proposed function\nshouldn’t be added to core”, and trying to explain why not,\nnon-oppressive. I can add “thank you for taking the time to try and\nimprove PostgreSQL” in front to soften the blow of rejection but I tend to\njust get to the point.\n\nDavid J.\n\nOn Tuesday, January 16, 2024, Maiquel Grassi <[email protected]> wrote:\n\n\nHowever, initially, I have one more obstacle in your feedback. If I use count(*) over() - row_number() over(), it gives me an offset of one unit. To resolve this, I need to add 1. \n\n\n\nThis way, simulating a reverse row_number() becomes even more laborious.\n\n\n\nI don’t really understand why you think this reverse inserted counting is even a good idea so I don’t really care how laborious it is to implement with existing off-the-shelf tools.  A window function named “descending” is non-standard and seemingly non-sensical\n and should not be added.  You can specify order by in the over clause and that is what you should be doing.  Mortgage payments are usually monthly, so order by date.\n\n\nDavid J.\n\n--//--\n\n\nWe are just raising hypotheses and discussing healthy possibilities here. This is a suggestion for knowledge and community growth. Note\n that this is not about a new \"feature patch.That is not how your initial post here came across.  It seemed quite concrete in goal and use case motivating that goal.I am asking for the community's opinion in general. Your responses are largely appearing aggressive and depreciative. Kindly request you to be more welcoming in your answers and not oppressive. This way, the community\n progresses more rapidly..\n\n\nThe people in this community are quite capable and willing to write a contrary opinion to mine.  Not sure how to make “this new proposed function shouldn’t be added to core”, and trying to explain why not, non-oppressive.  I can add “thank you for taking the time to try and improve PostgreSQL” in front to soften the blow of rejection but I tend to just get to the point.David J.", "msg_date": "Tue, 16 Jan 2024 13:39:45 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "The people in this community are quite capable and willing to write a contrary opinion to mine. Not sure how to make “this new proposed function shouldn’t be added to core”, and trying to explain why not, non-oppressive. I can add “thank you for taking the time to try and improve PostgreSQL” in front to soften the blow of rejection but I tend to just get to the point.\n\nDavid J.\n\n----//----\n\nThank you for your opinion. We built together one more insight on PostgreSQL for the community.\n\nBest regards,\nMaiquel O.\n\n\n\n\n\n\n\n\nThe people in this community are quite capable and willing to write a contrary opinion to mine.  Not sure how to make “this new proposed function shouldn’t be added to core”, and trying to explain why not, non-oppressive.  
I can add “thank you for taking\n the time to try and improve PostgreSQL” in front to soften the blow of rejection but I tend to just get to the point.\n\n\n\nDavid J.\n\n----//----\n\nThank you for your opinion. We built together one more insight on PostgreSQL for the community.\n\nBest regards,\nMaiquel O.", "msg_date": "Tue, 16 Jan 2024 21:58:15 +0000", "msg_from": "Maiquel Grassi <[email protected]>", "msg_from_op": true, "msg_subject": "RE: New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "But as you are the one arguing for the new feature demonstrating that the status quo is deficient is your job.\n\n--//--\n\nI performed these three tests(take a look below) quite simple but functional, so that we can get an idea of the performance. Apparently, we have a higher cost in using \"count(*) - row_number() + 1\" than in using \"row_number_desc() over()\".\n\nPerhaps, if we think in terms of SQL standards, my suggested name may not have been the best. The name could be anything else. I don't have another suggestion. Does anyone have a better one? I leave it open for others to also reflect.\n\n\n\npostgres=# select * into public.foo_1 from generate_series(1,1000000);\nSELECT 1000000\npostgres=# explain analyze select count(*) over() - row_number() over() + 1 from public.foo_1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n WindowAgg (cost=0.00..38276.25 rows=1128375 width=8) (actual time=244.878..475.595 rows=1000000 loops=1)\n -> Seq Scan on foo_1 (cost=0.00..15708.75 rows=1128375 width=0) (actual time=0.033..91.486 rows=1000000 loops=1)\n Planning Time: 0.073 ms\n Execution Time: 505.375 ms\n(4 rows)\n\npostgres=# explain analyze select row_number_desc() over() from public.foo_1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n WindowAgg (cost=0.00..26925.00 rows=1000000 width=8) (actual time=141.107..427.100 rows=1000000 loops=1)\n -> Seq Scan on foo_1 (cost=0.00..14425.00 rows=1000000 width=0) (actual time=0.031..61.651 rows=1000000 loops=1)\n Planning Time: 0.051 ms\n Execution Time: 466.535 ms\n(4 rows)\n\n\n\npostgres=# select * into public.foo_2 from generate_series(1,10000000);\nSELECT 10000000\npostgres=# explain analyze select count(*) over() - row_number() over() + 1 from public.foo_2;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n WindowAgg (cost=0.00..344247.31 rows=9999977 width=8) (actual time=2621.014..5145.325 rows=10000000 loops=1)\n -> Seq Scan on foo_2 (cost=0.00..144247.77 rows=9999977 width=0) (actual time=0.031..821.533 rows=10000000 loops=1)\n Planning Time: 0.085 ms\n Execution Time: 5473.422 ms\n(4 rows)\n\npostgres=# explain analyze select row_number_desc() over() from public.foo_2;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n WindowAgg (cost=0.00..269247.48 rows=9999977 width=8) (actual time=1941.915..4527.896 rows=10000000 loops=1)\n -> Seq Scan on foo_2 (cost=0.00..144247.77 rows=9999977 width=0) (actual time=0.029..876.802 rows=10000000 loops=1)\n Planning Time: 0.030 ms\n Execution Time: 4871.278 ms\n(4 rows)\n\n\n\n\npostgres=# select * into public.foo_3 from generate_series(1,100000000);\nSELECT 100000000\npostgres=# explain analyze select count(*) over() - row_number() over() + 1 from 
public.foo_3;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n WindowAgg (cost=0.00..3827434.70 rows=112831890 width=8) (actual time=56823.080..84295.660 rows=100000000 loops=1)\n -> Seq Scan on foo_3 (cost=0.00..1570796.90 rows=112831890 width=0) (actual time=1.010..37735.121 rows=100000000 loops=1)\n Planning Time: 1.018 ms\n Execution Time: 87677.572 ms\n(4 rows)\n\npostgres=# explain analyze select row_number_desc() over() from public.foo_3;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n WindowAgg (cost=0.00..2981195.53 rows=112831890 width=8) (actual time=29523.037..55517.349 rows=100000000 loops=1)\n -> Seq Scan on foo_3 (cost=0.00..1570796.90 rows=112831890 width=0) (actual time=12.638..19050.614 rows=100000000 loops=1)\n Planning Time: 55.653 ms\n Execution Time: 59001.423 ms\n(4 rows)\n\n\n\nRegards,\nMaiquel.\n\n\n\n\n\n\n\nBut as you are the one arguing for the new feature demonstrating that the status quo is deficient is your job.\n\n--//--\n\nI performed these three tests(take a look below) quite simple but functional, so that we can get an idea of the performance. Apparently, we have a higher cost in using \"count(*) - row_number() + 1\" than in using \"row_number_desc() over()\".\n\n\nPerhaps, if we think in terms of SQL standards, my suggested name may not have been the best. The name could be anything else. I don't\n have another suggestion. Does anyone have a better one? I leave it open for others to also reflect.\n\n\n\npostgres=# select * into public.foo_1 from generate_series(1,1000000);\nSELECT 1000000\npostgres=# explain analyze select count(*) over() - row_number() over() + 1 from public.foo_1;\n                                                      QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n WindowAgg  (cost=0.00..38276.25 rows=1128375 width=8) (actual time=244.878..475.595 rows=1000000 loops=1)\n   ->  Seq Scan on foo_1  (cost=0.00..15708.75 rows=1128375 width=0) (actual time=0.033..91.486 rows=1000000 loops=1)\n Planning Time: 0.073 ms\n Execution Time: 505.375 ms\n(4 rows)\n\n\npostgres=# explain analyze select row_number_desc() over() from public.foo_1;\n                                                      QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n WindowAgg  (cost=0.00..26925.00 rows=1000000 width=8) (actual time=141.107..427.100 rows=1000000 loops=1)\n   ->  Seq Scan on foo_1  (cost=0.00..14425.00 rows=1000000 width=0) (actual time=0.031..61.651 rows=1000000 loops=1)\n Planning Time: 0.051 ms\n Execution Time: 466.535 ms\n(4 rows)\n\n\n\n\npostgres=# select * into public.foo_2 from generate_series(1,10000000);\nSELECT 10000000\npostgres=# explain analyze select count(*) over() - row_number() over() + 1 from public.foo_2;\n                                                       QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n WindowAgg  (cost=0.00..344247.31 rows=9999977 width=8) (actual time=2621.014..5145.325 rows=10000000 loops=1)\n   ->  Seq Scan on foo_2  (cost=0.00..144247.77 rows=9999977 width=0) (actual time=0.031..821.533 rows=10000000 loops=1)\n Planning Time: 0.085 ms\n Execution Time: 5473.422 
ms\n(4 rows)\n\n\npostgres=# explain analyze select row_number_desc() over() from public.foo_2;\n                                                       QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n WindowAgg  (cost=0.00..269247.48 rows=9999977 width=8) (actual time=1941.915..4527.896 rows=10000000 loops=1)\n   ->  Seq Scan on foo_2  (cost=0.00..144247.77 rows=9999977 width=0) (actual time=0.029..876.802 rows=10000000 loops=1)\n Planning Time: 0.030 ms\n Execution Time: 4871.278 ms\n(4 rows)\n\n\n\n\n\npostgres=# select * into public.foo_3 from generate_series(1,100000000);\nSELECT 100000000\npostgres=# explain analyze select count(*) over() - row_number() over() + 1 from public.foo_3;\n                                                          QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n WindowAgg  (cost=0.00..3827434.70 rows=112831890 width=8) (actual time=56823.080..84295.660 rows=100000000 loops=1)\n   ->  Seq Scan on foo_3  (cost=0.00..1570796.90 rows=112831890 width=0) (actual time=1.010..37735.121 rows=100000000 loops=1)\n Planning Time: 1.018 ms\n Execution Time: 87677.572 ms\n(4 rows)\n\n\npostgres=# explain analyze select row_number_desc() over() from public.foo_3;\n                                                           QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n WindowAgg  (cost=0.00..2981195.53 rows=112831890 width=8) (actual time=29523.037..55517.349 rows=100000000 loops=1)\n   ->  Seq Scan on foo_3  (cost=0.00..1570796.90 rows=112831890 width=0) (actual time=12.638..19050.614 rows=100000000 loops=1)\n Planning Time: 55.653 ms\n Execution Time: 59001.423 ms\n(4 rows)\n\n\n\nRegards,\nMaiquel.", "msg_date": "Wed, 17 Jan 2024 00:25:33 +0000", "msg_from": "Maiquel Grassi <[email protected]>", "msg_from_op": true, "msg_subject": "RE: New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "On Wed, 17 Jan 2024 at 08:51, Michał Kłeczek <[email protected]> wrote:\n> I think that’s the main issue: what (semantically) does row_number() mean in that case? You could equally well generate random numbers?\n\nWell, not quite random as at least row_number() would ensure the\nnumber is unique in the result set. The point I think you're trying to\nmake is very valid though.\n\nTo reinforce that point, here's an example how undefined the behaviour\nthat Maique is relying on:\n\ncreate table t (a int primary key);\ninsert into t values(3),(2),(4),(1),(5);\n\nselect a,row_number() over() from t; -- Seq Scan\n a | row_number\n---+------------\n 3 | 1\n 2 | 2\n 4 | 3\n 1 | 4\n 5 | 5\n\nset enable_seqscan=0;\nset enable_bitmapscan=0;\n\nselect a,row_number() over() from t; -- Index Scan\n a | row_number\n---+------------\n 1 | 1\n 2 | 2\n 3 | 3\n 4 | 4\n 5 | 5\n\ni.e the row numbers are just assigned in whichever order they're given\nto the WindowAgg node.\n\nMaique,\n\nAs far as I see your proposal, you want to allow something that is\nundefined to be reversed. 
I don't think this is a good idea at all.\nAs mentioned by others, you should have ORDER BY clauses and just add\na DESC.\n\nIf you were looking for something to optimize in this rough area, then\nperhaps adding some kind of \"Backward WindowAgg\" node (by overloading\nthe existing node) to allow queries such as the following to be\nexecuted without an additional sort.\n\nSELECT a,row_number() over (order by a desc) from t order by a;\n\nThe planner complexity is likely fairly easy to implement that. I\ndon't think we'd need to generate any additional Paths. We could\ninvent some pathkeys_contained_in_reverse() function and switch on the\nBackward flag if it is.\n\nThe complexity would be in nodeWindowAgg.c... perhaps too much\ncomplexity for it to be worthwhile and not add additional overhead to\nthe non-backward case.\n\nOr, it might be easier to invent \"Backward Materialize\" instead and\njust have the planner use on of those instead of the final sort.\n\nDavid\n\n\n", "msg_date": "Wed, 17 Jan 2024 14:36:13 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "As far as I see your proposal, you want to allow something that is\nundefined to be reversed. I don't think this is a good idea at all.\nAs mentioned by others, you should have ORDER BY clauses and just add\na DESC.\n\nIf you were looking for something to optimize in this rough area, then\nperhaps adding some kind of \"Backward WindowAgg\" node (by overloading\nthe existing node) to allow queries such as the following to be\nexecuted without an additional sort.\n\nSELECT a,row_number() over (order by a desc) from t order by a;\n\nThe planner complexity is likely fairly easy to implement that. I\ndon't think we'd need to generate any additional Paths. We could\ninvent some pathkeys_contained_in_reverse() function and switch on the\nBackward flag if it is.\n\nThe complexity would be in nodeWindowAgg.c... perhaps too much\ncomplexity for it to be worthwhile and not add additional overhead to\nthe non-backward case.\n\nOr, it might be easier to invent \"Backward Materialize\" instead and\njust have the planner use on of those instead of the final sort.\n\nDavid\n\n\n\n\n\n\n\nAs far as\n I see your proposal, you want to allow something that is\nundefined to be reversed.  I don't think this is a good idea at all.\nAs mentioned by others, you should have ORDER BY clauses and just add\na DESC.\n\nIf you were looking for something to optimize in this rough area, then\nperhaps adding some kind of \"Backward WindowAgg\" node (by overloading\nthe existing node) to allow queries such as the following to be\nexecuted without an additional sort.\n\nSELECT a,row_number() over (order by a desc) from t order by a;\n\nThe planner complexity is likely fairly easy to implement that. I\ndon't think we'd need to generate any additional Paths. We could\ninvent some pathkeys_contained_in_reverse() function and switch on the\nBackward flag if it is.\n\nThe complexity would be in nodeWindowAgg.c... perhaps too much\ncomplexity for it to be worthwhile and not add additional overhead to\nthe non-backward case.\n\nOr, it might be easier to invent \"Backward Materialize\" instead and\njust have the planner use on of those instead of the final sort.\n\nDavid", "msg_date": "Wed, 17 Jan 2024 02:17:22 +0000", "msg_from": "Maiquel Grassi <[email protected]>", "msg_from_op": true, "msg_subject": "RE: New Window Function: ROW_NUMBER_DESC() OVER() ?" 
}, { "msg_contents": "As far as I see your proposal, you want to allow something that is\nundefined to be reversed. I don't think this is a good idea at all.\nAs mentioned by others, you should have ORDER BY clauses and just add\na DESC.\n\n\n * Okay, now I'm convinced of that.\n\nIf you were looking for something to optimize in this rough area, then\nperhaps adding some kind of \"Backward WindowAgg\" node (by overloading\nthe existing node) to allow queries such as the following to be\nexecuted without an additional sort.\n\nSELECT a,row_number() over (order by a desc) from t order by a;\n\n\n * David, considering this optimization, allowing for that, do you believe it is plausible to try advancing towards a possible Proof of Concept (PoC) implementation?\n\nMaiquel.\n\n\n\n\n\n\n\n\nAs far as I see your proposal, you want to allow something that is\nundefined to be reversed.  I don't think this is a good idea at all.\nAs mentioned by others, you should have ORDER BY clauses and just add\na DESC.\n\n\n\n\nOkay, now I'm convinced of that.\n\n\nIf you were looking for something to optimize in this rough area, then\nperhaps adding some kind of \"Backward WindowAgg\" node (by overloading\nthe existing node) to allow queries such as the following to be\nexecuted without an additional sort.\n\n\nSELECT a,row_number() over (order by a desc) from t order by a;\n\n\n\n\nDavid, considering this optimization, allowing for that, do you believe it is plausible to try advancing towards a possible Proof of Concept (PoC) implementation?\n\n\nMaiquel.", "msg_date": "Wed, 17 Jan 2024 02:28:09 +0000", "msg_from": "Maiquel Grassi <[email protected]>", "msg_from_op": true, "msg_subject": "RE: New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "On Wed, 17 Jan 2024 at 15:28, Maiquel Grassi <[email protected]> wrote:\n> On Wed, 17 Jan 2024 at 14:36, David Rowley <[email protected]> wrote:\n> > If you were looking for something to optimize in this rough area, then\n> > perhaps adding some kind of \"Backward WindowAgg\" node (by overloading\n> > the existing node) to allow queries such as the following to be\n> > executed without an additional sort.\n> >\n> > SELECT a,row_number() over (order by a desc) from t order by a;\n>\n> David, considering this optimization, allowing for that, do you believe it is plausible to try advancing towards a possible Proof of Concept (PoC) implementation?\n\nI think the largest factor which would influence the success of that\nwould be how much more complex nodeWindowAgg.c would become.\n\nThere's a couple of good ways to ensure such a patch fails:\n\n1. Copy and paste all the code out of nodeWindowAgg.c and create\nnodeWindowAggBackward.c and leave a huge maintenance burden. (don't do\nthis)\n2. Make nodeWindowAgg.c much more complex and slower by adding dozens\nof conditions to check if we're in backward mode.\n\nI've not taken the time to study nodeWindowAgg.c to know how much more\ncomplex supporting reading the tuples backwards would make it.\nCertainly the use of tuplestore_trim() would have to change and\nobviously way we read stored tuples back would need to be adjusted. It\nmight just add much more complexity than it would be worth. Part of\nthe work would be finding this out.\n\nIf making the changes to nodeWindowAgg.c is too complex, then\nadjusting nodeMaterial.c would at least put us in a better position\nthan having to sort twice. 
You'd have to add a bool isbackward flag\nto MaterialPath and then likely add a ScanDirection normal_dir to\nMaterialState then set \"dir\" in ExecMaterial() using\nScanDirectionCombine of the two scan directions. At least some of\nwhat's there would work as a result of that, but likely a few other\nthings in ExecMaterial() would need to be rejiggered. explain.c would\nneed to show \"Backward Material\", etc.\n\nBoth cases you'd need to modify planner.c's create_one_window_path()\nand invent a function such as pathkeys_count_contained_in_backward()\nor at least pathkeys_contained_in_backward() to detect when you need\nto use the backward node type.\n\nI'd go looking at nodeWindowAgg.c first, if you're interested.\n\nDavid\n\n\n", "msg_date": "Wed, 17 Jan 2024 16:16:55 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Wed, 17 Jan 2024 at 15:28, Maiquel Grassi <[email protected]> wrote:\n>> On Wed, 17 Jan 2024 at 14:36, David Rowley <[email protected]> wrote:\n>>> If you were looking for something to optimize in this rough area, then\n>>> perhaps adding some kind of \"Backward WindowAgg\" node (by overloading\n>>> the existing node) to allow queries such as the following to be\n>>> executed without an additional sort.\n>>> \n>>> SELECT a,row_number() over (order by a desc) from t order by a;\n\n>> David, considering this optimization, allowing for that, do you believe it is plausible to try advancing towards a possible Proof of Concept (PoC) implementation?\n\n> I think the largest factor which would influence the success of that\n> would be how much more complex nodeWindowAgg.c would become.\n\nEven if a workable patch for that is presented, should we accept it?\nI'm having a hard time believing that this requirement is common\nenough to justify more than a microscopic addition of complexity.\nThis whole area is devilishly complicated already, and I can think of\na bunch of improvements that I'd rate as more worthy of developer\neffort than this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 Jan 2024 23:13:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: New Window Function: ROW_NUMBER_DESC() OVER() ?" }, { "msg_contents": "Even if a workable patch for that is presented, should we accept it?\nI'm having a hard time believing that this requirement is common\nenough to justify more than a microscopic addition of complexity.\nThis whole area is devilishly complicated already, and I can think of\na bunch of improvements that I'd rate as more worthy of developer\neffort than this.\n\n--//--\n\n\nThanks for the advice. I understand that an improvement you consider microscopic may not be worth spending time trying to implement it (considering you are already warning that a good patch might not be accepted). 
But since you mentioned that you can think of several possible improvements, more worthy of time investment, could you share at least one of them with us that you consider a candidate for an effort?\n\nRegards,\nMaiquel.\n\n\n\n\n\n\n\n\nEven if a workable patch for that is presented, should we accept it?\nI'm having a hard time believing that this requirement is common\nenough to justify more than a microscopic addition of complexity.\nThis whole area is devilishly complicated already, and I can think of\na bunch of improvements that I'd rate as more worthy of developer\neffort than this.\n\n--//--\n\n\nThanks for the advice. I understand that an improvement you consider microscopic may not be worth spending time trying to implement it (considering you are already warning that a\n good patch might not be accepted). But since you mentioned that you can think of several possible improvements, more worthy of time investment, could you share at least one of them with us that you consider a candidate for an effort?\n\n\nRegards,\nMaiquel.", "msg_date": "Wed, 17 Jan 2024 08:52:33 +0000", "msg_from": "Maiquel Grassi <[email protected]>", "msg_from_op": true, "msg_subject": "RE: New Window Function: ROW_NUMBER_DESC() OVER() ?" } ]
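A short recap of the two approaches compared in the thread above, written out as a runnable sketch. The use of pg_catalog.pg_database and its datname column as the ordering key is purely illustrative; the point made by David Rowley and others is that a stable reverse enumeration needs an explicit ORDER BY ... DESC, while the count(*) minus row_number() form only mirrors whatever undefined scan order the executor happens to produce.

```
-- Recommended: reverse enumeration tied to an explicit ordering key.
SELECT datname,
       row_number() OVER (ORDER BY datname DESC) AS rn_desc
FROM pg_catalog.pg_database
ORDER BY datname;

-- Workaround discussed in the thread for an unordered window:
-- total row count minus current position, plus one.
SELECT datname,
       count(*) OVER () - row_number() OVER () + 1 AS rn_desc
FROM pg_catalog.pg_database;
```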
[ { "msg_contents": "Hi all,\nI think the comment above the function DecodeInsert()\nin src/backend/replication/logical/decode.c should be\n+ * *Inserts *can contain the new tuple.\n, rather than\n- * *Deletes *can contain the new tuple.\n\nPlease correct me if I'm wrong, thanks a lot.", "msg_date": "Wed, 17 Jan 2024 08:46:55 +0800", "msg_from": "Yongtao Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Fix a typo of func DecodeInsert()" }, { "msg_contents": "On Wed, Jan 17, 2024 at 8:47 AM Yongtao Huang <[email protected]>\nwrote:\n\n> Hi all,\n> I think the comment above the function DecodeInsert()\n> in src/backend/replication/logical/decode.c should be\n> + * *Inserts *can contain the new tuple.\n> , rather than\n> - * *Deletes *can contain the new tuple.\n>\n\nNice catch. +1.\n\nI kind of wonder if it would be clearer to state that \"XLOG_HEAP_INSERT\ncan contain the new tuple\", in order to differentiate it from\nXLOG_HEAP2_MULTI_INSERT.\n\nThanks\nRichard\n\nOn Wed, Jan 17, 2024 at 8:47 AM Yongtao Huang <[email protected]> wrote:Hi all,I think the comment above the function DecodeInsert() in src/backend/replication/logical/decode.c should be+ * Inserts can contain the new tuple., rather than- * Deletes can contain the new tuple.Nice catch.  +1.I kind of wonder if it would be clearer to state that \"XLOG_HEAP_INSERTcan contain the new tuple\", in order to differentiate it fromXLOG_HEAP2_MULTI_INSERT.ThanksRichard", "msg_date": "Wed, 17 Jan 2024 09:10:33 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix a typo of func DecodeInsert()" }, { "msg_contents": "Thank you. I prefer to keep the comments of these three functions\n*DecodeInsert()*, *DecodeUpdate()*, and *DecodeDelete()* aligned.\n```\n/*\n * Parse XLOG_HEAP_INSERT (not MULTI_INSERT!) records into tuplebufs.\n *\n * Inserts can contain the new tuple.\n */\nstatic void\nDecodeInsert(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)\n\n/*\n * Parse XLOG_HEAP_UPDATE and XLOG_HEAP_HOT_UPDATE, which have the same\nlayout\n * in the record, from wal into proper tuplebufs.\n *\n * Updates can possibly contain a new tuple and the old primary key.\n */\nstatic void\nDecodeUpdate(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)\n\n/*\n * Parse XLOG_HEAP_DELETE from wal into proper tuplebufs.\n *\n * Deletes can possibly contain the old primary key.\n */\nstatic void\nDecodeDelete(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)\n\n```\n\nBest wishes\n\nYongtao Huang\n\n\nRichard Guo <[email protected]> 于2024年1月17日周三 09:10写道:\n\n>\n> On Wed, Jan 17, 2024 at 8:47 AM Yongtao Huang <[email protected]>\n> wrote:\n>\n>> Hi all,\n>> I think the comment above the function DecodeInsert()\n>> in src/backend/replication/logical/decode.c should be\n>> + * *Inserts *can contain the new tuple.\n>> , rather than\n>> - * *Deletes *can contain the new tuple.\n>>\n>\n> Nice catch. +1.\n>\n> I kind of wonder if it would be clearer to state that \"XLOG_HEAP_INSERT\n> can contain the new tuple\", in order to differentiate it from\n> XLOG_HEAP2_MULTI_INSERT.\n>\n> Thanks\n> Richard\n>\n\nThank you. I prefer to keep the comments of these three functions DecodeInsert(),  DecodeUpdate(), and DecodeDelete() aligned.```/* * Parse XLOG_HEAP_INSERT (not MULTI_INSERT!) records into tuplebufs. * * Inserts can contain the new tuple. 
*/static voidDecodeInsert(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)/* * Parse XLOG_HEAP_UPDATE and XLOG_HEAP_HOT_UPDATE, which have the same layout * in the record, from wal into proper tuplebufs. * * Updates can possibly contain a new tuple and the old primary key. */static voidDecodeUpdate(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)/* * Parse XLOG_HEAP_DELETE from wal into proper tuplebufs. * * Deletes can possibly contain the old primary key. */static voidDecodeDelete(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)```Best wishesYongtao Huang Richard Guo <[email protected]> 于2024年1月17日周三 09:10写道:On Wed, Jan 17, 2024 at 8:47 AM Yongtao Huang <[email protected]> wrote:Hi all,I think the comment above the function DecodeInsert() in src/backend/replication/logical/decode.c should be+ * Inserts can contain the new tuple., rather than- * Deletes can contain the new tuple.Nice catch.  +1.I kind of wonder if it would be clearer to state that \"XLOG_HEAP_INSERTcan contain the new tuple\", in order to differentiate it fromXLOG_HEAP2_MULTI_INSERT.ThanksRichard", "msg_date": "Wed, 17 Jan 2024 12:18:12 +0800", "msg_from": "Yongtao Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix a typo of func DecodeInsert()" }, { "msg_contents": "On Wed, Jan 17, 2024 at 12:18:12PM +0800, Yongtao Huang wrote:\n> Thank you. I prefer to keep the comments of these three functions\n> *DecodeInsert()*, *DecodeUpdate()*, and *DecodeDelete()* aligned.\n\nNot sure either what we would gain with a more complicated description\nin this area knowing that there is also DecodeMultiInsert(), so I have\njust fixed the top of DecodeInsert() as you have suggested as it is\nclearly wrong. Thanks.\n--\nMichael", "msg_date": "Wed, 17 Jan 2024 17:04:36 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix a typo of func DecodeInsert()" } ]
[ { "msg_contents": "Had some time to watch code run through an extensive test suite, so \nthought I would propose this patch that is probably about 75% of the way \nto the stated $subject. I had to add in a hack for Meson, and I couldn't \nfigure out a good hack for autotools.\n\nI think a good solution would be to distribute pgindent and \npg_bsd_indent. At Neon, we are trying to format our extension code using \npgindent. I am sure there are other extension authors out there too that \nformat using pgindent. Distributing pg_bsd_indent and pgindent in the \npostgresql-devel package would be a great help to those of us that \npgindent out of tree code. It would also have the added benefit of \nadding the tools to $PREFIX/bin, which would make the test that I added \nnot need a hack to get the pg_bsd_indent executable.\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Tue, 16 Jan 2024 19:22:23 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Add pgindent test to check if codebase is correctly formatted" }, { "msg_contents": "On Tue, Jan 16, 2024 at 07:22:23PM -0600, Tristan Partin wrote:\n> I think a good solution would be to distribute pgindent and pg_bsd_indent.\n> At Neon, we are trying to format our extension code using pgindent. I am\n> sure there are other extension authors out there too that format using\n> pgindent. Distributing pg_bsd_indent and pgindent in the postgresql-devel\n> package would be a great help to those of us that pgindent out of tree code.\n> It would also have the added benefit of adding the tools to $PREFIX/bin,\n> which would make the test that I added not need a hack to get the\n> pg_bsd_indent executable.\n\nSo your point is that pg_bsd_indent and pgindent are in the source tree,\nbut not in any package distribution? Isn't that a packager decision?\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 16 Jan 2024 20:27:40 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add pgindent test to check if codebase is correctly formatted" }, { "msg_contents": "On Tue Jan 16, 2024 at 7:27 PM CST, Bruce Momjian wrote:\n> On Tue, Jan 16, 2024 at 07:22:23PM -0600, Tristan Partin wrote:\n> > I think a good solution would be to distribute pgindent and pg_bsd_indent.\n> > At Neon, we are trying to format our extension code using pgindent. I am\n> > sure there are other extension authors out there too that format using\n> > pgindent. Distributing pg_bsd_indent and pgindent in the postgresql-devel\n> > package would be a great help to those of us that pgindent out of tree code.\n> > It would also have the added benefit of adding the tools to $PREFIX/bin,\n> > which would make the test that I added not need a hack to get the\n> > pg_bsd_indent executable.\n>\n> So your point is that pg_bsd_indent and pgindent are in the source tree,\n> but not in any package distribution? Isn't that a packager decision?\n\nIt requires changes to at least the Meson build files. pg_bsd_indent is \nnot marked for installation currently. There is a TODO there. pgindent \nhas no install_data() for instance. 
pg_bsd_indent seemingly gets \ninstalled somewhere in autotools given the contents of its Makefile, but \nI didn't see anything in my install tree afterward.\n\nSure RPM/DEB packagers can solve this issue downstream, but that doesn't \nhelp those of that run \"meson install\" or \"make install\" upstream. \nPackagers are probably more likely to package the tools if they are \nmarked for installation by upstream too.\n\nHope this helps to better explain what changes would be required within \nthe Postgres source tree.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 16 Jan 2024 19:32:47 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add pgindent test to check if codebase is correctly formatted" }, { "msg_contents": "On Tue, Jan 16, 2024 at 07:32:47PM -0600, Tristan Partin wrote:\n> It requires changes to at least the Meson build files. pg_bsd_indent is not\n> marked for installation currently. There is a TODO there. pgindent has no\n> install_data() for instance. pg_bsd_indent seemingly gets installed\n> somewhere in autotools given the contents of its Makefile, but I didn't see\n> anything in my install tree afterward.\n> \n> Sure RPM/DEB packagers can solve this issue downstream, but that doesn't\n> help those of that run \"meson install\" or \"make install\" upstream. Packagers\n> are probably more likely to package the tools if they are marked for\n> installation by upstream too.\n> \n> Hope this helps to better explain what changes would be required within the\n> Postgres source tree.\n\nYes, it does, thanks.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 16 Jan 2024 20:35:13 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add pgindent test to check if codebase is correctly formatted" }, { "msg_contents": "Hmm, should this also install typedefs.list and pgindent.man?\nWhat about the tooling to reformat Perl code?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Linux transformó mi computadora, de una `máquina para hacer cosas',\nen un aparato realmente entretenido, sobre el cual cada día aprendo\nalgo nuevo\" (Jaime Salinas)\n\n\n", "msg_date": "Wed, 17 Jan 2024 10:50:59 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add pgindent test to check if codebase is correctly formatted" }, { "msg_contents": "On Wed Jan 17, 2024 at 3:50 AM CST, Alvaro Herrera wrote:\n> Hmm, should this also install typedefs.list and pgindent.man?\n> What about the tooling to reformat Perl code?\n\nGood point about pgindent.man. It would definitely be good to install \nalongside pgindent and pg_bsd_indent.\n\nI don't know if we need to install the typedefs.list file. I think it \nwould just be good enough to also install the find_typedefs script. But \nit needs some fixing up first[0]. Extension authors can then just \ngenerate their own typedefs.list that will include the typedefs of the \nextension and the typedefs of the postgres types they use. At least, \nthat is what we have found works at Neon.\n\nI cannot vouch for extension authors writing Perl but I think it could \nmake sense to install the src/test/perl tree, so extension authors could \nmore easily write tests for their extensions in Perl. But we could \ninstall the perltidy file and whatever else too. 
I keep my Perl writing \nto a minimum, so I am not the best person to vouch for these usecases.\n\n[0]: https://www.postgresql.org/message-id/aaa59ef5-dce8-7369-5cae-487727664127%40dunslane.net\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 17 Jan 2024 10:15:38 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add pgindent test to check if codebase is correctly formatted" } ]
[ { "msg_contents": "Hi,\n\n132de9968840c introduced SAVE_ERROR_TO option to COPY and enabled to \nskip malformed data, but there is no way to watch the number of skipped \nrows during COPY.\n\nAttached patch adds tuples_skipped to pg_stat_progress_copy, which \ncounts the number of skipped tuples because source data is malformed.\nIf SAVE_ERROR_TO is not specified, this column remains zero.\n\nThe advantage would be that users can quickly notice and stop COPYing \nwhen there is a larger amount of skipped data than expected, for \nexample.\n\nAs described in commit log, it is expected to add more choices for \nSAVE_ERROR_TO like 'log' and using such options may enable us to know \nthe number of skipped tuples during COPY, but exposed in \npg_stat_progress_copy would be easier to monitor.\n\n\nWhat do you think?\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation", "msg_date": "Wed, 17 Jan 2024 14:22:03 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Add tuples_skipped to pg_stat_progress_copy" }, { "msg_contents": "On Wed, Jan 17, 2024 at 2:22 PM torikoshia <[email protected]> wrote:\n>\n> Hi,\n>\n> 132de9968840c introduced SAVE_ERROR_TO option to COPY and enabled to\n> skip malformed data, but there is no way to watch the number of skipped\n> rows during COPY.\n>\n> Attached patch adds tuples_skipped to pg_stat_progress_copy, which\n> counts the number of skipped tuples because source data is malformed.\n> If SAVE_ERROR_TO is not specified, this column remains zero.\n>\n> The advantage would be that users can quickly notice and stop COPYing\n> when there is a larger amount of skipped data than expected, for\n> example.\n>\n> As described in commit log, it is expected to add more choices for\n> SAVE_ERROR_TO like 'log' and using such options may enable us to know\n> the number of skipped tuples during COPY, but exposed in\n> pg_stat_progress_copy would be easier to monitor.\n>\n>\n> What do you think?\n\n+1\n\nThe patch is pretty simple. 
Here is a comment:\n\n+ (if <literal>SAVE_ERROR_TO</literal> is specified, otherwise zero).\n+ </para></entry>\n+ </row>\n\nTo be precise, this counter only advances when a value other than\n'ERROR' is specified to SAVE_ERROR_TO option.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 17 Jan 2024 14:47:49 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add tuples_skipped to pg_stat_progress_copy" }, { "msg_contents": "On 2024-01-17 14:47, Masahiko Sawada wrote:\n> On Wed, Jan 17, 2024 at 2:22 PM torikoshia <[email protected]> \n> wrote:\n>> \n>> Hi,\n>> \n>> 132de9968840c introduced SAVE_ERROR_TO option to COPY and enabled to\n>> skip malformed data, but there is no way to watch the number of \n>> skipped\n>> rows during COPY.\n>> \n>> Attached patch adds tuples_skipped to pg_stat_progress_copy, which\n>> counts the number of skipped tuples because source data is malformed.\n>> If SAVE_ERROR_TO is not specified, this column remains zero.\n>> \n>> The advantage would be that users can quickly notice and stop COPYing\n>> when there is a larger amount of skipped data than expected, for\n>> example.\n>> \n>> As described in commit log, it is expected to add more choices for\n>> SAVE_ERROR_TO like 'log' and using such options may enable us to know\n>> the number of skipped tuples during COPY, but exposed in\n>> pg_stat_progress_copy would be easier to monitor.\n>> \n>> \n>> What do you think?\n> \n> +1\n> \n> The patch is pretty simple. Here is a comment:\n> \n> + (if <literal>SAVE_ERROR_TO</literal> is specified, otherwise \n> zero).\n> + </para></entry>\n> + </row>\n> \n> To be precise, this counter only advances when a value other than\n> 'ERROR' is specified to SAVE_ERROR_TO option.\n\nThanks for your comment and review!\n\nUpdated the patch according to your comment and option name change by \nb725b7eec.\n\n\nBTW, based on this patch, I think we can add another option which \nspecifies the maximum tolerable number of malformed rows.\nI remember this was discussed in [1], and feel it would be useful when \nloading 'dirty' data but there is a limit to how dirty it can be.\nAttached 0002 is WIP patch for this(I haven't added doc yet).\n\nThis may be better discussed in another thread, but any comments(e.g. 
\nnecessity of this option, option name) are welcome.\n\n\n[1] \nhttps://www.postgresql.org/message-id/752672.1699474336%40sss.pgh.pa.us\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation", "msg_date": "Tue, 23 Jan 2024 01:02:15 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add tuples_skipped to pg_stat_progress_copy" }, { "msg_contents": "On Tue, Jan 23, 2024 at 1:02 AM torikoshia <[email protected]> wrote:\n>\n> On 2024-01-17 14:47, Masahiko Sawada wrote:\n> > On Wed, Jan 17, 2024 at 2:22 PM torikoshia <[email protected]>\n> > wrote:\n> >>\n> >> Hi,\n> >>\n> >> 132de9968840c introduced SAVE_ERROR_TO option to COPY and enabled to\n> >> skip malformed data, but there is no way to watch the number of\n> >> skipped\n> >> rows during COPY.\n> >>\n> >> Attached patch adds tuples_skipped to pg_stat_progress_copy, which\n> >> counts the number of skipped tuples because source data is malformed.\n> >> If SAVE_ERROR_TO is not specified, this column remains zero.\n> >>\n> >> The advantage would be that users can quickly notice and stop COPYing\n> >> when there is a larger amount of skipped data than expected, for\n> >> example.\n> >>\n> >> As described in commit log, it is expected to add more choices for\n> >> SAVE_ERROR_TO like 'log' and using such options may enable us to know\n> >> the number of skipped tuples during COPY, but exposed in\n> >> pg_stat_progress_copy would be easier to monitor.\n> >>\n> >>\n> >> What do you think?\n> >\n> > +1\n> >\n> > The patch is pretty simple. Here is a comment:\n> >\n> > + (if <literal>SAVE_ERROR_TO</literal> is specified, otherwise\n> > zero).\n> > + </para></entry>\n> > + </row>\n> >\n> > To be precise, this counter only advances when a value other than\n> > 'ERROR' is specified to SAVE_ERROR_TO option.\n>\n> Thanks for your comment and review!\n>\n> Updated the patch according to your comment and option name change by\n> b725b7eec.\n\nThanks! The patch looks good to me. I'm going to push it tomorrow,\nbarring any objections.\n\n>\n>\n> BTW, based on this patch, I think we can add another option which\n> specifies the maximum tolerable number of malformed rows.\n> I remember this was discussed in [1], and feel it would be useful when\n> loading 'dirty' data but there is a limit to how dirty it can be.\n> Attached 0002 is WIP patch for this(I haven't added doc yet).\n\nYeah, it could be a good option.\n\n> This may be better discussed in another thread, but any comments(e.g.\n> necessity of this option, option name) are welcome.\n\nI'd recommend forking a new thread for this option. 
As far as I\nremember, there also was an opinion that \"reject limit\" stuff is not\nvery useful.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 24 Jan 2024 17:05:29 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add tuples_skipped to pg_stat_progress_copy" }, { "msg_contents": "On 2024-01-24 17:05, Masahiko Sawada wrote:\n> On Tue, Jan 23, 2024 at 1:02 AM torikoshia <[email protected]> \n> wrote:\n>> \n>> On 2024-01-17 14:47, Masahiko Sawada wrote:\n>> > On Wed, Jan 17, 2024 at 2:22 PM torikoshia <[email protected]>\n>> > wrote:\n>> >>\n>> >> Hi,\n>> >>\n>> >> 132de9968840c introduced SAVE_ERROR_TO option to COPY and enabled to\n>> >> skip malformed data, but there is no way to watch the number of\n>> >> skipped\n>> >> rows during COPY.\n>> >>\n>> >> Attached patch adds tuples_skipped to pg_stat_progress_copy, which\n>> >> counts the number of skipped tuples because source data is malformed.\n>> >> If SAVE_ERROR_TO is not specified, this column remains zero.\n>> >>\n>> >> The advantage would be that users can quickly notice and stop COPYing\n>> >> when there is a larger amount of skipped data than expected, for\n>> >> example.\n>> >>\n>> >> As described in commit log, it is expected to add more choices for\n>> >> SAVE_ERROR_TO like 'log' and using such options may enable us to know\n>> >> the number of skipped tuples during COPY, but exposed in\n>> >> pg_stat_progress_copy would be easier to monitor.\n>> >>\n>> >>\n>> >> What do you think?\n>> >\n>> > +1\n>> >\n>> > The patch is pretty simple. Here is a comment:\n>> >\n>> > + (if <literal>SAVE_ERROR_TO</literal> is specified, otherwise\n>> > zero).\n>> > + </para></entry>\n>> > + </row>\n>> >\n>> > To be precise, this counter only advances when a value other than\n>> > 'ERROR' is specified to SAVE_ERROR_TO option.\n>> \n>> Thanks for your comment and review!\n>> \n>> Updated the patch according to your comment and option name change by\n>> b725b7eec.\n> \n> Thanks! The patch looks good to me. I'm going to push it tomorrow,\n> barring any objections.\n\nThanks!\n\n>> \n>> BTW, based on this patch, I think we can add another option which\n>> specifies the maximum tolerable number of malformed rows.\n>> I remember this was discussed in [1], and feel it would be useful when\n>> loading 'dirty' data but there is a limit to how dirty it can be.\n>> Attached 0002 is WIP patch for this(I haven't added doc yet).\n> \n> Yeah, it could be a good option.\n> \n>> This may be better discussed in another thread, but any comments(e.g.\n>> necessity of this option, option name) are welcome.\n> \n> I'd recommend forking a new thread for this option. 
As far as I\n> remember, there also was an opinion that \"reject limit\" stuff is not\n> very useful.\n\nOK, I'll make another thread for this.\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n", "msg_date": "Thu, 25 Jan 2024 11:25:33 +0900", "msg_from": "torikoshia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add tuples_skipped to pg_stat_progress_copy" }, { "msg_contents": "On Thu, Jan 25, 2024 at 11:25 AM torikoshia <[email protected]> wrote:\n>\n> On 2024-01-24 17:05, Masahiko Sawada wrote:\n> > On Tue, Jan 23, 2024 at 1:02 AM torikoshia <[email protected]>\n> > wrote:\n> >>\n> >> On 2024-01-17 14:47, Masahiko Sawada wrote:\n> >> > On Wed, Jan 17, 2024 at 2:22 PM torikoshia <[email protected]>\n> >> > wrote:\n> >> >>\n> >> >> Hi,\n> >> >>\n> >> >> 132de9968840c introduced SAVE_ERROR_TO option to COPY and enabled to\n> >> >> skip malformed data, but there is no way to watch the number of\n> >> >> skipped\n> >> >> rows during COPY.\n> >> >>\n> >> >> Attached patch adds tuples_skipped to pg_stat_progress_copy, which\n> >> >> counts the number of skipped tuples because source data is malformed.\n> >> >> If SAVE_ERROR_TO is not specified, this column remains zero.\n> >> >>\n> >> >> The advantage would be that users can quickly notice and stop COPYing\n> >> >> when there is a larger amount of skipped data than expected, for\n> >> >> example.\n> >> >>\n> >> >> As described in commit log, it is expected to add more choices for\n> >> >> SAVE_ERROR_TO like 'log' and using such options may enable us to know\n> >> >> the number of skipped tuples during COPY, but exposed in\n> >> >> pg_stat_progress_copy would be easier to monitor.\n> >> >>\n> >> >>\n> >> >> What do you think?\n> >> >\n> >> > +1\n> >> >\n> >> > The patch is pretty simple. Here is a comment:\n> >> >\n> >> > + (if <literal>SAVE_ERROR_TO</literal> is specified, otherwise\n> >> > zero).\n> >> > + </para></entry>\n> >> > + </row>\n> >> >\n> >> > To be precise, this counter only advances when a value other than\n> >> > 'ERROR' is specified to SAVE_ERROR_TO option.\n> >>\n> >> Thanks for your comment and review!\n> >>\n> >> Updated the patch according to your comment and option name change by\n> >> b725b7eec.\n> >\n> > Thanks! The patch looks good to me. I'm going to push it tomorrow,\n> > barring any objections.\n>\n> Thanks!\n\nPushed (commit 729439607).\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 25 Jan 2024 14:57:32 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add tuples_skipped to pg_stat_progress_copy" } ]
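A minimal sketch of how the counter added in this thread can be watched from a second session. The table and file names below are made up for illustration, and ON_ERROR is assumed to be the spelling of the former SAVE_ERROR_TO option after the rename in commit b725b7eec that the thread refers to.

```
-- Hypothetical table and input file, for illustration only.
CREATE TABLE measurements (id int, reading numeric);

-- Session 1: load data, skipping malformed rows instead of aborting.
-- ON_ERROR is assumed to be the renamed SAVE_ERROR_TO option (commit b725b7eec).
COPY measurements FROM '/tmp/measurements.csv' WITH (FORMAT csv, ON_ERROR ignore);

-- Session 2: monitor progress, including the tuples_skipped column added here.
SELECT relid::regclass, tuples_processed, tuples_skipped
FROM pg_stat_progress_copy;
```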
[ { "msg_contents": "Hi all,\n\nrorqual has failed today with a very interesting failure:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-01-17%2005%3A06%3A31\n\nThis has caused an assertion failure for a 2PC transaction when\nreplaying one of the tests from the main regression suite:\n2024-01-17 05:08:23.143 UTC [3242608] DETAIL: Last completed transaction was at log time 2024-01-17 05:08:22.920244+00.\nTRAP: failed Assert(\"epoch > 0\"), File: \"../pgsql/src/backend/access/transam/twophase.c\", Line: 969, PID: 3242610\npostgres: standby_1: startup recovering 00000001000000000000000C(ExceptionalCondition+0x83)[0x55746c7838c1]\npostgres: standby_1: startup recovering 00000001000000000000000C(+0x194f0e)[0x55746c371f0e]\npostgres: standby_1: startup recovering 00000001000000000000000C(StandbyTransactionIdIsPrepared+0x29)[0x55746c373120]\npostgres: standby_1: startup recovering 00000001000000000000000C(StandbyReleaseOldLocks+0x3f)[0x55746c621357]\npostgres: standby_1: startup recovering 00000001000000000000000C(ProcArrayApplyRecoveryInfo+0x50)[0x55746c61bbb5]\npostgres: standby_1: startup recovering 00000001000000000000000C(standby_redo+0xe1)[0x55746c621490]\npostgres: standby_1: startup recovering 00000001000000000000000C(PerformWalRecovery+0xa5e)[0x55746c392404]\npostgres: standby_1: startup recovering 00000001000000000000000C(StartupXLOG+0x3ac)[0x55746c3862b8]\npostgres: standby_1: startup recovering 00000001000000000000000C(StartupProcessMain+0xd9)[0x55746c5a60f6]\npostgres: standby_1: startup recovering 00000001000000000000000C(AuxiliaryProcessMain+0x172)[0x55746c59bbdd]\npostgres: standby_1: startup recovering 00000001000000000000000C(+0x3c4235)[0x55746c5a1235]\npostgres: standby_1: startup recovering 00000001000000000000000C(PostmasterMain+0x1401)[0x55746c5a5a10]\npostgres: standby_1: startup recovering 00000001000000000000000C(main+0x835)[0x55746c4e90ce]\n/lib/x86_64-linux-gnu/libc.so.6(+0x276ca)[0x7f67bbb846ca]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f67bbb84785]\npostgres: standby_1: startup recovering 00000001000000000000000C(_start+0x21)[0x55746c2b61d1]\n\nThis refers to the following in twophase.c with\nAdjustToFullTransactionId(): \n nextXid = XidFromFullTransactionId(nextFullXid);\n epoch = EpochFromFullTransactionId(nextFullXid);\n\n if (unlikely(xid > nextXid))\n { \n /* Wraparound occurred, must be from a prev epoch. */\n Assert(epoch > 0);\n epoch--;\n }\n\nThis would mean that we've found a way to get a negative epoch, which\nshould not be possible.\n\nAlexander, you have added this code in 5a1dfde8334b when switching the\n2PC file names to use FullTransactionIds. 
Could you check please?\n--\nMichael", "msg_date": "Wed, 17 Jan 2024 14:47:33 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Assertion failure with epoch when replaying standby records for 2PC" }, { "msg_contents": "Hi, Michael!\n\nOn Wed, Jan 17, 2024 at 7:47 AM Michael Paquier <[email protected]> wrote:\n> rorqual has failed today with a very interesting failure:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-01-17%2005%3A06%3A31\n>\n> This has caused an assertion failure for a 2PC transaction when\n> replaying one of the tests from the main regression suite:\n> 2024-01-17 05:08:23.143 UTC [3242608] DETAIL: Last completed transaction was at log time 2024-01-17 05:08:22.920244+00.\n> TRAP: failed Assert(\"epoch > 0\"), File: \"../pgsql/src/backend/access/transam/twophase.c\", Line: 969, PID: 3242610\n> postgres: standby_1: startup recovering 00000001000000000000000C(ExceptionalCondition+0x83)[0x55746c7838c1]\n> postgres: standby_1: startup recovering 00000001000000000000000C(+0x194f0e)[0x55746c371f0e]\n> postgres: standby_1: startup recovering 00000001000000000000000C(StandbyTransactionIdIsPrepared+0x29)[0x55746c373120]\n> postgres: standby_1: startup recovering 00000001000000000000000C(StandbyReleaseOldLocks+0x3f)[0x55746c621357]\n> postgres: standby_1: startup recovering 00000001000000000000000C(ProcArrayApplyRecoveryInfo+0x50)[0x55746c61bbb5]\n> postgres: standby_1: startup recovering 00000001000000000000000C(standby_redo+0xe1)[0x55746c621490]\n> postgres: standby_1: startup recovering 00000001000000000000000C(PerformWalRecovery+0xa5e)[0x55746c392404]\n> postgres: standby_1: startup recovering 00000001000000000000000C(StartupXLOG+0x3ac)[0x55746c3862b8]\n> postgres: standby_1: startup recovering 00000001000000000000000C(StartupProcessMain+0xd9)[0x55746c5a60f6]\n> postgres: standby_1: startup recovering 00000001000000000000000C(AuxiliaryProcessMain+0x172)[0x55746c59bbdd]\n> postgres: standby_1: startup recovering 00000001000000000000000C(+0x3c4235)[0x55746c5a1235]\n> postgres: standby_1: startup recovering 00000001000000000000000C(PostmasterMain+0x1401)[0x55746c5a5a10]\n> postgres: standby_1: startup recovering 00000001000000000000000C(main+0x835)[0x55746c4e90ce]\n> /lib/x86_64-linux-gnu/libc.so.6(+0x276ca)[0x7f67bbb846ca]\n> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f67bbb84785]\n> postgres: standby_1: startup recovering 00000001000000000000000C(_start+0x21)[0x55746c2b61d1]\n>\n> This refers to the following in twophase.c with\n> AdjustToFullTransactionId():\n> nextXid = XidFromFullTransactionId(nextFullXid);\n> epoch = EpochFromFullTransactionId(nextFullXid);\n>\n> if (unlikely(xid > nextXid))\n> {\n> /* Wraparound occurred, must be from a prev epoch. */\n> Assert(epoch > 0);\n> epoch--;\n> }\n>\n> This would mean that we've found a way to get a negative epoch, which\n> should not be possible.\n>\n> Alexander, you have added this code in 5a1dfde8334b when switching the\n> 2PC file names to use FullTransactionIds. Could you check please?\n\nThank you for reporting! 
I'm going to look at this in the next couple of days.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Wed, 17 Jan 2024 23:08:39 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with epoch when replaying standby records for\n 2PC" }, { "msg_contents": "On Wed, Jan 17, 2024 at 11:08 PM Alexander Korotkov\n<[email protected]> wrote:\n> On Wed, Jan 17, 2024 at 7:47 AM Michael Paquier <[email protected]> wrote:\n> > rorqual has failed today with a very interesting failure:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rorqual&dt=2024-01-17%2005%3A06%3A31\n> >\n> > This has caused an assertion failure for a 2PC transaction when\n> > replaying one of the tests from the main regression suite:\n> > 2024-01-17 05:08:23.143 UTC [3242608] DETAIL: Last completed transaction was at log time 2024-01-17 05:08:22.920244+00.\n> > TRAP: failed Assert(\"epoch > 0\"), File: \"../pgsql/src/backend/access/transam/twophase.c\", Line: 969, PID: 3242610\n> > postgres: standby_1: startup recovering 00000001000000000000000C(ExceptionalCondition+0x83)[0x55746c7838c1]\n> > postgres: standby_1: startup recovering 00000001000000000000000C(+0x194f0e)[0x55746c371f0e]\n> > postgres: standby_1: startup recovering 00000001000000000000000C(StandbyTransactionIdIsPrepared+0x29)[0x55746c373120]\n> > postgres: standby_1: startup recovering 00000001000000000000000C(StandbyReleaseOldLocks+0x3f)[0x55746c621357]\n> > postgres: standby_1: startup recovering 00000001000000000000000C(ProcArrayApplyRecoveryInfo+0x50)[0x55746c61bbb5]\n> > postgres: standby_1: startup recovering 00000001000000000000000C(standby_redo+0xe1)[0x55746c621490]\n> > postgres: standby_1: startup recovering 00000001000000000000000C(PerformWalRecovery+0xa5e)[0x55746c392404]\n> > postgres: standby_1: startup recovering 00000001000000000000000C(StartupXLOG+0x3ac)[0x55746c3862b8]\n> > postgres: standby_1: startup recovering 00000001000000000000000C(StartupProcessMain+0xd9)[0x55746c5a60f6]\n> > postgres: standby_1: startup recovering 00000001000000000000000C(AuxiliaryProcessMain+0x172)[0x55746c59bbdd]\n> > postgres: standby_1: startup recovering 00000001000000000000000C(+0x3c4235)[0x55746c5a1235]\n> > postgres: standby_1: startup recovering 00000001000000000000000C(PostmasterMain+0x1401)[0x55746c5a5a10]\n> > postgres: standby_1: startup recovering 00000001000000000000000C(main+0x835)[0x55746c4e90ce]\n> > /lib/x86_64-linux-gnu/libc.so.6(+0x276ca)[0x7f67bbb846ca]\n> > /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x85)[0x7f67bbb84785]\n> > postgres: standby_1: startup recovering 00000001000000000000000C(_start+0x21)[0x55746c2b61d1]\n> >\n> > This refers to the following in twophase.c with\n> > AdjustToFullTransactionId():\n> > nextXid = XidFromFullTransactionId(nextFullXid);\n> > epoch = EpochFromFullTransactionId(nextFullXid);\n> >\n> > if (unlikely(xid > nextXid))\n> > {\n> > /* Wraparound occurred, must be from a prev epoch. */\n> > Assert(epoch > 0);\n> > epoch--;\n> > }\n> >\n> > This would mean that we've found a way to get a negative epoch, which\n> > should not be possible.\n> >\n> > Alexander, you have added this code in 5a1dfde8334b when switching the\n> > 2PC file names to use FullTransactionIds. Could you check please?\n>\n> Thank you for reporting! 
I'm going to look at this in the next couple of days.\n\nOh, that is a forgotten piece I've already discovered.\nhttps://www.postgresql.org/message-id/CAPpHfdv%3DVahovNqJHBqr0ejHvx%3DeDuGYySC48Wcvp%2BGDxYLCJg%40mail.gmail.com\nI'm going to do some additional checks and push.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 19 Jan 2024 16:28:00 +0200", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Assertion failure with epoch when replaying standby records for\n 2PC" } ]
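As background for the assertion discussed above, the conversion in question rebuilds a 64-bit FullTransactionId from a 32-bit xid that is assumed to be no newer than the next full transaction ID. The sketch below restates that arithmetic in a small helper to show why epoch == 0 together with xid > nextXid is supposed to be impossible; it is an illustration of the invariant, not the fix that was later committed.

```
/*
 * Illustration of the epoch arithmetic behind the failed assertion; not
 * the committed fix.  Given the next full XID and a 32-bit xid assumed to
 * lie in the past, reconstruct the xid's 64-bit value.
 */
#include "postgres.h"
#include "access/transam.h"

static FullTransactionId
full_xid_from_prior_xid(FullTransactionId next_fxid, TransactionId xid)
{
	uint32		epoch = EpochFromFullTransactionId(next_fxid);
	TransactionId nextXid = XidFromFullTransactionId(next_fxid);

	if (xid > nextXid)
	{
		/*
		 * xid is numerically ahead of nextXid, so if it really is older it
		 * must come from the previous epoch.  With epoch == 0 there is no
		 * previous epoch, which is exactly what the reported
		 * Assert(epoch > 0) failure caught during recovery.
		 */
		Assert(epoch > 0);
		epoch--;
	}

	return FullTransactionIdFromEpochAndXid(epoch, xid);
}
```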
[ { "msg_contents": "Hi hackers,\n\nDuring logical replication, if there is a large write transaction, some\nspill files will be written to disk, depending on the setting of\nlogical_decoding_work_mem.\n\nThis behavior can effectively avoid OOM, but if the transaction\ngenerates a lot of change before commit, a large number of files may\nfill the disk. For example, you can update a TB-level table.\n\nHowever, I found an inelegant phenomenon. If the modified large table is not\npublished, its changes will also be written with a large number of spill files.\nLook at an example below:\n\npublisher:\n```\ncreate table tbl_pub(id int, val1 text, val2 text,val3 text);\ncreate table tbl_t1(id int, val1 text, val2 text,val3 text);\nCREATE PUBLICATION mypub FOR TABLE public.tbl_pub;\n```\n\nsubscriber:\n```\ncreate table tbl_pub(id int, val1 text, val2 text,val3 text);\ncreate table tbl_t1(id int, val1 text, val2 text,val3 text);\nCREATE SUBSCRIPTION mysub CONNECTION 'host=127.0.0.1 port=5432\nuser=postgres dbname=postgres' PUBLICATION mypub;\n```\n\npublisher:\n```\nbegin;\ninsert into tbl_t1 select i,repeat('xyzzy', i),repeat('abcba',\ni),repeat('dfds', i) from generate_series(0,999999) i;\n```\n\nLater you will see a large number of spill files in the\n\"/$PGDATA/pg_replslot/mysub/\" directory.\n```\n$ll -sh\ntotal 4.5G\n4.0K -rw------- 1 postgres postgres 200 Nov 30 09:24 state\n17M -rw------- 1 postgres postgres 17M Nov 30 08:22 xid-750-lsn-0-10000000.spill\n12M -rw------- 1 postgres postgres 12M Nov 30 08:20 xid-750-lsn-0-1000000.spill\n17M -rw------- 1 postgres postgres 17M Nov 30 08:23 xid-750-lsn-0-11000000.spill\n......\n```\n\nWe can see that table tbl_t1 is not published in mypub. It also won't be sent\ndownstream because it's not subscribed.\nAfter the transaction is reorganized, the pgoutput decoding plugin filters out\nchanges to these unpublished relationships when sending logical changes.\nSee function pgoutput_change.\n\nMost importantly, if we filter out unpublished relationship-related\nchanges after constructing the changes but before queuing the changes\ninto a transaction, will it reduce the workload of logical decoding\nand avoid disk\nor memory growth as much as possible?\n\nThe patch in the attachment is a prototype, which can effectively reduce the\nmemory and disk space usage during logical replication.\n\nDesign:\n1. Added a callback LogicalDecodeFilterByRelCB for the output plugin.\n\n2. Added this callback function pgoutput_table_filter for the pgoutput plugin.\nIts main implementation is based on the table filter in the\npgoutput_change function.\nIts main function is to determine whether the change needs to be published based\non the parameters of the publication, and if not, filter it.\n\n3. After constructing a change and before Queue a change into a transaction,\nuse RelidByRelfilenumber to obtain the relation associated with the change,\njust like obtaining the relation in the ReorderBufferProcessTXN function.\n\n4. Relation may be a toast, and there is no good way to get its real\ntable relation based on toast relation. Here, I get the real table oid\nthrough toast relname, and then get the real table relation.\n\n5. This filtering takes into account INSERT/UPDATE/INSERT. Other\nchanges have not been considered yet and can be expanded in the future.\n\nTest:\n1. Added a test case 034_table_filter.pl\n2. Like the case above, create two tables, the published table tbl_pub and\nthe non-published table tbl_t1\n3. 
Insert 10,000 rows of toast data into tbl_t1 on the publisher, and use\npg_ls_replslotdir to record the total size of the slot directory\nevery second.\n4. Compare the size of the slot directory at the beginning of the\ntransaction(size1),\nthe size at the end of the transaction (size2), and the average\nsize of the entire process(size3).\n5. Assert(size1==size2==size3)\n\nSincerely look forward to your feedback.\nRegards, lijie", "msg_date": "Wed, 17 Jan 2024 14:15:18 +0800", "msg_from": "li jie <[email protected]>", "msg_from_op": true, "msg_subject": "Reduce useless changes before reassembly during logical replication" }, { "msg_contents": "On Wed, Jan 17, 2024 at 11:45 AM li jie <[email protected]> wrote:\n>\n> Hi hackers,\n>\n> During logical replication, if there is a large write transaction, some\n> spill files will be written to disk, depending on the setting of\n> logical_decoding_work_mem.\n>\n> This behavior can effectively avoid OOM, but if the transaction\n> generates a lot of change before commit, a large number of files may\n> fill the disk. For example, you can update a TB-level table.\n>\n> However, I found an inelegant phenomenon. If the modified large table is not\n> published, its changes will also be written with a large number of spill files.\n> Look at an example below:\n\nThanks. I agree that decoding and queuing the changes of unpublished\ntables' data into reorder buffer is an unnecessary task for walsender.\nIt takes processing efforts (CPU overhead), consumes disk space and\nuses memory configured via logical_decoding_work_mem for a replication\nconnection inefficiently.\n\n> Later you will see a large number of spill files in the\n>\n> We can see that table tbl_t1 is not published in mypub. It also won't be sent\n> downstream because it's not subscribed.\n> After the transaction is reorganized, the pgoutput decoding plugin filters out\n> changes to these unpublished relationships when sending logical changes.\n> See function pgoutput_change.\n\nRight. Here's my testing [1].\n\n> Most importantly, if we filter out unpublished relationship-related\n> changes after constructing the changes but before queuing the changes\n> into a transaction, will it reduce the workload of logical decoding\n> and avoid disk\n> or memory growth as much as possible?\n\nRight. It can.\n\n> The patch in the attachment is a prototype, which can effectively reduce the\n> memory and disk space usage during logical replication.\n>\n> Design:\n> 1. Added a callback LogicalDecodeFilterByRelCB for the output plugin.\n>\n> 2. Added this callback function pgoutput_table_filter for the pgoutput plugin.\n> Its main implementation is based on the table filter in the\n> pgoutput_change function.\n> Its main function is to determine whether the change needs to be published based\n> on the parameters of the publication, and if not, filter it.\n>\n> 3. After constructing a change and before Queue a change into a transaction,\n> use RelidByRelfilenumber to obtain the relation associated with the change,\n> just like obtaining the relation in the ReorderBufferProcessTXN function.\n>\n> 4. Relation may be a toast, and there is no good way to get its real\n> table relation based on toast relation. Here, I get the real table oid\n> through toast relname, and then get the real table relation.\n>\n> 5. This filtering takes into account INSERT/UPDATE/INSERT. 
Other\n> changes have not been considered yet and can be expanded in the future.\n\nDesign of this patch is based on the principle of logical decoding\nfiltering things out early on and looks very similar to\nfilter_prepare_cb_wrapper/pg_decode_filter_prepare and\nfilter_by_origin_cb/pgoutput_origin_filter. Per my understanding this\ndesign looks okay unless I'm missing anything.\n\n> Test:\n> 1. Added a test case 034_table_filter.pl\n> 2. Like the case above, create two tables, the published table tbl_pub and\n> the non-published table tbl_t1\n> 3. Insert 10,000 rows of toast data into tbl_t1 on the publisher, and use\n> pg_ls_replslotdir to record the total size of the slot directory\n> every second.\n> 4. Compare the size of the slot directory at the beginning of the\n> transaction(size1),\n> the size at the end of the transaction (size2), and the average\n> size of the entire process(size3).\n> 5. Assert(size1==size2==size3)\n\nI bet that the above test with 10K rows is going to take a noticeable\ntime on some buildfarm members (it took 6 seconds on my dev system\nwhich is an AWS EC2 instance). And, the above test can get flaky.\nTherefore, IMO, the concrete way of testing this feature is by looking\nat the server logs for the following message using\nPostgreSQL::Test::Cluster log_contains().\n\n+filter_done:\n+\n+ if (result && RelationIsValid(relation))\n+ elog(DEBUG1, \"logical filter change by table %s\",\nRelationGetRelationName(relation));\n+\n\nHere are some comments on the v1 patch:\n1.\n@@ -1415,9 +1419,6 @@ pgoutput_change(LogicalDecodingContext *ctx,\nReorderBufferTXN *txn,\n TupleTableSlot *old_slot = NULL;\n TupleTableSlot *new_slot = NULL;\n\n- if (!is_publishable_relation(relation))\n- return;\n-\n\nInstead of removing is_publishable_relation from pgoutput_change, I\nthink it can just be turned into an assertion\nAssert(is_publishable_relation(relation));, no?\n\n2.\n+ switch (change->action)\n+ {\n+ /* intentionally fall through */\n\nPerhaps, it must use /* FALLTHROUGH */ just like elsewhere in the\ncode, otherwise a warning is thrown.\n\n3. From commit message:\nMost of the code in the FilterByTable function is transplanted from\nthe ReorderBufferProcessTXN\nfunction, which can be called before the ReorderBufferQueueChange function.It is\n\nI think the above note can just be above the FilterByTable function\nfor better understanding.\n\n+static bool\n+FilterByTable(LogicalDecodingContext *ctx, ReorderBufferChange *change)\n+{\n\n4. Why is FilterByTable(ctx, change) call placed after DecodeXLogTuple\nin DecodeInsert, DecodeUpdate and DecodeDelete? Is there a use for\ndecoded tuples done by DecodeXLogTuple in the new callback\nfilter_by_table_cb? If not, can we move FilterByTable call before\nDecodeXLogTuple to avoid some more extra processing?\n\n5. Why is ReorderBufferChange needed as a parameter to FilterByTable\nand filter_by_table_cb? Can't just the LogicalDecodingContext and\nrelation name, the change action be enough to decide if the table is\npublishable or not? If done this way, it can avoid some more\nprocessing, no?\n\n6. 
Please run pgindent and pgperltidy on the new source code and new\nTAP test file respectively.\n\n[1]\nHEAD:\npostgres=# BEGIN;\nBEGIN\nTime: 0.110 ms\npostgres=*# insert into tbl_t1 select i,repeat('xyzzy',\ni),repeat('abcba', i),repeat('dfds', i) from generate_series(0,99999)\ni;\nINSERT 0 100000\nTime: 379488.265 ms (06:19.488)\npostgres=*#\n\nubuntu:~/postgres/pg17/bin$ du -sh\n/home/ubuntu/postgres/pg17/bin/db17/pg_replslot/mysub\n837M /home/ubuntu/postgres/pg17/bin/db17/pg_replslot/mysub\nubuntu:~/postgres/pg17/bin$ du -sh /home/ubuntu/postgres/pg17/bin/db17\n2.6G /home/ubuntu/postgres/pg17/bin/db17\n\nPATCHED:\npostgres=# BEGIN;\nBEGIN\nTime: 0.105 ms\npostgres=*# insert into tbl_t1 select i,repeat('xyzzy',\ni),repeat('abcba', i),repeat('dfds', i) from generate_series(0,99999)\ni;\nINSERT 0 100000\nTime: 380044.554 ms (06:20.045)\n\nubuntu:~/postgres$ du -sh /home/ubuntu/postgres/pg17/bin/db17/pg_replslot/mysub\n8.0K /home/ubuntu/postgres/pg17/bin/db17/pg_replslot/mysub\nubuntu:~/postgres$ du -sh /home/ubuntu/postgres/pg17/bin/db17\n1.8G /home/ubuntu/postgres/pg17/bin/db17\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 18 Jan 2024 12:12:45 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reduce useless changes before reassembly during logical\n replication" }, { "msg_contents": "On Thu, Jan 18, 2024 at 12:12 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Wed, Jan 17, 2024 at 11:45 AM li jie <[email protected]> wrote:\n> >\n> > Hi hackers,\n> >\n> > During logical replication, if there is a large write transaction, some\n> > spill files will be written to disk, depending on the setting of\n> > logical_decoding_work_mem.\n> >\n> > This behavior can effectively avoid OOM, but if the transaction\n> > generates a lot of change before commit, a large number of files may\n> > fill the disk. For example, you can update a TB-level table.\n> >\n> > However, I found an inelegant phenomenon. If the modified large table is not\n> > published, its changes will also be written with a large number of spill files.\n> > Look at an example below:\n>\n> Thanks. I agree that decoding and queuing the changes of unpublished\n> tables' data into reorder buffer is an unnecessary task for walsender.\n> It takes processing efforts (CPU overhead), consumes disk space and\n> uses memory configured via logical_decoding_work_mem for a replication\n> connection inefficiently.\n>\n\nThis is all true but note that in successful cases (where the table is\npublished) all the work done by FilterByTable(accessing caches,\ntransaction-related stuff) can add noticeable overhead as anyway we do\nthat later in pgoutput_change(). I think I gave the same comment\nearlier as well but didn't see any satisfactory answer or performance\ndata for successful cases to back this proposal. Note, users can\nconfigure to stream_in_progress transactions in which case they\nshouldn't see such a big problem. 
However, I agree that if we can find\nsome solution where there is no noticeable overhead then that would be\nworth considering.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 18 Jan 2024 14:47:18 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reduce useless changes before reassembly during logical\n replication" }, { "msg_contents": "On Thu, Jan 18, 2024 at 2:47 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jan 18, 2024 at 12:12 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Wed, Jan 17, 2024 at 11:45 AM li jie <[email protected]> wrote:\n> > >\n> > > Hi hackers,\n> > >\n> > > During logical replication, if there is a large write transaction, some\n> > > spill files will be written to disk, depending on the setting of\n> > > logical_decoding_work_mem.\n> > >\n> > > This behavior can effectively avoid OOM, but if the transaction\n> > > generates a lot of change before commit, a large number of files may\n> > > fill the disk. For example, you can update a TB-level table.\n> > >\n> > > However, I found an inelegant phenomenon. If the modified large table is not\n> > > published, its changes will also be written with a large number of spill files.\n> > > Look at an example below:\n> >\n> > Thanks. I agree that decoding and queuing the changes of unpublished\n> > tables' data into reorder buffer is an unnecessary task for walsender.\n> > It takes processing efforts (CPU overhead), consumes disk space and\n> > uses memory configured via logical_decoding_work_mem for a replication\n> > connection inefficiently.\n> >\n>\n> This is all true but note that in successful cases (where the table is\n> published) all the work done by FilterByTable(accessing caches,\n> transaction-related stuff) can add noticeable overhead as anyway we do\n> that later in pgoutput_change().\n\nRight. Overhead for published tables need to be studied. A possible\nway is to mark the checks performed in\nFilterByTable/filter_by_table_cb and skip the same checks in\npgoutput_change. I'm not sure if this works without any issues though.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 18 Jan 2024 16:44:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reduce useless changes before reassembly during logical\n replication" }, { "msg_contents": "Hi, \n>\n> This is all true but note that in successful cases (where the table is\n> published) all the work done by FilterByTable(accessing caches,\n> transaction-related stuff) can add noticeable overhead as anyway we do\n> that later in pgoutput_change(). I think I gave the same comment\n> earlier as well but didn't see any satisfactory answer or performance\n> data for successful cases to back this proposal.\n\nI did some benchmark yesterday at [1] and found it adds 20% cpu time.\nthen come out a basic idea, I think it deserves a share. \"transaction\nrelated stuff\" comes from the syscache/systable access except the\nHistorySansphot. and the syscache is required in the following\nsistuations: \n\n1. relfilenode (from wal) -> relid.\n2. relid -> namespaceid (to check if the relid is a toast relation).\n3. if toast, get its origianl relid.\n4. access the data from pg_publication_tables.\n5. 
see if the relid is a partition, if yes, we may get its root\nrelation.\n\nAcutally we already has a RelationSyncCache for #4, and it *only* need\nto access syscache when replicate_valid is false, I think this case\nshould be rare, but the caller doesn't know it, so the caller must\nprepare the transaction stuff in advance even in the most case they are\nnot used. So I think we can get a optimization here.\n\nthen the attached patch is made.\n\nAuthor: yizhi.fzh <[email protected]>\nDate: Wed Feb 21 18:40:03 2024 +0800\n\n Make get_rel_sync_entry less depending on transaction state.\n \n get_rel_sync_entry needs transaction only a replicate_valid = false\n entry is found, this should be some rare case. However the caller can't\n know if a entry is valid, so they have to prepare the transaction state\n before calling this function. Such preparation is expensive.\n \n This patch makes the get_rel_sync_entry can manage a transaction stage\n only if necessary. so the callers don't need to prepare it blindly.\n\nThen comes to #1, acutally we have RelfilenumberMapHash as a cache, when\nthe cache is hit (suppose this is a usual case), no transaction stuff\nrelated. I have two ideas then:\n\n1. Optimize the cache hit sistuation like what we just did for\nget_rel_sync_entry for the all the 5 kinds of data and only pay the\neffort for cache miss case. for the data for #2, #3, #5, all the keys\nare relid, so I think a same HTAB should be OK.\n\n2. add the content for #1, #2, #3, #5 to wal when wal_level is set to\nlogical. \n\nIn either case, the changes for get_rel_sync_entry should be needed. \n\n> Note, users can\n> configure to stream_in_progress transactions in which case they\n> shouldn't see such a big problem.\n\nPeople would see the changes is spilled to disk, but the CPU cost for\nReorder should be still paid I think. \n\n[1] https://www.postgresql.org/message-id/87o7cadqj3.fsf%40163.com\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 22 Feb 2024 16:11:12 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reduce useless changes before reassembly during logical\n replication" }, { "msg_contents": "Hi,\n\nSorry I replied too late.\n\n> This is all true but note that in successful cases (where the table is\n> published) all the work done by FilterByTable(accessing caches,\n> transaction-related stuff) can add noticeable overhead as anyway we do\n> that later in pgoutput_change().\n\nYou are correct. Frequent opening of transactions and access to cache will\ncause a lot of overhead, which Andy has tested and proved.\n\nThe root cause is because every dml wal record needs to do this, which is really\nwasteful. I use a hash table LocatorFilterCache to solve this problem.\nAfter getting\na RelFileLocator, I go to the hash table to check its\nPublicationActions and filter it\nbased on the PublicationActions to determine whether it has been published.\n\nThe effect of my test is very obvious: (perf record)\nv1:\n Children Self Command Shared O Symbol\n+ 22.04% 1.53% postgres postgres [.] FilterByTable\n\nv2:\n Children Self Command Shared O Symbol\n+ 0.58% 0.00% postgres postgres [.] 
ReorderBufferFilterByLocator\n\nv1 patch introduces 20% overhead, while v2 only has 0.58%.\n\n\n> Note, users can\n>configure to stream_in_progress transactions in which case they\n> shouldn't see such a big problem.\n\nYes, stream mode can prevent these irrelevant changes from being written to\ndisk or sent to downstream.\nHowever, CPU and memory consumption will also be incurred when processing\nthese useless changes. Here is my simple test[1]:\n\nbase on master :\n\nCPU stat: perf stat -p pid -e cycles -I 1000\n# time counts unit events\n76.007070936 9,691,035 cycles\n77.007163484 5,977,694 cycles\n78.007252533 5,924,703 cycles\n79.007346862 5,861,934 cycles\n80.007438070 5,858,264 cycles\n81.007527122 6,408,759 cycles\n82.007615711 6,397,988 cycles\n83.007705685 5,520,407 cycles\n84.007794387 5,359,162 cycles\n85.007884879 5,194,079 cycles\n86.007979797 5,391,270 cycles\n87.008069606 5,474,536 cycles\n88.008162827 5,594,190 cycles\n89.008256327 5,610,023 cycles\n90.008349583 5,627,350 cycles\n91.008437785 6,273,510 cycles\n92.008527938 580,934,205 cycles\n93.008620136 4,404,672 cycles\n94.008711818 4,599,074 cycles\n95.008805591 4,374,958 cycles\n96.008894543 4,300,180 cycles\n97.008987582 4,157,892 cycles\n98.009077445 4,072,178 cycles\n99.009163475 4,043,875 cycles\n100.009254888 5,382,667 cycles\n\nmemory stat: pistat -p pid -r 1 10\n07:57:18 AM UID PID minflt/s majflt/s VSZ RSS %MEM Command\n07:57:19 AM 1000 11848 233.00 0.00 386872 81276 0.01 postgres\n07:57:20 AM 1000 11848 235.00 0.00 387008 82068 0.01 postgres\n07:57:21 AM 1000 11848 236.00 0.00 387144 83124 0.01 postgres\n07:57:22 AM 1000 11848 236.00 0.00 387144 83916 0.01 postgres\n07:57:23 AM 1000 11848 236.00 0.00 387280 84972 0.01 postgres\n07:57:24 AM 1000 11848 334.00 0.00 337000 36928 0.00 postgres\n07:57:25 AM 1000 11848 3.00 0.00 337000 36928 0.00 postgres\n07:57:26 AM 1000 11848 0.00 0.00 337000 36928 0.00 postgres\n07:57:27 AM 1000 11848 0.00 0.00 337000 36928 0.00 postgres\n07:57:28 AM 1000 11848 0.00 0.00 337000 36928 0.00 postgres\nAverage: 1000 11848 151.30 0.00 362045 60000 0.01 postgres\n\nAfter patched:\n# time counts unit events\n76.007623310 4,237,505 cycles\n77.007717436 3,989,618 cycles\n78.007813848 3,965,857 cycles\n79.007906412 3,601,715 cycles\n80.007998111 3,670,835 cycles\n81.008092670 3,495,844 cycles\n82.008187456 3,822,695 cycles\n83.008281335 5,034,146 cycles\n84.008374998 3,867,683 cycles\n85.008470245 3,996,927 cycles\n86.008563783 3,823,893 cycles\n87.008658628 3,825,472 cycles\n88.008755246 3,823,079 cycles\n89.008849719 3,966,083 cycles\n90.008945774 4,012,704 cycles\n91.009044492 4,026,860 cycles\n92.009139621 3,860,912 cycles\n93.009242485 3,961,533 cycles\n94.009346304 3,799,897 cycles\n95.009440164 3,959,602 cycles\n96.009534251 3,960,405 cycles\n97.009625904 3,762,581 cycles\n98.009716518 4,859,490 cycles\n99.009807720 3,940,845 cycles\n100.009901399 3,888,095 cycles\n\n08:01:47 AM UID PID minflt/s majflt/s VSZ RSS %MEM Command\n08:01:48 AM 1000 19466 0.00 0.00 324424 15140 0.00 postgres\n08:01:49 AM 1000 19466 0.00 0.00 324424 15140 0.00 postgres\n08:01:50 AM 1000 19466 0.00 0.00 324424 15140 0.00 postgres\n08:01:51 AM 1000 19466 0.00 0.00 324424 15140 0.00 postgres\n08:01:52 AM 1000 19466 0.00 0.00 324424 15140 0.00 postgres\n08:01:53 AM 1000 19466 0.00 0.00 324424 15140 0.00 postgres\n08:01:54 AM 1000 19466 0.00 0.00 324424 15140 0.00 postgres\n08:01:55 AM 1000 19466 0.00 0.00 324424 15140 0.00 postgres\n08:01:56 AM 1000 19466 0.00 0.00 324424 15140 0.00 postgres\n08:01:57 AM 1000 
19466 0.00 0.00 324424 15140 0.00 postgres\nAverage: 1000 19466 0.00 0.00 324424 15140 0.00 postgres\n\nThrough comparison, it is found that patch is also profitable for stream mode.\nOf course, LocatorFilterCache also need to deal with invalidation, such as the\ncorresponding relation invalidate, or pg_publication changes, just like\nRelationSyncCache and RelfilenumberMapHash.\nBut ddl is a small amount after all, which is insignificant compared to a\nlarge amount of dml.\n\nAnother problem is that the LocatorFilterCache looks redundant compared\n to RelationSyncCache and RelfilenumberMapHash. like this:\n1. RelfilenumberMapHash: relfilenode -> relation oid\n2. RelationSyncCache: relation oid-> PublicationActions\n3. LocatorFilterCache: RelFileLocator-> PublicationActions\n\nThe reason is that you cannot simply access two caches from the\nrelfilenode --> PublicationActions, and you must use historical\nsnapshots to access\ntransactions and relcache in the middle, so there is no good solution\nfor this for the\ntime being, ugly but effective.\n\n\n>Therefore, IMO, the concrete way of testing this feature is by looking\n>at the server logs for the following message using\n>PostgreSQL::Test::Cluster log_contains().\nthinks, done.\n\n>Instead of removing is_publishable_relation from pgoutput_change, I\n>think it can just be turned into an assertion\n>Assert(is_publishable_relation(relation));, no?\nyes, done.\n\n>Perhaps, it must use /* FALLTHROUGH */ just like elsewhere in the\n>code, otherwise a warning is thrown.\n/* intentionally fall through */ can also avoid warnings.\n\n>Can't just the LogicalDecodingContext and\n>relation name, the change action be enough to decide if the table is\n>publishable or not? If done this way, it can avoid some more\n>processing, no?\nyes, RelFileLocator filtering is used directly in v2, and change is\nno longer required.\n\n>Please run pgindent and pgperltidy on the new source code and new\n>TAP test file respectively.\nok.\n\n[1]: https://www.postgresql.org/message-id/CAGfChW62f5NTNbLsqO-6_CrmKPqBEQtWPcPDafu8pCwZznk%3Dxw%40mail.gmail.com", "msg_date": "Wed, 6 Mar 2024 18:00:59 +0800", "msg_from": "li jie <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reduce useless changes before reassembly during logical\n replication" } ]
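To make the cache described in the last message above more concrete, here is a rough sketch of a hash table keyed by RelFileLocator that remembers whether changes for a relation should be kept, so the publication check does not need a transaction for every decoded record. The entry layout, names, and the slow path are placeholders for illustration; invalidation on relcache or pg_publication changes, which the message notes is still required, is omitted.

```
/*
 * Sketch of a RelFileLocator -> "publish?" cache along the lines of the
 * LocatorFilterCache idea above.  Names are placeholders, the slow path is
 * stubbed out, and cache invalidation is intentionally omitted.
 */
#include "postgres.h"
#include "storage/relfilelocator.h"
#include "utils/hsearch.h"

typedef struct LocatorFilterEntry
{
	RelFileLocator locator;		/* hash key */
	bool		publish;		/* keep changes for this relation? */
} LocatorFilterEntry;

static HTAB *locator_filter_cache = NULL;

static bool
change_is_published(RelFileLocator locator)
{
	LocatorFilterEntry *entry;
	bool		found;

	if (locator_filter_cache == NULL)
	{
		HASHCTL		ctl;

		ctl.keysize = sizeof(RelFileLocator);
		ctl.entrysize = sizeof(LocatorFilterEntry);
		locator_filter_cache = hash_create("locator filter cache", 1024,
										   &ctl, HASH_ELEM | HASH_BLOBS);
	}

	entry = hash_search(locator_filter_cache, &locator, HASH_ENTER, &found);
	if (!found)
	{
		/*
		 * Slow path (stub): here the relation would be resolved under a
		 * historic snapshot and the publication catalogs consulted, and the
		 * verdict cached so later changes to the same locator are cheap.
		 */
		entry->publish = true;	/* placeholder decision */
	}

	return entry->publish;
}
```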
[ { "msg_contents": "Hi All,\n\nWith the attached patch, the backup manifest will have a new key item as\n\"System-Identifier\" 64-bit integer whose value is derived from pg_control\nwhile\ngenerating it, and the manifest version bumps to 2.\n\nThis helps to identify the correct database server and/or backup for the\nsubsequent backup operations. pg_verifybackup validates the manifest system\nidentifier against the backup control file and fails if they don’t match.\nSimilarly, pg_basebackup increment backup will fail if the manifest system\nidentifier does not match with the server system identifier. The\npg_combinebackup is already a bit smarter -- checks the system identifier\nfrom\nthe pg_control of all the backups, with this patch the manifest system\nidentifier also validated.\n\nFor backward compatibility, the manifest system identifier validation will\nbe\nskipped for version 1.\n\n-- \nRegards,\nAmul Sul\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 17 Jan 2024 17:00:52 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Add system identifier to backup manifest" }, { "msg_contents": "On 2024-Jan-17, Amul Sul wrote:\n\n> This helps to identify the correct database server and/or backup for the\n> subsequent backup operations. pg_verifybackup validates the manifest system\n> identifier against the backup control file and fails if they don’t match.\n> Similarly, pg_basebackup increment backup will fail if the manifest system\n> identifier does not match with the server system identifier. The\n> pg_combinebackup is already a bit smarter -- checks the system identifier\n> from\n> the pg_control of all the backups, with this patch the manifest system\n> identifier also validated.\n\nHmm, okay, but what if I take a full backup from a primary server and\nlater I want an incremental from a standby, or the other way around?\nWill this prevent me from using such a combination?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"You're _really_ hosed if the person doing the hiring doesn't understand\nrelational systems: you end up with a whole raft of programmers, none of\nwhom has had a Date with the clue stick.\" (Andrew Sullivan)\nhttps://postgr.es/m/[email protected]\n\n\n", "msg_date": "Wed, 17 Jan 2024 12:45:05 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Wed, Jan 17, 2024 at 5:15 PM Alvaro Herrera <[email protected]>\nwrote:\n\n> On 2024-Jan-17, Amul Sul wrote:\n>\n> > This helps to identify the correct database server and/or backup for the\n> > subsequent backup operations. pg_verifybackup validates the manifest\n> system\n> > identifier against the backup control file and fails if they don’t match.\n> > Similarly, pg_basebackup increment backup will fail if the manifest\n> system\n> > identifier does not match with the server system identifier. 
The\n> > pg_combinebackup is already a bit smarter -- checks the system identifier\n> > from\n> > the pg_control of all the backups, with this patch the manifest system\n> > identifier also validated.\n>\n> Hmm, okay, but what if I take a full backup from a primary server and\n> later I want an incremental from a standby, or the other way around?\n> Will this prevent me from using such a combination?\n>\n\nYes, that worked for me where the system identifier was the same on\nmaster as well standby.\n\nRegards,\nAmul\n\nOn Wed, Jan 17, 2024 at 5:15 PM Alvaro Herrera <[email protected]> wrote:On 2024-Jan-17, Amul Sul wrote:\n\n> This helps to identify the correct database server and/or backup for the\n> subsequent backup operations.  pg_verifybackup validates the manifest system\n> identifier against the backup control file and fails if they don’t match.\n> Similarly, pg_basebackup increment backup will fail if the manifest system\n> identifier does not match with the server system identifier.  The\n> pg_combinebackup is already a bit smarter -- checks the system identifier\n> from\n> the pg_control of all the backups, with this patch the manifest system\n> identifier also validated.\n\nHmm, okay, but what if I take a full backup from a primary server and\nlater I want an incremental from a standby, or the other way around?\nWill this prevent me from using such a combination? Yes, that worked for me where the system identifier was the same onmaster as well standby.Regards,Amul", "msg_date": "Wed, 17 Jan 2024 17:33:31 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Wed, Jan 17, 2024 at 6:45 AM Alvaro Herrera <[email protected]> wrote:\n> Hmm, okay, but what if I take a full backup from a primary server and\n> later I want an incremental from a standby, or the other way around?\n> Will this prevent me from using such a combination?\n\nThe system identifier had BETTER match in such cases. If it doesn't,\nsomebody's run pg_resetwal on your standby since it was created... and\nin that case, no incremental backup for you!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 17 Jan 2024 08:46:09 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Wed, Jan 17, 2024 at 6:31 AM Amul Sul <[email protected]> wrote:\n> With the attached patch, the backup manifest will have a new key item as\n> \"System-Identifier\" 64-bit integer whose value is derived from pg_control while\n> generating it, and the manifest version bumps to 2.\n>\n> This helps to identify the correct database server and/or backup for the\n> subsequent backup operations. pg_verifybackup validates the manifest system\n> identifier against the backup control file and fails if they don’t match.\n> Similarly, pg_basebackup increment backup will fail if the manifest system\n> identifier does not match with the server system identifier. The\n> pg_combinebackup is already a bit smarter -- checks the system identifier from\n> the pg_control of all the backups, with this patch the manifest system\n> identifier also validated.\n\nThanks for working on this. Without this, I think what happens is that\nyou can potentially take an incremental backup from the \"wrong\"\nserver, if the states of the systems are such that all of the other\nsanity checks pass. 
When you run pg_combinebackup, it'll discover the\nproblem and tell you, but you ideally want to discover such errors at\nbackup time rather than at restore time. This addresses that. And,\noverall, I think it's a pretty good patch. But I nonetheless have a\nbunch of comments.\n\n- The associated value is always the integer 1.\n+ The associated value is the integer, either 1 or 2.\n\nis an integer. Beginning in <productname>PostgreSQL</productname> 17,\nit is 2; in older versions, it is 1.\n\n+ context.identity_cb = manifest_process_identity;\n\nI'm not really on board with calling the system identifier \"identity\"\nthroughout the patch. I think it should just say system_identifier. If\nwe were going to abbreviate, I'd prefer something like \"sysident\" that\nlooks like it's derived from \"system identifier\" rather than\n\"identity\" which is a different word altogether. But I don't think we\nshould abbreviate unless doing so creates *ridiculously* long\nidentifier names.\n\n+static void\n+manifest_process_identity(JsonManifestParseContext *context,\n+ int manifest_version,\n+ uint64 manifest_system_identifier)\n+{\n+ uint64 system_identifier;\n+\n+ /* Manifest system identifier available in version 2 or later */\n+ if (manifest_version == 1)\n+ return;\n\nI think you've got the wrong idea here. I think this function would\nonly get called if System-Identifier is present in the manifest, so if\nit's a v1 manifest, this would never get called, so this if-statement\nwould not ever do anything useful. I think what you should do is (1)\nif the client supplies a v1 manifest, reject it, because surely that's\nfrom an older server version that doesn't support incremental backup;\nbut do that when the version is parsed rather than here; and (2) also\ndetect and reject the case when it's supposedly a v2 manifest but this\nis absent.\n\n(1) should really be done when the version number is parsed, so I\nsuspect you may need to add manifest_version_cb.\n\n+static void\n+combinebackup_identity_cb(JsonManifestParseContext *context,\n+ int manifest_version,\n+ uint64 manifest_system_identifier)\n+{\n+ parser_context *private_context = context->private_data;\n+ uint64 system_identifier = private_context->system_identifier;\n+\n+ /* Manifest system identifier available in version 2 or later */\n+ if (manifest_version == 1)\n+ return;\n\nVery similar to the above case. Just reject a version 1 manifest as\nsoon as we see the version number. In this function, set a flag\nindicating we saw the system identifier; if at the end of parsing that\nflag is not set, kaboom.\n\n- parse_manifest_file(manifest_path, &context.ht, &first_wal_range);\n+ parse_manifest_file(manifest_path, &context.ht, &first_wal_range,\n+ context.backup_directory);\n\nDon't do this! parse_manifest_file() should just record everything\nfound in the manifest in the context object. Syntax validation should\nhappen while parsing the manifest (e.g. \"CAT/DOG\" is not a valid LSN\nand we should reject that at this stage) but semantic validation\nshould happen later (e.g. \"0/0\" can't be a the correct backup end LSN\nbut we don't figure that out while parsing, but rather later). I think\nyou should actually move validation of the system identifier to the\npoint where the directory walk encounters the control file (and update\nthe docs and tests to match that decision). Imagine if you wanted to\nvalidate a tar-format backup; then you wouldn't have random access to\nthe directory. 
You'd see the manifest file first, and then all the\nfiles in a random order, with one chance to look at each one.\n\n(This is, in fact, a feature I think we should implement.)\n\n- if (strcmp(token, \"1\") != 0)\n+ parse->manifest_version = atoi(token);\n+ if (parse->manifest_version != 1 && parse->manifest_version != 2)\n json_manifest_parse_failure(parse->context,\n \"unexpected manifest version\");\n\nPlease either (a) don't do a string-to-integer conversion and just\nstrcmp() twice or (b) use strtol so that you can check that it\nsucceeded. I don't want to accept manifest version 1a as 1.\n\n+/*\n+ * Validate manifest system identifier against the database server system\n+ * identifier.\n+ */\n\nThis comment assumes you know what the callback is going to do, but\nyou don't. This should be more like the comment for\njson_manifest_finalize_file or json_manifest_finalize_wal_range.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 17 Jan 2024 10:10:34 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "I have also done a review of the patch and some testing. The patch looks\ngood, and I agree with Robert's comments.\n\nOn Wed, Jan 17, 2024 at 8:40 PM Robert Haas <[email protected]> wrote:\n>\n> On Wed, Jan 17, 2024 at 6:31 AM Amul Sul <[email protected]> wrote:\n> > With the attached patch, the backup manifest will have a new key item as\n> > \"System-Identifier\" 64-bit integer whose value is derived from\npg_control while\n> > generating it, and the manifest version bumps to 2.\n> >\n> > This helps to identify the correct database server and/or backup for the\n> > subsequent backup operations. pg_verifybackup validates the manifest\nsystem\n> > identifier against the backup control file and fails if they don’t\nmatch.\n> > Similarly, pg_basebackup increment backup will fail if the manifest\nsystem\n> > identifier does not match with the server system identifier. The\n> > pg_combinebackup is already a bit smarter -- checks the system\nidentifier from\n> > the pg_control of all the backups, with this patch the manifest system\n> > identifier also validated.\n>\n> Thanks for working on this. Without this, I think what happens is that\n> you can potentially take an incremental backup from the \"wrong\"\n> server, if the states of the systems are such that all of the other\n> sanity checks pass. When you run pg_combinebackup, it'll discover the\n> problem and tell you, but you ideally want to discover such errors at\n> backup time rather than at restore time. This addresses that. And,\n> overall, I think it's a pretty good patch. But I nonetheless have a\n> bunch of comments.\n>\n> - The associated value is always the integer 1.\n> + The associated value is the integer, either 1 or 2.\n>\n> is an integer. Beginning in <productname>PostgreSQL</productname> 17,\n> it is 2; in older versions, it is 1.\n>\n> + context.identity_cb = manifest_process_identity;\n>\n> I'm not really on board with calling the system identifier \"identity\"\n> throughout the patch. I think it should just say system_identifier. If\n> we were going to abbreviate, I'd prefer something like \"sysident\" that\n> looks like it's derived from \"system identifier\" rather than\n> \"identity\" which is a different word altogether. 
But I don't think we\n> should abbreviate unless doing so creates *ridiculously* long\n> identifier names.\n>\n> +static void\n> +manifest_process_identity(JsonManifestParseContext *context,\n> + int manifest_version,\n> + uint64 manifest_system_identifier)\n> +{\n> + uint64 system_identifier;\n> +\n> + /* Manifest system identifier available in version 2 or later */\n> + if (manifest_version == 1)\n> + return;\n>\n> I think you've got the wrong idea here. I think this function would\n> only get called if System-Identifier is present in the manifest, so if\n> it's a v1 manifest, this would never get called, so this if-statement\n> would not ever do anything useful. I think what you should do is (1)\n> if the client supplies a v1 manifest, reject it, because surely that's\n> from an older server version that doesn't support incremental backup;\n> but do that when the version is parsed rather than here; and (2) also\n> detect and reject the case when it's supposedly a v2 manifest but this\n> is absent.\n>\n> (1) should really be done when the version number is parsed, so I\n> suspect you may need to add manifest_version_cb.\n>\n> +static void\n> +combinebackup_identity_cb(JsonManifestParseContext *context,\n> + int manifest_version,\n> + uint64 manifest_system_identifier)\n> +{\n> + parser_context *private_context = context->private_data;\n> + uint64 system_identifier = private_context->system_identifier;\n> +\n> + /* Manifest system identifier available in version 2 or later */\n> + if (manifest_version == 1)\n> + return;\n>\n> Very similar to the above case. Just reject a version 1 manifest as\n> soon as we see the version number. In this function, set a flag\n> indicating we saw the system identifier; if at the end of parsing that\n> flag is not set, kaboom.\n>\n> - parse_manifest_file(manifest_path, &context.ht, &first_wal_range);\n> + parse_manifest_file(manifest_path, &context.ht, &first_wal_range,\n> + context.backup_directory);\n>\n> Don't do this! parse_manifest_file() should just record everything\n> found in the manifest in the context object. Syntax validation should\n> happen while parsing the manifest (e.g. \"CAT/DOG\" is not a valid LSN\n> and we should reject that at this stage) but semantic validation\n> should happen later (e.g. \"0/0\" can't be a the correct backup end LSN\n> but we don't figure that out while parsing, but rather later). I think\n> you should actually move validation of the system identifier to the\n> point where the directory walk encounters the control file (and update\n> the docs and tests to match that decision). Imagine if you wanted to\n> validate a tar-format backup; then you wouldn't have random access to\n> the directory. You'd see the manifest file first, and then all the\n> files in a random order, with one chance to look at each one.\n>\n> (This is, in fact, a feature I think we should implement.)\n>\n> - if (strcmp(token, \"1\") != 0)\n> + parse->manifest_version = atoi(token);\n> + if (parse->manifest_version != 1 && parse->manifest_version != 2)\n> json_manifest_parse_failure(parse->context,\n> \"unexpected manifest version\");\n>\n> Please either (a) don't do a string-to-integer conversion and just\n> strcmp() twice or (b) use strtol so that you can check that it\n> succeeded. 
I don't want to accept manifest version 1a as 1.\n>\n\n> > +/*\n> > + * Validate manifest system identifier against the database server\n> system\n> > + * identifier.\n> > + */\n> >\n> > This comment assumes you know what the callback is going to do, but\n> > you don't. This should be more like the comment for\n> > json_manifest_finalize_file or json_manifest_finalize_wal_range.\n\n\nThis comment caught me off-guard too. After some testing and detailed\nreview I found that this is\ncalled by pg_verifybackup and pg_combinebackup both of which do not\nvalidate against any\nrunning database system.\n\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n>\n\n\n-- \nThanks & Regards,\nSravan Velagandula\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nI have also done a review of the patch and some testing. The patch looks good, and I agree with Robert's comments.On Wed, Jan 17, 2024 at 8:40 PM Robert Haas <[email protected]> wrote:>> On Wed, Jan 17, 2024 at 6:31 AM Amul Sul <[email protected]> wrote:> > With the attached patch, the backup manifest will have a new key item as> > \"System-Identifier\" 64-bit integer whose value is derived from pg_control while> > generating it, and the manifest version bumps to 2.> >> > This helps to identify the correct database server and/or backup for the> > subsequent backup operations.  pg_verifybackup validates the manifest system> > identifier against the backup control file and fails if they don’t match.> > Similarly, pg_basebackup increment backup will fail if the manifest system> > identifier does not match with the server system identifier.  The> > pg_combinebackup is already a bit smarter -- checks the system identifier from> > the pg_control of all the backups, with this patch the manifest system> > identifier also validated.>> Thanks for working on this. Without this, I think what happens is that> you can potentially take an incremental backup from the \"wrong\"> server, if the states of the systems are such that all of the other> sanity checks pass. When you run pg_combinebackup, it'll discover the> problem and tell you, but you ideally want to discover such errors at> backup time rather than at restore time. This addresses that. And,> overall, I think it's a pretty good patch. But I nonetheless have a> bunch of comments.>> -      The associated value is always the integer 1.> +      The associated value is the integer, either 1 or 2.>> is an integer. Beginning in <productname>PostgreSQL</productname> 17,> it is 2; in older versions, it is 1.>> + context.identity_cb = manifest_process_identity;>> I'm not really on board with calling the system identifier \"identity\"> throughout the patch. I think it should just say system_identifier. If> we were going to abbreviate, I'd prefer something like \"sysident\" that> looks like it's derived from \"system identifier\" rather than> \"identity\" which is a different word altogether. But I don't think we> should abbreviate unless doing so creates *ridiculously* long> identifier names.>> +static void> +manifest_process_identity(JsonManifestParseContext *context,> +   int manifest_version,> +   uint64 manifest_system_identifier)> +{> + uint64 system_identifier;> +> + /* Manifest system identifier available in version 2 or later */> + if (manifest_version == 1)> + return;>> I think you've got the wrong idea here. 
I think this function would> only get called if System-Identifier is present in the manifest, so if> it's a v1 manifest, this would never get called, so this if-statement> would not ever do anything useful. I think what you should do is (1)> if the client supplies a v1 manifest, reject it, because surely that's> from an older server version that doesn't support incremental backup;> but do that when the version is parsed rather than here; and (2) also> detect and reject the case when it's supposedly a v2 manifest but this> is absent.>> (1) should really be done when the version number is parsed, so I> suspect you may need to add manifest_version_cb.>> +static void> +combinebackup_identity_cb(JsonManifestParseContext *context,> +   int manifest_version,> +   uint64 manifest_system_identifier)> +{> + parser_context *private_context = context->private_data;> + uint64 system_identifier = private_context->system_identifier;> +> + /* Manifest system identifier available in version 2 or later */> + if (manifest_version == 1)> + return;>> Very similar to the above case. Just reject a version 1 manifest as> soon as we see the version number. In this function, set a flag> indicating we saw the system identifier; if at the end of parsing that> flag is not set, kaboom.>> - parse_manifest_file(manifest_path, &context.ht, &first_wal_range);> + parse_manifest_file(manifest_path, &context.ht, &first_wal_range,> + context.backup_directory);>> Don't do this! parse_manifest_file() should just record everything> found in the manifest in the context object. Syntax validation should> happen while parsing the manifest (e.g. \"CAT/DOG\" is not a valid LSN> and we should reject that at this stage) but semantic validation> should happen later (e.g. \"0/0\" can't be a the correct backup end LSN> but we don't figure that out while parsing, but rather later). I think> you should actually move validation of the system identifier to the> point where the directory walk encounters the control file (and update> the docs and tests to match that decision). Imagine if you wanted to> validate a tar-format backup; then you wouldn't have random access to> the directory. You'd see the manifest file first, and then all the> files in a random order, with one chance to look at each one.>> (This is, in fact, a feature I think we should implement.)>> - if (strcmp(token, \"1\") != 0)> + parse->manifest_version = atoi(token);> + if (parse->manifest_version != 1 && parse->manifest_version != 2)>   json_manifest_parse_failure(parse->context,>   \"unexpected manifest version\");>> Please either (a) don't do a string-to-integer conversion and just> strcmp() twice or (b) use strtol so that you can check that it> succeeded. I don't want to accept manifest version 1a as 1.>> +/*> + * Validate manifest system identifier against the database server system> + * identifier.> + */>> This comment assumes you know what the callback is going to do, but> you don't. This should be more like the comment for> json_manifest_finalize_file or json_manifest_finalize_wal_range.This comment caught me off-guard too. After some testing and detailed review I found that this iscalled by pg_verifybackup and pg_combinebackup both of which do not validate against anyrunning database system. 
>> --> Robert Haas> EDB: http://www.enterprisedb.com>>-- Thanks & Regards,Sravan VelagandulaEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company", "msg_date": "Thu, 18 Jan 2024 06:39:15 +0530", "msg_from": "Sravan Kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Wed, Jan 17, 2024 at 08:46:09AM -0500, Robert Haas wrote:\n> On Wed, Jan 17, 2024 at 6:45 AM Alvaro Herrera <[email protected]> wrote:\n>> Hmm, okay, but what if I take a full backup from a primary server and\n>> later I want an incremental from a standby, or the other way around?\n>> Will this prevent me from using such a combination?\n> \n> The system identifier had BETTER match in such cases. If it doesn't,\n> somebody's run pg_resetwal on your standby since it was created... and\n> in that case, no incremental backup for you!\n\nThere is an even stronger check than that at replay as we also store\nthe system identifier in XLogLongPageHeaderData and cross-check it\nwith the contents of the control file. Having a field in the backup\nmanifest makes for a much faster detection, even if that's not the\nsame as replaying things, it can avoid a lot of problems when\ncombining backup pieces. I'm +1 for Amul's patch concept.\n--\nMichael", "msg_date": "Thu, 18 Jan 2024 11:20:17 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Wed, Jan 17, 2024 at 8:40 PM Robert Haas <[email protected]> wrote:\n\n> On Wed, Jan 17, 2024 at 6:31 AM Amul Sul <[email protected]> wrote:\n> > With the attached patch, the backup manifest will have a new key item as\n> > \"System-Identifier\" 64-bit integer whose value is derived from\n> pg_control while\n> > generating it, and the manifest version bumps to 2.\n> >\n> > This helps to identify the correct database server and/or backup for the\n> > subsequent backup operations. pg_verifybackup validates the manifest\n> system\n> > identifier against the backup control file and fails if they don’t match.\n> > Similarly, pg_basebackup increment backup will fail if the manifest\n> system\n> > identifier does not match with the server system identifier. The\n> > pg_combinebackup is already a bit smarter -- checks the system\n> identifier from\n> > the pg_control of all the backups, with this patch the manifest system\n> > identifier also validated.\n>\n> Thanks for working on this. Without this, I think what happens is that\n> you can potentially take an incremental backup from the \"wrong\"\n> server, if the states of the systems are such that all of the other\n> sanity checks pass. When you run pg_combinebackup, it'll discover the\n> problem and tell you, but you ideally want to discover such errors at\n> backup time rather than at restore time. This addresses that. And,\n> overall, I think it's a pretty good patch. But I nonetheless have a\n> bunch of comments.\n>\n\nThank you for the review.\n\n\n>\n> - The associated value is always the integer 1.\n> + The associated value is the integer, either 1 or 2.\n>\n> is an integer. Beginning in <productname>PostgreSQL</productname> 17,\n> it is 2; in older versions, it is 1.\n>\n\nOk,\n\n\n> + context.identity_cb = manifest_process_identity;\n>\n> I'm not really on board with calling the system identifier \"identity\"\n> throughout the patch. I think it should just say system_identifier. 
If\n> we were going to abbreviate, I'd prefer something like \"sysident\" that\n> looks like it's derived from \"system identifier\" rather than\n> \"identity\" which is a different word altogether. But I don't think we\n> should abbreviate unless doing so creates *ridiculously* long\n> identifier names.\n>\n\nOk, used \"system identifier\" at all the places.\n\n\n> +static void\n> +manifest_process_identity(JsonManifestParseContext *context,\n> + int manifest_version,\n> + uint64 manifest_system_identifier)\n> +{\n> + uint64 system_identifier;\n> +\n> + /* Manifest system identifier available in version 2 or later */\n> + if (manifest_version == 1)\n> + return;\n>\n> I think you've got the wrong idea here. I think this function would\n> only get called if System-Identifier is present in the manifest, so if\n> it's a v1 manifest, this would never get called, so this if-statement\n> would not ever do anything useful. I think what you should do is (1)\n> if the client supplies a v1 manifest, reject it, because surely that's\n> from an older server version that doesn't support incremental backup;\n> but do that when the version is parsed rather than here; and (2) also\n> detect and reject the case when it's supposedly a v2 manifest but this\n> is absent.\n>\n> (1) should really be done when the version number is parsed, so I\n> suspect you may need to add manifest_version_cb.\n>\n> +static void\n> +combinebackup_identity_cb(JsonManifestParseContext *context,\n> + int manifest_version,\n> + uint64 manifest_system_identifier)\n> +{\n> + parser_context *private_context = context->private_data;\n> + uint64 system_identifier = private_context->system_identifier;\n> +\n> + /* Manifest system identifier available in version 2 or later */\n> + if (manifest_version == 1)\n> + return;\n>\n> Very similar to the above case. Just reject a version 1 manifest as\n> soon as we see the version number. In this function, set a flag\n> indicating we saw the system identifier; if at the end of parsing that\n> flag is not set, kaboom.\n>\n\nOk, I added a version_cb callback. Using this pg_combinebackup &\npg_basebackup\nwill report an error for manifest version 1, whereas pg_verifybackup\ndoesn't (not needed IIUC).\n\n\n>\n> - parse_manifest_file(manifest_path, &context.ht, &first_wal_range);\n> + parse_manifest_file(manifest_path, &context.ht, &first_wal_range,\n> + context.backup_directory);\n>\n> Don't do this! parse_manifest_file() should just record everything\n> found in the manifest in the context object. Syntax validation should\n> happen while parsing the manifest (e.g. \"CAT/DOG\" is not a valid LSN\n> and we should reject that at this stage) but semantic validation\n> should happen later (e.g. \"0/0\" can't be a the correct backup end LSN\n> but we don't figure that out while parsing, but rather later). I think\n> you should actually move validation of the system identifier to the\n> point where the directory walk encounters the control file (and update\n> the docs and tests to match that decision). Imagine if you wanted to\n> validate a tar-format backup; then you wouldn't have random access to\n> the directory. You'd see the manifest file first, and then all the\n> files in a random order, with one chance to look at each one.\n>\n>\nAgree. 
I have moved the system identifier validation after manifest\nparsing.\nBut, not in the directory walkthrough since in pg_combinebackup, we don't\nreally needed to open the pg_control file to get the system identifier,\nwhich we\nhave from the check_control_files().\n\n\n> (This is, in fact, a feature I think we should implement.)\n>\n> - if (strcmp(token, \"1\") != 0)\n> + parse->manifest_version = atoi(token);\n> + if (parse->manifest_version != 1 && parse->manifest_version != 2)\n> json_manifest_parse_failure(parse->context,\n> \"unexpected manifest version\");\n>\n> Please either (a) don't do a string-to-integer conversion and just\n> strcmp() twice or (b) use strtol so that you can check that it\n> succeeded. I don't want to accept manifest version 1a as 1.\n>\n\nUnderstood, corrected in the attached version.\n\n\n> +/*\n> + * Validate manifest system identifier against the database server system\n> + * identifier.\n> + */\n>\n> This comment assumes you know what the callback is going to do, but\n> you don't. This should be more like the comment for\n> json_manifest_finalize_file or json_manifest_finalize_wal_range.\n>\n\nOk.\n\nUpdated version is attached.\n\nRegards,\nAmul", "msg_date": "Fri, 19 Jan 2024 22:36:26 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Thu, Jan 18, 2024 at 6:39 AM Sravan Kumar <[email protected]>\nwrote:\n\n> I have also done a review of the patch and some testing. The patch looks\n> good, and I agree with Robert's comments.\n>\n\nThank you for your review, testing and the offline discussion.\n\nRegards,\nAmul\n\nOn Thu, Jan 18, 2024 at 6:39 AM Sravan Kumar <[email protected]> wrote:I have also done a review of the patch and some testing. The patch looks good, and I agree with Robert's comments.Thank you for your review, testing and the offline discussion.  Regards,Amul", "msg_date": "Fri, 19 Jan 2024 22:42:49 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Fri, Jan 19, 2024 at 10:36 PM Amul Sul <[email protected]> wrote:\n\n> On Wed, Jan 17, 2024 at 8:40 PM Robert Haas <[email protected]> wrote:\n>\n>>\n>>\n> Updated version is attached.\n>\n\nAnother updated version attached -- fix missing manifest version check in\npg_verifybackup before system identifier validation.\n\nRegards,\nAmul", "msg_date": "Mon, 22 Jan 2024 10:08:07 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Mon, Jan 22, 2024 at 10:08 AM Amul Sul <[email protected]> wrote:\n\n>\n>\n> On Fri, Jan 19, 2024 at 10:36 PM Amul Sul <[email protected]> wrote:\n>\n>> On Wed, Jan 17, 2024 at 8:40 PM Robert Haas <[email protected]>\n>> wrote:\n>>\n>>>\n>>>\n>> Updated version is attached.\n>>\n>\n> Another updated version attached -- fix missing manifest version check in\n> pg_verifybackup before system identifier validation.\n>\n\nThinking a bit more on this, I realized parse_manifest_file() has many out\nparameters. Instead parse_manifest_file() should simply return manifest data\nlike load_backup_manifest(). 
Attached 0001 patch doing the same, and\nremoved\nparser_context structure, and added manifest_data, and did the required\nadjustments to pg_verifybackup code.\n\nRegards,\nAmul", "msg_date": "Mon, 22 Jan 2024 12:51:25 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Mon, Jan 22, 2024 at 2:22 AM Amul Sul <[email protected]> wrote:\n> Thinking a bit more on this, I realized parse_manifest_file() has many out\n> parameters. Instead parse_manifest_file() should simply return manifest data\n> like load_backup_manifest(). Attached 0001 patch doing the same, and removed\n> parser_context structure, and added manifest_data, and did the required\n> adjustments to pg_verifybackup code.\n\n InitializeBackupManifest(&manifest, opt->manifest,\n-\nopt->manifest_checksum_type);\n+\nopt->manifest_checksum_type,\n+ GetSystemIdentifier());\n\nInitializeBackupManifest() can just call GetSystemIdentifier() itself,\ninstead of passing another parameter, I think.\n\n+ if (manifest_version == 1)\n+ context->error_cb(context,\n+ \"%s: backup manifest\ndoesn't support incremental changes\",\n+\nprivate_context->backup_directory);\n\nI think this is weird. First, there doesn't seem to be any reason to\nbounce through error_cb() here. You could just call pg_fatal(), as we\ndo elsewhere in this file. Second, there doesn't seem to be any need\nto include the backup directory in this error message. We include the\nfile name (not the directory name) in errors that pertain to the file\nitself, like if we can't open or read it. But we don't do that for\nsemantic errors about the manifest contents (cf.\ncombinebackup_per_file_cb). This file would need a lot fewer charges\nif you didn't feed the backup directory name through here. Third, the\nerror message is not well-chosen, because a user who looks at it won't\nunderstand WHY the manifest doesn't support incremental changes. I\nsuggest \"backup manifest version 1 does not support incremental\nbackup\".\n\n+ /* Incremental backups supported on manifest version 2 or later */\n+ if (manifest_version == 1)\n+ context->error_cb(context,\n+ \"incremental backups\ncannot be taken for this backup\");\n\nLet's use the same error message as in the previous case here also.\n\n+ for (i = 0; i < n_backups; i++)\n+ {\n+ if (manifests[i]->system_identifier != system_identifier)\n+ {\n+ char *controlpath;\n+\n+ controlpath = psprintf(\"%s/%s\",\nprior_backup_dirs[i], \"global/pg_control\");\n+\n+ pg_fatal(\"manifest is from different database\nsystem: manifest database system identifier is %llu, %s system\nidentifier is %llu\",\n+ (unsigned long long)\nmanifests[i]->system_identifier,\n+ controlpath,\n+ (unsigned long long)\nsystem_identifier);\n+ }\n+ }\n\ncheck_control_files() already verifies that all of the control files\ncontain the same system identifier as each other, so what we're really\nchecking here is that the backup manifest in each directory has the\nsame system identifier as the control file in that same directory. One\nproblem is that backup manifests are optional here, as per the comment\nin load_backup_manifests(), so you need to skip over NULL entries\ncleanly to avoid seg faulting if one is missing. I also think the\nerror message should be changed. 
How about \"%s: manifest system\nidentifier is %llu, but control file has %llu\"?\n\n+ context->error_cb(context,\n+ \"manifest is from\ndifferent database system: manifest database system identifier is\n%llu, pg_control database system identifier is %llu\",\n+ (unsigned long long)\nmanifest_system_identifier,\n+ (unsigned long long)\nsystem_identifier);\n\nAnd here, while I'm kibitzing, how about \"manifest system identifier\nis %llu, but this system's identifier is %llu\"?\n\n- qr/could not open directory/,\n+ qr/could not open file/,\n\nI don't think that the expected error message here should be changing.\nDoes it really, with the latest patch version? Why? Can we fix that?\n\n+ else if (!parse->saw_system_identifier_field &&\n+\nstrcmp(parse->manifest_version, \"1\") != 0)\n\nI don't think this has any business testing the manifest version.\nThat's something to sort out at some later stage.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Jan 2024 12:23:24 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Wed, Jan 24, 2024 at 10:53 PM Robert Haas <[email protected]> wrote:\n\n> On Mon, Jan 22, 2024 at 2:22 AM Amul Sul <[email protected]> wrote:\n> > Thinking a bit more on this, I realized parse_manifest_file() has many\n> out\n> > parameters. Instead parse_manifest_file() should simply return manifest\n> data\n> > like load_backup_manifest(). Attached 0001 patch doing the same, and\n> removed\n> > parser_context structure, and added manifest_data, and did the required\n> > adjustments to pg_verifybackup code.\n>\n> InitializeBackupManifest(&manifest, opt->manifest,\n> -\n> opt->manifest_checksum_type);\n> +\n> opt->manifest_checksum_type,\n> +\n> GetSystemIdentifier());\n>\n> InitializeBackupManifest() can just call GetSystemIdentifier() itself,\n> instead of passing another parameter, I think.\n>\n\nOk.\n\n\n>\n> + if (manifest_version == 1)\n> + context->error_cb(context,\n> + \"%s: backup manifest\n> doesn't support incremental changes\",\n> +\n> private_context->backup_directory);\n>\n> I think this is weird. First, there doesn't seem to be any reason to\n> bounce through error_cb() here. You could just call pg_fatal(), as we\n> do elsewhere in this file. Second, there doesn't seem to be any need\n> to include the backup directory in this error message. We include the\n> file name (not the directory name) in errors that pertain to the file\n> itself, like if we can't open or read it. But we don't do that for\n> semantic errors about the manifest contents (cf.\n> combinebackup_per_file_cb). This file would need a lot fewer charges\n> if you didn't feed the backup directory name through here. Third, the\n> error message is not well-chosen, because a user who looks at it won't\n> understand WHY the manifest doesn't support incremental changes. 
I\n> suggest \"backup manifest version 1 does not support incremental\n> backup\".\n>\n> + /* Incremental backups supported on manifest version 2 or later */\n> + if (manifest_version == 1)\n> + context->error_cb(context,\n> + \"incremental backups\n> cannot be taken for this backup\");\n>\n> Let's use the same error message as in the previous case here also.\n>\n\nOk.\n\n\n> + for (i = 0; i < n_backups; i++)\n> + {\n> + if (manifests[i]->system_identifier != system_identifier)\n> + {\n> + char *controlpath;\n> +\n> + controlpath = psprintf(\"%s/%s\",\n> prior_backup_dirs[i], \"global/pg_control\");\n> +\n> + pg_fatal(\"manifest is from different database\n> system: manifest database system identifier is %llu, %s system\n> identifier is %llu\",\n> + (unsigned long long)\n> manifests[i]->system_identifier,\n> + controlpath,\n> + (unsigned long long)\n> system_identifier);\n> + }\n> + }\n>\n> check_control_files() already verifies that all of the control files\n> contain the same system identifier as each other, so what we're really\n> checking here is that the backup manifest in each directory has the\n> same system identifier as the control file in that same directory. One\n> problem is that backup manifests are optional here, as per the comment\n> in load_backup_manifests(), so you need to skip over NULL entries\n> cleanly to avoid seg faulting if one is missing. I also think the\n> error message should be changed. How about \"%s: manifest system\n> identifier is %llu, but control file has %llu\"?\n>\n\nOk.\n\n\n> + context->error_cb(context,\n> + \"manifest is from\n> different database system: manifest database system identifier is\n> %llu, pg_control database system identifier is %llu\",\n> + (unsigned long long)\n> manifest_system_identifier,\n> + (unsigned long long)\n> system_identifier);\n>\n> And here, while I'm kibitzing, how about \"manifest system identifier\n> is %llu, but this system's identifier is %llu\"?\n>\n\nI used \"database system identifier\" instead of \"this system's identifier \"\nlike\nwe are using in WalReceiverMain() and libpqrcv_identify_system().\n\n\n> - qr/could not open directory/,\n> + qr/could not open file/,\n>\n> I don't think that the expected error message here should be changing.\n> Does it really, with the latest patch version? Why? Can we fix that?\n>\n\nBecause, we were trying to access pg_control to check the system identifier\nbefore any other backup directory/file validation.\n\n\n> + else if (!parse->saw_system_identifier_field &&\n> +\n> strcmp(parse->manifest_version, \"1\") != 0)\n>\n> I don't think this has any business testing the manifest version.\n> That's something to sort out at some later stage.\n>\n\nThat is for backward compatibility, otherwise, we would have an \"expected\nsystem identifier\" error for manifest version 1.\n\nThank you for the review-comments, updated version attached.\n\nRegards,\nAmul", "msg_date": "Thu, 25 Jan 2024 13:22:00 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Thu, Jan 25, 2024 at 2:52 AM Amul Sul <[email protected]> wrote:\n> Thank you for the review-comments, updated version attached.\n\nI generally agree with 0001. I spent a long time thinking about your\ndecision to make verifier_context contain a pointer to manifest_data\ninstead of, as it does currently, a pointer to manifest_files_hash. I\ndon't think that's a horrible idea, but it also doesn't seem to be\nused anywhere currently. 
One advantage of the current approach is that\nwe know that none of the code downstream of verify_backup_directory()\nor verify_backup_checksums() actually cares about anything other than\nthe manifest_files_hash. That's kind of nice. If we didn't change this\nas you have done here, then we would need to continue passing the WAL\nranges to parse_required_walI() and the system identifier would have\nto be passed explicitly to the code that checks the system identifier,\nbut that's not such a bad thing, either. It makes it clear which\nfunctions are using which information.\n\nBut before you go change anything there, exactly when should 0002 be\nchecking the system identifier in the control file? What happens now\nis that we first walk over the directory tree and make sure we have\nthe files (verify_backup_directory) and then go through and verify\nchecksums in a second pass (verify_backup_checksums). We do this\nbecause it lets us report problems that can be detected cheaply --\nlike missing files -- relatively quickly, and problems that are more\nexpensive to detect -- like mismatching checksums -- only after we've\nreported all the cheap-to-detect problems. At what stage should we\nverify the control file? I don't really like verifying it first, as\nyou've done, because I think the error message change in\n004_options.pl is a clear regression. When the whole directory is\nmissing, it's much more pleasant to complain about the directory being\nmissing than some file inside the directory being missing.\n\nWhat I'd be inclined to suggest is that you have verify_backup_file()\nnotice when the file it's being asked to verify is the control file,\nand have it check the system identifier at that stage. I think if you\ndo that, then the error message change in 004_options.pl goes away.\nNow, to do that, you'd need to have the whole manifest_data available\nfrom the context, not just the manifest_files_hash, so that you can\nsee the expected system identifier. And, interestingly, if you take\nthis approach, then it appears to me that 0001 is correct as-is and\ndoesn't need any changes.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 31 Jan 2024 16:36:09 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Thu, Feb 1, 2024 at 3:06 AM Robert Haas <[email protected]> wrote:\n\n> On Thu, Jan 25, 2024 at 2:52 AM Amul Sul <[email protected]> wrote:\n> > Thank you for the review-comments, updated version attached.\n>\n> I generally agree with 0001. I spent a long time thinking about your\n> decision to make verifier_context contain a pointer to manifest_data\n> instead of, as it does currently, a pointer to manifest_files_hash. I\n> don't think that's a horrible idea, but it also doesn't seem to be\n> used anywhere currently. One advantage of the current approach is that\n> we know that none of the code downstream of verify_backup_directory()\n> or verify_backup_checksums() actually cares about anything other than\n> the manifest_files_hash. That's kind of nice. If we didn't change this\n> as you have done here, then we would need to continue passing the WAL\n> ranges to parse_required_walI() and the system identifier would have\n> to be passed explicitly to the code that checks the system identifier,\n> but that's not such a bad thing, either. 
It makes it clear which\n> functions are using which information.\n>\n\nI intended to minimize the out param of parse_manifest_file(), which\ncurrently\nreturns manifest_files_hash and manifest_wal_range, and I need two more --\nmanifest versions and the system identifier.\n\nBut before you go change anything there, exactly when should 0002 be\n> checking the system identifier in the control file? What happens now\n> is that we first walk over the directory tree and make sure we have\n> the files (verify_backup_directory) and then go through and verify\n> checksums in a second pass (verify_backup_checksums). We do this\n> because it lets us report problems that can be detected cheaply --\n> like missing files -- relatively quickly, and problems that are more\n> expensive to detect -- like mismatching checksums -- only after we've\n> reported all the cheap-to-detect problems. At what stage should we\n> verify the control file? I don't really like verifying it first, as\n> you've done, because I think the error message change in\n> 004_options.pl is a clear regression. When the whole directory is\n> missing, it's much more pleasant to complain about the directory being\n> missing than some file inside the directory being missing.\n>\n> What I'd be inclined to suggest is that you have verify_backup_file()\n> notice when the file it's being asked to verify is the control file,\n> and have it check the system identifier at that stage. I think if you\n> do that, then the error message change in 004_options.pl goes away.\n> Now, to do that, you'd need to have the whole manifest_data available\n> from the context, not just the manifest_files_hash, so that you can\n> see the expected system identifier. And, interestingly, if you take\n> this approach, then it appears to me that 0001 is correct as-is and\n> doesn't need any changes.\n>\n\nYeah, we can do that, but I think it is a bit inefficient to have strcmp()\ncheck for the pg_control file on each verify_backup_file() call, despite, we\nknow that path. Also, I think, we need additional handling to ensure that\nthe\nsystem identifier has been verified in verify_backup_file(), what if the\npg_control file itself missing from the backup -- might be a rare case, but\npossible.\n\nFor now, we can do the system identifier validation after\nverify_backup_directory().\n\nRegards,\nAmul\n\nOn Thu, Feb 1, 2024 at 3:06 AM Robert Haas <[email protected]> wrote:On Thu, Jan 25, 2024 at 2:52 AM Amul Sul <[email protected]> wrote:\n> Thank you for the review-comments, updated version attached.\n\nI generally agree with 0001. I spent a long time thinking about your\ndecision to make verifier_context contain a pointer to manifest_data\ninstead of, as it does currently, a pointer to manifest_files_hash. I\ndon't think that's a horrible idea, but it also doesn't seem to be\nused anywhere currently. One advantage of the current approach is that\nwe know that none of the code downstream of verify_backup_directory()\nor verify_backup_checksums() actually cares about anything other than\nthe manifest_files_hash. That's kind of nice. If we didn't change this\nas you have done here, then we would need to continue passing the WAL\nranges to parse_required_walI() and the system identifier would have\nto be passed explicitly to the code that checks the system identifier,\nbut that's not such a bad thing, either. It makes it clear which\nfunctions are using which information. 
I intended to minimize the out param of parse_manifest_file(), which currentlyreturns manifest_files_hash and manifest_wal_range, and I need two more --manifest versions and the system identifier.\nBut before you go change anything there, exactly when should 0002 be\nchecking the system identifier in the control file? What happens now\nis that we first walk over the directory tree and make sure we have\nthe files (verify_backup_directory) and then go through and verify\nchecksums in a second pass (verify_backup_checksums). We do this\nbecause it lets us report problems that can be detected cheaply --\nlike missing files -- relatively quickly, and problems that are more\nexpensive to detect -- like mismatching checksums -- only after we've\nreported all the cheap-to-detect problems. At what stage should we\nverify the control file? I don't really like verifying it first, as\nyou've done, because I think the error message change in\n004_options.pl is a clear regression. When the whole directory is\nmissing, it's much more pleasant to complain about the directory being\nmissing than some file inside the directory being missing.\n\nWhat I'd be inclined to suggest is that you have verify_backup_file()\nnotice when the file it's being asked to verify is the control file,\nand have it check the system identifier at that stage. I think if you\ndo that, then the error message change in 004_options.pl goes away.\nNow, to do that, you'd need to have the whole manifest_data available\nfrom the context, not just the manifest_files_hash, so that you can\nsee the expected system identifier. And, interestingly, if you take\nthis approach, then it appears to me that 0001 is correct as-is and\ndoesn't need any changes. Yeah, we can do that, but I think it is a bit inefficient to have strcmp()check for the pg_control file on each verify_backup_file() call, despite, weknow that path.  Also, I think, we need additional handling to ensure that thesystem identifier has been verified in verify_backup_file(), what if thepg_control file itself missing from the backup -- might be a rare case, butpossible.For now, we can do the system identifier validation afterverify_backup_directory().Regards,Amul", "msg_date": "Thu, 1 Feb 2024 12:47:27 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Thu, Feb 1, 2024 at 2:18 AM Amul Sul <[email protected]> wrote:\n> I intended to minimize the out param of parse_manifest_file(), which currently\n> returns manifest_files_hash and manifest_wal_range, and I need two more --\n> manifest versions and the system identifier.\n\nSure, but you could do context.ht = manifest_data->files instead of\ncontext.manifest = manifest_data. The question isn't whether you\nshould return the whole manifest_data from parse_manifest_file -- I\nagree with that decision -- but rather whether you should feed the\nwhole thing through into the context, or just the file hash.\n\n> Yeah, we can do that, but I think it is a bit inefficient to have strcmp()\n> check for the pg_control file on each verify_backup_file() call, despite, we\n> know that path. 
Also, I think, we need additional handling to ensure that the\n> system identifier has been verified in verify_backup_file(), what if the\n> pg_control file itself missing from the backup -- might be a rare case, but\n> possible.\n>\n> For now, we can do the system identifier validation after\n> verify_backup_directory().\n\nYes, that's another option, but I don't think it's as good.\n\nSuppose you do it that way. Then what will happen when the file is\naltogether missing or inaccessible? I think verify_backup_file() will\ncomplain, and then you'll have to do something ugly to avoid having\nverify_system_identifier() emit the same complaint all over again.\nRemember, unless you find some complicated way of passing data around,\nit won't know whether verify_backup_file() emitted a warning or not --\nit can redo the stat() and see what happens, but it's not absolutely\nguaranteed to be the same as what happened before. Making sure that\nyou always emit any given complaint once rather than twice or zero\ntimes is going to be tricky.\n\nIt seems more natural to me to just accept the (small) cost of a\nstrcmp() in verify_backup_file(). If the initial stat() fails, it\nemits whatever complaint is appropriate and returns and the logic to\ncheck the system identifier is never reached. If it succeeds, you can\nproceed to try to open the file and do what you need to do.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 1 Feb 2024 13:32:57 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Fri, Feb 2, 2024 at 12:03 AM Robert Haas <[email protected]> wrote:\n\n> On Thu, Feb 1, 2024 at 2:18 AM Amul Sul <[email protected]> wrote:\n> > I intended to minimize the out param of parse_manifest_file(), which\n> currently\n> > returns manifest_files_hash and manifest_wal_range, and I need two more\n> --\n> > manifest versions and the system identifier.\n>\n> Sure, but you could do context.ht = manifest_data->files instead of\n> context.manifest = manifest_data. The question isn't whether you\n> should return the whole manifest_data from parse_manifest_file -- I\n> agree with that decision -- but rather whether you should feed the\n> whole thing through into the context, or just the file hash.\n>\n> > Yeah, we can do that, but I think it is a bit inefficient to have\n> strcmp()\n> > check for the pg_control file on each verify_backup_file() call,\n> despite, we\n> > know that path. Also, I think, we need additional handling to ensure\n> that the\n> > system identifier has been verified in verify_backup_file(), what if the\n> > pg_control file itself missing from the backup -- might be a rare case,\n> but\n> > possible.\n> >\n> > For now, we can do the system identifier validation after\n> > verify_backup_directory().\n>\n> Yes, that's another option, but I don't think it's as good.\n>\n> Suppose you do it that way. Then what will happen when the file is\n> altogether missing or inaccessible? I think verify_backup_file() will\n> complain, and then you'll have to do something ugly to avoid having\n> verify_system_identifier() emit the same complaint all over again.\n> Remember, unless you find some complicated way of passing data around,\n> it won't know whether verify_backup_file() emitted a warning or not --\n> it can redo the stat() and see what happens, but it's not absolutely\n> guaranteed to be the same as what happened before. 
Making sure that\n> you always emit any given complaint once rather than twice or zero\n> times is going to be tricky.\n>\n> It seems more natural to me to just accept the (small) cost of a\n> strcmp() in verify_backup_file(). If the initial stat() fails, it\n> emits whatever complaint is appropriate and returns and the logic to\n> check the system identifier is never reached. If it succeeds, you can\n> proceed to try to open the file and do what you need to do.\n>\n\nOk, I did that way in the attached version, I have passed the control file's\nfull path as a second argument to verify_system_identifier() what we gets in\nverify_backup_file(), but that is not doing any useful things with it,\nsince we\nwere using get_controlfile() to open the control file, which takes the\ndirectory as an input and computes the full path on its own.\n\nRegards,\nAmul", "msg_date": "Wed, 14 Feb 2024 12:29:07 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Wed, Feb 14, 2024 at 12:29:07PM +0530, Amul Sul wrote:\n> Ok, I did that way in the attached version, I have passed the control file's\n> full path as a second argument to verify_system_identifier() what we gets in\n> verify_backup_file(), but that is not doing any useful things with it,\n> since we\n> were using get_controlfile() to open the control file, which takes the\n> directory as an input and computes the full path on its own.\n\nI've read through the patch, and that's pretty cool.\n\n-static void\n-parse_manifest_file(char *manifest_path, manifest_files_hash **ht_p,\n-\t\t\t\t\tmanifest_wal_range **first_wal_range_p)\n+static manifest_data *\n+parse_manifest_file(char *manifest_path)\n\nIn 0001, should the comment describing this routine be updated as\nwell?\n\n+ identifier with pg_control of the backup directory or fails verification \n\nThis is missing a <filename> markup here.\n\n+ <productname>PostgreSQL</productname> 17, it is 2; in older versions,\n+ it is 1. \n\nPerhaps a couple of <literal>s here.\n\n+\tif (strcmp(parse->manifest_version, \"1\") != 0 &&\n+\t\tstrcmp(parse->manifest_version, \"2\") != 0)\n+\t\tjson_manifest_parse_failure(parse->context,\n+\t\t\t\t\t\t\t\t\t\"unexpected manifest version\");\n+\n+\t/* Parse version. */\n+\tversion = strtoi64(parse->manifest_version, &ep, 10);\n+\tif (*ep)\n+\t\tjson_manifest_parse_failure(parse->context,\n+\t\t\t\t\t\t\t\t\t\"manifest version not an integer\");\n+\n+\t/* Invoke the callback for version */\n+\tcontext->version_cb(context, version);\n\nShouldn't these two checks be reversed? And is there actually a need\nfor the first check at all knowing that the version callback should be\nin charge of performing the validation vased on the version received?\n\n+my $node2;\n+{\n+\tlocal $ENV{'INITDB_TEMPLATE'} = undef;\n\nNot sure that it is a good idea to duplicate this pattern twice.\nShouldn't this be something we'd want to control with an option in the\ninit() method instead?\n\n+static void\n+verify_system_identifier(verifier_context *context, char *controlpath) \n\nRelying both on controlpath, being a full path to the control file\nincluding the data directory, and context->backup_directory to read\nthe contents of the control file looks a bit weird. 
Wouldn't it be\ncleaner to just use one of them?\n--\nMichael", "msg_date": "Thu, 15 Feb 2024 10:48:15 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Thu, Feb 15, 2024 at 7:18 AM Michael Paquier <[email protected]> wrote:\n\n> On Wed, Feb 14, 2024 at 12:29:07PM +0530, Amul Sul wrote:\n> > Ok, I did that way in the attached version, I have passed the control\n> file's\n> > full path as a second argument to verify_system_identifier() what we\n> gets in\n> > verify_backup_file(), but that is not doing any useful things with it,\n> > since we\n> > were using get_controlfile() to open the control file, which takes the\n> > directory as an input and computes the full path on its own.\n>\n> I've read through the patch, and that's pretty cool.\n>\n\nThank you for looking into this.\n\n\n> -static void\n> -parse_manifest_file(char *manifest_path, manifest_files_hash **ht_p,\n> - manifest_wal_range\n> **first_wal_range_p)\n> +static manifest_data *\n> +parse_manifest_file(char *manifest_path)\n>\n> In 0001, should the comment describing this routine be updated as\n> well?\n>\n\nOk, updated in the attached version.\n\n\n>\n> + identifier with pg_control of the backup directory or fails\n> verification\n>\n> This is missing a <filename> markup here.\n>\n\nDone, in the attached version.\n\n\n>\n> + <productname>PostgreSQL</productname> 17, it is 2; in older\n> versions,\n> + it is 1.\n>\n> Perhaps a couple of <literal>s here.\n>\nDone.\n\n\n> + if (strcmp(parse->manifest_version, \"1\") != 0 &&\n> + strcmp(parse->manifest_version, \"2\") != 0)\n> + json_manifest_parse_failure(parse->context,\n> +\n> \"unexpected manifest version\");\n> +\n> + /* Parse version. */\n> + version = strtoi64(parse->manifest_version, &ep, 10);\n> + if (*ep)\n> + json_manifest_parse_failure(parse->context,\n> +\n> \"manifest version not an integer\");\n> +\n> + /* Invoke the callback for version */\n> + context->version_cb(context, version);\n>\n> Shouldn't these two checks be reversed? And is there actually a need\n> for the first check at all knowing that the version callback should be\n> in charge of performing the validation vased on the version received?\n>\n\nMake sense, reversed the order.\n\nI think, particular allowed versions should be placed at the central place,\nand\nthe callback can check and react on the versions suitable to them, IMHO.\n\n\n> +my $node2;\n> +{\n> + local $ENV{'INITDB_TEMPLATE'} = undef;\n>\n> Not sure that it is a good idea to duplicate this pattern twice.\n> Shouldn't this be something we'd want to control with an option in the\n> init() method instead?\n>\n\nYes, I did that in a separate patch, see 0001 patch.\n\n\n> +static void\n> +verify_system_identifier(verifier_context *context, char *controlpath)\n>\n> Relying both on controlpath, being a full path to the control file\n> including the data directory, and context->backup_directory to read\n> the contents of the control file looks a bit weird. 
Wouldn't it be\n> cleaner to just use one of them?\n>\n\nWell, yes, I had to have the same feeling, how about having another function\nthat can accept a full path of pg_control?\n\nI tried in the 0002 patch, where the original function is renamed to\nget_dir_controlfile(), which accepts the data directory path as before, and\nget_controlfile() now accepts the full path of the pg_control file.\n\nKindly have a look at the attached version.\n\nRegards,\nAmul", "msg_date": "Thu, 15 Feb 2024 15:05:06 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Thu, Feb 15, 2024 at 3:05 PM Amul Sul <[email protected]> wrote:\n> Kindly have a look at the attached version.\n\nIMHO, 0001 looks fine, except probably the comment could be phrased a\nbit more nicely. That can be left for whoever commits this to\nwordsmith. Michael, what are your plans?\n\n0002 seems like a reasonable approach, but there's a hunk in the wrong\npatch: 0004 modifies pg_combinebackup's check_control_files to use\nget_dir_controlfile() rather than git_controlfile(), but it looks to\nme like that should be part of 0002.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 15 Feb 2024 17:41:46 +0530", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Thu, Feb 15, 2024 at 05:41:46PM +0530, Robert Haas wrote:\n> On Thu, Feb 15, 2024 at 3:05 PM Amul Sul <[email protected]> wrote:\n> > Kindly have a look at the attached version.\n> \n> IMHO, 0001 looks fine, except probably the comment could be phrased a\n> bit more nicely.\n\nAnd the new option should be documented at the top of the init()\nroutine for perldoc.\n\n> That can be left for whoever commits this to\n> wordsmith. Michael, what are your plans?\n\nNot much, so feel free to not wait for me. I've just read through the\npatch because I like the idea/feature :)\n\n> 0002 seems like a reasonable approach, but there's a hunk in the wrong\n> patch: 0004 modifies pg_combinebackup's check_control_files to use\n> get_dir_controlfile() rather than git_controlfile(), but it looks to\n> me like that should be part of 0002.\n\nI'm slightly concerned about 0002 that silently changes the meaning of\nget_controlfile(). That would impact extension code without people\nknowing about it when compiling, just when they run their stuff under\n17~.\n--\nMichael", "msg_date": "Mon, 19 Feb 2024 07:52:05 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Mon, Feb 19, 2024 at 4:22 AM Michael Paquier <[email protected]> wrote:\n\n> On Thu, Feb 15, 2024 at 05:41:46PM +0530, Robert Haas wrote:\n> > On Thu, Feb 15, 2024 at 3:05 PM Amul Sul <[email protected]> wrote:\n> > > Kindly have a look at the attached version.\n> >\n> > IMHO, 0001 looks fine, except probably the comment could be phrased a\n> > bit more nicely.\n>\n> And the new option should be documented at the top of the init()\n> routine for perldoc.\n>\n\nAdded in the attached version.\n\n\n> > That can be left for whoever commits this to\n> > wordsmith. Michael, what are your plans?\n>\n> Not much, so feel free to not wait for me. 
I've just read through the\n> patch because I like the idea/feature :)\n>\n\nThank you, that helped a lot.\n\n> 0002 seems like a reasonable approach, but there's a hunk in the wrong\n> > patch: 0004 modifies pg_combinebackup's check_control_files to use\n> > get_dir_controlfile() rather than git_controlfile(), but it looks to\n> > me like that should be part of 0002.\n>\n\nFixed in the attached version.\n\n\n> I'm slightly concerned about 0002 that silently changes the meaning of\n> get_controlfile(). That would impact extension code without people\n> knowing about it when compiling, just when they run their stuff under\n> 17~.\n>\n\nAgreed, now they will have an error as _could not read file \"<DataDir>\": Is\na\ndirectory_. But, IIUC, that what usually happens with the dev version, and\nthe\nextension needs to be updated for compatibility with the newer version for\nthe\nsame reason.\n\nRegards,\nAmul", "msg_date": "Mon, 19 Feb 2024 12:06:19 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Mon, Feb 19, 2024 at 12:06:19PM +0530, Amul Sul wrote:\n> On Mon, Feb 19, 2024 at 4:22 AM Michael Paquier <[email protected]> wrote:\n>> And the new option should be documented at the top of the init()\n>> routine for perldoc.\n> \n> Added in the attached version.\n\nI've done some wordsmithing on 0001 and it is OK, so I've applied it\nto move the needle. Hope that helps.\n--\nMichael", "msg_date": "Wed, 21 Feb 2024 13:31:12 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Mon, Feb 19, 2024 at 12:06:19PM +0530, Amul Sul wrote:\n> Agreed, now they will have an error as _could not read file \"<DataDir>\": Is\n> a directory_. But, IIUC, that what usually happens with the dev version, and\n> the extension needs to be updated for compatibility with the newer version for\n> the same reason.\n\nI was reading through the remaining pieces of the patch set, and are\nyou sure that there is a need for 0002 at all? The only reason why\nget_dir_controlfile() is introduced is to be able to get the contents\nof a control file with a full path to it, and not a data folder. Now,\nif I look closely, with 0002~0004 applied, the only two callers of\nget_controlfile() are pg_combinebackup.c and pg_verifybackup.c. Both\nof them have an access to the backup directories, which point to the\nroot of the data folder. pg_combinebackup can continue to use\nbackup_dirs[i]. pg_verifybackup has an access to the backup directory\nin the context data, if I'm reading this right, so you could just use\nthat in verify_system_identifier().\n--\nMichael", "msg_date": "Fri, 1 Mar 2024 14:58:37 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Wed, Feb 21, 2024 at 10:01 AM Michael Paquier <[email protected]>\nwrote:\n\n> On Mon, Feb 19, 2024 at 12:06:19PM +0530, Amul Sul wrote:\n> > On Mon, Feb 19, 2024 at 4:22 AM Michael Paquier <[email protected]>\n> wrote:\n> >> And the new option should be documented at the top of the init()\n> >> routine for perldoc.\n> >\n> > Added in the attached version.\n>\n> I've done some wordsmithing on 0001 and it is OK, so I've applied it\n> to move the needle. 
Hope that helps.\n>\n\nThank you very much.\n\nRegards,\nAmul\n\nOn Wed, Feb 21, 2024 at 10:01 AM Michael Paquier <[email protected]> wrote:On Mon, Feb 19, 2024 at 12:06:19PM +0530, Amul Sul wrote:\n> On Mon, Feb 19, 2024 at 4:22 AM Michael Paquier <[email protected]> wrote:\n>> And the new option should be documented at the top of the init()\n>> routine for perldoc.\n> \n> Added in the attached version.\n\nI've done some wordsmithing on 0001 and it is OK, so I've applied it\nto move the needle.  Hope that helps.Thank you very much.Regards,Amul", "msg_date": "Mon, 4 Mar 2024 11:02:50 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Fri, Mar 1, 2024 at 11:28 AM Michael Paquier <[email protected]> wrote:\n\n> On Mon, Feb 19, 2024 at 12:06:19PM +0530, Amul Sul wrote:\n> > Agreed, now they will have an error as _could not read file \"<DataDir>\":\n> Is\n> > a directory_. But, IIUC, that what usually happens with the dev version,\n> and\n> > the extension needs to be updated for compatibility with the newer\n> version for\n> > the same reason.\n>\n> I was reading through the remaining pieces of the patch set, and are\n> you sure that there is a need for 0002 at all? The only reason why\n> get_dir_controlfile() is introduced is to be able to get the contents\n> of a control file with a full path to it, and not a data folder. Now,\n> if I look closely, with 0002~0004 applied, the only two callers of\n> get_controlfile() are pg_combinebackup.c and pg_verifybackup.c. Both\n> of them have an access to the backup directories, which point to the\n> root of the data folder. pg_combinebackup can continue to use\n> backup_dirs[i]. pg_verifybackup has an access to the backup directory\n> in the context data, if I'm reading this right, so you could just use\n> that in verify_system_identifier().\n>\n\nYes, you are correct. Both the current caller of get_controlfile() has\naccess to the root directory.\n\nI have dropped the 0002 patch -- I don't have a very strong opinion to\nrefactor\nget_controlfile() apart from saying that it might be good to have both\nversions :) .\n\nRegards,\nAmul", "msg_date": "Mon, 4 Mar 2024 11:04:56 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Mon, Mar 4, 2024 at 12:35 AM Amul Sul <[email protected]> wrote:\n> Yes, you are correct. Both the current caller of get_controlfile() has\n> access to the root directory.\n>\n> I have dropped the 0002 patch -- I don't have a very strong opinion to refactor\n> get_controlfile() apart from saying that it might be good to have both versions :) .\n\nI don't have an enormously strong opinion on what the right thing to\ndo is here either, but I am not convinced that the change proposed by\nMichael is an improvement. After all, that leaves us with the\nsituation where we know the path to the control file in three\ndifferent places. First, verify_backup_file() does a strcmp() against\nthe string \"global/pg_control\" to decide whether to call\nverify_backup_file(). Then, verify_system_identifier() uses that\nstring to construct a pathname to the file that it will be read. Then,\nget_controlfile() reconstructs the same pathname using it's own logic.\nThat's all pretty disagreeable. 
Hard-coded constants are hard to avoid\ncompletely, but here it looks an awful lot like we're trying to\nhardcode the same constant into as many different places as we can.\nThe now-dropped patch seems like an effort to avoid this, and while\nit's possible that it wasn't the best way to avoid this, I still think\navoiding it somehow is probably the right idea.\n\nI get a compiler warning with 0002, too:\n\n../pgsql/src/backend/backup/basebackup_incremental.c:960:22: warning:\ncall to undeclared function 'GetSystemIdentifier'; ISO C99 and later\ndo not support implicit function declarations\n[-Wimplicit-function-declaration]\n system_identifier = GetSystemIdentifier();\n ^\n1 warning generated.\n\nBut I've committed 0001.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 4 Mar 2024 14:47:09 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Mon, Mar 4, 2024 at 2:47 PM Robert Haas <[email protected]> wrote:\n> I don't have an enormously strong opinion on what the right thing to\n> do is here either, but I am not convinced that the change proposed by\n> Michael is an improvement. After all, that leaves us with the\n> situation where we know the path to the control file in three\n> different places. First, verify_backup_file() does a strcmp() against\n> the string \"global/pg_control\" to decide whether to call\n> verify_backup_file(). Then, verify_system_identifier() uses that\n> string to construct a pathname to the file that it will be read. Then,\n> get_controlfile() reconstructs the same pathname using it's own logic.\n> That's all pretty disagreeable. Hard-coded constants are hard to avoid\n> completely, but here it looks an awful lot like we're trying to\n> hardcode the same constant into as many different places as we can.\n> The now-dropped patch seems like an effort to avoid this, and while\n> it's possible that it wasn't the best way to avoid this, I still think\n> avoiding it somehow is probably the right idea.\n\nSo with that in mind, here's my proposal. This is an adjustment of\nAmit's previous refactoring patch. He renamed the existing\nget_controlfile() to get_dir_controlfile() and made a new\nget_controlfile() that accepted the path to the control file itself. I\nchose to instead leave the existing get_controlfile() alone and add a\nnew get_controlfile_by_exact_path(). I think this is better, because\nmost of the existing callers find it more convenient to pass the path\nto the data directory rather than the path to the controlfile, so the\npatch is smaller this way, and less prone to cause headaches for\npeople back-patching or maintaining out-of-core code. But it still\ngives us a way to avoid repeatedly constructing the same pathname.\n\nIf nobody objects, I plan to commit this version.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 6 Mar 2024 11:05:36 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Wed, Mar 06, 2024 at 11:05:36AM -0500, Robert Haas wrote:\n> So with that in mind, here's my proposal. This is an adjustment of\n> Amit's previous refactoring patch. He renamed the existing\n> get_controlfile() to get_dir_controlfile() and made a new\n> get_controlfile() that accepted the path to the control file itself. 
I\n> chose to instead leave the existing get_controlfile() alone and add a\n> new get_controlfile_by_exact_path(). I think this is better, because\n> most of the existing callers find it more convenient to pass the path\n> to the data directory rather than the path to the controlfile, so the\n> patch is smaller this way, and less prone to cause headaches for\n> people back-patching or maintaining out-of-core code. But it still\n> gives us a way to avoid repeatedly constructing the same pathname.\n\nYes, that was my primary concern with the previous versions of the\npatch.\n\n> If nobody objects, I plan to commit this version.\n\nYou are not changing silently the internals of get_controlfile(), so\nno objections here. The name of the new routine could be shorter, but\nbeing short of ideas what you are proposing looks fine by me.\n--\nMichael", "msg_date": "Thu, 7 Mar 2024 13:07:28 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Thu, Mar 7, 2024 at 9:37 AM Michael Paquier <[email protected]> wrote:\n\n> On Wed, Mar 06, 2024 at 11:05:36AM -0500, Robert Haas wrote:\n> > So with that in mind, here's my proposal. This is an adjustment of\n> > Amit's previous refactoring patch. He renamed the existing\n> > get_controlfile() to get_dir_controlfile() and made a new\n> > get_controlfile() that accepted the path to the control file itself. I\n> > chose to instead leave the existing get_controlfile() alone and add a\n> > new get_controlfile_by_exact_path(). I think this is better, because\n> > most of the existing callers find it more convenient to pass the path\n> > to the data directory rather than the path to the controlfile, so the\n> > patch is smaller this way, and less prone to cause headaches for\n> > people back-patching or maintaining out-of-core code. But it still\n> > gives us a way to avoid repeatedly constructing the same pathname.\n>\n> Yes, that was my primary concern with the previous versions of the\n> patch.\n>\n> > If nobody objects, I plan to commit this version.\n>\n> You are not changing silently the internals of get_controlfile(), so\n> no objections here. The name of the new routine could be shorter, but\n> being short of ideas what you are proposing looks fine by me.\n>\n\nCould be get_controlfile_by_path() ?\n\nRegards,\nAmul\n\nOn Thu, Mar 7, 2024 at 9:37 AM Michael Paquier <[email protected]> wrote:On Wed, Mar 06, 2024 at 11:05:36AM -0500, Robert Haas wrote:\n> So with that in mind, here's my proposal. This is an adjustment of\n> Amit's previous refactoring patch. He renamed the existing\n> get_controlfile() to get_dir_controlfile() and made a new\n> get_controlfile() that accepted the path to the control file itself. I\n> chose to instead leave the existing get_controlfile() alone and add a\n> new get_controlfile_by_exact_path(). I think this is better, because\n> most of the existing callers find it more convenient to pass the path\n> to the data directory rather than the path to the controlfile, so the\n> patch is smaller this way, and less prone to cause headaches for\n> people back-patching or maintaining out-of-core code. But it still\n> gives us a way to avoid repeatedly constructing the same pathname.\n\nYes, that was my primary concern with the previous versions of the\npatch.\n\n> If nobody objects, I plan to commit this version.\n\nYou are not changing silently the internals of get_controlfile(), so\nno objections here.  
The name of the new routine could be shorter, but\nbeing short of ideas what you are proposing looks fine by me.Could be get_controlfile_by_path() ?Regards,Amul", "msg_date": "Thu, 7 Mar 2024 09:51:52 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Wed, Mar 6, 2024 at 11:22 PM Amul Sul <[email protected]> wrote:\n>> You are not changing silently the internals of get_controlfile(), so\n>> no objections here. The name of the new routine could be shorter, but\n>> being short of ideas what you are proposing looks fine by me.\n>\n> Could be get_controlfile_by_path() ?\n\nIt could. I just thought this was clearer. I agree that it's a bit\nlong, but I don't think this is worth bikeshedding very much. If at a\nlater time somebody feels strongly that it needs to be changed, so be\nit. Right now, getting on with the business at hand is more important,\nIMHO.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Mar 2024 09:16:47 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Thu, Mar 7, 2024 at 9:16 AM Robert Haas <[email protected]> wrote:\n> It could. I just thought this was clearer. I agree that it's a bit\n> long, but I don't think this is worth bikeshedding very much. If at a\n> later time somebody feels strongly that it needs to be changed, so be\n> it. Right now, getting on with the business at hand is more important,\n> IMHO.\n\nHere's a new version of the patch set, rebased over my version of 0001\nand with various other corrections:\n\n* Tidy up grammar in documentation.\n* In manifest_process_version, the test checked whether the manifest\nversion == 1, but the comment talked about versions >= 2. Make the\ncomment match the code.\n* In load_backup_manifest, avoid changing the existing placement of a\nvariable declaration.\n* Rename verify_system_identifier to verify_control_file because if we\nwere verifying multiple things about the control file we'd still want\nto only read it one.\n* Tweak the coding of verify_backup_file and verify_control_file to\navoid repeated path construction.\n* Remove saw_system_identifier_field. This looks like it's trying to\nenforce a rule that the system identifier must immediately follow the\nversion, but we don't insist on anything like that for files or wal\nranges, so there seems to be no reason to do it here.\n* Remove bogus \"unrecognized top-level field\" test from\n005_bad_manifest.pl. The JSON included here doesn't include any\nunrecognized top-level field, so the fact that we were getting that\nerror message was wrong. After removing saw_system_identifier_field,\nwe no longer get the wrong error message any more, so the test started\nfailing.\n* Remove \"expected system identifier\" test from 005_bad_manifest.pl.\nThis was basically a test that saw_system_identifier_field was\nworking.\n* Header comment adjustment for\njson_manifest_finalize_system_identifier. The last sentence was\ncut-and-pasted from somewhere that it made sense to here, where it\ndoesn't. 
There's only ever one system identifier.\n\nI plan to commit this, barring objections.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 7 Mar 2024 14:51:54 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Fri, Mar 8, 2024 at 1:22 AM Robert Haas <[email protected]> wrote:\n\n> On Thu, Mar 7, 2024 at 9:16 AM Robert Haas <[email protected]> wrote:\n> > It could. I just thought this was clearer. I agree that it's a bit\n> > long, but I don't think this is worth bikeshedding very much. If at a\n> > later time somebody feels strongly that it needs to be changed, so be\n> > it. Right now, getting on with the business at hand is more important,\n> > IMHO.\n>\n> Here's a new version of the patch set, rebased over my version of 0001\n> and with various other corrections:\n>\n> * Tidy up grammar in documentation.\n> * In manifest_process_version, the test checked whether the manifest\n> version == 1, but the comment talked about versions >= 2. Make the\n> comment match the code.\n> * In load_backup_manifest, avoid changing the existing placement of a\n> variable declaration.\n> * Rename verify_system_identifier to verify_control_file because if we\n> were verifying multiple things about the control file we'd still want\n> to only read it one.\n> * Tweak the coding of verify_backup_file and verify_control_file to\n> avoid repeated path construction.\n> * Remove saw_system_identifier_field. This looks like it's trying to\n> enforce a rule that the system identifier must immediately follow the\n> version, but we don't insist on anything like that for files or wal\n> ranges, so there seems to be no reason to do it here.\n> * Remove bogus \"unrecognized top-level field\" test from\n> 005_bad_manifest.pl. The JSON included here doesn't include any\n> unrecognized top-level field, so the fact that we were getting that\n> error message was wrong. After removing saw_system_identifier_field,\n> we no longer get the wrong error message any more, so the test started\n> failing.\n> * Remove \"expected system identifier\" test from 005_bad_manifest.pl.\n> This was basically a test that saw_system_identifier_field was\n> working.\n> * Header comment adjustment for\n> json_manifest_finalize_system_identifier. The last sentence was\n> cut-and-pasted from somewhere that it made sense to here, where it\n> doesn't. There's only ever one system identifier.\n>\n>\nThank you for the improvement.\n\nThe caller of verify_control_file() has the full path of the control file\nthat\ncan pass it and avoid recomputing. With this change, it doesn't really need\nverifier_context argument -- only the manifest's system identifier is enough\nalong with the control file path. Did the same in the attached delta patch\nfor v11-0002 patch, please have a look, thanks.\n\nRegards,\nAmul", "msg_date": "Fri, 8 Mar 2024 10:43:57 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Fri, Mar 8, 2024 at 12:14 AM Amul Sul <[email protected]> wrote:\n> Thank you for the improvement.\n>\n> The caller of verify_control_file() has the full path of the control file that\n> can pass it and avoid recomputing. With this change, it doesn't really need\n> verifier_context argument -- only the manifest's system identifier is enough\n> along with the control file path. 
Did the same in the attached delta patch\n> for v11-0002 patch, please have a look, thanks.\n\nThose seem like sensible changes. I incorporated them and committed. I also:\n\n* ran pgindent, which changed a bit of your formatting\n* changed some BAIL_OUT calls to die; I think we should hardly ever be\nusing BAIL_OUT, as that terminates the entire TAP test run, not just\nthe current file\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Mar 2024 15:18:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Thu, Mar 14, 2024 at 12:48 AM Robert Haas <[email protected]> wrote:\n\n> On Fri, Mar 8, 2024 at 12:14 AM Amul Sul <[email protected]> wrote:\n> > Thank you for the improvement.\n> >\n> > The caller of verify_control_file() has the full path of the control\n> file that\n> > can pass it and avoid recomputing. With this change, it doesn't really\n> need\n> > verifier_context argument -- only the manifest's system identifier is\n> enough\n> > along with the control file path. Did the same in the attached delta\n> patch\n> > for v11-0002 patch, please have a look, thanks.\n>\n> Those seem like sensible changes. I incorporated them and committed. I\n> also:\n>\n> * ran pgindent, which changed a bit of your formatting\n> * changed some BAIL_OUT calls to die; I think we should hardly ever be\n> using BAIL_OUT, as that terminates the entire TAP test run, not just\n> the current file\n>\n\nThank you, Robert.\n\nRegards,\nAmul\n\nOn Thu, Mar 14, 2024 at 12:48 AM Robert Haas <[email protected]> wrote:On Fri, Mar 8, 2024 at 12:14 AM Amul Sul <[email protected]> wrote:\n> Thank you for the improvement.\n>\n> The caller of verify_control_file() has the full path of the control file that\n> can pass it and avoid recomputing. With this change, it doesn't really need\n> verifier_context argument -- only the manifest's system identifier is enough\n> along with the control file path.  Did the same in the attached delta patch\n> for v11-0002 patch, please have a look, thanks.\n\nThose seem like sensible changes. I incorporated them and committed. I also:\n\n* ran pgindent, which changed a bit of your formatting\n* changed some BAIL_OUT calls to die; I think we should hardly ever be\nusing BAIL_OUT, as that terminates the entire TAP test run, not just\nthe current fileThank you, Robert.Regards,Amul", "msg_date": "Thu, 14 Mar 2024 20:35:03 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add system identifier to backup manifest" }, { "msg_contents": "On Thu, Mar 14, 2024 at 11:05 AM Amul Sul <[email protected]> wrote:\n> Thank you, Robert.\n\nThanks for the patch!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Mar 2024 12:38:04 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add system identifier to backup manifest" } ]
[ { "msg_contents": "If txn1 begin after SNAPBUILD_BUILDING_SNAPSHOT and commit before\r\nSNAPBUILD_FULL_SNAPSHOT(so txn1 will not be processed by DecodeCommit), and\r\ntxn2 begin after SNAPBUILD_FULL_SNAPSHOT and commit after\r\nSNAPBUILD_CONSISTENT(so txn2 will be replayed), how to ensure that txn2\r\ncould see the changes made by txn1?\r\n\r\nThanks\nIf txn1 begin after SNAPBUILD_BUILDING_SNAPSHOT and commit beforeSNAPBUILD_FULL_SNAPSHOT(so txn1 will not be processed by DecodeCommit), andtxn2 begin after SNAPBUILD_FULL_SNAPSHOT and commit afterSNAPBUILD_CONSISTENT(so txn2 will be replayed), how to ensure that txn2could see the changes made by txn1?Thanks", "msg_date": "Wed, 17 Jan 2024 23:57:21 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "[pg16]Question about building snapshot in logical decoding" } ]
[ { "msg_contents": "Hi.\n\nPSA a small patch to adjust the first-word capitalisation of some\nerrmsg/ errdetail/ errhint so they comply with the guidelines.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 18 Jan 2024 09:17:08 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "modify first-word capitalisation of some messages" }, { "msg_contents": "On 17.01.24 23:17, Peter Smith wrote:\n> PSA a small patch to adjust the first-word capitalisation of some\n> errmsg/ errdetail/ errhint so they comply with the guidelines.\n\ncommitted, thanks\n\n\n\n", "msg_date": "Thu, 18 Jan 2024 09:38:50 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: modify first-word capitalisation of some messages" } ]
[ { "msg_contents": "Hi,\n\nI had reported a possible subscription 'disable_on_error' bug found\nwhile reviewing another patch.\n\nI am including my initial report and Nisha's analysis again here so\nthat this topic has its own thread.\n\n==================\nINITIAL REPORT [1]\n==================\n\n...\nI see now that any ALTER of the subscription's connection, even to\nsome value that fails, will restart a new worker (like ALTER of any\nother subscription parameters). For a bad connection, it will continue\nto relaunch-worker/ERROR over and over. e.g.\n\n----------\ntest_sub=# \\r2024-01-17 09:34:28.665 AEDT [11274] LOG: logical\nreplication apply worker for subscription \"sub4\" has started\n2024-01-17 09:34:28.666 AEDT [11274] ERROR: could not connect to the\npublisher: invalid port number: \"-1\"\n2024-01-17 09:34:28.667 AEDT [928] LOG: background worker \"logical\nreplication apply worker\" (PID 11274) exited with exit code 1\n2024-01-17 09:34:33.669 AEDT [11391] LOG: logical replication apply\nworker for subscription \"sub4\" has started\n2024-01-17 09:34:33.669 AEDT [11391] ERROR: could not connect to the\npublisher: invalid port number: \"-1\"\n2024-01-17 09:34:33.670 AEDT [928] LOG: background worker \"logical\nreplication apply worker\" (PID 11391) exited with exit code 1\netc...\n----------\n\nWhile experimenting with the bad connection ALTER I also tried setting\n'disable_on_error' like below:\n\nALTER SUBSCRIPTION sub4 SET (disable_on_error);\nALTER SUBSCRIPTION sub4 CONNECTION 'port = -1';\n\n...but here the subscription did not become DISABLED as I expected it\nwould do on the next connection error iteration. It remains enabled\nand just continues to loop relaunch/ERROR indefinitely same as before.\n\nThat looks like it may be a bug. 
Thoughts?\n\n=====================\nANALYSIS BY NISHA [2]\n=====================\n\nIdeally, if the already running apply worker in\n\"LogicalRepApplyLoop()\" has any exception/error it will be handled and\nthe subscription will be disabled if 'disable_on_error' is set -\n\nstart_apply(XLogRecPtr origin_startpos)\n{\nPG_TRY();\n{\nLogicalRepApplyLoop(origin_startpos);\n}\nPG_CATCH();\n{\nif (MySubscription->disableonerr)\nDisableSubscriptionAndExit();\n...\n\nWhat is happening in this case is that the control reaches the\nfunction - run_apply_worker() -> start_apply() -> LogicalRepApplyLoop\n-> maybe_reread_subscription()\n...\n/*\n* Exit if any parameter that affects the remote connection was changed.\n* The launcher will start a new worker but note that the parallel apply\n* worker won't restart if the streaming option's value is changed from\n* 'parallel' to any other value or the server decides not to stream the\n* in-progress transaction.\n*/\nif (strcmp(newsub->conninfo, MySubscription->conninfo) != 0 ||\n...\n\nand it sees a change in the parameter and calls apply_worker_exit().\nThis will exit the current process, without throwing an exception to\nthe caller and the postmaster will try to restart the apply worker.\nThe new apply worker, before reaching the start_apply() [where we\nhandle exception], will hit the code to establish the connection to\nthe publisher -\n\nApplyWorkerMain() -> run_apply_worker() -\n...\nLogRepWorkerWalRcvConn = walrcv_connect(MySubscription->conninfo,\ntrue /* replication */ ,\ntrue,\nmust_use_password,\nMySubscription->name, &err);\n\nif (LogRepWorkerWalRcvConn == NULL)\n ereport(ERROR,\n (errcode(ERRCODE_CONNECTION_FAILURE),\n errmsg(\"could not connect to the publisher: %s\", err)));\n...\n\nand due to the bad connection string in the subscription, it will error out.\n[28680] ERROR: could not connect to the publisher: invalid port number: \"-1\"\n[3196] LOG: background worker \"logical replication apply worker\" (PID\n28680) exited with exit code 1\n\nNow, the postmaster keeps trying to restart the apply worker and it\nwill keep failing until the connection string is corrected or the\nsubscription is disabled manually.\n\nI think this is a bug that needs to be handled in run_apply_worker()\nwhen disable_on_error is set.\n\n======\n[1] https://www.postgresql.org/message-id/CAHut%2BPvcf7P2dHbCeWPM4jQ%3DyHqf4WFS_C6eVb8V%3DbcZPMMp7A%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CABdArM6ORu%2BKpS_kXd-jwwPBqYPo1YqZjwwGnqmVanWgdHCggA%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 18 Jan 2024 10:15:28 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "subscription disable_on_error not working after ALTER SUBSCRIPTION\n set bad conninfo" }, { "msg_contents": "On Thu, Jan 18, 2024 at 8:16 AM Peter Smith <[email protected]> wrote:\n>\n> Hi,\n>\n> I had reported a possible subscription 'disable_on_error' bug found\n> while reviewing another patch.\n>\n> I am including my initial report and Nisha's analysis again here so\n> that this topic has its own thread.\n>\n> ==================\n> INITIAL REPORT [1]\n> ==================\n>\n> ...\n> I see now that any ALTER of the subscription's connection, even to\n> some value that fails, will restart a new worker (like ALTER of any\n> other subscription parameters). For a bad connection, it will continue\n> to relaunch-worker/ERROR over and over. 
e.g.\n>\n> ----------\n> test_sub=# \\r2024-01-17 09:34:28.665 AEDT [11274] LOG: logical\n> replication apply worker for subscription \"sub4\" has started\n> 2024-01-17 09:34:28.666 AEDT [11274] ERROR: could not connect to the\n> publisher: invalid port number: \"-1\"\n> 2024-01-17 09:34:28.667 AEDT [928] LOG: background worker \"logical\n> replication apply worker\" (PID 11274) exited with exit code 1\n> 2024-01-17 09:34:33.669 AEDT [11391] LOG: logical replication apply\n> worker for subscription \"sub4\" has started\n> 2024-01-17 09:34:33.669 AEDT [11391] ERROR: could not connect to the\n> publisher: invalid port number: \"-1\"\n> 2024-01-17 09:34:33.670 AEDT [928] LOG: background worker \"logical\n> replication apply worker\" (PID 11391) exited with exit code 1\n> etc...\n> ----------\n>\n> While experimenting with the bad connection ALTER I also tried setting\n> 'disable_on_error' like below:\n>\n> ALTER SUBSCRIPTION sub4 SET (disable_on_error);\n> ALTER SUBSCRIPTION sub4 CONNECTION 'port = -1';\n>\n> ...but here the subscription did not become DISABLED as I expected it\n> would do on the next connection error iteration. It remains enabled\n> and just continues to loop relaunch/ERROR indefinitely same as before.\n>\n> That looks like it may be a bug. Thoughts?\n\nAlthough we can improve it to handle this case too, I'm not sure it's\na bug. The doc says[1]:\n\nSpecifies whether the subscription should be automatically disabled if\nany errors are detected by subscription workers during data\nreplication from the publisher.\n\nWhen an apply worker is trying to establish a connection, it's not\nreplicating data from the publisher.\n\nRegards,\n\n[1] https://www.postgresql.org/docs/devel/sql-createsubscription.html#SQL-CREATESUBSCRIPTION-PARAMS-WITH-DISABLE-ON-ERROR\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 18 Jan 2024 10:54:48 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: subscription disable_on_error not working after ALTER\n SUBSCRIPTION set bad conninfo" }, { "msg_contents": "On Thu, Jan 18, 2024 at 12:55 PM Masahiko Sawada <[email protected]> wrote:\n>\n...\n>\n> Although we can improve it to handle this case too, I'm not sure it's\n> a bug. The doc says[1]:\n>\n> Specifies whether the subscription should be automatically disabled if\n> any errors are detected by subscription workers during data\n> replication from the publisher.\n>\n> When an apply worker is trying to establish a connection, it's not\n> replicating data from the publisher.\n>\n> Regards,\n>\n> [1] https://www.postgresql.org/docs/devel/sql-createsubscription.html#SQL-CREATESUBSCRIPTION-PARAMS-WITH-DISABLE-ON-ERROR\n>\n> --\n> Masahiko Sawada\n> Amazon Web Services: https://aws.amazon.com\n\nYeah, I had also seen that wording of those docs. And I agree it\nleaves open some room for doubts because strictly from that wording it\ncan be interpreted that establishing the connection is not actually\n\"data replication from the publisher\" in which case maybe there is no\nbug.\n\nOTOH, it was not clear to me if that precise wording was the intention\nor not. 
It could have been written as \"Specifies whether the\nsubscription should be automatically disabled if any errors are\ndetected by subscription workers.\" which would mean the same thing 99%\nof the time except that would mean that the current behaviour is a\nbug.\n\nI tried looking at the original thread where this feature was born [1]\nbut it is still unclear to me if 'disable_on_error' was meant for\nevery kind of error or only data replication errors.\n\nIndeed. even the commit message [2] seems to have an each-way bet:\n* It talks about errors applying changes --- \"Logical replication\napply workers for a subscription can easily get stuck in an infinite\nloop of attempting to apply a change...\"\n* But, it also says it covers any errors --- \"When true, both the\ntablesync worker and apply worker catch any errors thrown...\"\n\n~\n\nMaybe we should be asking ourselves how a user intuitively expects\nthis option to behave. IMO the answer is right there in the option\nname - the subscription says 'disable_on_error' and I got an error, so\nI expected the subscription to be disabled. To wriggle out of it by\nsaying \"Ah, but we did not mean _those_ kinds of errors\" doesn't quite\nseem genuine to me.\n\n======\n[1] https://www.postgresql.org/message-id/flat/CAA4eK1KsaVgkO%3DRbjj0bcXZTpeV1QVm0TGkdxZiH73MHfxf6oQ%40mail.gmail.com#d4a0db154fbeca356a494c50ac877ff1\n[2] https://github.com/postgres/postgres/commit/705e20f8550c0e8e47c0b6b20b5f5ffd6ffd9e33\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 18 Jan 2024 16:44:51 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: subscription disable_on_error not working after ALTER\n SUBSCRIPTION set bad conninfo" }, { "msg_contents": "On Thu, Jan 18, 2024 at 11:15 AM Peter Smith <[email protected]> wrote:\n>\n> On Thu, Jan 18, 2024 at 12:55 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> ...\n> >\n> > Although we can improve it to handle this case too, I'm not sure it's\n> > a bug. The doc says[1]:\n> >\n> > Specifies whether the subscription should be automatically disabled if\n> > any errors are detected by subscription workers during data\n> > replication from the publisher.\n> >\n> > When an apply worker is trying to establish a connection, it's not\n> > replicating data from the publisher.\n> >\n> > Regards,\n> >\n> > [1] https://www.postgresql.org/docs/devel/sql-createsubscription.html#SQL-CREATESUBSCRIPTION-PARAMS-WITH-DISABLE-ON-ERROR\n> >\n> > --\n> > Masahiko Sawada\n> > Amazon Web Services: https://aws.amazon.com\n>\n> Yeah, I had also seen that wording of those docs. And I agree it\n> leaves open some room for doubts because strictly from that wording it\n> can be interpreted that establishing the connection is not actually\n> \"data replication from the publisher\" in which case maybe there is no\n> bug.\n>\n\nAs far as I remember that was the intention. The idea was if there is\nany conflict during apply that users manually need to fix, they have\nthe provision to stop repeating the error. 
If we wish to extend the\npurpose of this option for another valid use case and there is a good\nway to achieve the same then we can discuss but I don't think we need\nto change it in back-branches.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 18 Jan 2024 15:24:10 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: subscription disable_on_error not working after ALTER\n SUBSCRIPTION set bad conninfo" }, { "msg_contents": "On Thu, Jan 18, 2024 at 8:54 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jan 18, 2024 at 11:15 AM Peter Smith <[email protected]> wrote:\n> >\n> > On Thu, Jan 18, 2024 at 12:55 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > ...\n> > >\n> > > Although we can improve it to handle this case too, I'm not sure it's\n> > > a bug. The doc says[1]:\n> > >\n> > > Specifies whether the subscription should be automatically disabled if\n> > > any errors are detected by subscription workers during data\n> > > replication from the publisher.\n> > >\n> > > When an apply worker is trying to establish a connection, it's not\n> > > replicating data from the publisher.\n> > >\n> > > Regards,\n> > >\n> > > [1] https://www.postgresql.org/docs/devel/sql-createsubscription.html#SQL-CREATESUBSCRIPTION-PARAMS-WITH-DISABLE-ON-ERROR\n> > >\n> > > --\n> > > Masahiko Sawada\n> > > Amazon Web Services: https://aws.amazon.com\n> >\n> > Yeah, I had also seen that wording of those docs. And I agree it\n> > leaves open some room for doubts because strictly from that wording it\n> > can be interpreted that establishing the connection is not actually\n> > \"data replication from the publisher\" in which case maybe there is no\n> > bug.\n> >\n>\n> As far as I remember that was the intention. The idea was if there is\n> any conflict during apply that users manually need to fix, they have\n> the provision to stop repeating the error. 
If we wish to extend the\n> purpose of this option for another valid use case and there is a good\n> way to achieve the same then we can discuss but I don't think we need\n> to change it in back-branches.\n>\n> --\n\nIn case we want to proceed with this, here is a simple POC patch that\nseems to do the job.\n\n~~~\n\nRESULT:\n\ntest_sub=# create subscription sub1 connection 'dbname=test_pub'\npublication pub1;\nNOTICE: created replication slot \"sub1\" on publisher\nCREATE SUBSCRIPTION\n2024-01-19 11:50:33.385 AEDT [17905] LOG: logical replication apply\nworker for subscription \"sub1\" has started\n2024-01-19 11:50:33.398 AEDT [17907] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table \"t1\" has started\n2024-01-19 11:50:33.481 AEDT [17907] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table \"t1\" has\nfinished\n\ntest_sub=# alter subscription sub1 set (disable_on_error);\nALTER SUBSCRIPTION\n\ntest_sub=# alter subscription sub1 connection 'port=-1';\n2024-01-19 11:51:00.696 AEDT [17905] LOG: logical replication worker\nfor subscription \"sub1\" will restart because of a parameter change\nALTER SUBSCRIPTION\n2024-01-19 11:51:00.704 AEDT [18649] LOG: logical replication apply\nworker for subscription \"sub1\" has started\n2024-01-19 11:51:00.704 AEDT [18649] ERROR: could not connect to the\npublisher: invalid port number: \"-1\"\n2024-01-19 11:51:00.705 AEDT [18649] LOG: subscription \"sub1\" has\nbeen disabled because of an error\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Fri, 19 Jan 2024 12:00:57 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: subscription disable_on_error not working after ALTER\n SUBSCRIPTION set bad conninfo" }, { "msg_contents": "On Thu, Jan 18, 2024 at 6:54 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Jan 18, 2024 at 11:15 AM Peter Smith <[email protected]> wrote:\n> >\n> > On Thu, Jan 18, 2024 at 12:55 PM Masahiko Sawada <[email protected]> wrote:\n> > >\n> > ...\n> > >\n> > > Although we can improve it to handle this case too, I'm not sure it's\n> > > a bug. The doc says[1]:\n> > >\n> > > Specifies whether the subscription should be automatically disabled if\n> > > any errors are detected by subscription workers during data\n> > > replication from the publisher.\n> > >\n> > > When an apply worker is trying to establish a connection, it's not\n> > > replicating data from the publisher.\n> > >\n> > > Regards,\n> > >\n> > > [1] https://www.postgresql.org/docs/devel/sql-createsubscription.html#SQL-CREATESUBSCRIPTION-PARAMS-WITH-DISABLE-ON-ERROR\n> > >\n> > > --\n> > > Masahiko Sawada\n> > > Amazon Web Services: https://aws.amazon.com\n> >\n> > Yeah, I had also seen that wording of those docs. And I agree it\n> > leaves open some room for doubts because strictly from that wording it\n> > can be interpreted that establishing the connection is not actually\n> > \"data replication from the publisher\" in which case maybe there is no\n> > bug.\n> >\n>\n> As far as I remember that was the intention. The idea was if there is\n> any conflict during apply that users manually need to fix, they have\n> the provision to stop repeating the error. 
If we wish to extend the\n> purpose of this option for another valid use case and there is a good\n> way to achieve the same then we can discuss but I don't think we need\n> to change it in back-branches.\n\nI agree not to change it in back-branches.\n\nWhat is the use case of extending disable_on_error?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 19 Jan 2024 17:20:52 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: subscription disable_on_error not working after ALTER\n SUBSCRIPTION set bad conninfo" }, { "msg_contents": "On Fri, Jan 19, 2024 at 7:21 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Thu, Jan 18, 2024 at 6:54 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Thu, Jan 18, 2024 at 11:15 AM Peter Smith <[email protected]> wrote:\n> > >\n> > > On Thu, Jan 18, 2024 at 12:55 PM Masahiko Sawada <[email protected]> wrote:\n> > > >\n> > > ...\n> > > >\n> > > > Although we can improve it to handle this case too, I'm not sure it's\n> > > > a bug. The doc says[1]:\n> > > >\n> > > > Specifies whether the subscription should be automatically disabled if\n> > > > any errors are detected by subscription workers during data\n> > > > replication from the publisher.\n> > > >\n> > > > When an apply worker is trying to establish a connection, it's not\n> > > > replicating data from the publisher.\n> > > >\n> > > > Regards,\n> > > >\n> > > > [1] https://www.postgresql.org/docs/devel/sql-createsubscription.html#SQL-CREATESUBSCRIPTION-PARAMS-WITH-DISABLE-ON-ERROR\n> > > >\n> > > > --\n> > > > Masahiko Sawada\n> > > > Amazon Web Services: https://aws.amazon.com\n> > >\n> > > Yeah, I had also seen that wording of those docs. And I agree it\n> > > leaves open some room for doubts because strictly from that wording it\n> > > can be interpreted that establishing the connection is not actually\n> > > \"data replication from the publisher\" in which case maybe there is no\n> > > bug.\n> > >\n> >\n> > As far as I remember that was the intention. The idea was if there is\n> > any conflict during apply that users manually need to fix, they have\n> > the provision to stop repeating the error. If we wish to extend the\n> > purpose of this option for another valid use case and there is a good\n> > way to achieve the same then we can discuss but I don't think we need\n> > to change it in back-branches.\n>\n> I agree not to change it in back-branches.\n>\n> What is the use case of extending disable_on_error?\n>\n\nThe use-case is that with my user-hat on I had assumed\ndisable_on_error behaviour was as per the name implied so the\nsubscription would disable on getting (any) error. OTOH I agree, my\nexpectation is not exactly what the current docs say.\n\nAlso, I had thought the motivation for that option was to avoid having\ninfinite repeating errors that might be caused by the user or data.\ne.g. a simple typo in the conninfo can cause this error and AFAIK the\nALTER will appear successful so the user won't know anything about it\nunless they also check the logs. OTOH something like a connection\nerror may only be temporary (caused by a network issue?) 
and not\ncaused by a user typo at all, so I can see perhaps that is why\ndisable_on_error is OK to be excluding connection errors.\n\n TBH I think there are pros and cons to doing nothing and leaving the\nexisting behaviour as-is or extending it -- I'm happy to go either\nway.\n\nAnother idea is to leave behaviour unchanged, but add a note in the\ndocs like \"Note: connection errors (e.g. specifying a bad conninfo\nusing ALTER SUBSCRIPTION) will not cause the subscription to become\ndisabled\"\n\nThoughts?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 30 Jan 2024 14:14:40 +1100", "msg_from": "Peter Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: subscription disable_on_error not working after ALTER\n SUBSCRIPTION set bad conninfo" } ]
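To make the proposed direction concrete, here is a minimal, hedged sketch (not the POC patch attached above) of how the connection attempt in run_apply_worker() could be wrapped so that disable_on_error also covers connection failures. All function and field names are taken from the analysis quoted earlier in the thread; the actual patch may differ.

/* Hedged sketch only -- the real change is in the attached POC patch. */
PG_TRY();
{
	LogRepWorkerWalRcvConn = walrcv_connect(MySubscription->conninfo,
											true /* replication */ , true,
											must_use_password,
											MySubscription->name, &err);
	if (LogRepWorkerWalRcvConn == NULL)
		ereport(ERROR,
				(errcode(ERRCODE_CONNECTION_FAILURE),
				 errmsg("could not connect to the publisher: %s", err)));
}
PG_CATCH();
{
	/* Honour disable_on_error for connection errors too. */
	if (MySubscription->disableonerr)
		DisableSubscriptionAndExit();
	PG_RE_THROW();
}
PG_END_TRY();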
[ { "msg_contents": "Hi,\n\nI was testing the index prefetch and streamIO patches and I added \ncachestat() syscall to get a better view of the prefetching.\n\nIt's a new linux syscall, it requires 6.5, it provides numerous \ninteresting information from the VM for the range of pages examined.\nIt's way way faster than the old mincore() and provides much more \nvaluable information:\n\n     uint64 nr_cache;        /* Number of cached pages */\n     uint64 nr_dirty;           /* Number of dirty pages */\n     uint64 nr_writeback;  /* Number of pages marked for writeback. */\n     uint64 nr_evicted;       /* Number of pages evicted from the cache. */\n     /*\n     * Number of recently evicted pages. A page is recently evicted if its\n     * last eviction was recent enough that its reentry to the cache would\n     * indicate that it is actively being used by the system, and that there\n     * is memory pressure on the system.\n     */\n     uint64 nr_recently_evicted;\n\n\nWhile here I also added some quick tweaks to suspend prefetching on \nmemory pressure.\nIt's working but I have absolutely not checked the performance impact of \nmy additions.\n\nSharing here for others to tests and adding in CF in case there is \ninterest to go further in this direction.\n\n\n---\nCédric Villemain +33 (0)6 20 30 22 52\nhttps://Data-Bene.io\nPostgreSQL Expertise, Support, Training, R&D", "msg_date": "Thu, 18 Jan 2024 01:24:54 +0100", "msg_from": "=?UTF-8?Q?C=C3=A9dric_Villemain?= <[email protected]>", "msg_from_op": true, "msg_subject": "linux cachestat in file Readv and Prefetch" } ]
[ { "msg_contents": "Hi,\n\nI was testing the index prefetch and streamIO patches and I added \ncachestat() syscall to get a better view of the prefetching.\n\nIt's a new linux syscall, it requires 6.5, it provides numerous \ninteresting information from the VM for the range of pages examined.\nIt's way way faster than the old mincore() and provides much more \nvaluable information:\n\n     uint64 nr_cache;        /* Number of cached pages */\n     uint64 nr_dirty;           /* Number of dirty pages */\n     uint64 nr_writeback;  /* Number of pages marked for writeback. */\n     uint64 nr_evicted;       /* Number of pages evicted from the cache. */\n     /*\n     * Number of recently evicted pages. A page is recently evicted if its\n     * last eviction was recent enough that its reentry to the cache would\n     * indicate that it is actively being used by the system, and that there\n     * is memory pressure on the system.\n     */\n     uint64 nr_recently_evicted;\n\n\nWhile here I also added some quick tweaks to suspend prefetching on \nmemory pressure.\nIt's working but I have absolutely not checked the performance impact of \nmy additions.\n\nSharing here for others to tests and adding in CF in case there is \ninterest to go further in this direction.\n\n\n---\nCédric Villemain +33 (0)6 20 30 22 52\nhttps://Data-Bene.io\nPostgreSQL Expertise, Support, Training, R&D\n\n-- \n---\nCédric Villemain +33 (0)6 20 30 22 52\nhttps://Data-Bene.io\nPostgreSQL Expertise, Support, Training, R&D", "msg_date": "Thu, 18 Jan 2024 01:25:41 +0100", "msg_from": "Cedric Villemain <[email protected]>", "msg_from_op": true, "msg_subject": "linux cachestat in file Readv and Prefetch" }, { "msg_contents": "\n\nOn 1/18/24 01:25, Cedric Villemain wrote:\n> Hi,\n>\n> I was testing the index prefetch and streamIO patches and I added\n> cachestat() syscall to get a better view of the prefetching.\n>\n> It's a new linux syscall, it requires 6.5, it provides numerous\n> interesting information from the VM for the range of pages examined.\n> It's way way faster than the old mincore() and provides much more\n> valuable information:\n>\n> uint64 nr_cache; /* Number of cached pages */\n> uint64 nr_dirty; /* Number of dirty pages */\n> uint64 nr_writeback; /* Number of pages marked for writeback. */\n> uint64 nr_evicted; /* Number of pages evicted from the cache. */\n> /*\n> * Number of recently evicted pages. A page is recently evicted if its\n> * last eviction was recent enough that its reentry to the cache would\n> * indicate that it is actively being used by the system, and that\nthere\n> * is memory pressure on the system.\n> */\n> uint64 nr_recently_evicted;\n>\n>\n> While here I also added some quick tweaks to suspend prefetching on\n> memory pressure.\n\nI may be missing some important bit behind this idea, but this does not\nseem like a great idea to me. The comment added to FilePrefetch says this:\n\n /*\n * last time we visit this file (somewhere), nr_recently_evicted pages\n * of the range were just removed from vm cache, it's a sign a memory\n * pressure. so do not prefetch further.\n * it is hard to guess if it is always the right choice in absence of\n * more information like:\n * - prefetching distance expected overall\n * - access pattern/backend maybe\n */\n\nFirstly, is this even a good way to detect memory pressure? 
It's clearly\nlimited to a single 1GB segment, so what's the chance we'll even see the\n\"pressure\" on a big database with many files?\n\nIf we close/reopen the file (which on large databases we tend to do very\noften) how does that affect the data reported for the file descriptor?\n\nI'm not sure I even agree with the idea that we should stop prefetching\nwhen there is memory pressure. IMHO it's perfectly fine to keep\nprefetching stuff even if it triggers eviction of unnecessary pages from\npage cache. That's kinda why the eviction exists.\n\n\n> It's working but I have absolutely not checked the performance impact of\n> my additions.\n>\n\nWell ... I'd argue at least some basic evaluation of performance is a\nrather important / expected part of a proposal for a patch that aims to\nimprove a performance-focused feature. It's impossible to have any sort\nof discussion about such patch without that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Sun, 18 Feb 2024 00:10:44 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux cachestat in file Readv and Prefetch" }, { "msg_contents": "On Sat, Feb 17, 2024 at 6:10 PM Tomas Vondra\n<[email protected]> wrote:\n> I may be missing some important bit behind this idea, but this does not\n> seem like a great idea to me. The comment added to FilePrefetch says this:\n>\n> /*\n> * last time we visit this file (somewhere), nr_recently_evicted pages\n> * of the range were just removed from vm cache, it's a sign a memory\n> * pressure. so do not prefetch further.\n> * it is hard to guess if it is always the right choice in absence of\n> * more information like:\n> * - prefetching distance expected overall\n> * - access pattern/backend maybe\n> */\n>\n> Firstly, is this even a good way to detect memory pressure? It's clearly\n> limited to a single 1GB segment, so what's the chance we'll even see the\n> \"pressure\" on a big database with many files?\n>\n> If we close/reopen the file (which on large databases we tend to do very\n> often) how does that affect the data reported for the file descriptor?\n>\n> I'm not sure I even agree with the idea that we should stop prefetching\n> when there is memory pressure. IMHO it's perfectly fine to keep\n> prefetching stuff even if it triggers eviction of unnecessary pages from\n> page cache. That's kinda why the eviction exists.\n\nI agree with all of these criticisms. I think it's the job of\npg_prewarm to do what the user requests, not to second-guess whether\nthe user requested the right thing. One of the things that frustrates\npeople about the ring-buffer system is that it's hard to get all of\nyour data cached in shared_buffers by just reading it, e.g. SELECT *\nFROM my_table. If pg_prewarm also isn't guaranteed to actually read\nyour data, and may decide that your data didn't need to be read after\nall, then what exactly is a user supposed to do if they're absolutely\nsure that they know better than PostgreSQL itself and want to\nguarantee that their data actually does get read?\n\nSo I think a feature like this would at the very least need to be\noptional, but it's unclear to me why we'd want it at all, and I feel\nlike Cedric's email doesn't really answer that question. 
I suppose\nthat if you could detect useless prefetching and skip it, you'd save a\nbit of work, but under what circumstances does anyone use pg_prewarm\nso aggressively as to make that a problem, and why wouldn't the\nsolution be for the user to just calm down a little bit? There\nshouldn't be any particular reason why the user can't know both the\nsize of shared_buffers and the approximate size of the OS cache;\nindeed, they can probably know the latter much more reliably than\nPostgreSQL itself can. So it should be fairly easy to avoid just\nprefetching more data than will fit, and then you don't have to worry\nabout any of this. And you'll probably get a better result, too,\nbecause, along the same lines as Tomas's remarks above, I doubt that\nthis would be an accurate method anyway.\n\n> Well ... I'd argue at least some basic evaluation of performance is a\n> rather important / expected part of a proposal for a patch that aims to\n> improve a performance-focused feature. It's impossible to have any sort\n> of discussion about such patch without that.\n\nRight.\n\nI'm going to mark this patch as Rejected in the CommitFest application\nfor now. If in subsequent discussion that comes to seem like the wrong\nresult, then we can revise accordingly, but right now it looks\nextremely unlikely to me that this is something that we'd want.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 14 Mar 2024 14:04:15 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux cachestat in file Readv and Prefetch" } ]
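For readers who want to reproduce the page-cache numbers discussed above outside of PostgreSQL, here is a small, hedged standalone C example of calling cachestat(2) on a file. It assumes a kernel and uapi headers of at least 6.5 (struct cachestat and __NR_cachestat come from the Linux headers); since there is no glibc wrapper at the time of writing, the raw syscall is used.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/mman.h>		/* struct cachestat_range, struct cachestat */

int
main(int argc, char **argv)
{
	struct cachestat_range range = {0, 0};	/* len == 0 means "to end of file" */
	struct cachestat cs;
	int			fd;

	if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0)
	{
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	if (syscall(__NR_cachestat, fd, &range, &cs, 0) != 0)
	{
		perror("cachestat");
		return 1;
	}
	printf("cached=%llu dirty=%llu writeback=%llu evicted=%llu recently_evicted=%llu\n",
		   (unsigned long long) cs.nr_cache,
		   (unsigned long long) cs.nr_dirty,
		   (unsigned long long) cs.nr_writeback,
		   (unsigned long long) cs.nr_evicted,
		   (unsigned long long) cs.nr_recently_evicted);
	close(fd);
	return 0;
}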
[ { "msg_contents": "Greetings,\nGetting the following error\n\n[1146/2086] Generating src/backend/postgres.def with a custom command\n(wrapped by meson to set PATH)\nFAILED: src/backend/postgres.def\n\"C:\\Program Files\\Meson\\meson.exe\" \"--internal\" \"exe\" \"--unpickle\"\n\"C:\\Users\\davec\\projects\\postgresql\\build\\meson-private\\meson_exe_perl.EXE_53b41ebc2e76cfc92dd6a2af212140770543faae.dat\"\nwhile executing ['c:\\\\perl\\\\bin\\\\perl.EXE', '../src/backend/../tools/\nmsvc_gendef.pl', '--arch', 'aarch64', '--tempdir',\n'src/backend/postgres.def.p', '--deffile', 'src/backend/postgres.def',\n'src/backend/postgres_lib.a', 'src/common/libpgcommon_srv.a',\n'src/port/libpgport_srv.a']\n--- stdout ---\n\n--- stderr ---\nUsage: msvc_gendef.pl --arch <arch> --deffile <deffile> --tempdir <tempdir>\nfiles-or-directories\n arch: x86 | x86_64\n deffile: path of the generated file\n tempdir: directory for temporary files\n files or directories: object files or directory containing object files\n\nlog attached\n\nDave Cramer", "msg_date": "Wed, 17 Jan 2024 21:07:19 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "compiling postgres on windows arm using meson" }, { "msg_contents": "Hi,\n\nOn Thu, 18 Jan 2024 at 05:07, Dave Cramer <[email protected]> wrote:\n>\n> Greetings,\n> Getting the following error\n>\n> [1146/2086] Generating src/backend/postgres.def with a custom command (wrapped by meson to set PATH)\n> FAILED: src/backend/postgres.def\n> \"C:\\Program Files\\Meson\\meson.exe\" \"--internal\" \"exe\" \"--unpickle\" \"C:\\Users\\davec\\projects\\postgresql\\build\\meson-private\\meson_exe_perl.EXE_53b41ebc2e76cfc92dd6a2af212140770543faae.dat\"\n> while executing ['c:\\\\perl\\\\bin\\\\perl.EXE', '../src/backend/../tools/msvc_gendef.pl', '--arch', 'aarch64', '--tempdir', 'src/backend/postgres.def.p', '--deffile', 'src/backend/postgres.def', 'src/backend/postgres_lib.a', 'src/common/libpgcommon_srv.a', 'src/port/libpgport_srv.a']\n> --- stdout ---\n>\n> --- stderr ---\n> Usage: msvc_gendef.pl --arch <arch> --deffile <deffile> --tempdir <tempdir> files-or-directories\n> arch: x86 | x86_64\n> deffile: path of the generated file\n> tempdir: directory for temporary files\n> files or directories: object files or directory containing object files\n>\n> log attached\n\n From the docs [1]: PostgreSQL will only build for the x64 architecture\non 64-bit Windows.\n\nSo, I think that is expected.\n\n[1] https://www.postgresql.org/docs/current/install-windows-full.html#INSTALL-WINDOWS-FULL-64-BIT\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Tue, 30 Jan 2024 16:43:31 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compiling postgres on windows arm using meson" } ]
[ { "msg_contents": "The overall trend in machine learning embedding sizes has been growing\nrapidly over the last few years from 128 up to 4K dimensions yielding\nadditional value and quality improvements. It's not clear when this trend\nin growth will ease. The leading text embedding models\n<https://d2Vt1P04.na1.hs-sales-engage.com/Ctc/I9+23284/d2Vt1P04/Jl22-6qcW7lCdLW6lZ3mZW2xQSTk7n4W72W13GRJl8P4L5lW65mzGl6rb-LDW3fZkLK7wL6F7W7bLKRl3_KR8gW2H6rqc3sKVWHW4D9qCR2clFY9W7HsFTL26WFWlW5Hfcrf2QPpJMW3LsmXC7QBywcW6f_40K6rHypwN72RsbhFLs2vW6jlt_y6pFdc9V91HDm4pT7BnVccdx84tkPBQW6PJxqG2F_FLmV6W6fc5JT11jW8vC6FB5DCKGjW1854vH3kDmt-W9lxPZm4_rYkDW4L32gg19WxLrW6-_S_-5MBHNYW2MsMBv25m7NkW5tPMCP5x2DzRf7CCRKR04>\ngenerate\nnow exceeds the index storage available in IndexTupleData.t_info.\n\nThe current index tuple size is stored in 13 bits of IndexTupleData.t_info,\nwhich limits the max size of an index tuple to 2^13 = 8129 bytes. Vectors\nimplemented by pgvector\n<https://d2Vt1P04.na1.hs-sales-engage.com/Ctc/I9+23284/d2Vt1P04/JkM2-6qcW6N1vHY6lZ3nBW1KW3_33qLHXZW5SdJZV6V1sGTW4c2GQ_3MLxkdW2lzQbs2W87JKW772TLX7BpFlQW8-WNlD7GgH2tW3yzJG98NPhgFW3QMP2h5CKxzKN4DD1QlzH6WrW1ByHLF3QYtPQW1W8HLB2Jl6vZW8C8pKB9fvQMtW7wJpwd3-8fwWW60mRbF435_NkW253WL721Q95QW20Z-xk3_22C0W1Thshf6-_qGbW6rz9tX72gbKyW5L9ktk1Vtn-dW8601Jv3ZfHxhW7ZW-6L86RX2ZW293jnQ921NT6f2Kg23K04>\ncurrently use\na 32 bit float for elements, which limits vector size to 2K\ndimensions, which is no longer state of the art.\n\nI've attached a patch that increases IndexTupleData.t_info from 16bits to\n32bits allowing for significantly larger index tuple sizes. I would guess\nthis patch is not a complete implementation that allows for migration from\nprevious versions, but it does compile and initdb succeeds. I'd be happy to\ncontinue work if the core team is receptive to an update in this area, and\nI'd appreciate any feedback the community has on the approach.\n\nI imagine it might be worth hiding this change behind a compile time\nconfiguration parameter similar to blocksize. I'm sure there are\nimplications I'm unaware of with this change, but I wanted to start the\ndiscussion around a bit of code to see how much would actually need to\nchange.\n\nAlso, I believe this is my first mailing list post in a decade or 2, so let\nme know if I've missed something important. BTW, thanks for all your work\nover the decades!", "msg_date": "Wed, 17 Jan 2024 21:10:05 -0800", "msg_from": "Montana Low <[email protected]>", "msg_from_op": true, "msg_subject": "Increasing IndexTupleData.t_info from uint16 to uint32" }, { "msg_contents": "Montana Low <[email protected]> writes:\n> I've attached a patch that increases IndexTupleData.t_info from 16bits to\n> 32bits allowing for significantly larger index tuple sizes.\n\nI fear this idea is a non-starter because it'd break on-disk\ncompatibility. Certainly, if we were to try to pursue it, there'd\nneed to be an enormous amount of effort spent on dealing with existing\nindexes and transition mechanisms. I don't think you've made the case\nwhy that would be time well spent.\n\nOn a micro level, this makes sizeof(IndexTupleData) be not maxaligned,\nwhich is likely to cause problems on alignment-picky hardware, or else\nresult in space wastage if we were careful to MAXALIGN() everywhere.\n(Which we should have been, but I don't care to bet on it.) 
A lot of\npeople would be sad if their indexes got noticeably bigger when they\nweren't getting anything out of that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Jan 2024 10:46:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing IndexTupleData.t_info from uint16 to uint32" }, { "msg_contents": "I wrote:\n> On a micro level, this makes sizeof(IndexTupleData) be not maxaligned,\n> which is likely to cause problems on alignment-picky hardware, or else\n> result in space wastage if we were careful to MAXALIGN() everywhere.\n> (Which we should have been, but I don't care to bet on it.) A lot of\n> people would be sad if their indexes got noticeably bigger when they\n> weren't getting anything out of that.\n\nAfter thinking about that a bit more, there might be a way out that\nboth avoids bloating index tuples that don't need it, and avoids\nthe compatibility problem. How about defining that if the\nINDEX_SIZE_MASK bits aren't zero, they are the tuple size as now;\nbut if they are zero, then the size appears in a separate uint16\nfield following the existing IndexTupleData fields. We could perhaps\nalso rethink how the nulls bitmap storage works in this \"v2\"\nindex tuple header layout? In any case, I'd expect to end up in\na place where (on 64-bit hardware) you pay an extra MAXALIGN quantum\nfor either an oversize tuple or a nulls bitmap, but only one quantum\nwhen you have both, and nothing above today when the tuple is not\noversize.\n\nThis'd complicate tuple construction and inspection a bit, but\nit would avoid building an enormous lot of infrastructure to deal\nwith transitioning to a not-upward-compatible definition.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Jan 2024 11:10:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing IndexTupleData.t_info from uint16 to uint32" }, { "msg_contents": "On Thu, 18 Jan 2024 at 13:41, Montana Low <[email protected]> wrote:\n>\n> The overall trend in machine learning embedding sizes has been growing rapidly over the last few years from 128 up to 4K dimensions yielding additional value and quality improvements. It's not clear when this trend in growth will ease. The leading text embedding models generate now exceeds the index storage available in IndexTupleData.t_info.\n>\n> The current index tuple size is stored in 13 bits of IndexTupleData.t_info, which limits the max size of an index tuple to 2^13 = 8129 bytes. Vectors implemented by pgvector currently use a 32 bit float for elements, which limits vector size to 2K dimensions, which is no longer state of the art.\n>\n> I've attached a patch that increases IndexTupleData.t_info from 16bits to 32bits allowing for significantly larger index tuple sizes. I would guess this patch is not a complete implementation that allows for migration from previous versions, but it does compile and initdb succeeds. 
I'd be happy to continue work if the core team is receptive to an update in this area, and I'd appreciate any feedback the community has on the approach.\n\nI'm not sure why this is needed.\nVector data indexing generally requires bespoke index methods, which\nare not currently available in the core PostgreSQL repository, and\nindexes are not at all required to utilize the IndexTupleData format\nfor their data tuples (one example of this being BRIN).\nThe only hard requirement in AMs which use Postgres' relfile format is\nthat they follow the Page layout and optionally the pd_linp/ItemId\narray, which limit the size of Page tuples to 2^15-1 (see\nItemIdData.lp_len) and ~2^16-bytes\n(PageHeaderData.pd_pagesize_version).\n\nNext, the only non-internal use of IndexTuple is in IndexOnlyScans.\nHowever, here the index may fill the scandesc->xs_hitup with a heap\ntuple instead, which has a length stored in uint32, too. So, I don't\nquite see why this would be required for all indexes.\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 18 Jan 2024 17:22:24 +0100", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing IndexTupleData.t_info from uint16 to uint32" }, { "msg_contents": "Hi,\n\n> > The overall trend in machine learning embedding sizes has been growing rapidly over the last few years from 128 up to 4K dimensions yielding additional value and quality improvements. It's not clear when this trend in growth will ease. The leading text embedding models generate now exceeds the index storage available in IndexTupleData.t_info.\n> >\n> > The current index tuple size is stored in 13 bits of IndexTupleData.t_info, which limits the max size of an index tuple to 2^13 = 8129 bytes. Vectors implemented by pgvector currently use a 32 bit float for elements, which limits vector size to 2K dimensions, which is no longer state of the art.\n> >\n> > I've attached a patch that increases IndexTupleData.t_info from 16bits to 32bits allowing for significantly larger index tuple sizes. I would guess this patch is not a complete implementation that allows for migration from previous versions, but it does compile and initdb succeeds. I'd be happy to continue work if the core team is receptive to an update in this area, and I'd appreciate any feedback the community has on the approach.\n\nIf I read this correctly, basically the patch adds 16 useless bits for\nall applications except for ML ones...\n\nPerhaps implementing an alternative storage specifically for ML using\nTAM interface would be a better approach?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 19 Jan 2024 13:40:31 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing IndexTupleData.t_info from uint16 to uint32" } ]
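As a hedged illustration of the arithmetic behind the discussion above: the mask below mirrors INDEX_SIZE_MASK in src/include/access/itup.h, where 13 of the 16 t_info bits store the tuple length and the remaining bits are flag bits, so the hard ceiling is 8191 bytes (2^13 - 1), which caps a vector of 32-bit floats at roughly 2K elements before any headers are counted.

#include <stdio.h>

#define INDEX_SIZE_MASK 0x1FFF	/* 13 size bits: 2^13 - 1 = 8191 bytes */

int
main(void)
{
	unsigned	max_bytes = INDEX_SIZE_MASK;
	unsigned	max_float4_dims = max_bytes / 4;	/* 32-bit float elements */

	printf("max index tuple size today: %u bytes\n", max_bytes);
	printf("upper bound on float4 dimensions (ignoring headers): %u\n",
		   max_float4_dims);
	return 0;
}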
[ { "msg_contents": "Hi,\n\nPQsendSyncMessage() was added in\nhttps://commitfest.postgresql.org/46/4262/. It allows users to add a\nSync message without flushing the buffer.\n\nAs a follow-up, this change adds an additional meta-command to\npgbench, \\syncpipeline, which will call PQsendSyncMessage(). This will\nmake it possible to benchmark impact and improvements of using\nPQsendSyncMessage through pgbench.\n\nRegards,\nAnthonin", "msg_date": "Thu, 18 Jan 2024 09:48:28 +0100", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Add \\syncpipeline command to pgbench" }, { "msg_contents": "On Thu, Jan 18, 2024 at 09:48:28AM +0100, Anthonin Bonnefoy wrote:\n> PQsendSyncMessage() was added in\n> https://commitfest.postgresql.org/46/4262/. It allows users to add a\n> Sync message without flushing the buffer.\n> \n> As a follow-up, this change adds an additional meta-command to\n> pgbench, \\syncpipeline, which will call PQsendSyncMessage(). This will\n> make it possible to benchmark impact and improvements of using\n> PQsendSyncMessage through pgbench.\n\nThanks for sending that as a separate patch.\n\nAs a matter of fact, I have already looked at what you are proposing\nhere for the sake of the other thread when checking the difference in\nnumbers with PQsendSyncMessage(). The logic looks sound, but I have a\ncomment about the docs: could it be better to group \\syncpipeline with\n\\startpipeline and \\endpipeline? \\syncpipeline requires a pipeline to\nwork.\n--\nMichael", "msg_date": "Fri, 19 Jan 2024 13:08:11 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add \\syncpipeline command to pgbench" }, { "msg_contents": "On Fri, Jan 19, 2024 at 5:08 AM Michael Paquier <[email protected]> wrote:\n> The logic looks sound, but I have a\n> comment about the docs: could it be better to group \\syncpipeline with\n> \\startpipeline and \\endpipeline? \\syncpipeline requires a pipeline to\n> work.\n\nI've updated the doc to group the commands. It does look better and\nmore consistent with similar command groups like \\if.\n\nRegards,\nAnthonin", "msg_date": "Fri, 19 Jan 2024 08:55:31 +0100", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add \\syncpipeline command to pgbench" }, { "msg_contents": "On Fri, Jan 19, 2024 at 08:55:31AM +0100, Anthonin Bonnefoy wrote:\n> I've updated the doc to group the commands. It does look better and\n> more consistent with similar command groups like \\if.\n\nI was playing with a few meta command scenarios while looking at this\npatch, and this sequence generates an error that should never happen:\n$ cat /tmp/test.sql\n\\startpipeline\n\\syncpipeline\n$ pgbench -n -f /tmp/test.sql -M extended\n[...]\npgbench: error: unexpected transaction status 1\npgbench: error: client 0 aborted while receiving the transaction status\n\nIt looks to me that we need to be much smarter than that for the error\nhandling we'd need when a sync request is optionally sent when a\ntransaction stops at the end of pgbench. Could you look at it?\n--\nMichael", "msg_date": "Mon, 22 Jan 2024 15:16:25 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add \\syncpipeline command to pgbench" }, { "msg_contents": "That looks like a bug with how opened pipelines are not caught at the\nend of the script processing. 
startpipeline seems to have similar\nrelated issues.\n\n$ cat only_startpipeline.sql\n\\startpipeline\nSELECT 1;\n\nWith only 1 transaction, pgbench will consider this a success despite\nnot sending anything since the pipeline was not flushed:\npgbench -t1 -Mextended -f only_startpipeline.sql\n[...]\nnumber of transactions per client: 1\nnumber of transactions actually processed: 1/1\n\nWith 2 transactions, the error will happen when \\startpipeline is\ncalled a second time:\npgbench -t2 -Mextended -f only_startpipeline.sql\n[...]\npgbench: error: client 0 aborted in command 0 (startpipeline) of\nscript 0; already in pipeline mode\nnumber of transactions per client: 2\nnumber of transactions actually processed: 1/2\n\nI've split the changes into two patches.\n0001 introduces a new error when the end of a pgbench script is\nreached while there's still an ongoing pipeline.\n0002 adds the \\syncpipeline command (original patch with an additional\ntest case).\n\nRegards,\nAnthonin\n\nOn Mon, Jan 22, 2024 at 7:16 AM Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Jan 19, 2024 at 08:55:31AM +0100, Anthonin Bonnefoy wrote:\n> > I've updated the doc to group the commands. It does look better and\n> > more consistent with similar command groups like \\if.\n>\n> I was playing with a few meta command scenarios while looking at this\n> patch, and this sequence generates an error that should never happen:\n> $ cat /tmp/test.sql\n> \\startpipeline\n> \\syncpipeline\n> $ pgbench -n -f /tmp/test.sql -M extended\n> [...]\n> pgbench: error: unexpected transaction status 1\n> pgbench: error: client 0 aborted while receiving the transaction status\n>\n> It looks to me that we need to be much smarter than that for the error\n> handling we'd need when a sync request is optionally sent when a\n> transaction stops at the end of pgbench. Could you look at it?\n> --\n> Michael", "msg_date": "Mon, 22 Jan 2024 10:11:20 +0100", "msg_from": "Anthonin Bonnefoy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add \\syncpipeline command to pgbench" }, { "msg_contents": "On 2024-Jan-22, Anthonin Bonnefoy wrote:\n\n> That looks like a bug with how opened pipelines are not caught at the\n> end of the script processing. startpipeline seems to have similar\n> related issues.\n\nAh, yeah. Your fix looks necessary on a quick look. I'll review and\nsee about backpatching this.\n\n> 0002 adds the \\syncpipeline command (original patch with an additional\n> test case).\n\nI can look into this one later, unless Michael wants to.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No me acuerdo, pero no es cierto. No es cierto, y si fuera cierto,\n no me acuerdo.\" (Augusto Pinochet a una corte de justicia)\n\n\n", "msg_date": "Mon, 22 Jan 2024 13:59:00 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add \\syncpipeline command to pgbench" }, { "msg_contents": "On 2024-Jan-22, Anthonin Bonnefoy wrote:\n\n> 0001 introduces a new error when the end of a pgbench script is\n> reached while there's still an ongoing pipeline.\n\nPushed, backpatched to 14. I reworded the error message to be\n\n client %d aborted: end of script reached with pipeline open\n\nI hope this is OK. 
I debated a half a dozen alternatives (\"with open\npipeline\", \"without closing pipeline\", \"with unclosed pipeline\" (???),\n\"leaving pipeline open\") and decided this was the least ugly.\n\nThanks,\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n<inflex> really, I see PHP as like a strange amalgamation of C, Perl, Shell\n<crab> inflex: you know that \"amalgam\" means \"mixture with mercury\",\n more or less, right?\n<crab> i.e., \"deadly poison\"\n\n\n", "msg_date": "Mon, 22 Jan 2024 17:53:13 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add \\syncpipeline command to pgbench" }, { "msg_contents": "On Mon, Jan 22, 2024 at 05:53:13PM +0100, Alvaro Herrera wrote:\n> I hope this is OK. I debated a half a dozen alternatives (\"with open\n> pipeline\", \"without closing pipeline\", \"with unclosed pipeline\" (???),\n> \"leaving pipeline open\") and decided this was the least ugly.\n\nThat looks OK to me. Thanks for looking at that!\n--\nMichael", "msg_date": "Tue, 23 Jan 2024 12:57:09 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add \\syncpipeline command to pgbench" }, { "msg_contents": "On Mon, Jan 22, 2024 at 01:59:00PM +0100, Alvaro Herrera wrote:\n> On 2024-Jan-22, Anthonin Bonnefoy wrote:\n>> 0002 adds the \\syncpipeline command (original patch with an additional\n>> test case).\n> \n> I can look into this one later, unless Michael wants to.\n\nThe patch seemed rather OK at quick glance as long as there is a test\nto check for error path with a \\syncpipeline still on the stack of\nmetacommands to handle.\nAnyway, I wanted to study this one and learn a bit more about the\nerror stuff that was happening on pgbench side. Now, if you feel\nstrongly about it, please go ahead!\n--\nMichael", "msg_date": "Tue, 23 Jan 2024 13:08:24 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add \\syncpipeline command to pgbench" }, { "msg_contents": "On Tue, Jan 23, 2024 at 01:08:24PM +0900, Michael Paquier wrote:\n> Anyway, I wanted to study this one and learn a bit more about the\n> error stuff that was happening on pgbench side.\n\nWell, I've spend some time studying this part, and the error handling\nwas looking correct based on the safety measures added in\n49f7c6c44a5f, so I've just moved on and applied it.\n--\nMichael", "msg_date": "Wed, 24 Jan 2024 17:13:53 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add \\syncpipeline command to pgbench" } ]
[ { "msg_contents": "Hi,\n\nIn [1], David and I talked about a requirement that a user just want\nto unset the all bits in a Bitmapset but keep the allocated memory\nun-deallocated for later use. It is impossible for the current\nBitmapset. So David suggested a Bitset struct for this purpose. I start\nthis new thread so that the original thread can focus on its own\npurpose.\n\ncommit 0ee7e4789e58d6820e4c1ff62979910c0b01cdbb (HEAD -> s_stuck_v2)\nAuthor: yizhi.fzh <[email protected]>\nDate: Thu Jan 18 16:52:30 2024 +0800\n\n Introduce a Bitset data struct.\n \n While Bitmapset is designed for variable-length of bits, Bitset is\n designed for fixed-length of bits, the fixed length must be specified at\n the bitset_init stage and keep unchanged at the whole lifespan. Because\n of this, some operations on Bitset is simpler than Bitmapset.\n \n The bitset_clear unsets all the bits but kept the allocated memory, this\n capacity is impossible for bit Bitmapset for some solid reasons.\n \n Also for performance aspect, the functions for Bitset removed some\n unlikely checks, instead with some Asserts.\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAApHDvpdp9LyAoMXvS7iCX-t3VonQM3fTWCmhconEvORrQ%2BZYA%40mail.gmail.com\n\nAny feedback is welcome.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Thu, 18 Jan 2024 20:07:34 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Make a Bitset which is resetable" } ]
[ { "msg_contents": "Hello, hackers,\n\nI believe there is a small problem in the 039_end_of_wal.pl's \n\"xl_tot_len zero\" test. It supposes that after immediate shutdown the \nserver, upon startup recovery, should always produce a message matching \n\"invalid record length at .*: wanted 24, got 0\". However, if the \ncircumstances are just right and we happened to hit exactly on the edge \nof WAL page, then the message on startup recovery would be \"invalid \nmagic number 0000 in log segment .*, offset .*\". The test does not take \nthat into account.\n\nNow, reproducing this is somewhat tricky, because exact position in WAL \nat the test time depends on what exactly initdb did, and that not only \ndiffers in different major verisons, but also depends on e.g. username \nlength, locales available, and, perhaps, more. Even though originally \nthis problem was found \"in the wild\" on one particular system on one \nparticular code branch, I've written small helper patch to make \nreproduction on master easier, see \n0001-repro-for-039_end_of_wal-s-problem-with-page-end.patch.\n\nThis patch adds single emit_message of (hopefully) the right size to \nmake sure we hit end of WAL block right by the time we call \n$node->stop('immediate') in \"xl_tot_len zero\" test. With this patch, \n\"xl_tot_len zero\" test fails every time because the server writes \n\"invalid magic number 0000 in log segment\" while the test still only \nexpects \"invalid record length at .*: wanted 24, got 0\". If course, this \n0001 patch is *not* meant to be committed, but only as an issue \nreproduction helper.\n\nI can think of two possible fixes:\n\n1. Update advance_out_of_record_splitting_zone to also avoid stopping at\n exactly the block end:\n\n my $page_offset = $end_lsn % $WAL_BLOCK_SIZE;\n- while ($page_offset >= $WAL_BLOCK_SIZE - $page_threshold)\n+ while ($page_offset >= $WAL_BLOCK_SIZE - $page_threshold || \n$page_offset <= $SizeOfXLogShortPHD)\n {\nsee 0002-fix-xl_tot_len-zero-test-amend-advance_out_of.patch\n\nWe need to compare with $SizeOfXLogShortPHD (and not with zero) because \nat that point, even though we didn't actually write out new WAL page\nyet, it's header is already in place in memory and taken in account\nfor LSN reporting.\n\n2. Alternatively, amend \"xl_tot_len zero\" test to expect \"invalid magic\n number 0000 in WAL segment\" message as well:\n\n $node->start;\n ok( $node->log_contains(\n+ \"invalid magic number 0000 in WAL segment|\" .\n \"invalid record length at .*: expected at least 24, got 0\", \n$log_size\n ),\nsee 0003-alt.fix-for-xl_tot_len-zero-test-accept-invalid.patch\n\nI think it makes sense to backport whatever the final change would be to \nall branches with 039_end_of_wal (REL_12+).\n\nAny thoughts?\n\nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru", "msg_date": "Thu, 18 Jan 2024 15:47:22 +0300", "msg_from": "Anton Voloshin <[email protected]>", "msg_from_op": true, "msg_subject": "039_end_of_wal: error in \"xl_tot_len zero\" test" }, { "msg_contents": "On Fri, Jan 19, 2024 at 1:47 AM Anton Voloshin\n<[email protected]> wrote:\n> I believe there is a small problem in the 039_end_of_wal.pl's\n> \"xl_tot_len zero\" test. It supposes that after immediate shutdown the\n> server, upon startup recovery, should always produce a message matching\n> \"invalid record length at .*: wanted 24, got 0\". 
However, if the\n> circumstances are just right and we happened to hit exactly on the edge\n> of WAL page, then the message on startup recovery would be \"invalid\n> magic number 0000 in log segment .*, offset .*\". The test does not take\n> that into account.\n\nHi Anton,\n\nThanks for figuring this out! Right, I see. I will look more closely\nwhen I'm back from summer vacation in a few days, but first reaction:\n\n> Now, reproducing this is somewhat tricky, because exact position in WAL\n> at the test time depends on what exactly initdb did, and that not only\n> differs in different major verisons, but also depends on e.g. username\n> length, locales available, and, perhaps, more. Even though originally\n> this problem was found \"in the wild\" on one particular system on one\n> particular code branch, I've written small helper patch to make\n> reproduction on master easier, see\n> 0001-repro-for-039_end_of_wal-s-problem-with-page-end.patch.\n>\n> This patch adds single emit_message of (hopefully) the right size to\n> make sure we hit end of WAL block right by the time we call\n> $node->stop('immediate') in \"xl_tot_len zero\" test. With this patch,\n> \"xl_tot_len zero\" test fails every time because the server writes\n> \"invalid magic number 0000 in log segment\" while the test still only\n> expects \"invalid record length at .*: wanted 24, got 0\". If course, this\n> 0001 patch is *not* meant to be committed, but only as an issue\n> reproduction helper.\n>\n> I can think of two possible fixes:\n>\n> 1. Update advance_out_of_record_splitting_zone to also avoid stopping at\n> exactly the block end:\n>\n> my $page_offset = $end_lsn % $WAL_BLOCK_SIZE;\n> - while ($page_offset >= $WAL_BLOCK_SIZE - $page_threshold)\n> + while ($page_offset >= $WAL_BLOCK_SIZE - $page_threshold ||\n> $page_offset <= $SizeOfXLogShortPHD)\n> {\n> see 0002-fix-xl_tot_len-zero-test-amend-advance_out_of.patch\n>\n> We need to compare with $SizeOfXLogShortPHD (and not with zero) because\n> at that point, even though we didn't actually write out new WAL page\n> yet, it's header is already in place in memory and taken in account\n> for LSN reporting.\n\nI like the fact that this preserves the same end-of-WAL case that\nwe're trying to test. I don't yet have an opinion on the best way to\ndo it though. Would it be enough to add emit_message($node, 0) after\nadvance_out_of_record_splitting_zone()? The thing about this one\nspecific test that is different from the later ones is that it doesn't\nactually write a record header at all, it was relying purely on\npre-existing trailing zeroes, but it assumed the page header would be\nvalid. As you figured out, that isn't true if we were right on the\npage boundary. Perhaps advance_out_of_record_splitting_zone()\nfollowed by emit_message(0) would make that always true, even then?\n\n> 2. 
Alternatively, amend \"xl_tot_len zero\" test to expect \"invalid magic\n> number 0000 in WAL segment\" message as well:\n>\n> $node->start;\n> ok( $node->log_contains(\n> + \"invalid magic number 0000 in WAL segment|\" .\n> \"invalid record length at .*: expected at least 24, got 0\",\n> $log_size\n> ),\n> see 0003-alt.fix-for-xl_tot_len-zero-test-accept-invalid.patch\n\nTolerating the two different messages would weaken the test.\n\n> I think it makes sense to backport whatever the final change would be to\n> all branches with 039_end_of_wal (REL_12+).\n\n+1\n\n\n", "msg_date": "Fri, 19 Jan 2024 11:35:30 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 039_end_of_wal: error in \"xl_tot_len zero\" test" }, { "msg_contents": "On Fri, Jan 19, 2024 at 11:35:30AM +1300, Thomas Munro wrote:\n> On Fri, Jan 19, 2024 at 1:47 AM Anton Voloshin\n> <[email protected]> wrote:\n>> I believe there is a small problem in the 039_end_of_wal.pl's\n>> \"xl_tot_len zero\" test. It supposes that after immediate shutdown the\n>> server, upon startup recovery, should always produce a message matching\n>> \"invalid record length at .*: wanted 24, got 0\". However, if the\n>> circumstances are just right and we happened to hit exactly on the edge\n>> of WAL page, then the message on startup recovery would be \"invalid\n>> magic number 0000 in log segment .*, offset .*\". The test does not take\n>> that into account.\n> \n> Thanks for figuring this out! Right, I see. I will look more closely\n> when I'm back from summer vacation in a few days, but first reaction:\n\nThomas, are you planning to look at that?\n--\nMichael", "msg_date": "Thu, 15 Feb 2024 13:28:56 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 039_end_of_wal: error in \"xl_tot_len zero\" test" }, { "msg_contents": "Hello, Thomas,\n\nOn 19/01/2024 01:35, Thomas Munro wrote:\n> I don't yet have an opinion on the best way to\n> do it though. Would it be enough to add emit_message($node, 0) after\n> advance_out_of_record_splitting_zone()?\n\nYes, indeed that seems to be enough. At least I could not produce any \nmore \"xl_tot_len zero\" failures with that addition.\n\nI like this solution the best.\n\n> Tolerating the two different messages would weaken the test.\n\nI agree, I also don't really like this option.\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru\n\n\n", "msg_date": "Thu, 15 Feb 2024 12:40:37 +0300", "msg_from": "Anton Voloshin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 039_end_of_wal: error in \"xl_tot_len zero\" test" }, { "msg_contents": "On Thu, Feb 15, 2024 at 10:40 PM Anton Voloshin\n<[email protected]> wrote:\n> On 19/01/2024 01:35, Thomas Munro wrote:\n> > I don't yet have an opinion on the best way to\n> > do it though. Would it be enough to add emit_message($node, 0) after\n> > advance_out_of_record_splitting_zone()?\n>\n> Yes, indeed that seems to be enough. At least I could not produce any\n> more \"xl_tot_len zero\" failures with that addition.\n>\n> I like this solution the best.\n\nOh, it looks like this new build farm animal \"skimmer\" might be\nreminding us about this issue:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=skimmer&br=HEAD\n\nI don't know why it changed, but since this is an LSN/page alignment\nthing, it could be due to external things like an OS upgrade adding\nmore locales or something that affects initdb. 
Will look soon and\nfix.\n\n\n", "msg_date": "Mon, 6 May 2024 14:51:25 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 039_end_of_wal: error in \"xl_tot_len zero\" test" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> Oh, it looks like this new build farm animal \"skimmer\" might be\n> reminding us about this issue:\n> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=skimmer&br=HEAD\n> I don't know why it changed,\n\nAt this point it seems indisputable that 7d2c7f08d9 is what broke\nskimmer, but that didn't go anywhere near WAL-related code, so how?\n\nMy best guess is that that changed the amount of WAL generated by\ninitdb just enough to make the problem reproduce on this animal.\nHowever, why's it *only* happening on this animal? The amount of\nWAL we generate isn't all that system-specific.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 05 May 2024 23:05:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 039_end_of_wal: error in \"xl_tot_len zero\" test" }, { "msg_contents": "On Mon, 6 May 2024 at 15:06, Tom Lane <[email protected]> wrote:\n> My best guess is that that changed the amount of WAL generated by\n> initdb just enough to make the problem reproduce on this animal.\n> However, why's it *only* happening on this animal? The amount of\n> WAL we generate isn't all that system-specific.\n\nI'd say that's a good theory as it's now passing again [1] after the\nrecent system_views.sql change done in 521a7156ab.\n\nDavid\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skimmer&dt=2024-05-06%2017%3A43%3A38\n\n\n", "msg_date": "Mon, 13 May 2024 08:58:35 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 039_end_of_wal: error in \"xl_tot_len zero\" test" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Mon, 6 May 2024 at 15:06, Tom Lane <[email protected]> wrote:\n>> My best guess is that that changed the amount of WAL generated by\n>> initdb just enough to make the problem reproduce on this animal.\n>> However, why's it *only* happening on this animal? The amount of\n>> WAL we generate isn't all that system-specific.\n\n> I'd say that's a good theory as it's now passing again [1] after the\n> recent system_views.sql change done in 521a7156ab.\n\nHm. It occurs to me that there *is* a system-specific component to\nthe amount of WAL emitted during initdb: the number of locales\nthat \"locale -a\" prints translates directly to the number of\nrows inserted into pg_collation. So maybe skimmer has a locale\nset that's a bit different from anybody else's, and that's what\nlet it see this issue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 12 May 2024 17:39:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 039_end_of_wal: error in \"xl_tot_len zero\" test" }, { "msg_contents": "On 13/05/2024 00:39, Tom Lane wrote:\n> Hm. It occurs to me that there *is* a system-specific component to\n> the amount of WAL emitted during initdb: the number of locales\n> that \"locale -a\" prints translates directly to the number of\n> rows inserted into pg_collation. [...]\n\nYes. Another system-specific circumstance affecting WAL position is the \nname length of the unix user doing initdb. 
I've seen 039_end_of_wal \nfailing consistently under user aaaaaaaa but working fine with aaaa, \nboth on the same machine at the same time.\n\nTo be more precise, on one particular machine under those particular \ncircumstances (including set of locales) it would work for any username \nwith length < 8 or >= 16, but would fail for length 8..15 (in bytes, not \ncharacters, if non-ASCII usernames were used).\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru\n\n\n", "msg_date": "Mon, 13 May 2024 12:49:54 +0300", "msg_from": "Anton Voloshin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 039_end_of_wal: error in \"xl_tot_len zero\" test" }, { "msg_contents": "I am seeing the exact problem described in this thread on my laptop since\ncommit 490f869. I have yet to do a thorough investigation, but what I've\nseen thus far does seem to fit the subtle-differences-in-generated-WAL\ntheory. If no one is planning to pick up the fix soon, I will try to.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 23 Aug 2024 17:33:10 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 039_end_of_wal: error in \"xl_tot_len zero\" test" }, { "msg_contents": "On Sat, Aug 24, 2024 at 10:33 AM Nathan Bossart\n<[email protected]> wrote:\n> I am seeing the exact problem described in this thread on my laptop since\n> commit 490f869. I have yet to do a thorough investigation, but what I've\n> seen thus far does seem to fit the subtle-differences-in-generated-WAL\n> theory. If no one is planning to pick up the fix soon, I will try to.\n\nSorry for dropping that. It looks like we know approximately how to\nstabilise it, and I'll look at it early next week if you don't beat me\nto it, but please feel free if you would like to.\n\n\n", "msg_date": "Sat, 24 Aug 2024 10:43:00 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 039_end_of_wal: error in \"xl_tot_len zero\" test" }, { "msg_contents": "On Sat, Aug 24, 2024 at 10:43 AM Thomas Munro <[email protected]> wrote:\n> On Sat, Aug 24, 2024 at 10:33 AM Nathan Bossart\n> <[email protected]> wrote:\n> > I am seeing the exact problem described in this thread on my laptop since\n> > commit 490f869. I have yet to do a thorough investigation, but what I've\n> > seen thus far does seem to fit the subtle-differences-in-generated-WAL\n> > theory. If no one is planning to pick up the fix soon, I will try to.\n>\n> Sorry for dropping that. It looks like we know approximately how to\n> stabilise it, and I'll look at it early next week if you don't beat me\n> to it, but please feel free if you would like to.\n\nIt fails reliably if you nail down the initial conditions like this:\n\n $TLI = $node->safe_psql('postgres',\n \"SELECT timeline_id FROM pg_control_checkpoint();\");\n\n+$node->safe_psql('postgres', \"SELECT pg_switch_wal();\");\n+emit_message($node, 7956);\n+\n my $end_lsn;\n my $prev_lsn;\n\nThe fix I propose to commit shortly is just the first of those new\nlines, to homogenise the initial state. See attached. 
The previous\nidea works too, I think, but this bigger hammer is more obviously\nremoving variation.", "msg_date": "Thu, 29 Aug 2024 17:41:36 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 039_end_of_wal: error in \"xl_tot_len zero\" test" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> The fix I propose to commit shortly is just the first of those new\n> lines, to homogenise the initial state. See attached. The previous\n> idea works too, I think, but this bigger hammer is more obviously\n> removing variation.\n\n+1, but a comment explaining the need for the pg_switch_wal call\nseems in order.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 29 Aug 2024 01:55:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 039_end_of_wal: error in \"xl_tot_len zero\" test" }, { "msg_contents": "On Thu, Aug 29, 2024 at 01:55:27AM -0400, Tom Lane wrote:\n> Thomas Munro <[email protected]> writes:\n>> The fix I propose to commit shortly is just the first of those new\n>> lines, to homogenise the initial state. See attached. The previous\n>> idea works too, I think, but this bigger hammer is more obviously\n>> removing variation.\n> \n> +1, but a comment explaining the need for the pg_switch_wal call\n> seems in order.\n\n+1\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 29 Aug 2024 09:34:32 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 039_end_of_wal: error in \"xl_tot_len zero\" test" }, { "msg_contents": "Pushed. Thanks!\n\n\n", "msg_date": "Sat, 31 Aug 2024 15:10:28 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 039_end_of_wal: error in \"xl_tot_len zero\" test" } ]
[ { "msg_contents": "Hi all,\n\nI think the correct placeholder for var *startoff* should be *%d*.\nThanks for your time.\n\nBest\n\nYongtao Huang", "msg_date": "Thu, 18 Jan 2024 21:32:55 +0800", "msg_from": "Yongtao Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Fix incorrect format placeholders in walreceiver.c" }, { "msg_contents": "On Thu, Jan 18, 2024 at 09:32:55PM +0800, Yongtao Huang wrote:\n> I think the correct placeholder for var *startoff* should be *%d*.\n> Thanks for your time.\n\nThanks, fixed.\n--\nMichael", "msg_date": "Fri, 19 Jan 2024 13:24:03 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix incorrect format placeholders in walreceiver.c" } ]
[ { "msg_contents": "Hello hackers,\n\n[ reporting this bug here due to limitations of the bug reporting form ]\n\nWhen a node, that acted as a primary, becomes a standby as in the\nfollowing script:\n[ ... some WAL-logged activity ... ]\n$primary->teardown_node;\n$standby->promote;\n\n$primary->enable_streaming($standby);\n$primary->start;\n\nit might not go online, due to the error:\nnew timeline N forked off current database system timeline M before current recovery point X/X\n\nA complete TAP test is attached.\nI put it in src/test/recovery/t, run as follows:\nfor i in `seq 100`; do echo \"iteration $i\"; timeout 60 make -s check -C src/test/recovery PROVE_TESTS=\"t/099*\" || break; \ndone\nand get:\n...\niteration 7\n# +++ tap check in src/test/recovery +++\nt/099_change_roles.pl .. ok\nAll tests successful.\nFiles=1, Tests=20, 14 wallclock secs ( 0.01 usr  0.00 sys +  4.20 cusr  4.75 csys =  8.96 CPU)\nResult: PASS\niteration 8\n# +++ tap check in src/test/recovery +++\nt/099_change_roles.pl .. 9/? make: *** [Makefile:23: check] Terminated\n\nWith wal_debug enabled (and log_min_messages=DEBUG2, log_statement=all),\nI see the following in the _node1.log:\n2024-01-18 15:21:02.258 UTC [663701] 099_change_roles.pl LOG: INSERT @ 0/304DBF0:  - Transaction/COMMIT: 2024-01-18 \n15:21:02.258739+00\n2024-01-18 15:21:02.258 UTC [663701] 099_change_roles.pl STATEMENT: INSERT INTO t VALUES (10, 'inserted on node1');\n2024-01-18 15:21:02.258 UTC [663701] 099_change_roles.pl LOG:  xlog flush request 0/304DBF0; write 0/0; flush 0/0\n2024-01-18 15:21:02.258 UTC [663701] 099_change_roles.pl STATEMENT: INSERT INTO t VALUES (10, 'inserted on node1');\n2024-01-18 15:21:02.258 UTC [663671] node2 DEBUG:  write 0/304DBF0 flush 0/304DB78 apply 0/304DB78 reply_time 2024-01-18 \n15:21:02.2588+00\n2024-01-18 15:21:02.258 UTC [663671] node2 DEBUG:  write 0/304DBF0 flush 0/304DBF0 apply 0/304DB78 reply_time 2024-01-18 \n15:21:02.258809+00\n2024-01-18 15:21:02.258 UTC [663671] node2 DEBUG:  write 0/304DBF0 flush 0/304DBF0 apply 0/304DBF0 reply_time 2024-01-18 \n15:21:02.258864+00\n2024-01-18 15:21:02.259 UTC [663563] DEBUG:  server process (PID 663701) exited with exit code 0\n2024-01-18 15:21:02.260 UTC [663563] DEBUG:  forked new backend, pid=663704 socket=8\n2024-01-18 15:21:02.261 UTC [663704] 099_change_roles.pl LOG: statement: INSERT INTO t VALUES (1000 * 1 + 608, \n'background activity');\n2024-01-18 15:21:02.261 UTC [663704] 099_change_roles.pl LOG: INSERT @ 0/304DC40:  - Heap/INSERT: off: 12, flags: 0x00\n2024-01-18 15:21:02.261 UTC [663704] 099_change_roles.pl STATEMENT: INSERT INTO t VALUES (1000 * 1 + 608, 'background \nactivity');\n2024-01-18 15:21:02.261 UTC [663563] DEBUG:  postmaster received shutdown request signal\n2024-01-18 15:21:02.261 UTC [663563] LOG:  received immediate shutdown request\n2024-01-18 15:21:02.261 UTC [663704] 099_change_roles.pl LOG: INSERT @ 0/304DC68:  - Transaction/COMMIT: 2024-01-18 \n15:21:02.261828+00\n2024-01-18 15:21:02.261 UTC [663704] 099_change_roles.pl STATEMENT: INSERT INTO t VALUES (1000 * 1 + 608, 'background \nactivity');\n2024-01-18 15:21:02.261 UTC [663704] 099_change_roles.pl LOG:  xlog flush request 0/304DC68; write 0/0; flush 0/0\n2024-01-18 15:21:02.261 UTC [663704] 099_change_roles.pl STATEMENT: INSERT INTO t VALUES (1000 * 1 + 608, 'background \nactivity');\n...\n2024-01-18 15:21:02.262 UTC [663563] LOG:  database system is shut down\n...\n2024-01-18 15:21:02.474 UTC [663810] LOG:  starting PostgreSQL 16.1 on x86_64-pc-linux-gnu, compiled by gcc 
(Ubuntu \n11.3.0-1ubuntu1~22.04) 11.3.0, 64-bit\n...\n2024-01-18 15:21:02.478 UTC [663816] LOG:  REDO @ 0/304DBC8; LSN 0/304DBF0: prev 0/304DB78; xid 898; len 8 - \nTransaction/COMMIT: 2024-01-18 15:21:02.258739+00\n2024-01-18 15:21:02.478 UTC [663816] LOG:  REDO @ 0/304DBF0; LSN 0/304DC40: prev 0/304DBC8; xid 899; len 3; blkref #0: \nrel 1663/5/16384, blk 1 - Heap/INSERT: off: 12, flags: 0x00\n2024-01-18 15:21:02.478 UTC [663816] LOG:  REDO @ 0/304DC40; LSN 0/304DC68: prev 0/304DBF0; xid 899; len 8 - \nTransaction/COMMIT: 2024-01-18 15:21:02.261828+00\n...\n2024-01-18 15:21:02.481 UTC [663819] LOG:  fetching timeline history file for timeline 20 from primary server\n2024-01-18 15:21:02.481 UTC [663819] LOG:  started streaming WAL from primary at 0/3000000 on timeline 19\n...\n2024-01-18 15:21:02.481 UTC [663819] DETAIL:  End of WAL reached on timeline 19 at 0/304DBF0.\n...\n2024-01-18 15:21:02.481 UTC [663816] LOG:  new timeline 20 forked off current database system timeline 19 before current \nrecovery point 0/304DC68\n\nIn this case, node1 wrote to it's WAL record 0/304DC68, but sent to node2\nonly record 0/304DBF0, then node2, being promoted to primary, forked a next\ntimeline from it, but when node1 was started as a standby, it first\nreplayed 0/304DC68 from WAL, and then could not switch to the new timeline\nstarting from the previous position.\n\nReproduced on REL_12_STABLE .. master.\n\nBest regards,\nAlexander", "msg_date": "Thu, 18 Jan 2024 21:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "BUG: Former primary node might stuck when started as a standby" }, { "msg_contents": "Hi,\n\n> it might not go online, due to the error:\n> new timeline N forked off current database system timeline M before current recovery point X/X\n> [...]\n> In this case, node1 wrote to it's WAL record 0/304DC68, but sent to node2\n> only record 0/304DBF0, then node2, being promoted to primary, forked a next\n> timeline from it, but when node1 was started as a standby, it first\n> replayed 0/304DC68 from WAL, and then could not switch to the new timeline\n> starting from the previous position.\n\nUnless I'm missing something, this is just the right behavior of the system.\n\nnode1 has no way of knowing the history of node1/node2/nodeN\npromotion. It sees that it has more data and/or inconsistent timeline\nwith another node and refuses to process further until DBA will\nintervene. What else can node1 do, drop the data? 
That's not how\nthings are done in Postgres :) What if this is a very important data\nand node2 was promoted mistakenly, either manually or by a buggy\nscript.\n\nIt's been a while since I seriously played with replication, but if\nmemory serves, a proper way to switch node1 to a replica mode would be\nto use pg_rewind on it first.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 19 Jan 2024 14:45:02 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG: Former primary node might stuck when started as a standby" }, { "msg_contents": "Hi Aleksander,\n\n19.01.2024 14:45, Aleksander Alekseev wrote:\n>\n>> it might not go online, due to the error:\n>> new timeline N forked off current database system timeline M before current recovery point X/X\n>> [...]\n>> In this case, node1 wrote to it's WAL record 0/304DC68, but sent to node2\n>> only record 0/304DBF0, then node2, being promoted to primary, forked a next\n>> timeline from it, but when node1 was started as a standby, it first\n>> replayed 0/304DC68 from WAL, and then could not switch to the new timeline\n>> starting from the previous position.\n> Unless I'm missing something, this is just the right behavior of the system.\n\nThank you for the answer!\n\n> node1 has no way of knowing the history of node1/node2/nodeN\n> promotion. It sees that it has more data and/or inconsistent timeline\n> with another node and refuses to process further until DBA will\n> intervene.\n\nBut node1 knows that it's a standby now and it's expected to get all the\nWAL records from the primary, doesn't it?\nMaybe it could REDO from it's own WAL as little records as possible,\nbefore requesting records from the authoritative source...\nIs it supposed that it's more performance-efficient (not on the first\nrestart, but on later ones)?\n\n> What else can node1 do, drop the data? That's not how\n> things are done in Postgres :)\n\nIn case no other options exist (this behavior is really correct and the\nonly possible), maybe the server should just stop?\nCan DBA intervene somehow to make the server proceed without stopping it?\n\n> It's been a while since I seriously played with replication, but if\n> memory serves, a proper way to switch node1 to a replica mode would be\n> to use pg_rewind on it first.\n\nPerhaps that's true generally, but as we can see, without the extra\nrecords replayed, this scenario works just fine. Moreover, existing tests\nrely on it, e.g., 009_twophase.pl or 012_subtransactions.pl (in fact, my\nresearch of the issue was initiated per a test failure).\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 19 Jan 2024 17:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BUG: Former primary node might stuck when started as a standby" }, { "msg_contents": "Hi,\n\n> But node1 knows that it's a standby now and it's expected to get all the\n> WAL records from the primary, doesn't it?\n\nYes, but node1 doesn't know if it always was a standby or not. What if\nnode1 was always a standby, node2 was a primary, then node2 died and\nnode3 is a new primary. If node1 sees inconsistency in the WAL\nrecords, it should report it and stop doing anything, since it doesn't\nhas all the information needed to resolve the inconsistencies in all\nthe possible cases. 
Only DBA has this information.\n\n> > It's been a while since I seriously played with replication, but if\n> > memory serves, a proper way to switch node1 to a replica mode would be\n> > to use pg_rewind on it first.\n>\n> Perhaps that's true generally, but as we can see, without the extra\n> records replayed, this scenario works just fine. Moreover, existing tests\n> rely on it, e.g., 009_twophase.pl or 012_subtransactions.pl (in fact, my\n> research of the issue was initiated per a test failure).\n\nI suggest focusing on particular flaky tests then and how to fix them.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 22 Jan 2024 14:00:45 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG: Former primary node might stuck when started as a standby" }, { "msg_contents": "Hi Aleksander,\n\n[ I'm writing this off-list to minimize noise, but we can continue the discussion in -hackers, if you wish ]\n\n22.01.2024 14:00, Aleksander Alekseev wrote:\n\n> Hi,\n>\n>> But node1 knows that it's a standby now and it's expected to get all the\n>> WAL records from the primary, doesn't it?\n> Yes, but node1 doesn't know if it always was a standby or not. What if\n> node1 was always a standby, node2 was a primary, then node2 died and\n> node3 is a new primary.\n\nExcuse me, but I still can't understand what could go wrong in this case.\nLet's suppose, node1 has WAL with the following contents before start:\nCPLOC | TL1R1 | TL1R2 | TL1R3 |\n\nwhile node2's WAL contains:\nTL1R1 | TL2R1 | TL2R2 | ...\n\nwhere CPLOC -- a checkpoint location, TLxRy -- a record y on a timeline x.\n\nI assume that requesting all WAL records from node2 without redoing local\nrecords should be the right thing.\n\nAnd even in the situation you propose:\nCPLOC | TL2R5 | TL2R6 | TL2R7 |\n\nwhile node3's WAL contains:\nTL2R5 | TL3R1 | TL3R2 | ...\n\nI see no issue with applying records from node3...\n\n> If node1 sees inconsistency in the WAL\n> records, it should report it and stop doing anything, since it doesn't\n> has all the information needed to resolve the inconsistencies in all\n> the possible cases. Only DBA has this information.\n\nI still wonder, what can be considered an inconsistency in this situation.\nDoesn't the exactly redo of all the local WAL records create the\ninconsistency here?\nFor me, it's the question of an authoritative source, and if we had such a\nsource, we should trust it's records only.\n\nOr in the other words, what if the record TL1R3, which node1 wrote to it's\nWAL, but didn't send to node2, happened to have an incorrect checksum (due\nto partial write, for example)?\nIf I understand correctly, node1 will just stop redoing WAL at that\nposition to receive all the following records from node2 and move forward\nwithout reporting the inconsistency (an extra WAL record).\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 23 Jan 2024 14:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BUG: Former primary node might stuck when started as a standby" } ]
[ { "msg_contents": "This issue has been reported in the <pgsql-bugs&gt; list at the below link, but\r\nreceived almost no response:\r\nhttps://www.postgresql.org/message-id/18280-4c8060178cb41750%40postgresql.org\r\nHoping for some feedback from kernel hackers, thanks!\r\n\r\n\r\nHi, hackers,\r\nI've encountered a problem with logical decoding history snapshots. The\r\nspecific error message is: \"ERROR: could not map filenode \"base/5/16390\" to\r\nrelation OID\".\r\n\r\n\r\nIf a subtransaction that modified the catalog ends before the\r\nrestart_lsn of the logical replication slot, and the commit WAL record of\r\nits top transaction is after the restart_lsn, the WAL record related to the\r\nsubtransaction won't be decoded during logical decoding. Therefore, the\r\nsubtransaction won't be marked as having modified the catalog, resulting in\r\nits absence from the snapshot's committed list.\r\n\r\n\r\nThe issue seems to be caused by SnapBuildXidSetCatalogChanges\r\n(introduced in 272248a) skipping checks for subtransactions when the top\r\ntransaction is marked as containing catalog changes.\r\n\r\n\r\nThe following steps can reproduce the problem (I increased the value of\r\nLOG_SNAPSHOT_INTERVAL_MS to avoid the impact of bgwriter writing\r\nXLOG_RUNNING_XACTS WAL records):\r\nsession 1:\r\n```\r\nCREATE TABLE tbl1 (val1 integer, val2 integer);\r\nCREATE TABLE tbl1_part (val1 integer) PARTITION BY RANGE (val1);\r\n\r\n\r\nSELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding');\r\n\r\n\r\nBEGIN;\r\nSAVEPOINT sp1;\r\nCREATE TABLE tbl1_part_p1 PARTITION OF tbl1_part FOR VALUES FROM (0) TO (10);\r\nRELEASE SAVEPOINT sp1;\r\n```\r\n\r\n\r\nsession 2:\r\n```\r\nCHECKPOINT;\r\n```\r\n\r\n\r\nsession 1:\r\n```\r\nCREATE TABLE tbl1_part_p2 PARTITION OF tbl1_part FOR VALUES FROM (10) TO (20);\r\nCOMMIT;\r\nBEGIN;\r\nTRUNCATE tbl1;\r\n```\r\n\r\n\r\nsession 2:\r\n```\r\nCHECKPOINT;\r\nSELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '0');\r\nINSERT INTO tbl1_part VALUES (1);\r\nSELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '0');\r\n```\r\n\r\n\r\n\r\n\r\nTo fix this issue, it is sufficient to remove the condition check for\r\nReorderBufferXidHasCatalogChanges in SnapBuildXidSetCatalogChanges.\r\n\r\n\r\nThis fix may add subtransactions that didn't change the catalog to the commit\r\nlist, which seems like a false positive. 
However, this is acceptable since\r\nwe only use the snapshot built during decoding to read system catalogs, as\r\nstated in 272248a's commit message.\r\n\r\n\r\nI have verified that the patch in the attachment resolves the issues\r\nmentioned, and I added some test cases.\r\n\r\n\r\nI am eager to hear your suggestions on this!\r\n\r\n\r\n\r\n\r\nBest Regards,\r\nFei Changhong\r\nAlibaba Cloud Computing Ltd.\r\n\r\n\r\n&nbsp;", "msg_date": "Fri, 19 Jan 2024 15:35:24 +0800", "msg_from": "\"=?gb18030?B?ZmVpY2hhbmdob25n?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "logical decoding build wrong snapshot with subtransactions" }, { "msg_contents": "On Fri, Jan 19, 2024 at 1:05 PM feichanghong <[email protected]> wrote:\n>\n> This issue has been reported in the <pgsql-bugs> list at the below link, but\n> received almost no response:\n> https://www.postgresql.org/message-id/18280-4c8060178cb41750%40postgresql.org\n> Hoping for some feedback from kernel hackers, thanks!\n>\n\nThanks for the report and analysis. I have responded to the original\nthread. Let's discuss it there.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 23 Jan 2024 17:14:58 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: logical decoding build wrong snapshot with subtransactions" } ]
[ { "msg_contents": "This patch fixes two things in the function fetch_remote_table_info().\n\n(1) *pfree(pub_names.data)* to avoid potential memory leaks.\n(2) 2 pieces of code can do the same work,\n\n``` C\nforeach(lc, MySubscription->publications)\n{\n if (foreach_current_index(lc) > 0)\n appendStringInfoString(&pub_names, \", \");\n appendStringInfoString(&pub_names,\nquote_literal_cstr(strVal(lfirst(lc))));\n}\n```\nand\n``` C\nforeach_node(String, pubstr, MySubscription->publications)\n{\n char *pubname = strVal(pubstr);\n\n if (foreach_current_index(pubstr) > 0)\n appendStringInfoString(&pub_names, \", \");\n\n appendStringInfoString(&pub_names, quote_literal_cstr(pubname));\n}\n```\nI wanna integrate them into one function `make_pubname_list()` to make the\ncode neater.\n\nThanks for your time.\n\nRegards\n\nYongtao Huang", "msg_date": "Fri, 19 Jan 2024 22:42:46 +0800", "msg_from": "Yongtao Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Optimize duplicate code and fix memory leak in function\n fetch_remote_table_info()" }, { "msg_contents": "On Fri, Jan 19, 2024 at 10:42:46PM +0800, Yongtao Huang wrote:\n> This patch fixes two things in the function fetch_remote_table_info().\n> \n> (1) *pfree(pub_names.data)* to avoid potential memory leaks.\n\nTrue that this code puts some effort in cleaning up the memory used\nlocally.\n\n> (2) 2 pieces of code can do the same work,\n> ```\n> I wanna integrate them into one function `make_pubname_list()` to make the\n> code neater.\n\nIt does not strike me as a huge problem to let the code be as it is on\nHEAD when building the lists, FWIW, as we are talking about two places\nand there is clarity in keeping the code as it is.\n--\nMichael", "msg_date": "Sat, 20 Jan 2024 12:13:43 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize duplicate code and fix memory leak in function\n fetch_remote_table_info()" }, { "msg_contents": "Thanks for your review.\n\n(1) I think *pfree(pub_names.data)* is necessary.\n(2) Agree with you. Considering that the new function is only called\ntwice, not encapsulating it into a function is not a huge problem.\n\nBest wishes\n\nYongtao Huang\n\nMichael Paquier <[email protected]> 于2024年1月20日周六 11:13写道:\n\n> On Fri, Jan 19, 2024 at 10:42:46PM +0800, Yongtao Huang wrote:\n> > This patch fixes two things in the function fetch_remote_table_info().\n> >\n> > (1) *pfree(pub_names.data)* to avoid potential memory leaks.\n>\n> True that this code puts some effort in cleaning up the memory used\n> locally.\n>\n> > (2) 2 pieces of code can do the same work,\n> > ```\n> > I wanna integrate them into one function `make_pubname_list()` to make\n> the\n> > code neater.\n>\n> It does not strike me as a huge problem to let the code be as it is on\n> HEAD when building the lists, FWIW, as we are talking about two places\n> and there is clarity in keeping the code as it is.\n> --\n> Michael\n>", "msg_date": "Sat, 20 Jan 2024 12:08:52 +0800", "msg_from": "Yongtao Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize duplicate code and fix memory leak in function\n fetch_remote_table_info()" }, { "msg_contents": "Yongtao Huang <[email protected]> writes:\n> (1) I think *pfree(pub_names.data)* is necessary.\n\nReally?\n\nIt looks to me like copy_table, and thence fetch_remote_table_info,\nis called once within a transaction. So whatever it leaks will be\nreleased at transaction end. 
This is a good thing, because it's\nmessy enough that I seriously doubt that there aren't other leaks\nin it, or that it'd be practical to expect that it can be made\nto never leak anything.\n\nIf anything, I'd be inclined to remove the random pfree's that\nare in it now. It's unlikely that they constitute a net win\ncompared to allowing memory context reset to clean things up.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Jan 2024 23:34:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize duplicate code and fix memory leak in function\n fetch_remote_table_info()" }, { "msg_contents": "Hi,\n\n> So whatever it leaks will be released at the transaction end.\n\nI learned it. thank you very much for your explanation.\n\nRegards,\nYongtao Huang\n\nTom Lane <[email protected]> 于2024年1月20日周六 12:34写道:\n\n> Yongtao Huang <[email protected]> writes:\n> > (1) I think *pfree(pub_names.data)* is necessary.\n>\n> Really?\n>\n> It looks to me like copy_table, and thence fetch_remote_table_info,\n> is called once within a transaction. So whatever it leaks will be\n> released at transaction end. This is a good thing, because it's\n> messy enough that I seriously doubt that there aren't other leaks\n> in it, or that it'd be practical to expect that it can be made\n> to never leak anything.\n>\n> If anything, I'd be inclined to remove the random pfree's that\n> are in it now. It's unlikely that they constitute a net win\n> compared to allowing memory context reset to clean things up.\n>\n> regards, tom lane\n>\n\nHi,>  So whatever it leaks will be released at the transaction end.I learned it. thank you very much for your explanation.Regards,Yongtao HuangTom Lane <[email protected]> 于2024年1月20日周六 12:34写道:Yongtao Huang <[email protected]> writes:\n> (1)  I think *pfree(pub_names.data)* is necessary.\n\nReally?\n\nIt looks to me like copy_table, and thence fetch_remote_table_info,\nis called once within a transaction.  So whatever it leaks will be\nreleased at transaction end.  This is a good thing, because it's\nmessy enough that I seriously doubt that there aren't other leaks\nin it, or that it'd be practical to expect that it can be made\nto never leak anything.\n\nIf anything, I'd be inclined to remove the random pfree's that\nare in it now.  It's unlikely that they constitute a net win\ncompared to allowing memory context reset to clean things up.\n\n                        regards, tom lane", "msg_date": "Sat, 20 Jan 2024 15:46:33 +0800", "msg_from": "Yongtao Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize duplicate code and fix memory leak in function\n fetch_remote_table_info()" } ]
[ { "msg_contents": "We have a huge amount of WAL files at backup site. A listing of the\ndirectory takes several seconds. During startup pg_receivewal checks size\nof all theus files. It does not check file integrity or gaps between\nfiles. It takes several hours for our setup.\nI have add an options that skip this file size checking. Default behavior\nremains the same.\nA patch looks huge due to large code block ident. Actually it consists of\noption add and one if-branch.", "msg_date": "Fri, 19 Jan 2024 18:13:53 +0300", "msg_from": "Sergey Sergey <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] pg_receivewal skip WAL file size checking" } ]
[ { "msg_contents": "Hello,\n\nI realize this is almost ancient history at this point, but I ran into\na surprising behavior change from PG11->12 with ON CONFLICT ... DO\nUPDATE SET ...\n\nSuppose I have this table:\ncreate table foo (id int primary key);\n\nOn PG11 this works:\npostgres=# insert into foo (id) values (1) on conflict (id) do update\nset foo.id = 1;\nINSERT 0 1\n\nBut on PG12+ this is the result:\npostgres=# insert into foo (id) values (1) on conflict (id) do update\nset foo.id = 1;\nERROR: column \"foo\" of relation \"foo\" does not exist\nLINE 1: ...oo (id) values (1) on conflict (id) do update set foo.id = 1...\n\nMaking this more confusing is the fact that if I want to do something\nlike \"SET bar = foo.bar + 1\" the table qualification cannot be present\non the setting column but is required on the reading column.\n\nThere isn't anything in the docs that I see about this, and I don't\nsee anything scanning the release notes for PG12 either (though I\ncould have missed a keyword to search for).\n\nWas this intended? Or a side effect? And should we explicitly document\nthe expectations here\n\nThe error is also pretty confusing: when you miss the required\nqualification on the read column the error is more understandable:\nERROR: column reference \"bar\" is ambiguous\n\nIt seems to me that it'd be desirable to either allow the unnecessary\nqualification or give an error that's more easily understood.\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Fri, 19 Jan 2024 12:00:42 -0500", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "PG12 change to DO UPDATE SET column references" }, { "msg_contents": "On Fri, Jan 19, 2024 at 10:01 AM James Coleman <[email protected]> wrote:\n\n> Making this more confusing is the fact that if I want to do something\n> like \"SET bar = foo.bar + 1\" the table qualification cannot be present\n> on the setting column but is required on the reading column.\n>\n> There isn't anything in the docs that I see about this, and I don't\n> see anything scanning the release notes for PG12 either (though I\n> could have missed a keyword to search for).\n>\n>\nhttps://www.postgresql.org/docs/12/sql-insert.html\n\n\"When referencing a column with ON CONFLICT DO UPDATE, do not include the\ntable's name in the specification of a target column. For example, INSERT\nINTO table_name ... ON CONFLICT DO UPDATE SET table_name.col = 1 is invalid\n(this follows the general behavior for UPDATE).\"\n\nThe same text exists for v11.\n\nDavid J.\n\nOn Fri, Jan 19, 2024 at 10:01 AM James Coleman <[email protected]> wrote:Making this more confusing is the fact that if I want to do something\nlike \"SET bar = foo.bar + 1\" the table qualification cannot be present\non the setting column but is required on the reading column.\n\nThere isn't anything in the docs that I see about this, and I don't\nsee anything scanning the release notes for PG12 either (though I\ncould have missed a keyword to search for).\nhttps://www.postgresql.org/docs/12/sql-insert.html\"When referencing a column with ON CONFLICT DO UPDATE, do not include the table's name in the specification of a target column. For example, INSERT INTO table_name ... ON CONFLICT DO UPDATE SET table_name.col = 1 is invalid (this follows the general behavior for UPDATE).\" The same text exists for v11.David J.", "msg_date": "Fri, 19 Jan 2024 11:53:03 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG12 change to DO UPDATE SET column references" }, { "msg_contents": "On Fri, Jan 19, 2024 at 1:53 PM David G. Johnston\n<[email protected]> wrote:\n>\n> On Fri, Jan 19, 2024 at 10:01 AM James Coleman <[email protected]> wrote:\n>>\n>> Making this more confusing is the fact that if I want to do something\n>> like \"SET bar = foo.bar + 1\" the table qualification cannot be present\n>> on the setting column but is required on the reading column.\n>>\n>> There isn't anything in the docs that I see about this, and I don't\n>> see anything scanning the release notes for PG12 either (though I\n>> could have missed a keyword to search for).\n>>\n>\n> https://www.postgresql.org/docs/12/sql-insert.html\n>\n> \"When referencing a column with ON CONFLICT DO UPDATE, do not include the table's name in the specification of a target column. For example, INSERT INTO table_name ... ON CONFLICT DO UPDATE SET table_name.col = 1 is invalid (this follows the general behavior for UPDATE).\"\n>\n> The same text exists for v11.\n\nWell, egg on my face for definitely missing that in the docs.\n\nUnfortunately that doesn't explain why it works on PG11 and not on PG12.\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Sat, 20 Jan 2024 09:10:53 -0500", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG12 change to DO UPDATE SET column references" }, { "msg_contents": "On Saturday, January 20, 2024, James Coleman <[email protected]> wrote:\n>\n>\n> Well, egg on my face for definitely missing that in the docs.\n>\n> Unfortunately that doesn't explain why it works on PG11 and not on PG12.\n>\n\nIt was a bug that got fixed. I’m sure a search of the mailing list\narchives or Git will turn up the relevant discussion when it happened;\nthough I suppose it may just have been a refactoring to leverage the\nconsistency with update and no one realized they were fixing a bug at the\ntime.\n\nDavid J.\n\nOn Saturday, January 20, 2024, James Coleman <[email protected]> wrote:\n\nWell, egg on my face for definitely missing that in the docs.\n\nUnfortunately that doesn't explain why it works on PG11 and not on PG12.\nIt was a bug that got fixed.  I’m sure a search of the mailing list archives or Git will turn up the relevant discussion when it happened; though I suppose it may just have been a refactoring to leverage the consistency with update and no one realized they were fixing a bug at the time.David J.", "msg_date": "Sat, 20 Jan 2024 07:47:14 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG12 change to DO UPDATE SET column references" }, { "msg_contents": "James Coleman <[email protected]> writes:\n> Suppose I have this table:\n> create table foo (id int primary key);\n\n> On PG11 this works:\n> postgres=# insert into foo (id) values (1) on conflict (id) do update\n> set foo.id = 1;\n> INSERT 0 1\n\nHmm, are you sure about that? 
I get\n\nERROR: column \"foo\" of relation \"foo\" does not exist\nLINE 2: on conflict (id) do update set foo.id = 1;\n ^\n\nin every branch back to 9.5 where ON CONFLICT was introduced.\n\nI'm checking branch tip in each case, so conceivably this is\nsomething that was changed post-11.0, but I kinda doubt we\nwould have back-patched it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Jan 2024 11:12:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG12 change to DO UPDATE SET column references" }, { "msg_contents": "On Sat, Jan 20, 2024 at 11:12 AM Tom Lane <[email protected]> wrote:\n>\n> James Coleman <[email protected]> writes:\n> > Suppose I have this table:\n> > create table foo (id int primary key);\n>\n> > On PG11 this works:\n> > postgres=# insert into foo (id) values (1) on conflict (id) do update\n> > set foo.id = 1;\n> > INSERT 0 1\n>\n> Hmm, are you sure about that? I get\n>\n> ERROR: column \"foo\" of relation \"foo\" does not exist\n> LINE 2: on conflict (id) do update set foo.id = 1;\n> ^\n>\n> in every branch back to 9.5 where ON CONFLICT was introduced.\n>\n> I'm checking branch tip in each case, so conceivably this is\n> something that was changed post-11.0, but I kinda doubt we\n> would have back-patched it.\n\nHmm, I just tested it on the official 11.15 docker image and couldn't\nreproduce it. That leads me to believe that the difference isn't in\nPG11 vs. 12, but rather in 2ndQuadrant Postgres (which we are running\nfor PG11, but are not using for > 11). Egg on my face twice in this\nthread.\n\nI do wonder if it's plausible (and sufficiently easy) to improve the\nerror message here. \"column 'foo' of relation 'foo'\" makes one thing\nthat you've written foo.foo, (in my real-world case the error message\nalso cut off the sql past \"foo.\", and so I couldn't even tell if the\nsql was just malformed). At the very least it'd be nice to have a HINT\nhere (perhaps just when the relation and column name match).\n\nBefore I look at where it is, Is such an improvement something we'd be\ninterested in?\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Sat, 20 Jan 2024 12:51:30 -0500", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG12 change to DO UPDATE SET column references" }, { "msg_contents": "James Coleman <[email protected]> writes:\n> I do wonder if it's plausible (and sufficiently easy) to improve the\n> error message here. \"column 'foo' of relation 'foo'\" makes one thing\n> that you've written foo.foo, (in my real-world case the error message\n> also cut off the sql past \"foo.\", and so I couldn't even tell if the\n> sql was just malformed). At the very least it'd be nice to have a HINT\n> here (perhaps just when the relation and column name match).\n\n> Before I look at where it is, Is such an improvement something we'd be\n> interested in?\n\nA HINT if the bogus column name (1) matches the relation name and\n(2) is field-qualified seems plausible to me. 
Then it's pretty\nlikely to be a user misunderstanding about whether to write the\nrelation name.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Jan 2024 12:59:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG12 change to DO UPDATE SET column references" }, { "msg_contents": "On Sat, Jan 20, 2024 at 12:59 PM Tom Lane <[email protected]> wrote:\n>\n> James Coleman <[email protected]> writes:\n> > I do wonder if it's plausible (and sufficiently easy) to improve the\n> > error message here. \"column 'foo' of relation 'foo'\" makes one thing\n> > that you've written foo.foo, (in my real-world case the error message\n> > also cut off the sql past \"foo.\", and so I couldn't even tell if the\n> > sql was just malformed). At the very least it'd be nice to have a HINT\n> > here (perhaps just when the relation and column name match).\n>\n> > Before I look at where it is, Is such an improvement something we'd be\n> > interested in?\n>\n> A HINT if the bogus column name (1) matches the relation name and\n> (2) is field-qualified seems plausible to me. Then it's pretty\n> likely to be a user misunderstanding about whether to write the\n> relation name.\n\nAttached is a patch to do just that. We could also add tests for\nregular UPDATEs if you think that's useful.\n\nRegards,\nJames Coleman", "msg_date": "Sat, 20 Jan 2024 16:45:07 -0500", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG12 change to DO UPDATE SET column references" }, { "msg_contents": "James Coleman <[email protected]> writes:\n> On Sat, Jan 20, 2024 at 12:59 PM Tom Lane <[email protected]> wrote:\n>> A HINT if the bogus column name (1) matches the relation name and\n>> (2) is field-qualified seems plausible to me. Then it's pretty\n>> likely to be a user misunderstanding about whether to write the\n>> relation name.\n\n> Attached is a patch to do just that. We could also add tests for\n> regular UPDATEs if you think that's useful.\n\nPushed with minor alterations:\n\n1. I think our usual style for conditional hints is to use a ternary\nexpression within the ereport, rather than duplicating code. In this\ncase that way allows not touching any of the existing lines, making\nreview easier.\n\n2. I thought we should test the UPDATE case as well as the ON CONFLICT\ncase, but I didn't think we needed quite as many tests as you had\nhere. I split up the responsibility so that one test covers the\nalias case and the other the no-alias case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 20 Jan 2024 17:57:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG12 change to DO UPDATE SET column references" }, { "msg_contents": "On Sat, Jan 20, 2024 at 5:57 PM Tom Lane <[email protected]> wrote:\n>\n> James Coleman <[email protected]> writes:\n> > On Sat, Jan 20, 2024 at 12:59 PM Tom Lane <[email protected]> wrote:\n> >> A HINT if the bogus column name (1) matches the relation name and\n> >> (2) is field-qualified seems plausible to me. Then it's pretty\n> >> likely to be a user misunderstanding about whether to write the\n> >> relation name.\n>\n> > Attached is a patch to do just that. We could also add tests for\n> > regular UPDATEs if you think that's useful.\n>\n> Pushed with minor alterations:\n>\n> 1. I think our usual style for conditional hints is to use a ternary\n> expression within the ereport, rather than duplicating code. 
In this\n> case that way allows not touching any of the existing lines, making\n> review easier.\n\nAh, I'd wondered if we had a pattern for that, but I didn't know what\nI was looking for.\n\n> 2. I thought we should test the UPDATE case as well as the ON CONFLICT\n> case, but I didn't think we needed quite as many tests as you had\n> here. I split up the responsibility so that one test covers the\n> alias case and the other the no-alias case.\n\nThat all makes sense. I figured it was better to have tests show all\nthe possible combinations for review and then you could whittle them\ndown as you saw fit.\n\nThanks for reviewing and committing!\n\nRegards,\nJames Coleman\n\n\n", "msg_date": "Sat, 20 Jan 2024 19:12:11 -0500", "msg_from": "James Coleman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG12 change to DO UPDATE SET column references" } ]
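As a quick illustration of the confusion discussed in the thread above, here is a minimal SQL sketch. The table and the failing statement are taken verbatim from the messages; the corrected form and the comment about the new HINT are assumptions about the committed behaviour rather than quoted output.

```sql
-- Table from the thread.
CREATE TABLE foo (id int PRIMARY KEY);

-- Invalid: the SET target column must not be qualified with the relation
-- name.  This fails with
--   ERROR:  column "foo" of relation "foo" does not exist
-- and, with the patch discussed above, should additionally print a HINT
-- suggesting that the relation name be dropped (assumed wording; check
-- the committed message text).
INSERT INTO foo (id) VALUES (1)
ON CONFLICT (id) DO UPDATE SET foo.id = 1;

-- Working form: leave the target column unqualified; qualified references
-- (foo.* or EXCLUDED.*) are only valid on the right-hand side.
INSERT INTO foo (id) VALUES (1)
ON CONFLICT (id) DO UPDATE SET id = EXCLUDED.id;
```

The committed change only adds the hint; the statement itself still has to be rewritten without the relation qualification.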
[ { "msg_contents": "EXPLAIN has always been really poor at displaying SubPlan nodes\nin expressions: you don't get much more than \"(SubPlan N)\".\nThis is mostly because every time I thought about it, I despaired\nof trying to represent all the information in a SubPlan accurately.\nHowever, a recent discussion[1] made me realize that we could do\na lot better just by displaying the SubLinkType and the testexpr\n(if relevant). So here's a proposed patch. You can see what\nit does by examining the regression test changes.\n\nThere's plenty of room to bikeshed about exactly how to display\nthis stuff, and I'm open to suggestions.\n\nBTW, I was somewhat depressed to discover that we have exactly\nzero regression coverage of the ROWCOMPARE_SUBLINK code paths;\nnot only was EXPLAIN output not covered, but the planner and\nexecutor too. So I added some simple tests for that. Otherwise\nI think existing coverage is enough for this.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/149c5c2f-4267-44e3-a177-d1fd24c53f6d%40universite-paris-saclay.fr", "msg_date": "Fri, 19 Jan 2024 14:32:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "Hi,\n\n> EXPLAIN has always been really poor at displaying SubPlan nodes\n> in expressions: you don't get much more than \"(SubPlan N)\".\n> This is mostly because every time I thought about it, I despaired\n> of trying to represent all the information in a SubPlan accurately.\n> However, a recent discussion[1] made me realize that we could do\n> a lot better just by displaying the SubLinkType and the testexpr\n> (if relevant). So here's a proposed patch. You can see what\n> it does by examining the regression test changes.\n>\n> There's plenty of room to bikeshed about exactly how to display\n> this stuff, and I'm open to suggestions.\n>\n> BTW, I was somewhat depressed to discover that we have exactly\n> zero regression coverage of the ROWCOMPARE_SUBLINK code paths;\n> not only was EXPLAIN output not covered, but the planner and\n> executor too. So I added some simple tests for that. Otherwise\n> I think existing coverage is enough for this.\n\nI reviewed the code and tested the patch on MacOS. It looks good to me.\n\nAlthough something like:\n\n```\n+ Filter: (ANY (base_tbl.a = $1) FROM SubPlan 1 (returns $1))\n+ SubPlan 1 (returns $1)\n```\n\n... arguably doesn't give much more information to the user comparing\nto what we have now:\n\n```\n- Filter: (SubPlan 1)\n- SubPlan 1\n```\n\n... I believe this is the right step toward more detailed EXPLAINs,\nand perhaps could be useful for debugging and/or educational purposes.\nAlso the patch improves code coverage.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 22 Jan 2024 15:35:33 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "Aleksander Alekseev <[email protected]> writes:\n> Although something like:\n\n> ```\n> + Filter: (ANY (base_tbl.a = $1) FROM SubPlan 1 (returns $1))\n> + SubPlan 1 (returns $1)\n> ```\n\n> ... arguably doesn't give much more information to the user comparing\n> to what we have now:\n\n> ```\n> - Filter: (SubPlan 1)\n> - SubPlan 1\n> ```\n\nYeah, I would probably not have thought to do this on my own; but\nwe had an actual user request for it. 
I think arguably the main\nbenefit is to confirm \"yes, this is the sub-select you think it is\".\n\nThe main thing that's still missing compared to what is in the plan\ndata structure is information about which Param is which. I think\nwe have the subplan output Params relatively well covered through\nthe expedient of listing them in the generated plan_name, but it's\nstill not apparent to me how we could shoehorn subplan input\nParams into this (or whether it's worth doing).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jan 2024 12:07:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "Hi Aleksander and Tom\n\nI do confirm that I requested to get this information, in order to \nrecover the formula to filter on.\n\nThanks to both of you\nChantal\n\n\n\n\nLe 22/01/2024 à 18:07, Tom Lane a écrit :\n> Aleksander Alekseev <[email protected]> writes:\n>> Although something like:\n> \n>> ```\n>> + Filter: (ANY (base_tbl.a = $1) FROM SubPlan 1 (returns $1))\n>> + SubPlan 1 (returns $1)\n>> ```\n> \n>> ... arguably doesn't give much more information to the user comparing\n>> to what we have now:\n> \n>> ```\n>> - Filter: (SubPlan 1)\n>> - SubPlan 1\n>> ```\n> \n> Yeah, I would probably not have thought to do this on my own; but\n> we had an actual user request for it. I think arguably the main\n> benefit is to confirm \"yes, this is the sub-select you think it is\".\n> \n> The main thing that's still missing compared to what is in the plan\n> data structure is information about which Param is which. I think\n> we have the subplan output Params relatively well covered through\n> the expedient of listing them in the generated plan_name, but it's\n> still not apparent to me how we could shoehorn subplan input\n> Params into this (or whether it's worth doing).\n> \n> \t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jan 2024 18:11:23 +0100", "msg_from": "Chantal Keller <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "I wrote:\n> The main thing that's still missing compared to what is in the plan\n> data structure is information about which Param is which. I think\n> we have the subplan output Params relatively well covered through\n> the expedient of listing them in the generated plan_name, but it's\n> still not apparent to me how we could shoehorn subplan input\n> Params into this (or whether it's worth doing).\n\nActually ... it looks like it probably isn't worth doing, because\nit's already the case that we don't expose input Params as such.\nEXPLAIN searches for the referent of an input Param and displays\nthat (cf find_param_referent()). 
Just for experimental purposes,\nI wrote a follow-on patch to add printout of the parParam and args\nlist (attached, as .txt so the cfbot doesn't think it's a patch).\nThis produces results like\n\nexplain (verbose, costs off)\nselect array(select sum(x+y) s\n from generate_series(1,3) y group by y order by s)\n from generate_series(1,3) x;\n QUERY PLAN \n-------------------------------------------------------------------\n Function Scan on pg_catalog.generate_series x\n Output: ARRAY(SubPlan 1 PASSING $0 := x.x)\n ^^^^^^^^^^^^^^^^^ added by delta patch\n Function Call: generate_series(1, 3)\n SubPlan 1\n -> Sort\n Output: (sum((x.x + y.y))), y.y\n Sort Key: (sum((x.x + y.y)))\n -> HashAggregate\n Output: sum((x.x + y.y)), y.y\n ^^^ actual reference to $0\n Group Key: y.y\n -> Function Scan on pg_catalog.generate_series y\n Output: y.y\n Function Call: generate_series(1, 3)\n(13 rows)\n\nAs you can see, it's not necessary to explain what $0 is because\nthat name isn't shown anywhere else --- the references to \"x.x\" in\nthe subplan are actually uses of $0.\n\nSo now I'm thinking that we do have enough detail in the present\nproposal, and we just need to think about whether there's some\nnicer way to present it than the particular spelling I used here.\n\nOne idea I considered briefly is to pull the same trick with\nregards to output parameters --- that is, instead of adding all\nthe \"returns $n\" annotations to subplans, drill down and print\nthe subplan's relevant targetlist expression instead of \"$n\".\nOn balance I think that might be more confusing not less so,\nthough. SQL users are used to the idea that a sub-select can\n\"see\" variables from the outer query, but not at all vice-versa.\nI think it probably wouldn't be formally ambiguous, because\nruleutils already de-duplicates table aliases across the whole\ntree, but it still seems likely to be confusing. Also, people\nare already pretty used to seeing $n to represent the outputs\nof InitPlans, and I've not heard many complaints suggesting\nthat we should change that.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 22 Jan 2024 16:31:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "I wrote:\n> So now I'm thinking that we do have enough detail in the present\n> proposal, and we just need to think about whether there's some\n> nicer way to present it than the particular spelling I used here.\n\nHere's a rebase over 9f1337639 --- no code changes, but this affects\nsome of the new or changed expected outputs from that commit.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 16 Feb 2024 14:38:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "\n\n> On 17 Feb 2024, at 00:38, Tom Lane <[email protected]> wrote:\n> \n> Here's a rebase over 9f1337639 --- no code changes, but this affects\n> some of the new or changed expected outputs from that commit.\n\nAleksander, as long as your was reviewing this previously, I’ve added you to reviewers of this CF entry [0]. 
Please, ping me or remove yourself, it it’s actually not a case.\n\nBTW, as long as there’s a new version and some input from Tom, would you be willing to post fleshier review?\n\nThanks for working on this!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/47/4782/\n\n", "msg_date": "Mon, 4 Mar 2024 11:47:21 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "On Fri, 16 Feb 2024 at 19:39, Tom Lane <[email protected]> wrote:\n>\n> > So now I'm thinking that we do have enough detail in the present\n> > proposal, and we just need to think about whether there's some\n> > nicer way to present it than the particular spelling I used here.\n>\n\nOne thing that concerns me about making even greater use of \"$n\" is\nthe potential for confusion with generic plan parameters. Maybe it's\nalways possible to work out which is which from context, but still it\nlooks messy:\n\ndrop table if exists foo;\ncreate table foo(id int, x int, y int);\n\nexplain (verbose, costs off, generic_plan)\nselect row($3,$4) = (select x,y from foo where id=y) and\n row($1,$2) = (select min(x+y),max(x+y) from generate_series(1,3) x)\nfrom generate_series(1,3) y;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------\n Function Scan on pg_catalog.generate_series y\n Output: (($3 = $0) AND ($4 = $1) AND (ROWCOMPARE (($1 = $3) AND ($2\n= $4)) FROM SubPlan 2 (returns $3,$4)))\n Function Call: generate_series(1, 3)\n InitPlan 1 (returns $0,$1)\n -> Seq Scan on public.foo\n Output: foo.x, foo.y\n Filter: (foo.id = foo.y)\n SubPlan 2 (returns $3,$4)\n -> Aggregate\n Output: min((x.x + y.y)), max((x.x + y.y))\n -> Function Scan on pg_catalog.generate_series x\n Output: x.x\n Function Call: generate_series(1, 3)\n\nAnother odd thing about that is the inconsistency between how the\nSubPlan and InitPlan expressions are displayed. I think \"ROWCOMPARE\"\nis really just an internal detail that could be omitted without losing\nanything. But the \"FROM SubPlan ...\" is useful to work out where it's\ncoming from. Should it also output \"FROM InitPlan ...\"? I think that\nwould risk making it harder to read.\n\nAnother possibility is to put the SubPlan and InitPlan names inline,\nrather than outputting \"FROM SubPlan ...\". I had a go at hacking that\nup and this was the result:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\n Function Scan on pg_catalog.generate_series y\n Output: (($3 = (InitPlan 1).$0) AND ($4 = (InitPlan 1).$1) AND\n((($1 = (SubPlan 2).$3) AND ($2 = (SubPlan 2).$4))))\n Function Call: generate_series(1, 3)\n InitPlan 1 (returns $0,$1)\n -> Seq Scan on public.foo\n Output: foo.x, foo.y\n Filter: (foo.id = foo.y)\n SubPlan 2 (returns $3,$4)\n -> Aggregate\n Output: min((x.x + y.y)), max((x.x + y.y))\n -> Function Scan on pg_catalog.generate_series x\n Output: x.x\n Function Call: generate_series(1, 3)\n\nIt's a little more verbose in this case, but in a lot of other cases\nit ended up being more compact.\n\nThe code is a bit messy, but I think the regression test output\n(attached) is clearer and easier to interpret. 
SubPlans and InitPlans\nare displayed consistently, and it's easier to distinguish\nSubPlan/InitPlan outputs from external parameters.\n\nThere are a few more regression test changes, corresponding to cases\nwhere InitPlans are referenced, such as:\n\n Seq Scan on document\n- Filter: ((dlevel <= $0) AND f_leak(dtitle))\n+ Filter: ((dlevel <= (InitPlan 1).$0) AND f_leak(dtitle))\n InitPlan 1 (returns $0)\n -> Index Scan using uaccount_pkey on uaccount\n Index Cond: (pguser = CURRENT_USER)\n\nbut I think that's useful extra clarification.\n\nRegards,\nDean", "msg_date": "Sat, 9 Mar 2024 13:07:40 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n> One thing that concerns me about making even greater use of \"$n\" is\n> the potential for confusion with generic plan parameters.\n\nTrue.\n\n> Another possibility is to put the SubPlan and InitPlan names inline,\n> rather than outputting \"FROM SubPlan ...\". I had a go at hacking that\n> up and this was the result:\n\n> Output: (($3 = (InitPlan 1).$0) AND ($4 = (InitPlan 1).$1) AND\n> ((($1 = (SubPlan 2).$3) AND ($2 = (SubPlan 2).$4))))\n\nHmm. I guess what bothers me about that is that it could be read to\nsuggest that the initplan or subplan is evaluated again for each\noutput parameter. Perhaps it'll be sufficiently clear as long as\nwe keep the labeling\n\n> InitPlan 1 (returns $0,$1)\n> SubPlan 2 (returns $3,$4)\n\nbut I'm not sure. Anybody else have an opinion?\n\n(I didn't read your changes to the code yet --- I think at this\npoint we can just debate proposed output without worrying about\nhow to implement it.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 15 Mar 2024 10:58:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "I wrote:\n> Dean Rasheed <[email protected]> writes:\n>> One thing that concerns me about making even greater use of \"$n\" is\n>> the potential for confusion with generic plan parameters.\n\n> True.\n\nAfter looking at your draft some more, it occurred to me that we're\nnot that far from getting rid of showing \"$n\" entirely in this\ncontext. Instead of (subplan_name).$n, we could write something like\n(subplan_name).colN or (subplan_name).columnN or (subplan_name).fN,\ndepending on your taste for verboseness. \"fN\" would correspond to the\nnames we assign to columns of anonymous record types, but it hasn't\ngot much else to recommend it. In the attached I used \"colN\";\n\"columnN\" would be my next choice.\n\nYou could also imagine trying to use the sub-SELECT's actual output\ncolumn names, but I fear that would be ambiguous --- too often it'd\nbe \"?column?\" or some other non-unique name.\n\n> Hmm. I guess what bothers me about that is that it could be read to\n> suggest that the initplan or subplan is evaluated again for each\n> output parameter.\n\nThis objection seems like it could be solved through documentation,\nso I wrote some.\n\nThe attached proof-of-concept is incomplete: it fails to replace some\n$n occurrences with subplan references, as is visible in some of the\ntest cases. I believe your quick hack in get_parameter() is not\ncorrect in detail, but for the moment I didn't bother to debug it.\nI'm just presenting this as a POC to see if this is the direction\npeople would like to go in. 
If there's not objections, I'll try to\nmake a bulletproof implementation.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 16 Mar 2024 13:25:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "On Sat, 16 Mar 2024 at 17:25, Tom Lane <[email protected]> wrote:\n>\n> After looking at your draft some more, it occurred to me that we're\n> not that far from getting rid of showing \"$n\" entirely in this\n> context. Instead of (subplan_name).$n, we could write something like\n> (subplan_name).colN or (subplan_name).columnN or (subplan_name).fN,\n> depending on your taste for verboseness. \"fN\" would correspond to the\n> names we assign to columns of anonymous record types, but it hasn't\n> got much else to recommend it. In the attached I used \"colN\";\n> \"columnN\" would be my next choice.\n\nUsing the column number rather than the parameter index looks a lot\nneater, especially in output with multiple subplans. Of those choices,\n\"colN\" looks nicest, however...\n\nI think it would be confusing if there were tables whose columns are\nnamed \"colN\". In that case, a SQL qual like \"t1.col2 = t2.col2\" might\nbe output as something like \"t1.col2 = (SubPlan 1).col3\", since the\nsubplan's targetlist wouldn't necessarily just output the table\ncolumns in order.\n\nI actually think \"$n\" is not so bad (especially if we make n the\ncolumn number). The fact that it's qualified by the subplan name ought\nto be sufficient to avoid it being confused with an external\nparameter. Maybe there are other options, but I think it's important\nto choose something that's unlikely to be confused with a real column\nname.\n\nWhatever name is chosen, I think we should still output \"(returns\n...)\" on the subplan nodes. In a complex query there might be a lot of\noutput columns, and the expressions might be quite complex, making it\nhard to see how many columns the subplan is returning. Besides,\nwithout that, it might not be obvious to everyone what \"colN\" (or\nwhatever we settle on) means in places that refer to the subplan.\n\n> You could also imagine trying to use the sub-SELECT's actual output\n> column names, but I fear that would be ambiguous --- too often it'd\n> be \"?column?\" or some other non-unique name.\n\nYeah, I think that's a non-starter, because the output column names\naren't necessarily unique.\n\n> The attached proof-of-concept is incomplete: it fails to replace some\n> $n occurrences with subplan references, as is visible in some of the\n> test cases. I believe your quick hack in get_parameter() is not\n> correct in detail, but for the moment I didn't bother to debug it.\n\nYeah, that's exactly what it was, a quick hack. I just wanted to get\nsome output to see what it would look like in a few real cases.\n\nOverall, I think this is heading in the right direction. 
I think we\njust need a good way to say \"the n'th output column of the subplan\",\nthat can't be confused with anything else in the output.\n\nRegards,\nDean\n\n\n", "msg_date": "Sun, 17 Mar 2024 09:46:42 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n> On Sat, 16 Mar 2024 at 17:25, Tom Lane <[email protected]> wrote:\n>> After looking at your draft some more, it occurred to me that we're\n>> not that far from getting rid of showing \"$n\" entirely in this\n>> context. Instead of (subplan_name).$n, we could write something like\n>> (subplan_name).colN or (subplan_name).columnN or (subplan_name).fN,\n\n> I think it would be confusing if there were tables whose columns are\n> named \"colN\". In that case, a SQL qual like \"t1.col2 = t2.col2\" might\n> be output as something like \"t1.col2 = (SubPlan 1).col3\", since the\n> subplan's targetlist wouldn't necessarily just output the table\n> columns in order.\n\nPerhaps. I think people who are using columns named like that are\nalready accustomed to having to pay close attention to which table\nthe column is shown as qualified by. So I'm not sure there'd really\nbe much problem in practice.\n\n> I actually think \"$n\" is not so bad (especially if we make n the\n> column number). The fact that it's qualified by the subplan name ought\n> to be sufficient to avoid it being confused with an external\n> parameter.\n\nThat's an interesting compromise position, but I'm not sure that\nit buys much compared to the approach shown in your draft (that\nis, continuing to use the real param IDs). The real IDs at least\nhave the advantage of being unique.\n\n> Whatever name is chosen, I think we should still output \"(returns\n> ...)\" on the subplan nodes.\n\nWe should do that if we continue to show real param IDs, but\nif we change to using column numbers then I think it's pointless.\nOutput like\n\n SubPlan 1 (returns $1,$2)\n ...\n SubPlan 2 (returns $1,$2,$3)\n\nseems to me that it'd be more confusing not less so. Does SubPlan 2\nreturn the same values as SubPlan 1 plus more?\n\n> Overall, I think this is heading in the right direction. I think we\n> just need a good way to say \"the n'th output column of the subplan\",\n> that can't be confused with anything else in the output.\n\nWe could consider notations like \"(SubPlan 1 column 2)\", which\ncouldn't be confused with anything else, if only because a name\nlike that would have to be double-quoted. It's a little more\nverbose but not that much. I fear \"(SubPlan 1 col 2)\" is too\nshort to be clear.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 17 Mar 2024 10:28:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "I wrote:\n> Dean Rasheed <[email protected]> writes:\n>> Overall, I think this is heading in the right direction. I think we\n>> just need a good way to say \"the n'th output column of the subplan\",\n>> that can't be confused with anything else in the output.\n\n> We could consider notations like \"(SubPlan 1 column 2)\", which\n> couldn't be confused with anything else, if only because a name\n> like that would have to be double-quoted. It's a little more\n> verbose but not that much. 
I fear \"(SubPlan 1 col 2)\" is too\n> short to be clear.\n\nHere's a cleaned-up version that seems to successfully resolve\nPARAM_EXEC references in all cases. I haven't changed the\n\"(plan_name).colN\" notation, but that's an easy fix once we have\nconsensus on the spelling.\n\nThere are two other loose ends bothering me:\n\n1. I see that Gather nodes sometimes print things like\n\n -> Gather (actual rows=N loops=N)\n Workers Planned: 2\n Params Evaluated: $0, $1\n Workers Launched: N\n\nThis now sticks out like a sore thumb, because there's no other\nreference to $0 or $1 in the EXPLAIN output. We could possibly\nadjust the code to print something like\n\n Params Evaluated: (InitPlan 1).col1, (InitPlan 2).col1\n\nbut I think that's pretty silly. This looks to me like a code\ndebugging aid that shouldn't have survived past initial development.\nIt's of zero use to end users, and it doesn't correspond to anything\nwe bother to mention in EXPLAIN output in any other case: initplans\njust magically get evaluated at the right times. I propose we\nnuke the \"Params Evaluated\" output altogether.\n\n2. MULTIEXPR_SUBLINK subplans now result in EXPLAIN output like\n\nexplain (verbose, costs off)\nupdate inhpar i set (f1, f2) = (select i.f1, i.f2 || '-' from int4_tbl limit 1);\n ...\n -> Result\n Output: (SubPlan 1).col1, (SubPlan 1).col2, (SubPlan 1), i.tableoid, i.ctid\n\nThe undecorated reference to (SubPlan 1) is fairly confusing, since\nit doesn't correspond to anything that will actually get output.\nI suggest that perhaps instead this should read\n\n Output: (SubPlan 1).col1, (SubPlan 1).col2, IGNORE(SubPlan 1), i.tableoid, i.ctid\n\nor\n\n Output: (SubPlan 1).col1, (SubPlan 1).col2, RESET(SubPlan 1), i.tableoid, i.ctid\n\nthe point of \"RESET()\" being that what the executor actually does\nthere is to re-arm the SubPlan to be evaluated again on the next\npass through the targetlist. I'm not greatly in love with either\nof those ideas, though. Any thoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 17 Mar 2024 15:39:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "On Sun, 17 Mar 2024 at 19:39, Tom Lane <[email protected]> wrote:\n>\n> Here's a cleaned-up version that seems to successfully resolve\n> PARAM_EXEC references in all cases. I haven't changed the\n> \"(plan_name).colN\" notation, but that's an easy fix once we have\n> consensus on the spelling.\n\nI took a more detailed look at this and the code and doc changes all\nlook good to me.\n\nThere's a comment at the end of find_param_generator() that should\nprobably say \"No generator found\", rather than \"No referent found\".\n\nThe get_rule_expr() code could perhaps be simplified a bit, getting\nrid of the show_subplan_name variable and moving the recursive calls\nto get_rule_expr() to after the switch statement -- if testexpr is\nnon-NULL, print it, else print the subplan name probably works for all\nsubplan types.\n\nThe \"colN\" notation has grown on me, especially when you look at\nexamples like those in partition_prune.out with a mix of Param types.\nNot using \"$n\" for 2 different purposes is good, and I much prefer\nthis to the original \"FROM [HASHED] SubPlan N (returns ...)\" notation.\n\n> There are two other loose ends bothering me:\n>\n> 1. 
I see that Gather nodes sometimes print things like\n>\n> -> Gather (actual rows=N loops=N)\n> Workers Planned: 2\n> Params Evaluated: $0, $1\n> Workers Launched: N\n>\n> I propose we nuke the \"Params Evaluated\" output altogether.\n\n+1\n\n> 2. MULTIEXPR_SUBLINK subplans now result in EXPLAIN output like\n>\n> -> Result\n> Output: (SubPlan 1).col1, (SubPlan 1).col2, (SubPlan 1), i.tableoid, i.ctid\n>\n> The undecorated reference to (SubPlan 1) is fairly confusing, since\n> it doesn't correspond to anything that will actually get output.\n> I suggest that perhaps instead this should read\n>\n> Output: (SubPlan 1).col1, (SubPlan 1).col2, IGNORE(SubPlan 1), i.tableoid, i.ctid\n>\n> or\n>\n> Output: (SubPlan 1).col1, (SubPlan 1).col2, RESET(SubPlan 1), i.tableoid, i.ctid\n\nI think \"RESET()\" or \"RESCAN()\" or something like that is better than\n\"INGORE()\", because it indicates that it is actually doing something.\nI don't really have a better idea. Perhaps not all uppercase though,\nsince that seems to go against the rest of the EXPLAIN output.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 18 Mar 2024 14:29:21 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n> The get_rule_expr() code could perhaps be simplified a bit, getting\n> rid of the show_subplan_name variable and moving the recursive calls\n> to get_rule_expr() to after the switch statement -- if testexpr is\n> non-NULL, print it, else print the subplan name probably works for all\n> subplan types.\n\nOooh, good idea. The symmetry wasn't apparent when we started, but\nit's there now, and the code does look nicer this way.\n\n> The \"colN\" notation has grown on me, especially when you look at\n> examples like those in partition_prune.out with a mix of Param types.\n\nOK, I've left it like that in the attached v5, but I'm still open\nto other opinions.\n\n>> The undecorated reference to (SubPlan 1) is fairly confusing, since\n>> it doesn't correspond to anything that will actually get output.\n>> I suggest that perhaps instead this should read\n>> Output: (SubPlan 1).col1, (SubPlan 1).col2, IGNORE(SubPlan 1), i.tableoid, i.ctid\n>> or\n>> Output: (SubPlan 1).col1, (SubPlan 1).col2, RESET(SubPlan 1), i.tableoid, i.ctid\n\n> I think \"RESET()\" or \"RESCAN()\" or something like that is better than\n> \"INGORE()\", because it indicates that it is actually doing something.\n> I don't really have a better idea. Perhaps not all uppercase though,\n> since that seems to go against the rest of the EXPLAIN output.\n\nHm. I used \"rescan(SubPlan)\" in the attached, but it still looks\na bit odd to my eye.\n\nI did some more work on the documentation too, to show the difference\nbetween hashed and not-hashed subplans. I feel like we're pretty\nclose here, with the possible exception of how to show MULTIEXPR.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 18 Mar 2024 17:10:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "I wrote:\n> Hm. I used \"rescan(SubPlan)\" in the attached, but it still looks\n> a bit odd to my eye.\n\nAfter thinking a bit more, I understood what was bothering me about\nthat notation: it looks too much like a call of a user-defined\nfunction named \"rescan()\". 
I think we'd be better off with the\nall-caps \"RESCAN()\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Mar 2024 19:19:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "On Mon, 18 Mar 2024 at 23:19, Tom Lane <[email protected]> wrote:\n>\n> > Hm. I used \"rescan(SubPlan)\" in the attached, but it still looks\n> > a bit odd to my eye.\n>\n> After thinking a bit more, I understood what was bothering me about\n> that notation: it looks too much like a call of a user-defined\n> function named \"rescan()\". I think we'd be better off with the\n> all-caps \"RESCAN()\".\n>\n\nOr perhaps move the parentheses, and write \"(rescan SubPlan N)\" or\n\"(reset SubPlan N)\". Dunno.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 18 Mar 2024 23:58:11 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n> On Mon, 18 Mar 2024 at 23:19, Tom Lane <[email protected]> wrote:\n>> After thinking a bit more, I understood what was bothering me about\n>> that notation: it looks too much like a call of a user-defined\n>> function named \"rescan()\". I think we'd be better off with the\n>> all-caps \"RESCAN()\".\n\n> Or perhaps move the parentheses, and write \"(rescan SubPlan N)\" or\n> \"(reset SubPlan N)\". Dunno.\n\nOh, I like that! It seems rather parallel to the existing \"hashed\"\nannotation. If I had it to do over, I'd likely do the \"hashed\"\nbit differently --- but as the proposal currently stands, we are\nnot changing \"hashed\", so we might as well double down on that.\n\nI won't update the patch right now, but \"(rescan SubPlan N)\"\nseems like a winner to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Mar 2024 20:03:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "I wrote:\n> I won't update the patch right now, but \"(rescan SubPlan N)\"\n> seems like a winner to me.\n\nHere's a hopefully-final version that makes that adjustment and\ntweaks a couple of comments.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 19 Mar 2024 12:42:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "On Tue, 19 Mar 2024 at 16:42, Tom Lane <[email protected]> wrote:\n>\n> Here's a hopefully-final version that makes that adjustment and\n> tweaks a couple of comments.\n>\n\nThis looks very good to me.\n\nOne final case that could possibly be improved is this one from aggregates.out:\n\nexplain (verbose, costs off)\nselect array(select sum(x+y) s\n from generate_series(1,3) y group by y order by s)\n from generate_series(1,3) x;\n QUERY PLAN\n-------------------------------------------------------------------\n Function Scan on pg_catalog.generate_series x\n Output: ARRAY(SubPlan 1)\n Function Call: generate_series(1, 3)\n SubPlan 1\n -> Sort\n Output: (sum((x.x + y.y))), y.y\n Sort Key: (sum((x.x + y.y)))\n -> HashAggregate\n Output: sum((x.x + y.y)), y.y\n Group Key: y.y\n -> Function Scan on pg_catalog.generate_series y\n Output: y.y\n Function Call: generate_series(1, 3)\n\nARRAY operates on a SELECT with a single targetlist item, but in this\ncase it looks like the subplan output has 2 columns, which might\nconfuse 
people.\n\nI wonder if we should output \"ARRAY((SubPlan 1).col1)\" to make it\nclearer. Since ARRAY_SUBLINK is a special case, which always collects\nthe first column's values, we could just always output \"col1\" for\nARRAY.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 19 Mar 2024 21:06:03 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n> One final case that could possibly be improved is this one from aggregates.out:\n\n> explain (verbose, costs off)\n> select array(select sum(x+y) s\n> from generate_series(1,3) y group by y order by s)\n> from generate_series(1,3) x;\n> QUERY PLAN\n> -------------------------------------------------------------------\n> Function Scan on pg_catalog.generate_series x\n> Output: ARRAY(SubPlan 1)\n> Function Call: generate_series(1, 3)\n> SubPlan 1\n> -> Sort\n> Output: (sum((x.x + y.y))), y.y\n> Sort Key: (sum((x.x + y.y)))\n> -> HashAggregate\n> Output: sum((x.x + y.y)), y.y\n> Group Key: y.y\n> -> Function Scan on pg_catalog.generate_series y\n> Output: y.y\n> Function Call: generate_series(1, 3)\n\n> ARRAY operates on a SELECT with a single targetlist item, but in this\n> case it looks like the subplan output has 2 columns, which might\n> confuse people.\n\nI'm inclined to leave that alone. The actual source sub-SELECT\ncould only have had one column, so no real confusion is possible.\nYeah, there's a resjunk grouping column visible in the plan as well,\nbut those exist in many other queries, and we've not gotten questions\nabout them.\n\n(Perhaps some documentation about junk columns needs to be added?\nI'm not eager to write it though.)\n\nI had actually had a similar thought about sticking \".col1\" onto\nEXPR_SUBLINK cases, but I concluded it was mostly pedantry.\nNobody's likely to get confused.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Mar 2024 17:40:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "On Tue, 19 Mar 2024 at 21:40, Tom Lane <[email protected]> wrote:\n>\n> I'm inclined to leave that alone. The actual source sub-SELECT\n> could only have had one column, so no real confusion is possible.\n> Yeah, there's a resjunk grouping column visible in the plan as well,\n> but those exist in many other queries, and we've not gotten questions\n> about them.\n>\n\nFair enough. I have no further comments.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 19 Mar 2024 21:49:07 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n> Fair enough. I have no further comments.\n\nPushed then. Thanks for reviewing! I gave you credit as co-author.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Mar 2024 18:20:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving EXPLAIN's display of SubPlan nodes" } ]
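For readers who want to reproduce the notation this thread converged on, the setup and query below are taken from the draft posted earlier in the discussion; the comments about the resulting output are a hedged sketch and should be verified against a current server rather than read as quoted EXPLAIN output.

```sql
-- Setup and query from the thread.
DROP TABLE IF EXISTS foo;
CREATE TABLE foo (id int, x int, y int);

-- With the committed change, the VERBOSE output is expected to refer to
-- subplan outputs as "(InitPlan n).colN" / "(SubPlan n).colN" instead of
-- bare "$n" (assumed spelling -- confirm against the server version in use).
EXPLAIN (VERBOSE, COSTS OFF, GENERIC_PLAN)
SELECT ROW($3, $4) = (SELECT x, y FROM foo WHERE id = y)
   AND ROW($1, $2) = (SELECT min(x + y), max(x + y)
                      FROM generate_series(1, 3) x)
FROM generate_series(1, 3) y;
```

Spelling subplan outputs by column number keeps them visually distinct from generic plan parameters such as $1 .. $4 above, which was the main motivation discussed in the thread.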
[ { "msg_contents": "Hello all,\n\nFollowing the threads here\n<https://www.postgresql.org/message-id/flat/CAFWczPvi_5FWH%2BJTqkWbi%2Bw83hy%3DMYg%3D2hKK0%3DJZBe9%3DhTpE4w%40mail.gmail.com>\nand there <https://commitfest.postgresql.org/13/958/>, I decided to submit\nthis patch.\n\nFollowing is the description which is also written in the commit message:\nMAX_SEND_SIZE parameter was used in WALSender to limit maximum size of\na WAL data packet sent to a WALReceiver during replication. Although\nits default value (128kB) was a reasonable value, it was written in\n2010. Since the hardwares have changed a lot since then, a PostgreSQL\nuser might need to customize this value.\nFor example, if a database's disk has a high bandwidth and a high\nlatency at the same time, it makes more sense to read bigger chunks of\ndata from disk in each iteration. One example of such disks is a remote\ndisk. (e.g. an RBD volume)\nHowever, this value does not need to be larger than wal_segment_size,\nthus its checker function returns false if a larger value is set for\nthis.\n\nThis is my first patch. So, I hope I haven't done something wrong. :'D\n\nBest regards\nMajid", "msg_date": "Fri, 19 Jan 2024 23:04:50 +0330", "msg_from": "Majid Garoosi <[email protected]>", "msg_from_op": true, "msg_subject": "GUC-ify walsender MAX_SEND_SIZE constant" }, { "msg_contents": "On Fri, Jan 19, 2024 at 11:04:50PM +0330, Majid Garoosi wrote:\n> However, this value does not need to be larger than wal_segment_size,\n> thus its checker function returns false if a larger value is set for\n> this.\n> \n> This is my first patch. So, I hope I haven't done something wrong. :'D\n\nYou've done nothing wrong. Thanks for the patch!\n\n+ if (*newval > wal_segment_size)\n+ return false;\n+ return true;\n\nI was not sure first that we need a dynamic check, but I could get why\nsomebody may want to make it higher than 1MB these days.\n\nThe patch is missing a couple of things:\n- Documentation in doc/src/sgml/config.sgml, that has a section for\n\"Sending Servers\".\n- It is not listed in postgresql.conf.sample. I would suggest to put\nit in REPLICATION -> Sending Servers.\nThe default value of 128kB should be mentioned in both cases.\n\n- * We don't have a good idea of what a good value would be; there's some\n- * overhead per message in both walsender and walreceiver, but on the other\n- * hand sending large batches makes walsender less responsive to signals\n- * because signals are checked only between messages. 128kB (with\n- * default 8k blocks) seems like a reasonable guess for now.\n[...]\n+\tgettext_noop(\"Walsender procedure consists of a loop, reading wal_sender_max_send_size \"\n+ \"bytes of WALs from disk and sending them to the receiver. Sending large \"\n+ \"batches makes walsender less responsive to signals.\"),\n\nThis is removing some information about why it may be a bad idea to\nuse a too low value (message overhead) and why it may be a bad idea to\nuse a too large value (responsiveness). I would suggest to remove the\nsecond gettext_noop() in guc_tables.c and move all this information to\nconfig.sgml with the description of the new GUC. Perhaps just put it\nafter wal_sender_timeout in the sample file and the docs?\n\nThree comments in walsender.c still mention MAX_SEND_SIZE. 
These\nshould be switched to mention the GUC instead.\n\nI am switching the patch as waiting on author for now.\n--\nMichael", "msg_date": "Tue, 6 Feb 2024 14:56:24 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC-ify walsender MAX_SEND_SIZE constant" }, { "msg_contents": "Thank you very much for your review.\n\nI generally agree with your suggestions, so just applied them.\nYou can find the new patch in the attached file.\n\nBest\nMajid\n\nOn Tue, 6 Feb 2024 at 09:26, Michael Paquier <[email protected]> wrote:\n\n> On Fri, Jan 19, 2024 at 11:04:50PM +0330, Majid Garoosi wrote:\n> > However, this value does not need to be larger than wal_segment_size,\n> > thus its checker function returns false if a larger value is set for\n> > this.\n> >\n> > This is my first patch. So, I hope I haven't done something wrong. :'D\n>\n> You've done nothing wrong. Thanks for the patch!\n>\n> + if (*newval > wal_segment_size)\n> + return false;\n> + return true;\n>\n> I was not sure first that we need a dynamic check, but I could get why\n> somebody may want to make it higher than 1MB these days.\n>\n> The patch is missing a couple of things:\n> - Documentation in doc/src/sgml/config.sgml, that has a section for\n> \"Sending Servers\".\n> - It is not listed in postgresql.conf.sample. I would suggest to put\n> it in REPLICATION -> Sending Servers.\n> The default value of 128kB should be mentioned in both cases.\n>\n> - * We don't have a good idea of what a good value would be; there's some\n> - * overhead per message in both walsender and walreceiver, but on the\n> other\n> - * hand sending large batches makes walsender less responsive to signals\n> - * because signals are checked only between messages. 128kB (with\n> - * default 8k blocks) seems like a reasonable guess for now.\n> [...]\n> + gettext_noop(\"Walsender procedure consists of a loop, reading\n> wal_sender_max_send_size \"\n> + \"bytes of WALs from disk and sending them to the receiver.\n> Sending large \"\n> + \"batches makes walsender less responsive to signals.\"),\n>\n> This is removing some information about why it may be a bad idea to\n> use a too low value (message overhead) and why it may be a bad idea to\n> use a too large value (responsiveness). I would suggest to remove the\n> second gettext_noop() in guc_tables.c and move all this information to\n> config.sgml with the description of the new GUC. Perhaps just put it\n> after wal_sender_timeout in the sample file and the docs?\n>\n> Three comments in walsender.c still mention MAX_SEND_SIZE. These\n> should be switched to mention the GUC instead.\n>\n> I am switching the patch as waiting on author for now.\n> --\n> Michael\n>", "msg_date": "Thu, 8 Feb 2024 14:42:00 +0330", "msg_from": "Majid Garoosi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC-ify walsender MAX_SEND_SIZE constant" }, { "msg_contents": "On Thu, Feb 08, 2024 at 02:42:00PM +0330, Majid Garoosi wrote:\n> Thank you very much for your review.\n\nSomething to be aware of, but the community lists use bottom-posting\nfor replies because it is easier to follow the logic of a thread this\nway. See here:\nhttps://en.wikipedia.org/wiki/Posting_style#Bottom-posting\n\n> I generally agree with your suggestions, so just applied them.\n> You can find the new patch in the attached file.\n\nThanks for the patch, that looks rather fine. 
I have spent some time\npolishing the docs, adding a mention that increasing the value can\nshow benefits depending on what you do. How does the attached look to\nyou?\n--\nMichael", "msg_date": "Fri, 9 Feb 2024 14:20:27 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC-ify walsender MAX_SEND_SIZE constant" }, { "msg_contents": "On Fri, 9 Feb 2024 at 08:50, Michael Paquier <[email protected]> wrote:\n\n> Something to be aware of, but the community lists use bottom-posting\n> for replies because it is easier to follow the logic of a thread this\n> way. See here:\n> https://en.wikipedia.org/wiki/Posting_style#Bottom-posting\n>\nOh, sorry for not using the convention here. I just noticed that after\npressing the send button. =)\n\nThanks for the patch, that looks rather fine. I have spent some time\n> polishing the docs, adding a mention that increasing the value can\n> show benefits depending on what you do. How does the attached look to\n> you?\n>\nI took a look at it and it seems good to me.\nMaybe I should work on my writing skills more. :D\n\nBest\nMajid\n\nOn Fri, 9 Feb 2024 at 08:50, Michael Paquier <[email protected]> wrote:\nSomething to be aware of, but the community lists use bottom-posting\nfor replies because it is easier to follow the logic of a thread this\nway.  See here:\nhttps://en.wikipedia.org/wiki/Posting_style#Bottom-postingOh, sorry for not using the convention here. I just noticed that after pressing the send button. =) \nThanks for the patch, that looks rather fine.  I have spent some time\npolishing the docs, adding a mention that increasing the value can\nshow benefits depending on what you do.  How does the attached look to\nyou?I took a look at it and it seems good to me.Maybe I should work on my writing skills more. :DBestMajid", "msg_date": "Fri, 9 Feb 2024 13:52:32 +0330", "msg_from": "Majid Garoosi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC-ify walsender MAX_SEND_SIZE constant" }, { "msg_contents": "Hi,\n\nOn 2024-01-19 23:04:50 +0330, Majid Garoosi wrote:\n> Following is the description which is also written in the commit message:\n> MAX_SEND_SIZE parameter was used in WALSender to limit maximum size of\n> a WAL data packet sent to a WALReceiver during replication. Although\n> its default value (128kB) was a reasonable value, it was written in\n> 2010. Since the hardwares have changed a lot since then, a PostgreSQL\n> user might need to customize this value.\n> For example, if a database's disk has a high bandwidth and a high\n> latency at the same time, it makes more sense to read bigger chunks of\n> data from disk in each iteration. One example of such disks is a remote\n> disk. (e.g. an RBD volume)\n\nThe way we read the WAL data is perfectly prefetchable by the the OS, so I\nwouldn't really expect gains here. 
Have you actually been able to see a\nperformance benefit by increasing MAX_SEND_SIZE?\n\nI don't think we should add configuration parameters without at least some\ndemonstration of practical gains, otherwise we'll end up with hundreds of\nnever-useful config options.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 9 Feb 2024 11:03:46 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC-ify walsender MAX_SEND_SIZE constant" }, { "msg_contents": "Hi Andres,\n\nOn Fri, 9 Feb 2024 at 22:33, Andres Freund <[email protected]> wrote:\n\n> On 2024-01-19 23:04:50 +0330, Majid Garoosi wrote:\n> > Following is the description which is also written in the commit message:\n> > MAX_SEND_SIZE parameter was used in WALSender to limit maximum size of\n> > a WAL data packet sent to a WALReceiver during replication. Although\n> > its default value (128kB) was a reasonable value, it was written in\n> > 2010. Since the hardwares have changed a lot since then, a PostgreSQL\n> > user might need to customize this value.\n> > For example, if a database's disk has a high bandwidth and a high\n> > latency at the same time, it makes more sense to read bigger chunks of\n> > data from disk in each iteration. One example of such disks is a remote\n> > disk. (e.g. an RBD volume)\n>\n> The way we read the WAL data is perfectly prefetchable by the the OS, so I\n> wouldn't really expect gains here. Have you actually been able to see a\n> performance benefit by increasing MAX_SEND_SIZE?\n>\n\nYes, I have seen a huge performance jump. Take a look at here\n<https://www.postgresql.org/message-id/CAFWczPvi_5FWH%2BJTqkWbi%2Bw83hy%3DMYg%3D2hKK0%3DJZBe9%3DhTpE4w%40mail.gmail.com>\nfor\nmore info.\n\nBest\nMajid\n\nHi Andres,On Fri, 9 Feb 2024 at 22:33, Andres Freund <[email protected]> wrote:\nOn 2024-01-19 23:04:50 +0330, Majid Garoosi wrote:\n> Following is the description which is also written in the commit message:\n> MAX_SEND_SIZE parameter was used in WALSender to limit maximum size of\n> a WAL data packet sent to a WALReceiver during replication. Although\n> its default value (128kB) was a reasonable value, it was written in\n> 2010. Since the hardwares have changed a lot since then, a PostgreSQL\n> user might need to customize this value.\n> For example, if a database's disk has a high bandwidth and a high\n> latency at the same time, it makes more sense to read bigger chunks of\n> data from disk in each iteration. One example of such disks is a remote\n> disk. (e.g. an RBD volume)\n\nThe way we read the WAL data is perfectly prefetchable by the the OS, so I\nwouldn't really expect gains here.  Have you actually been able to see a\nperformance benefit by increasing MAX_SEND_SIZE?Yes, I have seen a huge performance jump. Take a look at here for more info.BestMajid", "msg_date": "Sun, 11 Feb 2024 16:32:20 +0330", "msg_from": "Majid Garoosi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC-ify walsender MAX_SEND_SIZE constant" }, { "msg_contents": "On Sun, Feb 11, 2024 at 04:32:20PM +0330, Majid Garoosi wrote:\n> On Fri, 9 Feb 2024 at 22:33, Andres Freund <[email protected]> wrote:\n>> The way we read the WAL data is perfectly prefetchable by the the OS, so I\n>> wouldn't really expect gains here. Have you actually been able to see a\n>> performance benefit by increasing MAX_SEND_SIZE?\n> \n> Yes, I have seen a huge performance jump. 
Take a look at here\n> <https://www.postgresql.org/message-id/CAFWczPvi_5FWH%2BJTqkWbi%2Bw83hy%3DMYg%3D2hKK0%3DJZBe9%3DhTpE4w%40mail.gmail.com>\n> for\n> more info.\n\nYes, I can get the idea that grouping more replication messages in\none shot can be beneficial in some cases while being\nenvironment-dependent, though I also get the point that we cannot\nsimply GUC-ify everything either. I'm OK with this one at the end,\nbecause it is not performance critical.\n\nNote that it got lowered to the current value in ea5516081dcb to make\nit more responsive, while being half a WAL segment in 40f908bdcdc7\nwhen WAL senders have been introduced in 2010. I cannot pinpoint the\nexact thread that led to this change, but I'm adding Fujii-san and\nHeikki in CC for comments.\n--\nMichael", "msg_date": "Mon, 12 Feb 2024 09:10:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC-ify walsender MAX_SEND_SIZE constant" }, { "msg_contents": "Hey folks,\n\nAny news, comments, etc. about this thread?\n\nBest regards\nMajid Garoosi\n\nOn Mon, 12 Feb 2024 at 01:10, Michael Paquier <[email protected]> wrote:\n\n> On Sun, Feb 11, 2024 at 04:32:20PM +0330, Majid Garoosi wrote:\n> > On Fri, 9 Feb 2024 at 22:33, Andres Freund <[email protected]> wrote:\n> >> The way we read the WAL data is perfectly prefetchable by the the OS,\n> so I\n> >> wouldn't really expect gains here. Have you actually been able to see a\n> >> performance benefit by increasing MAX_SEND_SIZE?\n> >\n> > Yes, I have seen a huge performance jump. Take a look at here\n> > <\n> https://www.postgresql.org/message-id/CAFWczPvi_5FWH%2BJTqkWbi%2Bw83hy%3DMYg%3D2hKK0%3DJZBe9%3DhTpE4w%40mail.gmail.com\n> >\n> > for\n> > more info.\n>\n> Yes, I can get the idea that grouping more replication messages in\n> one shot can be beneficial in some cases while being\n> environment-dependent, though I also get the point that we cannot\n> simply GUC-ify everything either. I'm OK with this one at the end,\n> because it is not performance critical.\n>\n> Note that it got lowered to the current value in ea5516081dcb to make\n> it more responsive, while being half a WAL segment in 40f908bdcdc7\n> when WAL senders have been introduced in 2010. I cannot pinpoint the\n> exact thread that led to this change, but I'm adding Fujii-san and\n> Heikki in CC for comments.\n> --\n> Michael\n>\n\nHey folks,Any news, comments, etc. about this thread?Best regardsMajid GaroosiOn Mon, 12 Feb 2024 at 01:10, Michael Paquier <[email protected]> wrote:On Sun, Feb 11, 2024 at 04:32:20PM +0330, Majid Garoosi wrote:\n> On Fri, 9 Feb 2024 at 22:33, Andres Freund <[email protected]> wrote:\n>> The way we read the WAL data is perfectly prefetchable by the the OS, so I\n>> wouldn't really expect gains here.  Have you actually been able to see a\n>> performance benefit by increasing MAX_SEND_SIZE?\n> \n> Yes, I have seen a huge performance jump. Take a look at here\n> <https://www.postgresql.org/message-id/CAFWczPvi_5FWH%2BJTqkWbi%2Bw83hy%3DMYg%3D2hKK0%3DJZBe9%3DhTpE4w%40mail.gmail.com>\n> for\n> more info.\n\nYes, I can get the idea that grouping more replication messages in\none shot can be beneficial in some cases while being\nenvironment-dependent, though I also get the point that we cannot\nsimply GUC-ify everything either.  
I'm OK with this one at the end,\nbecause it is not performance critical.\n\nNote that it got lowered to the current value in ea5516081dcb to make\nit more responsive, while being half a WAL segment in 40f908bdcdc7\nwhen WAL senders have been introduced in 2010.  I cannot pinpoint the\nexact thread that led to this change, but I'm adding Fujii-san and\nHeikki in CC for comments.\n--\nMichael", "msg_date": "Mon, 22 Apr 2024 15:40:01 +0200", "msg_from": "Majid Garoosi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC-ify walsender MAX_SEND_SIZE constant" }, { "msg_contents": "On Mon, Apr 22, 2024 at 03:40:01PM +0200, Majid Garoosi wrote:\n> Any news, comments, etc. about this thread?\n\nFWIW, I'd still be in favor of doing a GUC-ification of this part, but\nat this stage I'd need more time to do a proper study of a case where\nthis shows benefits to prove your point, or somebody else could come\nin and show it.\n\nAndres has objected to this change, on the ground that this was not\nworth it, though you are telling the contrary. I would be curious to\nhear from others, first, so as we gather more opinions to reach a\nconsensus.\n--\nMichael", "msg_date": "Tue, 23 Apr 2024 09:23:29 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC-ify walsender MAX_SEND_SIZE constant" }, { "msg_contents": "On Tue, Apr 23, 2024 at 2:24 AM Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Apr 22, 2024 at 03:40:01PM +0200, Majid Garoosi wrote:\n> > Any news, comments, etc. about this thread?\n>\n> FWIW, I'd still be in favor of doing a GUC-ification of this part, but\n> at this stage I'd need more time to do a proper study of a case where\n> this shows benefits to prove your point, or somebody else could come\n> in and show it.\n>\n> Andres has objected to this change, on the ground that this was not\n> worth it, though you are telling the contrary. I would be curious to\n> hear from others, first, so as we gather more opinions to reach a\n> consensus.\n\nI'm more with Andres on this one.I vaguely remember researching impact\nof MAX_SEND_SIZE on independent two occasions (earlier async and more\nrecent sync case where I've investigated variety of ways to keep\nlatency down) and my summary would be:\n\nFirst: it's very hard to get *reliable* replication setup for\nbenchmark, where one could demonstrate correlation between e.g.\nincreasing MAX_SEND_SIZE and observing benefits (in sync rep it is\neasier, as you are simply stalled in pgbench). 
Part of the problem are\nthe following things:\n\na) workload can be tricky, for this purpose it needs to be trivial but bulky\nb) it needs to be on isolated network and with guaranteed bandwidth\nc) wal_init_zero impact should be ruled out\nd) OS should be properly tuned autotuning TCP max(3rd value) + have\nsetup rmem_max/wmem_max properly\ne) more serious TCP congestion should be used that the default one in OS\nf) one should prevent any I/O stalls on walreceiver writeback during\nhuge WAL activity and restart points on standby (dirty_bytes and so\non, isolated pg_xlog, BDI max_ratio)\n\nSecond: once you perform above and ensure that there are no network or\nI/O stalls back then I *think* I couldn't see any impact of playing\nwith MAX_SEND_SIZE from what I remember as probably something else is\nsaturated first.\n\nI can offer help with trying to test this with artificial tests and\neven injecting proper latency (WAN-like), but OP (Majid) I think needs\nfirst describe his env much better (exact latency, bandwidth,\nworkload, TCP sysctl values, duration of the tests, no# of attempts\ntried, exact commands used, iperf3 TCP results demonstrating hw used\nand so on). So in short the patch is easy, but demonstrating the\neffect and persuading others here would be hard.\n\n-J.\n\n\n", "msg_date": "Tue, 23 Apr 2024 14:47:31 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC-ify walsender MAX_SEND_SIZE constant" }, { "msg_contents": "Hi,\n\nOn 2024-04-23 14:47:31 +0200, Jakub Wartak wrote:\n> On Tue, Apr 23, 2024 at 2:24 AM Michael Paquier <[email protected]> wrote:\n> >\n> > > Any news, comments, etc. about this thread?\n> >\n> > FWIW, I'd still be in favor of doing a GUC-ification of this part, but\n> > at this stage I'd need more time to do a proper study of a case where\n> > this shows benefits to prove your point, or somebody else could come\n> > in and show it.\n> >\n> > Andres has objected to this change, on the ground that this was not\n> > worth it, though you are telling the contrary. I would be curious to\n> > hear from others, first, so as we gather more opinions to reach a\n> > consensus.\n\nI think it's a bad idea to make it configurable. It's just one more guc that\nnobody has a chance of realistically tuning. I'm not saying we shouldn't\nimprove the code - just that making MAX_SEND_SIZE configurable doesn't really\nseem like a good answer.\n\nFWIW, I have a hard time believing that MAX_SEND_SIZE is going to be the the\nonly or even primary issue with high latency, high bandwidth storage devices.\n\n\n\n> First: it's very hard to get *reliable* replication setup for\n> benchmark, where one could demonstrate correlation between e.g.\n> increasing MAX_SEND_SIZE and observing benefits (in sync rep it is\n> easier, as you are simply stalled in pgbench). Part of the problem are\n> the following things:\n\nDepending on the workload, it's possible to measure streaming-out performance\nwithout actually regenerating WAL. E.g. 
by using pg_receivewal to stream the\ndata out multiple times.\n\n\nAnother way to get fairly reproducible WAL workloads is to drive\npg_logical_emit_message() from pgbench, that tends to havea lot less\nvariability than running tpcb-like or such.\n\n\n> Second: once you perform above and ensure that there are no network or\n> I/O stalls back then I *think* I couldn't see any impact of playing\n> with MAX_SEND_SIZE from what I remember as probably something else is\n> saturated first.\n\nMy understanding of Majid's use-case for tuning MAX_SEND_SIZE is that the\nbottleneck is storage, not network. The reason MAX_SEND_SIZE affects that is\nthat it determines the max size passed to WALRead(), which in turn determines\nhow much we read from the OS at once. If the storage has high latency but\nalso high throughput, and readahead is disabled or just not aggressive enough\nafter crossing segment boundaries, larger reads reduce the number of times\nyou're likely to be blocked waiting for read IO.\n\nWhich is also why I think that making MAX_SEND_SIZE configurable is a really\npoor proxy for improving the situation.\n\nWe're imo much better off working on read_stream.[ch] support for reading WAL.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 Apr 2024 15:00:01 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC-ify walsender MAX_SEND_SIZE constant" }, { "msg_contents": "Hi,\n\n> My understanding of Majid's use-case for tuning MAX_SEND_SIZE is that the\n> bottleneck is storage, not network. The reason MAX_SEND_SIZE affects that is\n> that it determines the max size passed to WALRead(), which in turn determines\n> how much we read from the OS at once. If the storage has high latency but\n> also high throughput, and readahead is disabled or just not aggressive enough\n> after crossing segment boundaries, larger reads reduce the number of times\n> you're likely to be blocked waiting for read IO.\n>\n> Which is also why I think that making MAX_SEND_SIZE configurable is a really\n> poor proxy for improving the situation.\n>\n> We're imo much better off working on read_stream.[ch] support for reading WAL.\n\nWell then that would be a consistent message at least, because earlier\nin [1] it was rejected to have prefetch the WAL segment but on the\nstandby side, where the patch was only helping in configurations\nhaving readahead *disabled* for some reason.\n\nNow Majid stated that he uses \"RBD\" - Majid, any chance to specify\nwhat that RBD really is ? What's the tech? What fs? Any ioping or fio\nresults? What's the blockdev --report /dev/XXX output ? (you stated\n\"high\" latency and \"high\" bandwidth , but it is relative, for me 15ms+\nis high latency and >1000MB/s sequential, but it would help others in\nfuture if you could specify it by the exact numbers please). Maybe\nit's just a matter of enabling readahead (line in [1]) there and/or\nusing a higher WAL segment during initdb.\n\n[1] - https://www.postgresql.org/message-id/flat/CADVKa1WsQMBYK_02_Ji%3DpbOFnms%2BCT7TV6qJxDdHsFCiC9V_hw%40mail.gmail.com\n\n\n", "msg_date": "Wed, 24 Apr 2024 08:57:55 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC-ify walsender MAX_SEND_SIZE constant" }, { "msg_contents": "Hi,\n\n> Now Majid stated that he uses \"RBD\" - Majid, any chance to specify\n> what that RBD really is ? What's the tech? What fs? Any ioping or fio\n> results? What's the blockdev --report /dev/XXX output ? 
(you stated\n> \"high\" latency and \"high\" bandwidth , but it is relative, for me 15ms+\n> is high latency and >1000MB/s sequential, but it would help others in\n> future if you could specify it by the exact numbers please). Maybe\n> it's just a matter of enabling readahead (line in [1]) there and/or\n> using a higher WAL segment during initdb.\n\nUnfortunately, I quit that company a month ago (I wish we could\ndiscuss this earlier) and don't have access to the environment\nanymore.\nI'll try to ask my teammates and see if they can find anything about\nthe exact values of latency, bw, etc.\n\nAnyway, here is some description of the environment. Sadly, there\nare no numbers in this description, but I'll try to provide as much details\nas possible.\nThere is a k8s cluster running over some VMs. Each postgres\ninstance runs as a pod inside the k8s cluster. So, both the\nprimary and standby servers are in the same DC, but might be on\ndifferent baremetal nodes. There is an overlay network for the pods to\nsee each other, and there's also another overlay network for the VMs\nto see each other.\nThe storage servers are in the same DC, but we're sure they're on some\nracks other than the postgres pods. They run Ceph [1] project and provide\nRados Block Devices (RBD) [2] interface. In order for k8s to use ceph, a\nCeph-CSI [3] controller is running inside the k8s cluster.\nBTW, the FS type is ext4.\n\n[1] - https://ceph.io/en/\n[2] - https://docs.ceph.com/en/latest/rbd/\n[3] - https://github.com/ceph/ceph-csi\n\nHi,> Now Majid stated that he uses \"RBD\" - Majid, any chance to specify> what that RBD really is ? What's the tech? What fs? Any ioping or fio> results? What's the blockdev --report /dev/XXX output ? (you stated> \"high\" latency and \"high\" bandwidth , but it is relative, for me 15ms+> is high latency and >1000MB/s sequential, but it would help others in> future if you could specify it by the exact numbers please). Maybe> it's just a matter of enabling readahead (line in [1]) there and/or> using a higher WAL segment during initdb.\nUnfortunately, I quit that company a month ago (I wish we coulddiscuss this earlier) and don't have access to the environmentanymore.I'll try to ask my teammates and see if they can find anything aboutthe exact values of latency, bw, etc.Anyway, here is some description of the environment. Sadly, thereare no numbers in this description, but I'll try to provide as much detailsas possible.There is a k8s cluster running over some VMs. Each postgresinstance runs as a pod inside the k8s cluster. So, both theprimary and standby servers are in the same DC, but might be ondifferent baremetal nodes. There is an overlay network for the pods tosee each other, and there's also another overlay network for the VMsto see each other.The storage servers are in the same DC, but we're sure they're on someracks other than the postgres pods. They run Ceph [1] project and provideRados Block Devices (RBD) [2] interface. 
In order for k8s to use ceph, aCeph-CSI [3] controller is running inside the k8s cluster.BTW, the FS type is ext4.[1] - https://ceph.io/en/[2] - https://docs.ceph.com/en/latest/rbd/[3] - https://github.com/ceph/ceph-csi", "msg_date": "Thu, 25 Apr 2024 14:53:33 +0200", "msg_from": "Majid Garoosi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GUC-ify walsender MAX_SEND_SIZE constant" }, { "msg_contents": "On Thu, Apr 25, 2024 at 02:53:33PM +0200, Majid Garoosi wrote:\n> Unfortunately, I quit that company a month ago (I wish we could\n> discuss this earlier) and don't have access to the environment\n> anymore.\n> I'll try to ask my teammates and see if they can find anything about\n> the exact values of latency, bw, etc.\n> \n> Anyway, here is some description of the environment. Sadly, there\n> are no numbers in this description, but I'll try to provide as much details\n> as possible.\n> There is a k8s cluster running over some VMs. Each postgres\n> instance runs as a pod inside the k8s cluster. So, both the\n> primary and standby servers are in the same DC, but might be on\n> different baremetal nodes. There is an overlay network for the pods to\n> see each other, and there's also another overlay network for the VMs\n> to see each other.\n> The storage servers are in the same DC, but we're sure they're on some\n> racks other than the postgres pods. They run Ceph [1] project and provide\n> Rados Block Devices (RBD) [2] interface. In order for k8s to use ceph, a\n> Ceph-CSI [3] controller is running inside the k8s cluster.\n> BTW, the FS type is ext4.\n\nOkay, seeing the feedback for this patch and Andres disagreeing with\nit as being a good idea, I have marked the patch as rejected as it was\nstill in the CF app.\n--\nMichael", "msg_date": "Tue, 14 May 2024 09:18:43 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GUC-ify walsender MAX_SEND_SIZE constant" } ]
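A minimal sketch of the reproducible-WAL-workload idea raised in the thread above (driving pg_logical_emit_message() from pgbench), for anyone who wants to retry the MAX_SEND_SIZE experiment on their own storage. The payload size, file name, client counts and database name below are illustrative assumptions, not values taken from the thread; pg_logical_emit_message(transactional, prefix, content) itself is an existing core function.

    -- wal_emit.sql: each pgbench transaction emits one fixed-size,
    -- non-transactional logical message, giving a steady WAL stream
    -- that does not depend on any benchmark tables.
    \set payload_len 8192
    SELECT pg_logical_emit_message(false, 'walbench', repeat('x', :payload_len));

    # run against the primary while a standby or pg_receivewal consumes the WAL
    pgbench -n -f wal_emit.sql -c 4 -j 4 -T 60 postgres

Varying payload_len, client count and duration is then a matter of changing those starting values.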
[ { "msg_contents": "While working on the dynamic shared memory registry, I noticed a couple of\npotential improvements for code that uses dshash tables.\n\n* A couple of dshash_create() callers pass in 0 for the \"void *arg\"\n parameter, which seemed weird. I incorrectly assumed this was some sort\n of flags parameter until I actually looked at the function signature.\n IMHO we should specify NULL here if arg is not used. 0001 makes this\n change. This is admittedly a nitpick.\n\n* There are no dshash compare/hash functions for string keys. 0002\n introduces some that simply forward to strcmp()/string_hash(), and it\n uses them for the DSM registry's dshash table. This allows us to remove\n the hacky key padding code for lookups on this table.\n\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 19 Jan 2024 15:59:41 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "cleanup patches for dshash" }, { "msg_contents": "\nNathan Bossart <[email protected]> writes:\n\n> While working on the dynamic shared memory registry, I noticed a couple of\n> potential improvements for code that uses dshash tables.\n>\n> * A couple of dshash_create() callers pass in 0 for the \"void *arg\"\n> parameter, which seemed weird. I incorrectly assumed this was some sort\n> of flags parameter until I actually looked at the function signature.\n> IMHO we should specify NULL here if arg is not used. 0001 makes this\n> change. This is admittedly a nitpick.\n>\n> * There are no dshash compare/hash functions for string keys. 0002\n> introduces some that simply forward to strcmp()/string_hash(), and it\n> uses them for the DSM registry's dshash table. This allows us to remove\n> the hacky key padding code for lookups on this table.\n>\n> Thoughts?\n\nBoth LGTM.\n\n+dshash_strcmp(const void *a, const void *b, size_t size, void *arg)\n+{\n+\tAssert(strlen((const char *) a) < size);\n+\tAssert(strlen((const char *) b) < size);\n+\n\nDo you think the below change will be more explicitly?\n\n#define DSMRegistryNameSize 64\n\nDSMRegistryEntry\n{\n char name[DSMRegistryNameSize];\n \n}\n\nAssert(strlen((const char *) a) < DSMRegistryNameSize);\n\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Mon, 22 Jan 2024 10:28:42 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cleanup patches for dshash" }, { "msg_contents": "On Mon, Jan 22, 2024 at 10:28:42AM +0800, Andy Fan wrote:\n> Both LGTM.\n\nThanks for looking.\n\n> +dshash_strcmp(const void *a, const void *b, size_t size, void *arg)\n> +{\n> +\tAssert(strlen((const char *) a) < size);\n> +\tAssert(strlen((const char *) b) < size);\n> +\n> \n> Do you think the below change will be more explicitly?\n> \n> #define DSMRegistryNameSize 64\n> \n> DSMRegistryEntry\n> {\n> char name[DSMRegistryNameSize];\n> \n> }\n> \n> Assert(strlen((const char *) a) < DSMRegistryNameSize);\n\nThis is effectively what it's doing already. These functions are intended\nto be generic so that they could be used with other dshash tables with\nstring keys, possibly with different sizes.\n\nI did notice a cfbot failure [0]. After a quick glance, I'm assuming this\nis caused by the memcpy() in insert_into_bucket(). Even if the string is\nshort, it will copy the maximum key size, which is bad. So, 0002 is\ntotally broken at the moment, and we'd need to teach insert_into_bucket()\nto strcpy() instead for string keys to fix it. 
Perhaps we could add a\nfield to dshash_parameters for this purpose...\n\n[0] https://api.cirrus-ci.com/v1/artifact/task/5124848070950912/log/src/test/modules/test_dsm_registry/log/postmaster.log\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 21 Jan 2024 21:51:18 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for dshash" }, { "msg_contents": "On Sun, Jan 21, 2024 at 09:51:18PM -0600, Nathan Bossart wrote:\n> I did notice a cfbot failure [0]. After a quick glance, I'm assuming this\n> is caused by the memcpy() in insert_into_bucket(). Even if the string is\n> short, it will copy the maximum key size, which is bad. So, 0002 is\n> totally broken at the moment, and we'd need to teach insert_into_bucket()\n> to strcpy() instead for string keys to fix it. Perhaps we could add a\n> field to dshash_parameters for this purpose...\n\nI attempted to fix this in v2 of the patch set.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 21 Jan 2024 23:07:15 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for dshash" }, { "msg_contents": "On Sun, Jan 21, 2024 at 11:07:15PM -0600, Nathan Bossart wrote:\n> I attempted to fix this in v2 of the patch set.\n\nIf there are no objections, I plan to commit these patches early next week.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 23 Feb 2024 15:52:16 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for dshash" }, { "msg_contents": "On Fri, Feb 23, 2024 at 03:52:16PM -0600, Nathan Bossart wrote:\n> If there are no objections, I plan to commit these patches early next week.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 26 Feb 2024 15:55:10 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for dshash" }, { "msg_contents": "On Mon, Feb 26, 2024 at 03:55:10PM -0600, Nathan Bossart wrote:\n> Committed.\n\nI noticed that I forgot to update a couple of comments. While fixing\nthose, I discovered additional oversights that have been around since 2017.\nI plan to commit this shortly. \n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 26 Feb 2024 22:52:13 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cleanup patches for dshash" } ]
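For readers following the string-key discussion above, a rough sketch of what compare and hash callbacks that simply forward to strcmp() and string_hash() can look like. The compare signature and the length asserts mirror the snippet quoted earlier in the thread; the hash-side types, the header names and the function names here are my assumptions for illustration, and the committed code may differ.

    #include "postgres.h"
    #include <string.h>
    #include "common/hashfn.h"   /* assumed location of string_hash() */

    /* Compare two NUL-terminated string keys stored in fixed-size key slots. */
    static int
    example_dshash_strcmp(const void *a, const void *b, size_t size, void *arg)
    {
        Assert(strlen((const char *) a) < size);
        Assert(strlen((const char *) b) < size);
        return strcmp((const char *) a, (const char *) b);
    }

    /* Hash a NUL-terminated string key by forwarding to string_hash(). */
    static uint32
    example_dshash_strhash(const void *v, size_t size, void *arg)
    {
        Assert(strlen((const char *) v) < size);
        return string_hash((const char *) v, size);
    }

As the thread notes, insertion must also avoid copying the full maximum key size for short strings, which is presumably what the v2 fix addresses via the extra dshash_parameters field mentioned above.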
[ { "msg_contents": "Hello, I find a bug in building historic snapshot and the steps to reproduce are as follows:\r\n\r\n\r\nPrepare:\r\n\r\n\r\n(pub)create table t1 (id int primary key);\r\n\r\n\r\n\r\n(pub)insert into t1 values (1);\r\n\r\n\r\n\r\n(pub)create publication pub for table t1;\r\n\r\n\r\n\r\n(sub)create table t1 (id int primary key);\r\n\r\nReproduce:\r\n\r\n\r\n\r\n(pub)begin; insert into t1 values (2); (txn1 in session1)\r\n\r\n\r\n\r\n(sub)create subscription sub connection 'hostaddr=127.0.0.1 port=5432 user=xxx dbname=postgres' publication pub; (pub will switch to BUILDING_SNAPSHOT state soon)\r\n\r\n\r\n\r\n(pub)begin; insert into t1 values (3); (txn2 in session2)\r\n\r\n\r\n\r\n(pub)create table t2 (id int primary key); (session3)\r\n\r\n\r\n\r\n(pub)commit; (commit txn1, and pub will switch to FULL_SNAPSHOT state soon)\r\n\r\n\r\n\r\n(pub)begin; insert into t2 values (1); (txn3 in session3)\r\n\r\n\r\n\r\n(pub)commit; (commit txn2, and pub will switch to CONSISTENT state soon)\r\n\r\n\r\n\r\n(pub)commit; (commit txn3, and replay txn3 will failed because its snapshot cannot see t2)\r\n\r\nReasons:\r\nWe currently don't track the transaction that begin after BUILDING_SNAPSHOT\r\nand commit before FULL_SNAPSHOT when building historic snapshot in logical\r\ndecoding. This can cause a transaction which begin after FULL_SNAPSHOT to take\r\nan incorrect historic snapshot because transactions committed in BUILDING_SNAPSHOT\r\nstate will not be processed by SnapBuildCommitTxn().\r\n\r\n\r\nTo fix it, we can track the transaction that begin after BUILDING_SNAPSHOT and\r\ncommit before FULL_SNAPSHOT forcely by using SnapBuildCommitTxn(). \r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen", "msg_date": "Sun, 21 Jan 2024 17:25:14 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Bug report and fix about building historic snapshot" }, { "msg_contents": "This patch may be better, which only track catalog modified transactions.\r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen\r\n------------------&nbsp;Original&nbsp;------------------\r\nFrom: \"cca5507\" <[email protected]&gt;;\r\nDate:&nbsp;Sun, Jan 21, 2024 05:25 PM\r\nTo:&nbsp;\"pgsql-hackers\"<[email protected]&gt;;\r\n\r\nSubject:&nbsp;Bug report and fix about building historic snapshot\r\n\r\n\r\n\r\nHello, I find a bug in building historic snapshot and the steps to reproduce are as follows:\r\n\r\n\r\nPrepare:\r\n\r\n\r\n(pub)create table t1 (id int primary key);\r\n\r\n\r\n\r\n(pub)insert into t1 values (1);\r\n\r\n\r\n\r\n(pub)create publication pub for table t1;\r\n\r\n\r\n\r\n(sub)create table t1 (id int primary key);\r\n\r\nReproduce:\r\n\r\n\r\n\r\n(pub)begin; insert into t1 values (2); (txn1 in session1)\r\n\r\n\r\n\r\n(sub)create subscription sub connection 'hostaddr=127.0.0.1 port=5432 user=xxx dbname=postgres' publication pub; (pub will switch to BUILDING_SNAPSHOT state soon)\r\n\r\n\r\n\r\n(pub)begin; insert into t1 values (3); (txn2 in session2)\r\n\r\n\r\n\r\n(pub)create table t2 (id int primary key); (session3)\r\n\r\n\r\n\r\n(pub)commit; (commit txn1, and pub will switch to FULL_SNAPSHOT state soon)\r\n\r\n\r\n\r\n(pub)begin; insert into t2 values (1); (txn3 in session3)\r\n\r\n\r\n\r\n(pub)commit; (commit txn2, and pub will switch to CONSISTENT state soon)\r\n\r\n\r\n\r\n(pub)commit; (commit txn3, and replay txn3 will failed because its snapshot cannot see t2)\r\n\r\nReasons:\r\nWe currently don't track the transaction that begin after BUILDING_SNAPSHOT\r\nand 
commit before FULL_SNAPSHOT when building historic snapshot in logical\r\ndecoding. This can cause a transaction which begin after FULL_SNAPSHOT to take\r\nan incorrect historic snapshot because transactions committed in BUILDING_SNAPSHOT\r\nstate will not be processed by SnapBuildCommitTxn().\r\n\r\n\r\nTo fix it, we can track the transaction that begin after BUILDING_SNAPSHOT and\r\ncommit before FULL_SNAPSHOT forcely by using SnapBuildCommitTxn(). \r\n\r\n\r\n--\r\nRegards,\r\nChangAo Chen", "msg_date": "Tue, 23 Jan 2024 21:46:57 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re:Bug report and fix about building historic snapshot" }, { "msg_contents": "> This patch may be better, which only track catalog modified transactions.\r\nCan anyone help review this patch?\r\nThanks.\r\n--\r\nRegards,\r\nChangAo Chen", "msg_date": "Tue, 30 Jan 2024 14:31:03 +0800", "msg_from": "\"=?ISO-8859-1?B?Y2NhNTUwNw==?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re:Bug report and fix about building historic snapshot" } ]
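Laying the reported steps out per session may make the interleaving easier to follow; this is only a restatement of the recipe in the report above (publisher sessions S1 to S3 plus the subscriber), using the same table, publication and connection string.

    Setup, publisher:  CREATE TABLE t1 (id int PRIMARY KEY);
                       INSERT INTO t1 VALUES (1);
                       CREATE PUBLICATION pub FOR TABLE t1;
    Setup, subscriber: CREATE TABLE t1 (id int PRIMARY KEY);

    S1:  BEGIN; INSERT INTO t1 VALUES (2);            -- txn1 stays open
    Sub: CREATE SUBSCRIPTION sub CONNECTION 'hostaddr=127.0.0.1 port=5432 user=xxx dbname=postgres' PUBLICATION pub;
                                                      -- snapshot builder enters BUILDING_SNAPSHOT
    S2:  BEGIN; INSERT INTO t1 VALUES (3);            -- txn2 stays open
    S3:  CREATE TABLE t2 (id int PRIMARY KEY);        -- catalog change during BUILDING_SNAPSHOT
    S1:  COMMIT;                                      -- builder moves to FULL_SNAPSHOT
    S3:  BEGIN; INSERT INTO t2 VALUES (1);            -- txn3 stays open
    S2:  COMMIT;                                      -- builder moves to CONSISTENT
    S3:  COMMIT;                                      -- replaying txn3 fails: its snapshot cannot see t2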
[ { "msg_contents": "Hi all,\n\nAs the title said, just fix some typos.\n\nRegards\n\nYongtao Huang", "msg_date": "Sun, 21 Jan 2024 20:22:01 +0800", "msg_from": "Yongtao Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Fix some typos" }, { "msg_contents": "On Sun, Jan 21, 2024 at 08:22:01PM +0800, Yongtao Huang wrote:\n> As the title said, just fix some typos.\n\nThanks, applied.\n--\nMichael", "msg_date": "Mon, 22 Jan 2024 14:06:56 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix some typos" } ]
[ { "msg_contents": "It would be useful to have the ability to define for a role default vCPU\naffinity limits/thread priority settings so that more active sessions could\ncoexist similar to MySQL resource groups\n<https://dev.mysql.com/doc/refman/8.0/en/resource-groups.html>.\n\nBest Regards,\nYoni Sade\n\nIt would be useful to have the ability to define for a role default vCPU affinity limits/thread priority settings so that more active sessions could coexist similar to MySQL resource groups.Best Regards,Yoni Sade", "msg_date": "Sun, 21 Jan 2024 20:07:42 +0200", "msg_from": "Yoni Sade <[email protected]>", "msg_from_op": true, "msg_subject": "FEATURE REQUEST: Role vCPU limit/priority" }, { "msg_contents": "Hi Yoni,\n\n> It would be useful to have the ability to define for a role default vCPU affinity limits/thread priority settings so that more active sessions could coexist similar to MySQL resource groups.\n\nTo me this sounds like a valuable feature.\n\nWould you be interested in working on it? Typically it is a good idea\nto start with an RFC document and discuss it with the community.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 22 Jan 2024 13:43:45 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FEATURE REQUEST: Role vCPU limit/priority" }, { "msg_contents": "Well, I'm not a developer, I just wanted to pitch this idea as a DBA who\nwould make use of this feature.\n\nBest Regards,\nYoni Sade\n\n‫בתאריך יום ב׳, 22 בינו׳ 2024 ב-12:43 מאת ‪Aleksander Alekseev‬‏ <‪\[email protected]‬‏>:‬\n\n> Hi Yoni,\n>\n> > It would be useful to have the ability to define for a role default vCPU\n> affinity limits/thread priority settings so that more active sessions could\n> coexist similar to MySQL resource groups.\n>\n> To me this sounds like a valuable feature.\n>\n> Would you be interested in working on it? Typically it is a good idea\n> to start with an RFC document and discuss it with the community.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\nWell, I'm not a developer, I just wanted to pitch this idea as a DBA who would make use of this feature.Best Regards,Yoni Sade‫בתאריך יום ב׳, 22 בינו׳ 2024 ב-12:43 מאת ‪Aleksander Alekseev‬‏ <‪[email protected]‬‏>:‬Hi Yoni,\n\n> It would be useful to have the ability to define for a role default vCPU affinity limits/thread priority settings so that more active sessions could coexist similar to MySQL resource groups.\n\nTo me this sounds like a valuable feature.\n\nWould you be interested in working on it? 
Typically it is a good idea\nto start with an RFC document and discuss it with the community.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 23 Jan 2024 13:57:32 +0200", "msg_from": "Yoni Sade <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FEATURE REQUEST: Role vCPU limit/priority" }, { "msg_contents": "Yoni Sade schrieb am 21.01.2024 um 19:07:\n> It would be useful to have the ability to define for a role default\n> vCPU affinity limits/thread priority settings so that more active\n> sessions could coexist similar to MySQL resource groups\n> <https://dev.mysql.com/doc/refman/8.0/en/resource-groups.html>.\n\nTo a certain extent, you can achieve something like that using Linux cgroups\n\nhttps://www.cybertec-postgresql.com/en/linux-cgroups-for-postgresql/\n\n\n\n\n", "msg_date": "Tue, 23 Jan 2024 13:09:22 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FEATURE REQUEST: Role vCPU limit/priority" }, { "msg_contents": "Hi,\n\n> Well, I'm not a developer, I just wanted to pitch this idea as a DBA who would make use of this feature.\n\nI don't think one shouldn't be a developer in order to write an RFC\nand drive its discussion within the community. On top of that I'm\npretty confident as a DBA you can contribute tests and documentation.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 23 Jan 2024 16:10:40 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FEATURE REQUEST: Role vCPU limit/priority" }, { "msg_contents": "Hi,\n\nOn 2024-01-23 13:09:22 +0100, Thomas Kellerer wrote:\n> Yoni Sade schrieb am 21.01.2024 um 19:07:\n> > It would be useful to have the ability to define for a role default\n> > vCPU affinity limits/thread priority settings so that more active\n> > sessions could coexist similar to MySQL resource groups\n> > <https://dev.mysql.com/doc/refman/8.0/en/resource-groups.html>.\n> \n> To a certain extent, you can achieve something like that using Linux cgroups\n> \n> https://www.cybertec-postgresql.com/en/linux-cgroups-for-postgresql/\n\nIf you do that naively, you just run into priority inversion issues. E.g. a\nbackend holding a critical lwlock not getting scheduled for a while because it\nexceeded it CPU allocation, preventing higher priority processes from\nprogressing.\n\nI doubt you can implement this in a robust manner outside of postgres.\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Tue, 23 Jan 2024 10:10:27 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FEATURE REQUEST: Role vCPU limit/priority" }, { "msg_contents": "On Tue, Jan 23, 2024 at 10:10:27AM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2024-01-23 13:09:22 +0100, Thomas Kellerer wrote:\n> > Yoni Sade schrieb am 21.01.2024 um 19:07:\n> > > It would be useful to have the ability to define for a role default\n> > > vCPU affinity limits/thread priority settings so that more active\n> > > sessions could coexist similar to MySQL resource groups\n> > > <https://dev.mysql.com/doc/refman/8.0/en/resource-groups.html>.\n> > \n> > To a certain extent, you can achieve something like that using Linux cgroups\n> > \n> > https://www.cybertec-postgresql.com/en/linux-cgroups-for-postgresql/\n> \n> If you do that naively, you just run into priority inversion issues. E.g. 
a\n> backend holding a critical lwlock not getting scheduled for a while because it\n> exceeded it CPU allocation, preventing higher priority processes from\n> progressing.\n> \n> I doubt you can implement this in a robust manner outside of postgres.\n\nFYI, here is an article about priority inversion:\n\n\thttps://www.geeksforgeeks.org/priority-inversion-what-the-heck/\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 23 Jan 2024 13:25:04 -0500", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FEATURE REQUEST: Role vCPU limit/priority" } ]
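For anyone who wants to approximate the cgroup approach mentioned above while this is not an in-core feature, a minimal sketch for a systemd-managed cluster. The unit name, cgroup path and numbers are assumptions to adapt to the local packaging, and the priority-inversion caveat raised in the thread still applies, since the whole service is throttled and a backend holding a critical lock can be descheduled along with everything else.

    # cap the service at roughly two CPUs and halve its scheduling weight
    sudo systemctl set-property postgresql.service CPUQuota=200% CPUWeight=50

    # the equivalent cgroup v2 files, if the cgroup is managed by hand
    # (200000us of CPU per 100000us period; weight 50 instead of the default 100)
    echo "200000 100000" | sudo tee /sys/fs/cgroup/system.slice/postgresql.service/cpu.max
    echo 50 | sudo tee /sys/fs/cgroup/system.slice/postgresql.service/cpu.weight

Note that this acts per service (per cluster), not per role, which is exactly the gap the original request is about.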
[ { "msg_contents": "Hi,\n\nI'm an extension developer. If I use PostgreSQL built with\nMeson, I get the following warning:\n\n cc1: warning: '-Wformat-security' ignored without '-Wformat' [-Wformat-security]\n\nBecause \"pg_config --cflags\" includes -Wformat-security but\ndoesn't include -Wformat.\n\nCan we specify -Wformat as a common warning flag too? If we\ndo it, \"pg_config --cflags\" includes both of\n-Wformat-security and -Wformat. So I don't get the warning.\n\n\nThanks,\n-- \nkou", "msg_date": "Mon, 22 Jan 2024 14:11:39 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "Hi,\n\nCould someone take a look at this?\n\nPatch is attached in the original e-mail:\nhttps://www.postgresql.org/message-id/20240122.141139.931086145628347157.kou%40clear-code.com\n\n\nThanks,\n-- \nkou\n\nIn <[email protected]>\n \"meson: Specify -Wformat as a common warning flag for extensions\" on Mon, 22 Jan 2024 14:11:39 +0900 (JST),\n Sutou Kouhei <[email protected]> wrote:\n\n> Hi,\n> \n> I'm an extension developer. If I use PostgreSQL built with\n> Meson, I get the following warning:\n> \n> cc1: warning: '-Wformat-security' ignored without '-Wformat' [-Wformat-security]\n> \n> Because \"pg_config --cflags\" includes -Wformat-security but\n> doesn't include -Wformat.\n> \n> Can we specify -Wformat as a common warning flag too? If we\n> do it, \"pg_config --cflags\" includes both of\n> -Wformat-security and -Wformat. So I don't get the warning.\n> \n> \n> Thanks,\n> -- \n> kou\n\n\n", "msg_date": "Thu, 22 Feb 2024 16:41:48 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "On Sun Jan 21, 2024 at 11:11 PM CST, Sutou Kouhei wrote:\n> Hi,\n>\n> I'm an extension developer. If I use PostgreSQL built with\n> Meson, I get the following warning:\n>\n> cc1: warning: '-Wformat-security' ignored without '-Wformat' [-Wformat-security]\n>\n> Because \"pg_config --cflags\" includes -Wformat-security but\n> doesn't include -Wformat.\n>\n> Can we specify -Wformat as a common warning flag too? If we\n> do it, \"pg_config --cflags\" includes both of\n> -Wformat-security and -Wformat. So I don't get the warning.\n\nThe GCC documentation[0] says the following:\n\n> If -Wformat is specified, also warn about uses of format functions \n> that represent possible security problems. At present, this warns \n> about calls to printf and scanf functions where the format string is \n> not a string literal and there are no format arguments, as in printf \n> (foo);. This may be a security hole if the format string came from \n> untrusted input and contains ‘%n’. (This is currently a subset of what \n> -Wformat-nonliteral warns about, but in future warnings may be added \n> to -Wformat-security that are not included in -Wformat-nonliteral.)\n\nIt sounds like a legitimate issue. I have confirmed the issue exists \nwith a pg_config compiled with Meson. I can also confirm that this issue \nexists in the autotools build.\n\nHere is a v2 of your patch which includes the fix for autotools. I will \nmark this \"Ready for Committer\" in the commitfest. 
Thanks!\n\n[0]: https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html\n\n-- \nTristan Partin\nNeon (https://neon.tech)", "msg_date": "Thu, 07 Mar 2024 23:39:39 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "On Thu, Mar 07, 2024 at 11:39:39PM -0600, Tristan Partin wrote:\n> It sounds like a legitimate issue. I have confirmed the issue exists with a\n> pg_config compiled with Meson. I can also confirm that this issue exists in\n> the autotools build.\n\nFirst time I'm hearing about that, but I'll admit that I am cheating\nbecause -Wformat is forced in my local builds for some time now. I'm\nfailing to see the issue with meson and ./configure even if I remove\nthe switch, though, using a recent version of gcc at 13.2.0, but\nperhaps Debian does something underground. Are there version and/or\nenvironment requirements to be aware of?\n\nForcing -Wformat implies more stuff that can be disabled with\n-Wno-format-contains-nul, -Wno-format-extra-args, and\n-Wno-format-zero-length, but the thing is that we're usually very\nconservative with such additions in the scripts. See also\n8b6f5f25102f, done, I guess, as an answer to this thread:\nhttps://www.postgresql.org/message-id/4D431505.9010002%40dunslane.net\n\nA quick look at the past history of pgsql-hackers does not mention\nthat as a problem, either, but I may have missed something.\n--\nMichael", "msg_date": "Fri, 8 Mar 2024 15:32:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: meson: Specify -Wformat as a common warning flag for extensions\" on Fri, 8 Mar 2024 15:32:22 +0900,\n Michael Paquier <[email protected]> wrote:\n\n> Are there version and/or\n> environment requirements to be aware of?\n\nI'm using Debian GNU/Linux sid and I can reproduce with gcc\n8-13:\n\n$ for x in {8..13}; do; echo gcc-${x}; gcc-${x} -Wformat-security -E - < /dev/null > /dev/null; done\ngcc-8\ncc1: warning: -Wformat-security ignored without -Wformat [-Wformat-security]\ngcc-9\ncc1: warning: '-Wformat-security' ignored without '-Wformat' [-Wformat-security]\ngcc-10\ncc1: warning: '-Wformat-security' ignored without '-Wformat' [-Wformat-security]\ngcc-11\ncc1: warning: '-Wformat-security' ignored without '-Wformat' [-Wformat-security]\ngcc-12\ncc1: warning: '-Wformat-security' ignored without '-Wformat' [-Wformat-security]\ngcc-13\ncc1: warning: '-Wformat-security' ignored without '-Wformat' [-Wformat-security]\n$\n\nI tried this on Ubuntu 22.04 too but this isn't reproduced:\n\n$ gcc-11 -Wformat-security -E - < /dev/null > /dev/null\n$\n\nIt seems that Ubuntu enables -Wformat by default:\n\n$ gcc-11 -Wno-format -Wformat-security -E - < /dev/null > /dev/null\ncc1: warning: '-Wformat-security' ignored without '-Wformat' [-Wformat-security]\n\nI tried this on AlmaLinux 9 too and this is reproduced:\n\n$ gcc --version\ngcc (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2)\nCopyright (C) 2021 Free Software Foundation, Inc.\nThis is free software; see the source for copying conditions. 
There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n$ gcc -Wformat-security -E - < /dev/null > /dev/null\ncc1: warning: '-Wformat-security' ignored without '-Wformat' [-Wformat-security]\n\n> Forcing -Wformat implies more stuff that can be disabled with\n> -Wno-format-contains-nul, -Wno-format-extra-args, and\n> -Wno-format-zero-length, but the thing is that we're usually very\n> conservative with such additions in the scripts. See also\n> 8b6f5f25102f, done, I guess, as an answer to this thread:\n> https://www.postgresql.org/message-id/4D431505.9010002%40dunslane.net\n\nI think that this is not a problem. Because the comment\nadded by 8b6f5f25102f (\"This was included in -Wall/-Wformat\nin older GCC versions\") implies that we want to always use\n-Wformat-security. -Wformat-security isn't worked without\n-Wformat:\n\nhttps://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#index-Wformat-security\n\n> If -Wformat is specified, also warn about uses of format\n> functions that represent possible security problems.\n\n\nThanks,\n-- \nkou\n\n\n", "msg_date": "Fri, 08 Mar 2024 18:17:47 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "On Fri Mar 8, 2024 at 12:32 AM CST, Michael Paquier wrote:\n> On Thu, Mar 07, 2024 at 11:39:39PM -0600, Tristan Partin wrote:\n> > It sounds like a legitimate issue. I have confirmed the issue exists with a\n> > pg_config compiled with Meson. I can also confirm that this issue exists in\n> > the autotools build.\n>\n> First time I'm hearing about that, but I'll admit that I am cheating\n> because -Wformat is forced in my local builds for some time now. I'm\n> failing to see the issue with meson and ./configure even if I remove\n> the switch, though, using a recent version of gcc at 13.2.0, but\n> perhaps Debian does something underground. Are there version and/or\n> environment requirements to be aware of?\n>\n> Forcing -Wformat implies more stuff that can be disabled with\n> -Wno-format-contains-nul, -Wno-format-extra-args, and\n> -Wno-format-zero-length, but the thing is that we're usually very\n> conservative with such additions in the scripts. See also\n> 8b6f5f25102f, done, I guess, as an answer to this thread:\n> https://www.postgresql.org/message-id/4D431505.9010002%40dunslane.net\n>\n> A quick look at the past history of pgsql-hackers does not mention\n> that as a problem, either, but I may have missed something.\n\nOk, I figured this out. -Wall implies -Wformat=1. We set warning_level \nto 1 in the Meson project() call, which implies -Wall, and set -Wall in \nCFLAGS for autoconf. That's the reason we don't get issues building \nPostgres. 
A user making use of the pg_config --cflags option, as Sutou \nis, *will* run into the aforementioned issues, since we don't propogate \n-Wall into pg_config.\n\n\t$ gcc $(pg_config --cflags) -E - < /dev/null > /dev/null\n\tcc1: warning: ‘-Wformat-security’ ignored without ‘-Wformat’ [-Wformat-security]\n\t$ gcc -Wall $(pg_config --cflags) -E - < /dev/null > /dev/null\n\t(nothing printed)\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 08 Mar 2024 10:05:27 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "Hi,\r\n\r\nIn <[email protected]>\r\n \"Re: meson: Specify -Wformat as a common warning flag for extensions\" on Fri, 08 Mar 2024 10:05:27 -0600,\r\n \"Tristan Partin\" <[email protected]> wrote:\r\n\r\n> Ok, I figured this out. -Wall implies -Wformat=1. We set warning_level\r\n> to 1 in the Meson project() call, which implies -Wall, and set -Wall\r\n> in CFLAGS for autoconf. That's the reason we don't get issues building\r\n> Postgres. A user making use of the pg_config --cflags option, as Sutou\r\n> is, *will* run into the aforementioned issues, since we don't\r\n> propogate -Wall into pg_config.\r\n> \r\n> \t$ gcc $(pg_config --cflags) -E - < /dev/null > /dev/null\r\n> \tcc1: warning: ‘-Wformat-security’ ignored without ‘-Wformat’\r\n> \t[-Wformat-security]\r\n> \t$ gcc -Wall $(pg_config --cflags) -E - < /dev/null > /dev/null\r\n> \t(nothing printed)\r\n\r\nThanks for explaining this. You're right. This is the reason\r\nwhy we don't need this for PostgreSQL itself but we need\r\nthis for PostgreSQL extensions. Sorry. I should have\r\nexplained this in the first e-mail...\r\n\r\n\r\nWhat should we do to proceed this patch?\r\n\r\n\r\nThanks,\r\n-- \r\nkou\r\n", "msg_date": "Wed, 13 Mar 2024 08:56:38 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "On Tue Mar 12, 2024 at 6:56 PM CDT, Sutou Kouhei wrote:\n> Hi,\n>\n> In <[email protected]>\n> \"Re: meson: Specify -Wformat as a common warning flag for extensions\" on Fri, 08 Mar 2024 10:05:27 -0600,\n> \"Tristan Partin\" <[email protected]> wrote:\n>\n> > Ok, I figured this out. -Wall implies -Wformat=1. We set warning_level\n> > to 1 in the Meson project() call, which implies -Wall, and set -Wall\n> > in CFLAGS for autoconf. That's the reason we don't get issues building\n> > Postgres. A user making use of the pg_config --cflags option, as Sutou\n> > is, *will* run into the aforementioned issues, since we don't\n> > propogate -Wall into pg_config.\n> > \n> > \t$ gcc $(pg_config --cflags) -E - < /dev/null > /dev/null\n> > \tcc1: warning: ‘-Wformat-security’ ignored without ‘-Wformat’\n> > \t[-Wformat-security]\n> > \t$ gcc -Wall $(pg_config --cflags) -E - < /dev/null > /dev/null\n> > \t(nothing printed)\n>\n> Thanks for explaining this. You're right. This is the reason\n> why we don't need this for PostgreSQL itself but we need\n> this for PostgreSQL extensions. Sorry. I should have\n> explained this in the first e-mail...\n>\n>\n> What should we do to proceed this patch?\n\nPerhaps adding some more clarification in the comments that I wrote.\n\n- # -Wformat-security requires -Wformat, so check for it\n+ # -Wformat-secuirty requires -Wformat. We compile with -Wall in \n+ # Postgres, which includes -Wformat=1. 
-Wformat is shorthand for \n+ # -Wformat=1. The set of flags which includes -Wformat-security is \n+ # persisted into pg_config --cflags, which is commonly used by \n+ # PGXS-based extensions. The lack of -Wformat in the persisted flags\n+ # will produce a warning on many GCC versions, so even though adding \n+ # -Wformat here is a no-op for Postgres, it silences other use cases.\n\nThat might be too long-winded though :).\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 13 Mar 2024 00:43:11 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: meson: Specify -Wformat as a common warning flag for extensions\" on Wed, 13 Mar 2024 00:43:11 -0500,\n \"Tristan Partin\" <[email protected]> wrote:\n\n> Perhaps adding some more clarification in the comments that I wrote.\n> \n> - # -Wformat-security requires -Wformat, so check for it\n> + # -Wformat-secuirty requires -Wformat. We compile with -Wall in + #\n> Postgres, which includes -Wformat=1. -Wformat is shorthand for + #\n> -Wformat=1. The set of flags which includes -Wformat-security is + #\n> persisted into pg_config --cflags, which is commonly used by + #\n> PGXS-based extensions. The lack of -Wformat in the persisted flags\n> + # will produce a warning on many GCC versions, so even though adding\n> + # -Wformat here is a no-op for Postgres, it silences other use\n> cases.\n> \n> That might be too long-winded though :).\n\nThanks for the wording! I used it for the v3 patch.\n\n\nThanks,\n-- \nkou", "msg_date": "Wed, 13 Mar 2024 16:12:00 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "On 08.03.24 17:05, Tristan Partin wrote:\n> Ok, I figured this out. -Wall implies -Wformat=1. We set warning_level \n> to 1 in the Meson project() call, which implies -Wall, and set -Wall in \n> CFLAGS for autoconf. That's the reason we don't get issues building \n> Postgres. A user making use of the pg_config --cflags option, as Sutou \n> is, *will* run into the aforementioned issues, since we don't propogate \n> -Wall into pg_config.\n\n(The actual mechanism for extensions is that they get CFLAGS from \nMakefile.global, but pg_config has the same underlying issue.)\n\nI think the fix then is to put -Wall into CFLAGS in Makefile.global. \nLooking at a diff of Makefile.global between an autoconf and a meson \nbuild, I also see that under meson, CFLAGS doesn't get -O2 -g (or \nsimilar, depending on settings). This presumably has the same \nunderlying issue that meson handles those flags internally.\n\nFor someone who wants to write a fix for this, the relevant variable is \nvar_cflags in our meson scripts. And var_cxxflags as well.\n\n\n\n\n", "msg_date": "Wed, 13 Mar 2024 08:38:28 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: meson: Specify -Wformat as a common warning flag for extensions\" on Wed, 13 Mar 2024 08:38:28 +0100,\n Peter Eisentraut <[email protected]> wrote:\n\n> I think the fix then is to put -Wall into CFLAGS in\n> Makefile.global. 
Looking at a diff of Makefile.global between an\n> autoconf and a meson build, I also see that under meson, CFLAGS\n> doesn't get -O2 -g (or similar, depending on settings). This\n> presumably has the same underlying issue that meson handles those\n> flags internally.\n> \n> For someone who wants to write a fix for this, the relevant variable\n> is var_cflags in our meson scripts. And var_cxxflags as well.\n\nHow about the attached v4 patch?\n\n\nThanks,\n-- \nkou", "msg_date": "Fri, 15 Mar 2024 18:36:55 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "Hi,\n\nOn 2024-03-15 18:36:55 +0900, Sutou Kouhei wrote:\n> +warning_level = get_option('warning_level')\n> +# See https://mesonbuild.com/Builtin-options.html#details-for-warning_level for\n> +# warning_level values.\n> +if warning_level == '1'\n> + common_builtin_flags += ['-Wall', '/W2']\n> +elif warning_level == '2'\n> + common_builtin_flags += ['-Wall', '-Wextra', '/W3']\n> +elif warning_level == '3'\n> + common_builtin_flags += ['-Wall', '-Wextra', '-Wpedantic', '/W4']\n> +elif warning_level == 'everything'\n> + common_builtin_flags += ['-Weverything', '/Wall']\n> +endif\n\n> +cflags_builtin = cc.get_supported_arguments(common_builtin_flags)\n> +if llvm.found()\n> + cxxflags_builtin = cpp.get_supported_arguments(common_builtin_flags)\n> +endif\n\nThis seems like a fair amount of extra configure tests. Particularly because\n/W* isn't ever interesting for Makefile.global - they're msvc flags - because\nyou can't use that with msvc.\n\nI'm also doubtful that it's worth supporting warning_level=3/everything, you\nend up with a completely flood of warnings that way.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 7 Apr 2024 16:26:35 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "Hi Andres,\n\nThanks for reviewing this!\n\nIn <[email protected]>\n \"Re: meson: Specify -Wformat as a common warning flag for extensions\" on Sun, 7 Apr 2024 16:26:35 -0700,\n Andres Freund <[email protected]> wrote:\n\n> This seems like a fair amount of extra configure tests. Particularly because\n> /W* isn't ever interesting for Makefile.global - they're msvc flags - because\n> you can't use that with msvc.\n> \n> I'm also doubtful that it's worth supporting warning_level=3/everything, you\n> end up with a completely flood of warnings that way.\n\nOK. I've removed \"/W*\" flags and warning_level==3/everything\ncases.\n\nHow about the attached v5 patch?\n\n\nThanks,\n-- \nkou", "msg_date": "Mon, 08 Apr 2024 10:01:17 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "On 07.04.24 18:01, Sutou Kouhei wrote:\n> +# We don't have \"warning_level == 3\" and \"warning_level ==\n> +# 'everything'\" here because we don't use these warning levels.\n> +if warning_level == '1'\n> + common_builtin_flags += ['-Wall']\n> +elif warning_level == '2'\n> + common_builtin_flags += ['-Wall', '-Wextra']\n> +endif\n\nI would trim this even further and always export just '-Wall'. 
The \nother options aren't really something we support.\n\nThe other stanzas, on '-g' and '-O*', look good to me.\n\n\n\n", "msg_date": "Tue, 28 May 2024 23:31:05 -0700", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "Hi,\n\nIn <[email protected]>\n \"Re: meson: Specify -Wformat as a common warning flag for extensions\" on Tue, 28 May 2024 23:31:05 -0700,\n Peter Eisentraut <[email protected]> wrote:\n\n> On 07.04.24 18:01, Sutou Kouhei wrote:\n>> +# We don't have \"warning_level == 3\" and \"warning_level ==\n>> +# 'everything'\" here because we don't use these warning levels.\n>> +if warning_level == '1'\n>> + common_builtin_flags += ['-Wall']\n>> +elif warning_level == '2'\n>> + common_builtin_flags += ['-Wall', '-Wextra']\n>> +endif\n> \n> I would trim this even further and always export just '-Wall'. The\n> other options aren't really something we support.\n\nOK. How about the v6 patch? It always uses '-Wall'.\n\nThanks,\n-- \nkou", "msg_date": "Wed, 29 May 2024 15:47:08 +0900 (JST)", "msg_from": "Sutou Kouhei <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "On 29.05.24 08:47, Sutou Kouhei wrote:\n> In <[email protected]>\n> \"Re: meson: Specify -Wformat as a common warning flag for extensions\" on Tue, 28 May 2024 23:31:05 -0700,\n> Peter Eisentraut <[email protected]> wrote:\n> \n>> On 07.04.24 18:01, Sutou Kouhei wrote:\n>>> +# We don't have \"warning_level == 3\" and \"warning_level ==\n>>> +# 'everything'\" here because we don't use these warning levels.\n>>> +if warning_level == '1'\n>>> + common_builtin_flags += ['-Wall']\n>>> +elif warning_level == '2'\n>>> + common_builtin_flags += ['-Wall', '-Wextra']\n>>> +endif\n>>\n>> I would trim this even further and always export just '-Wall'. The\n>> other options aren't really something we support.\n> \n> OK. How about the v6 patch? It always uses '-Wall'.\n\nYes, this looks good to me.\n\nAll: I think we should backpatch this. Otherwise, meson-based installs \nwill get suboptimal behavior for extension builds via pgxs.\n\n\n\n", "msg_date": "Tue, 4 Jun 2024 09:00:40 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" }, { "msg_contents": "On 29.05.24 08:47, Sutou Kouhei wrote:\n> In <[email protected]>\n> \"Re: meson: Specify -Wformat as a common warning flag for extensions\" on Tue, 28 May 2024 23:31:05 -0700,\n> Peter Eisentraut <[email protected]> wrote:\n> \n>> On 07.04.24 18:01, Sutou Kouhei wrote:\n>>> +# We don't have \"warning_level == 3\" and \"warning_level ==\n>>> +# 'everything'\" here because we don't use these warning levels.\n>>> +if warning_level == '1'\n>>> + common_builtin_flags += ['-Wall']\n>>> +elif warning_level == '2'\n>>> + common_builtin_flags += ['-Wall', '-Wextra']\n>>> +endif\n>>\n>> I would trim this even further and always export just '-Wall'. The\n>> other options aren't really something we support.\n> \n> OK. How about the v6 patch? It always uses '-Wall'.\n\nI have committed this. Thanks.\n\n\n\n", "msg_date": "Fri, 7 Jun 2024 09:44:26 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson: Specify -Wformat as a common warning flag for extensions" } ]
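For extension authors building against a Meson-built server installation that predates the committed fix, a hedged PGXS Makefile sketch of a local workaround. The module name is a placeholder, and using PG_CFLAGS as the hook for extra compiler flags is my recollection of pgxs.mk rather than something stated in this thread.

    # Supplying -Wall locally silences
    #   "'-Wformat-security' ignored without '-Wformat'"
    # when pg_config --cflags carries -Wformat-security but not -Wall/-Wformat.
    MODULES   = my_extension
    PG_CFLAGS += -Wall

    PG_CONFIG ?= pg_config
    PGXS := $(shell $(PG_CONFIG) --pgxs)
    include $(PGXS)

Against a server that already includes the fix, the exported flags carry -Wall anyway and the extra line becomes a harmless no-op.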
[ { "msg_contents": "One of the goals is to make the creation of the distribution tarball \nmore directly traceable to the git repository. That is why we removed \nthe \"make distprep\" step.\n\nHere I want to take another step in that direction, by changing \"make \ndist\" to directly use \"git archive\", rather than the custom shell script \nit currently runs.\n\nThe simple summary is that it would run\n\ngit archive --format tar.gz --prefix postgresql-17devel/ HEAD -o \npostgresql-17devel.tar.gz\n\n(with appropriate version numbers, of course), and that's the tarball we \nwould ship.\n\nThere are analogous commands for other compression formats.\n\nThe actual command gets subtly more complicated if you need to run this \nin a separate build directory. In my attached patch, the make version \ndoesn't support vpath at the moment, just so that it's easier to \nunderstand for now. The meson version is a bit hairy.\n\nI have studied and tested this quite a bit, and I have found that the \narchives produced this way are deterministic and reproducible, meaning \nfor a given commit the result file should always be bit-for-bit identical.\n\nThe exception is that if you use a git version older than 2.38.0, gzip \nrecords the platform in the archive, so you'd get a different output on \nWindows vs. macOS vs. \"UNIX\" (everything else). In git 2.38.0, this was \nchanged so that everything is recorded as \"UNIX\" now. This is just \nsomething to keep in mind. This issue is specific to the gzip format, \nit does not affect other compression formats.\n\nMeson has its own distribution building command (meson dist), but opted \nagainst using that. The main problem is that the way they have \nimplemented it, it is not deterministic in the above sense. (Another \npoint is of course that we probably want a \"make\" version for the time \nbeing.)\n\nBut the target name \"dist\" in meson is reserved for that reason, so I \nneeded to call the custom target \"pgdist\".\n\nI did take one idea from meson: It runs a check before git archive that \nthe checkout is clean. That way, you avoid mistakes because of \nuncommitted changes. This works well in my \"make\" implementation. In \nthe meson implementation, I had to find a workaround, because a \ncustom_target cannot have a dependency on a run_target. As also \nmentioned above, the whole meson implementation is a bit ugly.\n\nAnyway, with the attached patch you can do\n\n make dist\n\nor\n\n meson compile -C build pgdist\n\nand it produces the same set of tarballs as before, except it's done \ndifferently.\n\nThe actual build scripts need some fine-tuning, but the general idea is \ncorrect, I think.", "msg_date": "Mon, 22 Jan 2024 08:31:59 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "make dist using git archive" }, { "msg_contents": "Hi,\n\nOn Mon, Jan 22, 2024 at 3:32 PM Peter Eisentraut <[email protected]> wrote:\n>\n> One of the goals is to make the creation of the distribution tarball\n> more directly traceable to the git repository. 
That is why we removed\n> the \"make distprep\" step.\n>\n> Here I want to take another step in that direction, by changing \"make\n> dist\" to directly use \"git archive\", rather than the custom shell script\n> it currently runs.\n>\n> The simple summary is that it would run\n>\n> git archive --format tar.gz --prefix postgresql-17devel/ HEAD -o\n> postgresql-17devel.tar.gz\n>\n> (with appropriate version numbers, of course), and that's the tarball we\n> would ship.\n>\n> There are analogous commands for other compression formats.\n>\n> The actual command gets subtly more complicated if you need to run this\n> in a separate build directory. In my attached patch, the make version\n> doesn't support vpath at the moment, just so that it's easier to\n> understand for now. The meson version is a bit hairy.\n>\n> I have studied and tested this quite a bit, and I have found that the\n> archives produced this way are deterministic and reproducible, meaning\n> for a given commit the result file should always be bit-for-bit identical.\n>\n> The exception is that if you use a git version older than 2.38.0, gzip\n> records the platform in the archive, so you'd get a different output on\n> Windows vs. macOS vs. \"UNIX\" (everything else). In git 2.38.0, this was\n> changed so that everything is recorded as \"UNIX\" now. This is just\n> something to keep in mind. This issue is specific to the gzip format,\n> it does not affect other compression formats.\n>\n> Meson has its own distribution building command (meson dist), but opted\n> against using that. The main problem is that the way they have\n> implemented it, it is not deterministic in the above sense. (Another\n> point is of course that we probably want a \"make\" version for the time\n> being.)\n>\n> But the target name \"dist\" in meson is reserved for that reason, so I\n> needed to call the custom target \"pgdist\".\n>\n> I did take one idea from meson: It runs a check before git archive that\n> the checkout is clean. That way, you avoid mistakes because of\n> uncommitted changes. This works well in my \"make\" implementation. In\n> the meson implementation, I had to find a workaround, because a\n> custom_target cannot have a dependency on a run_target. 
As also\n> mentioned above, the whole meson implementation is a bit ugly.\n>\n> Anyway, with the attached patch you can do\n>\n> make dist\n>\n> or\n>\n> meson compile -C build pgdist\n\nI played this with meson build on macOS, the packages are generated\nin source root but not build root, I'm sure if this is by design but I think\npolluting *working directory* is not good.\n\nAnother thing I'd like to point out is, should we also introduce *git commit*\nor maybe *git tag* to package name, something like:\n\ngit archive --format tar.gz --prefix postgresql-17devel/ HEAD -o\npostgresql-17devel-`git rev-parse --short HEAD`.tar.gz\ngit archive --format tar.gz --prefix postgresql-17devel/ HEAD -o\npostgresql-`git describe --tags`.tar.gz\n\n>\n> and it produces the same set of tarballs as before, except it's done\n> differently.\n>\n> The actual build scripts need some fine-tuning, but the general idea is\n> correct, I think.\n\nI think this is a good idea, thanks for working on this.\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Mon, 22 Jan 2024 20:10:53 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On 22.01.24 13:10, Junwang Zhao wrote:\n> I played this with meson build on macOS, the packages are generated\n> in source root but not build root, I'm sure if this is by design but I think\n> polluting *working directory* is not good.\n\nYes, it's not good, but I couldn't find a way to make it work.\n\nThis is part of the complications with meson I referred to. The \n@BUILD_ROOT@ placeholder in custom_target() is apparently always a \nrelative path, but it doesn't know that git -C changes the current \ndirectory.\n\n> Another thing I'd like to point out is, should we also introduce *git commit*\n> or maybe *git tag* to package name, something like:\n> \n> git archive --format tar.gz --prefix postgresql-17devel/ HEAD -o\n> postgresql-17devel-`git rev-parse --short HEAD`.tar.gz\n> git archive --format tar.gz --prefix postgresql-17devel/ HEAD -o\n> postgresql-`git describe --tags`.tar.gz\n\nI'm not sure why we would need it built-in. It can be done by hand, of \ncourse.\n\n\n\n", "msg_date": "Mon, 22 Jan 2024 19:35:56 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On Mon Jan 22, 2024 at 1:31 AM CST, Peter Eisentraut wrote:\n> From 4b128faca90238d0a0bb6949a8050c2501d1bd67 Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <[email protected]>\n> Date: Sat, 20 Jan 2024 21:54:36 +0100\n> Subject: [PATCH v0] make dist uses git archive\n>\n> ---\n> GNUmakefile.in | 34 ++++++++++++----------------------\n> meson.build | 38 ++++++++++++++++++++++++++++++++++++++\n> 2 files changed, 50 insertions(+), 22 deletions(-)\n>\n> diff --git a/GNUmakefile.in b/GNUmakefile.in\n> index eba569e930e..3e04785ada2 100644\n> --- a/GNUmakefile.in\n> +++ b/GNUmakefile.in\n> @@ -87,29 +87,19 @@ update-unicode: | submake-generated-headers submake-libpgport\n> distdir\t= postgresql-$(VERSION)\n> dummy\t= =install=\n> \n> +GIT = git\n> +\n> dist: $(distdir).tar.gz $(distdir).tar.bz2\n> -\trm -rf $(distdir)\n> -\n> -$(distdir).tar: distdir\n> -\t$(TAR) chf $@ $(distdir)\n> -\n> -.INTERMEDIATE: $(distdir).tar\n> -\n> -distdir-location:\n> -\t@echo $(distdir)\n> -\n> -distdir:\n> -\trm -rf $(distdir)* $(dummy)\n> -\tfor x in `cd $(top_srcdir) && find . 
\\( -name CVS -prune \\) -o \\( -name .git -prune \\) -o -print`; do \\\n> -\t file=`expr X$$x : 'X\\./\\(.*\\)'`; \\\n> -\t if test -d \"$(top_srcdir)/$$file\" ; then \\\n> -\t mkdir \"$(distdir)/$$file\" && chmod 777 \"$(distdir)/$$file\";\t\\\n> -\t else \\\n> -\t ln \"$(top_srcdir)/$$file\" \"$(distdir)/$$file\" >/dev/null 2>&1 \\\n> -\t || cp \"$(top_srcdir)/$$file\" \"$(distdir)/$$file\"; \\\n> -\t fi || exit; \\\n> -\tdone\n> -\t$(MAKE) -C $(distdir) distclean\n> +\n> +.PHONY: check-dirty-index\n> +check-dirty-index:\n> +\t$(GIT) diff-index --quiet HEAD\n> +\n> +$(distdir).tar.gz: check-dirty-index\n> +\t$(GIT) archive --format tar.gz --prefix $(distdir)/ HEAD -o $@\n> +\n> +$(distdir).tar.bz2: check-dirty-index\n> +\t$(GIT) -c tar.tar.bz2.command='$(BZIP2) -c' archive --format tar.bz2 --prefix $(distdir)/ HEAD -o $@\n> \n> distcheck: dist\n> \trm -rf $(dummy)\n> diff --git a/meson.build b/meson.build\n> index c317144b6bc..f0d870c5192 100644\n> --- a/meson.build\n> +++ b/meson.build\n> @@ -3347,6 +3347,44 @@ run_target('help',\n> \n> \n> \n> +###############################################################\n> +# Distribution archive\n> +###############################################################\n> +\n> +git = find_program('git', required: false, native: true, disabler: true)\n> +bzip2 = find_program('bzip2', required: false, native: true, disabler: true)\n\nThis doesn't need to be a disabler. git is fine as-is. See later \ncomment. Disablers only work like you are expecting when they are used \nlike how git is used. Once you call a method like .path(), all bets are \noff.\n\n> +distdir = meson.project_name() + '-' + meson.project_version()\n> +\n> +check_dirty_index = run_target('check-dirty-index',\n> + command: [git, 'diff-index', '--quiet', 'HEAD'])\n\nSeems like you might want to add -C here too?\n\n> +\n> +tar_gz = custom_target('tar.gz',\n> + build_always_stale: true,\n> + command: [git, '-C', '@SOURCE_ROOT@', 'archive',\n> + '--format', 'tar.gz',\n> + '--prefix', distdir + '/',\n> + '-o', '@BUILD_ROOT@/@OUTPUT@',\n> + 'HEAD', '.'],\n> + install: false,\n> + output: distdir + '.tar.gz',\n> +)\n> +\n> +tar_bz2 = custom_target('tar.bz2',\n> + build_always_stale: true,\n> + command: [git, '-C', '@SOURCE_ROOT@', '-c', 'tar.tar.bz2.command=' + bzip2.path() + ' -c', 'archive',\n> + '--format', 'tar.bz2',\n> + '--prefix', distdir + '/',\n\n- '-o', '@BUILD_ROOT@/@OUTPUT@',\n+ '-o', join_paths(meson.build_root(), '@OUTPUT@'),\n\nThis will generate the tarballs in the build directory. Do the same for \nthe previous target. Tested locally.\n\n> + 'HEAD', '.'],\n> + install: false,\n> + output: distdir + '.tar.bz2',\n> +)\n\nThe bz2 target should be wrapped in an `if bzip2.found()`. It is \npossible for git to be found, but not bzip2. I might also define the bz2 \ncommand out of line. Also, you may want to add \nthese programs to meson_options.txt for overriding, even though the \n\"meson-ic\" way is to use a machine file.\n\n> +\n> +alias_target('pgdist', [check_dirty_index, tar_gz, tar_bz2])\n\nAre you intending for the check_dirty_index target to prohibit the other \ntwo targets from running? Currently that is not the case. If it is what \nyou intend, use a stamp file or something to indicate a relationship. \nAlternatively, inline the git diff-index into the other commands. These \nmight also do better as external scripts. 
It would reduce duplication \nbetween the autotools and Meson builds.\n\n> +\n> +\n> +\n> ###############################################################\n> # The End, The End, My Friend\n> ###############################################################\n\nI am not really following why we can't use the builtin Meson dist \ncommand. The only difference from my testing is it doesn't use \na --prefix argument.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 22 Jan 2024 14:04:43 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On Tue, Jan 23, 2024 at 2:36 AM Peter Eisentraut <[email protected]> wrote:\n>\n> On 22.01.24 13:10, Junwang Zhao wrote:\n> > I played this with meson build on macOS, the packages are generated\n> > in source root but not build root, I'm sure if this is by design but I think\n> > polluting *working directory* is not good.\n>\n> Yes, it's not good, but I couldn't find a way to make it work.\n>\n> This is part of the complications with meson I referred to. The\n> @BUILD_ROOT@ placeholder in custom_target() is apparently always a\n> relative path, but it doesn't know that git -C changes the current\n> directory.\n>\n> > Another thing I'd like to point out is, should we also introduce *git commit*\n> > or maybe *git tag* to package name, something like:\n> >\n> > git archive --format tar.gz --prefix postgresql-17devel/ HEAD -o\n> > postgresql-17devel-`git rev-parse --short HEAD`.tar.gz\n> > git archive --format tar.gz --prefix postgresql-17devel/ HEAD -o\n> > postgresql-`git describe --tags`.tar.gz\n>\n> I'm not sure why we would need it built-in. It can be done by hand, of\n> course.\n\nIf this is only used by the release phase, one can do this by hand.\n\n*commit id/tag* in package name can be used to identify the git source,\nwhich might be useful for cooperation between QA and dev team,\nbut surely there are better ways for this, so I do not have a strong\nopinion here.\n\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Tue, 23 Jan 2024 10:14:40 +0800", "msg_from": "Junwang Zhao <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On 22.01.24 21:04, Tristan Partin wrote:\n> I am not really following why we can't use the builtin Meson dist \n> command. The only difference from my testing is it doesn't use a \n> --prefix argument.\n\nHere are some problems I have identified:\n\n1. meson dist internally runs gzip without the -n option. That makes \nthe tar.gz archive include a timestamp, which in turn makes it not \nreproducible.\n\n2. Because gzip includes a platform indicator in the archive, the \nproduced tar.gz archive is not reproducible across platforms. (I don't \nknow if gzip has an option to avoid that. git archive uses an internal \ngzip implementation that handles this.)\n\n3. Meson does not support tar.bz2 archives.\n\n4. Meson uses git archive internally, but then unpacks and repacks the \narchive, which loses the ability to use git get-tar-commit-id.\n\n5. I have found that the tar archives created by meson and git archive \ninclude the files in different orders. I suspect that the Python \ntarfile module introduces some either randomness or platform dependency.\n\n6. meson dist is also slower because of the additional work.\n\n7. meson dist produces .sha256sum files but we have called them .sha256. 
\n (This is obviously trivial, but it is something that would need to be \ndealt with somehow nonetheless.)\n\nMost or all of these issues are fixable, either upstream in Meson or by \nadjusting our own requirements. But for now this route would have some \nsignificant disadvantages.\n\n\n\n", "msg_date": "Tue, 23 Jan 2024 10:30:05 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On Tue Jan 23, 2024 at 3:30 AM CST, Peter Eisentraut wrote:\n> On 22.01.24 21:04, Tristan Partin wrote:\n> > I am not really following why we can't use the builtin Meson dist \n> > command. The only difference from my testing is it doesn't use a \n> > --prefix argument.\n>\n> Here are some problems I have identified:\n>\n> 1. meson dist internally runs gzip without the -n option. That makes \n> the tar.gz archive include a timestamp, which in turn makes it not \n> reproducible.\n>\n> 2. Because gzip includes a platform indicator in the archive, the \n> produced tar.gz archive is not reproducible across platforms. (I don't \n> know if gzip has an option to avoid that. git archive uses an internal \n> gzip implementation that handles this.)\n>\n> 3. Meson does not support tar.bz2 archives.\n>\n> 4. Meson uses git archive internally, but then unpacks and repacks the \n> archive, which loses the ability to use git get-tar-commit-id.\n>\n> 5. I have found that the tar archives created by meson and git archive \n> include the files in different orders. I suspect that the Python \n> tarfile module introduces some either randomness or platform dependency.\n>\n> 6. meson dist is also slower because of the additional work.\n>\n> 7. meson dist produces .sha256sum files but we have called them .sha256. \n> (This is obviously trivial, but it is something that would need to be \n> dealt with somehow nonetheless.)\n>\n> Most or all of these issues are fixable, either upstream in Meson or by \n> adjusting our own requirements. But for now this route would have some \n> significant disadvantages.\n\nThanks Peter. I will bring these up with upstream!\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 24 Jan 2024 10:18:31 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On Wed Jan 24, 2024 at 10:18 AM CST, Tristan Partin wrote:\n> On Tue Jan 23, 2024 at 3:30 AM CST, Peter Eisentraut wrote:\n> > On 22.01.24 21:04, Tristan Partin wrote:\n> > > I am not really following why we can't use the builtin Meson dist \n> > > command. The only difference from my testing is it doesn't use a \n> > > --prefix argument.\n> >\n> > Here are some problems I have identified:\n> >\n> > 1. meson dist internally runs gzip without the -n option. That makes \n> > the tar.gz archive include a timestamp, which in turn makes it not \n> > reproducible.\n\nIt doesn't look like Python provides the facilities to affect this.\n\n> > 2. Because gzip includes a platform indicator in the archive, the \n> > produced tar.gz archive is not reproducible across platforms. (I don't \n> > know if gzip has an option to avoid that. git archive uses an internal \n> > gzip implementation that handles this.)\n\nSame reason as above.\n\n> > 3. Meson does not support tar.bz2 archives.\n\nSubmitted https://github.com/mesonbuild/meson/pull/12770.\n\n> > 4. 
Meson uses git archive internally, but then unpacks and repacks the \n> > archive, which loses the ability to use git get-tar-commit-id.\n\nBecause Meson allows projects to distribute arbitrary files via \nmeson.add_dist_script(), and can include subprojects via `meson dist \n--include-subprojects`, this doesn't seem like an easily solvable \nproblem.\n\n> > 5. I have found that the tar archives created by meson and git archive \n> > include the files in different orders. I suspect that the Python \n> > tarfile module introduces some either randomness or platform dependency.\n\nSeems likely.\n\n> > 6. meson dist is also slower because of the additional work.\n\nNot easily solvable due to 4.\n\n> > 7. meson dist produces .sha256sum files but we have called them .sha256. \n> > (This is obviously trivial, but it is something that would need to be \n> > dealt with somehow nonetheless.)\n> >\n> > Most or all of these issues are fixable, either upstream in Meson or by \n> > adjusting our own requirements. But for now this route would have some \n> > significant disadvantages.\n>\n> Thanks Peter. I will bring these up with upstream!\n\nI think the solution to point 4 is to not unpack/repack if there are no \ndist scripts and/or subprojects to distribute. I can take a look at \nthis later. I think this would also solve points 1, 2, 5, and 6 because \nat that point meson is just calling git-archive.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Wed, 24 Jan 2024 11:57:08 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On 24.01.24 18:57, Tristan Partin wrote:\n>> > 4. Meson uses git archive internally, but then unpacks and repacks \n>> the > archive, which loses the ability to use git get-tar-commit-id.\n> \n> Because Meson allows projects to distribute arbitrary files via \n> meson.add_dist_script(), and can include subprojects via `meson dist \n> --include-subprojects`, this doesn't seem like an easily solvable problem.\n\ngit archive has the --add-file option, which can probably do the same \nthing. Subprojects are another thing, but obviously are more rarely used.\n\n> I think the solution to point 4 is to not unpack/repack if there are no \n> dist scripts and/or subprojects to distribute. I can take a look at this \n> later. I think this would also solve points 1, 2, 5, and 6 because at \n> that point meson is just calling git-archive.\n\nI think that would be a useful direction.\n\n\n\n", "msg_date": "Thu, 25 Jan 2024 07:35:39 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On 22.01.24 21:04, Tristan Partin wrote:\n>> +git = find_program('git', required: false, native: true, disabler: true)\n>> +bzip2 = find_program('bzip2', required: false, native: true, \n>> disabler: true)\n> \n> This doesn't need to be a disabler. git is fine as-is. See later \n> comment. Disablers only work like you are expecting when they are used \n> like how git is used. 
Once you call a method like .path(), all bets are \n> off.\n\nok, fixed\n\n>> +distdir = meson.project_name() + '-' + meson.project_version()\n>> +\n>> +check_dirty_index = run_target('check-dirty-index',\n>> +                               command: [git, 'diff-index', \n>> '--quiet', 'HEAD'])\n> \n> Seems like you might want to add -C here too?\n\ndone\n\n>> +tar_bz2 = custom_target('tar.bz2',\n>> +  build_always_stale: true,\n>> +  command: [git, '-C', '@SOURCE_ROOT@', '-c', 'tar.tar.bz2.command=' \n>> + bzip2.path() + ' -c', 'archive',\n>> +            '--format', 'tar.bz2',\n>> +            '--prefix', distdir + '/',\n> \n> -            '-o', '@BUILD_ROOT@/@OUTPUT@',\n> +            '-o', join_paths(meson.build_root(), '@OUTPUT@'),\n> \n> This will generate the tarballs in the build directory. Do the same for \n> the previous target. Tested locally.\n\nFixed, thanks. I had struggled with this one.\n\n>> +            'HEAD', '.'],\n>> +  install: false,\n>> +  output: distdir + '.tar.bz2',\n>> +)\n> \n> The bz2 target should be wrapped in an `if bzip2.found()`.\n\nWell, I think we want the dist step to fail if bzip2 is not there. At \nleast that is the current expectation.\n\n>> +alias_target('pgdist', [check_dirty_index, tar_gz, tar_bz2])\n> \n> Are you intending for the check_dirty_index target to prohibit the other \n> two targets from running? Currently that is not the case.\n\nYes, that was the hope, and that's how the make dist variant works. But \nI couldn't figure this out with meson. Also, the above actually also \ndoesn't work with older meson versions, so I had to comment this out to \nget CI to work.\n\n> If it is what \n> you intend, use a stamp file or something to indicate a relationship. \n> Alternatively, inline the git diff-index into the other commands. These \n> might also do better as external scripts. It would reduce duplication \n> between the autotools and Meson builds.\n\nYeah, maybe that's a direction.\n\nThe updated patch also supports vpath builds with make now.\n\nI have also added a CI patch, for amusement. Maybe we'll want to keep \nit, though.", "msg_date": "Thu, 25 Jan 2024 17:04:43 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On Thu Jan 25, 2024 at 10:04 AM CST, Peter Eisentraut wrote:\n> On 22.01.24 21:04, Tristan Partin wrote:\n> >> +            'HEAD', '.'],\n> >> +  install: false,\n> >> +  output: distdir + '.tar.bz2',\n> >> +)\n> > \n> > The bz2 target should be wrapped in an `if bzip2.found()`.\n\nThe way that this currently works is that you will fail at configure \ntime if bz2 doesn't exist on the system. Meson will try to resolve \na .path() method on a NotFoundProgram. You might want to define the bz2 \ntarget to just call `exit 1` in this case.\n\nif bzip2.found()\n # do your current target\nelse\n bz2 = run_target('tar.bz2', command: ['exit', 1])\nendif\n\nThis should cause pgdist to appropriately fail at run time when \ngenerating the bz2 tarball.\n\n> Well, I think we want the dist step to fail if bzip2 is not there. At \n> least that is the current expectation.\n>\n> >> +alias_target('pgdist', [check_dirty_index, tar_gz, tar_bz2])\n> > \n> > Are you intending for the check_dirty_index target to prohibit the other \n> > two targets from running? Currently that is not the case.\n>\n> Yes, that was the hope, and that's how the make dist variant works. But \n> I couldn't figure this out with meson. 
Also, the above actually also \n> doesn't work with older meson versions, so I had to comment this out to \n> get CI to work.\n>\n> > If it is what \n> > you intend, use a stamp file or something to indicate a relationship. \n> > Alternatively, inline the git diff-index into the other commands. These \n> > might also do better as external scripts. It would reduce duplication \n> > between the autotools and Meson builds.\n>\n> Yeah, maybe that's a direction.\n\nFor what it's worth, I run Meson 1.3, and the behavior of generating the \ntarballs even though it is a dirty tree still occurred. In the new patch \nyou seem to say it was fixed in 0.60.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Thu, 25 Jan 2024 10:25:19 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On 25.01.24 17:25, Tristan Partin wrote:\n> The way that this currently works is that you will fail at configure \n> time if bz2 doesn't exist on the system. Meson will try to resolve a \n> .path() method on a NotFoundProgram. You might want to define the bz2 \n> target to just call `exit 1` in this case.\n> \n> if bzip2.found()\n>  # do your current target\n> else\n>  bz2 = run_target('tar.bz2', command: ['exit', 1])\n> endif\n> \n> This should cause pgdist to appropriately fail at run time when \n> generating the bz2 tarball.\n\nOk, done that way.\n\n> For what it's worth, I run Meson 1.3, and the behavior of generating the \n> tarballs even though it is a dirty tree still occurred. In the new patch \n> you seem to say it was fixed in 0.60.\n\nThe problem I'm referring to is that before 0.60, alias_target cannot \ndepend on run_target (only \"build target\"). This is AFAICT not \ndocumented and might not have been an intentional change, but you can \ntrace it in the meson source code, and it shows in the PostgreSQL CI. \nThat's also why for the above bzip2 issue I have to use custom_target in \nplace of your run_target.", "msg_date": "Fri, 26 Jan 2024 07:28:15 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On Fri Jan 26, 2024 at 12:28 AM CST, Peter Eisentraut wrote:\n> On 25.01.24 17:25, Tristan Partin wrote:\n> > For what it's worth, I run Meson 1.3, and the behavior of generating the \n> > tarballs even though it is a dirty tree still occurred. In the new patch \n> > you seem to say it was fixed in 0.60.\n>\n> The problem I'm referring to is that before 0.60, alias_target cannot \n> depend on run_target (only \"build target\"). This is AFAICT not \n> documented and might not have been an intentional change, but you can \n> trace it in the meson source code, and it shows in the PostgreSQL CI. \n> That's also why for the above bzip2 issue I have to use custom_target in \n> place of your run_target.\n\nhttps://github.com/mesonbuild/meson/pull/12783\n\nThanks for finding these issues.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 26 Jan 2024 13:46:36 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "Hello, meson developer here.\n\n\nOn 1/23/24 4:30 AM, Peter Eisentraut wrote:\n> On 22.01.24 21:04, Tristan Partin wrote:\n>> I am not really following why we can't use the builtin Meson dist\n>> command. 
The only difference from my testing is it doesn't use a\n>> --prefix argument.\n> \n> Here are some problems I have identified:\n> \n> 1. meson dist internally runs gzip without the -n option.  That makes\n> the tar.gz archive include a timestamp, which in turn makes it not\n> reproducible.\n\n\nWell, it uses python tarfile which uses python gzip support under the\nhood, but yes, that is true, python tarfile doesn't expose this tunable.\n\n\n> 2. Because gzip includes a platform indicator in the archive, the\n> produced tar.gz archive is not reproducible across platforms.  (I don't\n> know if gzip has an option to avoid that.  git archive uses an internal\n> gzip implementation that handles this.)\n\n\nThis appears to be https://github.com/python/cpython/issues/112346\n\n\n> 3. Meson does not support tar.bz2 archives.\n\n\nSimple enough to add, but I'm a bit surprised as usually people seem to\nwant either gzip for portability or xz for efficient compression.\n\n\n> 4. Meson uses git archive internally, but then unpacks and repacks the\n> archive, which loses the ability to use git get-tar-commit-id.\n\n\nWhat do you use this for? IMO a more robust way to track the commit used\nis to use gitattributes export-subst to write a `.git_archival.txt` file\ncontaining the commit sha1 and other info -- this can be read even after\nthe file is extracted, which means it can also be used to bake the ID\ninto the built binaries e.g. as part of --version output.\n\n\n> 5. I have found that the tar archives created by meson and git archive\n> include the files in different orders.  I suspect that the Python\n> tarfile module introduces some either randomness or platform dependency.\n\n\nDifferent orders is meaningless, the question is whether the order is\ninternally consistent. Python uses sorted() to guarantee a stable order,\nwhich may be a different algorithm than the one git-archive uses to\nguarantee a stable order. But the order should be stable and that is\nwhat matters.\n\n\n> 6. meson dist is also slower because of the additional work.\n\n\nI'm amenable to skipping the extraction/recombination of subprojects and\nrunning of dist scripts in the event that neither exist, as Tristan\noffered to do, but...\n\n\n> 7. meson dist produces .sha256sum files but we have called them .sha256.\n>  (This is obviously trivial, but it is something that would need to be\n> dealt with somehow nonetheless.)\n> \n> Most or all of these issues are fixable, either upstream in Meson or by\n> adjusting our own requirements.  But for now this route would have some\n> significant disadvantages.\n\n\nOverall I feel like much of this is about requiring dist tarballs to be\nbyte-identical to other dist tarballs, although reproducible builds is\nmainly about artifacts, not sources, and for sources it doesn't\ngenerally matter unless the sources are ephemeral and generated\non-demand (in which case it is indeed very important to produce the same\ntarball each time). A tarball is usually generated once, signed, and\nuploaded to release hosting. Meson already guarantees the contents are\nstrictly based on the built tag.\n\n\n-- \nEli Schwartz", "msg_date": "Fri, 26 Jan 2024 16:18:58 -0500", "msg_from": "Eli Schwartz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On 26.01.24 22:18, Eli Schwartz wrote:\n> Hello, meson developer here.\n\nHello, and thanks for popping in!\n\n>> 3. 
Meson does not support tar.bz2 archives.\n> \n> Simple enough to add, but I'm a bit surprised as usually people seem to\n> want either gzip for portability or xz for efficient compression.\n\nWe may very well end up updating our requirements here before too long, \nso I wouldn't bother with this on the meson side. Last time we \ndiscussed this, there were still platforms under support that didn't \nhave xz easily available.\n\n>> 4. Meson uses git archive internally, but then unpacks and repacks the\n>> archive, which loses the ability to use git get-tar-commit-id.\n> \n> What do you use this for? IMO a more robust way to track the commit used\n> is to use gitattributes export-subst to write a `.git_archival.txt` file\n> containing the commit sha1 and other info -- this can be read even after\n> the file is extracted, which means it can also be used to bake the ID\n> into the built binaries e.g. as part of --version output.\n\nIt's a marginal use case, for sure. But it is something that git \nprovides tooling for that is universally available. Any alternative \nwould be an ad-hoc solution that is specific to our project and would be \ndifferent for the next project.\n\n>> 5. I have found that the tar archives created by meson and git archive\n>> include the files in different orders.  I suspect that the Python\n>> tarfile module introduces some either randomness or platform dependency.\n> \n> Different orders is meaningless, the question is whether the order is\n> internally consistent. Python uses sorted() to guarantee a stable order,\n> which may be a different algorithm than the one git-archive uses to\n> guarantee a stable order. But the order should be stable and that is\n> what matters.\n\n(FWIW, I couldn't reproduce this anymore, so maybe it's not actually an \nissue.)\n\n> Overall I feel like much of this is about requiring dist tarballs to be\n> byte-identical to other dist tarballs, although reproducible builds is\n> mainly about artifacts, not sources, and for sources it doesn't\n> generally matter unless the sources are ephemeral and generated\n> on-demand (in which case it is indeed very important to produce the same\n> tarball each time).\n\nThe source tarball is, in a way, also an artifact.\n\nI think it's useful that others can easily independently verify that the \nproduced tarball matches what they have locally. It's not an absolute \nrequirement, but given that it is possible, it seems useful to take \nadvantage of it.\n\nIn a way, this also avoids the need for signing the tarball, which we \ndon't do. So maybe that contributes to a different perspective.\n\n\n\n", "msg_date": "Wed, 31 Jan 2024 09:03:55 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On 1/31/24 3:03 AM, Peter Eisentraut wrote:\n>> What do you use this for? IMO a more robust way to track the commit used\n>> is to use gitattributes export-subst to write a `.git_archival.txt` file\n>> containing the commit sha1 and other info -- this can be read even after\n>> the file is extracted, which means it can also be used to bake the ID\n>> into the built binaries e.g. as part of --version output.\n> \n> It's a marginal use case, for sure.  But it is something that git\n> provides tooling for that is universally available.  
Any alternative\n> would be an ad-hoc solution that is specific to our project and would be\n> different for the next project.\n\n\nmercurial has the \"archivemeta\" config setting that exports similar\ninformation, but forces the filename \".hg_archival.txt\".\n\nThe setuptools-scm project follows this pattern by requiring the git\nfile to be called \".git_archival.txt\" with a set pattern mimicking the\nhg one:\n\nhttps://setuptools-scm.readthedocs.io/en/latest/usage/#git-archives\n\n\nSo, I guess you could use this and then it would not be specific to your\nproject. :)\n\n\n>> Overall I feel like much of this is about requiring dist tarballs to be\n>> byte-identical to other dist tarballs, although reproducible builds is\n>> mainly about artifacts, not sources, and for sources it doesn't\n>> generally matter unless the sources are ephemeral and generated\n>> on-demand (in which case it is indeed very important to produce the same\n>> tarball each time).\n> \n> The source tarball is, in a way, also an artifact.\n> \n> I think it's useful that others can easily independently verify that the\n> produced tarball matches what they have locally.  It's not an absolute\n> requirement, but given that it is possible, it seems useful to take\n> advantage of it.\n> \n> In a way, this also avoids the need for signing the tarball, which we\n> don't do.  So maybe that contributes to a different perspective.\n\n\nSince you mention signing and not as a simple \"aside\"...\n\nThat's a fascinating perspective. I wonder how people independently\nverify that what they have locally (I assume from git clones) matches\nwhat the postgres committers have authorized.\n\nI'm a bit skeptical that you can avoid the need to perform code-signing\nat some stage, somewhere, somehow, by suggesting that people can simply\ngit clone, run some commands and compare the tarball. The point of\nsigning is to verify that no one has acquired an untraceable API token\nthey should not have and gotten write access to the authoritative server\nthen uploaded malicious code under various forged identities, possibly\noverwriting previous versions, either in git or out of git.\n\nIdeally git commits should be signed, but that requires large numbers of\npeople to have security-minded git commit habits. From a quick check of\nthe postgres commit logs, only one person seems to be regularly signing\ncommits, which does provide a certain measure of protection -- an\nattacker cannot attack via `git push --force` across that boundary, and\nthose commits serve as verifiable states that multiple people have seen.\n\nThe tags aren't signed either, which is a big issue for verifiably\nidentifying the release artifacts published by the release manager. Even\nif not every commit is signed, having signed tags provides a known\ncoordination point of code that has been broadly tested and code-signed\nfor mass use.\n\n...\n\nIn summary, my opinion is that using git-get-tar-commit-id provides zero\nsecurity guarantees, and if that's not something you are worried about\nthen that's one thing, but if you were expecting it to *replace* signing\nthe tarball, then that's.... 
very much another thing entirely, and not\none I can agree at all with.\n\n\n\n-- \nEli Schwartz", "msg_date": "Wed, 31 Jan 2024 10:50:44 -0500", "msg_from": "Eli Schwartz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On Wed, Jan 31, 2024 at 10:50 AM Eli Schwartz <[email protected]> wrote:\n> Ideally git commits should be signed, but that requires large numbers of\n> people to have security-minded git commit habits. From a quick check of\n> the postgres commit logs, only one person seems to be regularly signing\n> commits, which does provide a certain measure of protection -- an\n> attacker cannot attack via `git push --force` across that boundary, and\n> those commits serve as verifiable states that multiple people have seen.\n>\n> The tags aren't signed either, which is a big issue for verifiably\n> identifying the release artifacts published by the release manager. Even\n> if not every commit is signed, having signed tags provides a known\n> coordination point of code that has been broadly tested and code-signed\n> for mass use.\n>\n> In summary, my opinion is that using git-get-tar-commit-id provides zero\n> security guarantees, and if that's not something you are worried about\n> then that's one thing, but if you were expecting it to *replace* signing\n> the tarball, then that's.... very much another thing entirely, and not\n> one I can agree at all with.\n\nI read this part with interest. I think there's definitely something\nto be said for strengthening some of our practices in this area. At\nthe same time, I think it's reasonable for Peter to want to pursue the\nlimited goal he stated in the original post, namely reproducible\ntarball generation, without getting tangled up in possible policy\nchanges that might be controversial and might require a bunch of\nplanning and coordination. \"GPG signatures are good\" can be true\nwithout \"reproducible tarball generation is good\" being false; and if\n\"git archive\" allows for that and \"meson dist\" doesn't, then we're\nunlikely to adopt \"meson dist\".\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 31 Jan 2024 13:12:36 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "Small update: I noticed that on Windows (at least the one that is \nrunning the CI job), I need to use git -c core.autocrlf=false, otherwise \ngit archive does line-ending conversion for the files it puts into the \narchive. With this fix, all the archives produced by all the CI jobs \nacross the different platforms match, except the .tar.gz archive from \nthe Linux job, which I suspect suffers from an old git version. We \nshould get the Linux images updated to a newer Debian version soon \nanyway, so I think that issue will go away.", "msg_date": "Mon, 12 Feb 2024 00:09:32 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On Sun Feb 11, 2024 at 5:09 PM CST, Peter Eisentraut wrote:\n> Small update: I noticed that on Windows (at least the one that is \n> running the CI job), I need to use git -c core.autocrlf=false, otherwise \n> git archive does line-ending conversion for the files it puts into the \n> archive. 
With this fix, all the archives produced by all the CI jobs \n> across the different platforms match, except the .tar.gz archive from \n> the Linux job, which I suspect suffers from an old git version. We \n> should get the Linux images updated to a newer Debian version soon \n> anyway, so I think that issue will go away.\n\nI think with this change, it is unlikely I will be able to upstream \nanything to Meson that would benefit Postgres here since setting this \noption seems project dependent.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 12 Feb 2024 11:26:33 -0600", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On 12.02.24 18:26, Tristan Partin wrote:\n> On Sun Feb 11, 2024 at 5:09 PM CST, Peter Eisentraut wrote:\n>> Small update: I noticed that on Windows (at least the one that is \n>> running the CI job), I need to use git -c core.autocrlf=false, \n>> otherwise git archive does line-ending conversion for the files it \n>> puts into the archive.  With this fix, all the archives produced by \n>> all the CI jobs across the different platforms match, except the \n>> .tar.gz archive from the Linux job, which I suspect suffers from an \n>> old git version.  We should get the Linux images updated to a newer \n>> Debian version soon anyway, so I think that issue will go away.\n> \n> I think with this change, it is unlikely I will be able to upstream \n> anything to Meson that would benefit Postgres here since setting this \n> option seems project dependent.\n\nMeson is vulnerable to the same problem: If the person who makes the \nrelease had some crlf-related git setting activated in their \nenvironment, then that would affect the tarball. And such a tarball \nwould be genuinely broken for non-Windows users, because at least some \nparts of Unix systems can't process such CRLF files correctly.\n\n(This is easy to test: Run meson dist with core.autocrlf=true on the \npostgresql tree on a non-Windows system. It will fail during dist check.)\n\n\n\n", "msg_date": "Tue, 13 Feb 2024 07:53:57 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "Here is an updated version of this patch set.\n\nI have removed the \"dirty check\" stuff. It didn't really work well/was \nbuggy under meson, and it failed mysteriously on the Linux CI tasks. So \nlet's just drop that functionality for now.\n\nI have also added a more complete commit message and some more code \ncomments.\n\nI have extracted the freebsd CI script fix into a separate patch (0002). \n I think this is useful even if we don't take the full CI patch (0003).\n\nAbout the 0003 patch: It seems useful in principle to test these things \ncontinuously. The dist script runs about 10 seconds in each task, and \ntakes a bit of disk space for the artifacts. I'm not sure to what \ndegree this might bother someone.", "msg_date": "Thu, 21 Mar 2024 09:44:01 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On Thu Mar 21, 2024 at 3:44 AM CDT, Peter Eisentraut wrote:\n> Here is an updated version of this patch set.\n\nYou should add 'disabler: true' to the git find_program in Meson. 
If Git \ndoesn't exist on the system with the way your patch is currently \nwritten, the targets would be defined, even though they would never \nsucceed.\n\nYou may also want to make sure that we are actually in a Git repository. \nI don't think git-archive works outside one.\n\nRe the autoclrf, is this something we could throw in a .gitattributes \nfiles?\n\n> I have removed the \"dirty check\" stuff. It didn't really work well/was \n> buggy under meson, and it failed mysteriously on the Linux CI tasks. So \n> let's just drop that functionality for now.\n>\n> I have also added a more complete commit message and some more code \n> comments.\n\n> Meson has its own distribution building command (meson dist), but we\n> are not using that at this point. The main problem is that the way\n> they have implemented it, it is not deterministic in the above sense.\n> Also, we want a \"make\" version for the time being. But the target\n> name \"dist\" in meson is reserved for that reason, so we call the\n> custom target \"pgdist\" (so call something like \"meson compile -C build\n> pgdist\").\n\nI would suggest poisoning `meson dist` in the following way:\n\nif not meson.is_subproject()\n\t# Maybe edit the comment...Maybe tell perl to print this message \n\t# instead and then exit non-zero?\n\t#\n\t# Meson has its own distribution building command (meson dist), but we\n\t# are not using that at this point. The main problem is that the way\n\t# they have implemented it, it is not deterministic in the above sense.\n\t# Also, we want a \"make\" version for the time being. But the target\n\t# name \"dist\" in meson is reserved for that reason, so we call the\n\t# custom target \"pgdist\" (so call something like \"meson compile -C build\n\t# pgdist\").\n\t#\n\t# We don't poison the dist if we are a subproject because it is \n\t# possible that the parent project may want to create a dist using \n\t# the builtin Meson method.\n\tmeson.add_dist_script(perl, '-e', 'exit 1')\nendif\n\n> I have extracted the freebsd CI script fix into a separate patch (0002). \n> I think this is useful even if we don't take the full CI patch (0003).\n\n0002 looks pretty reasonable to me.\n\n> About the 0003 patch: It seems useful in principle to test these things \n> continuously. The dist script runs about 10 seconds in each task, and \n> takes a bit of disk space for the artifacts. I'm not sure to what \n> degree this might bother someone.\n\n0003 works for me :).\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 22 Mar 2024 12:29:21 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On 22.03.24 18:29, Tristan Partin wrote:\n> On Thu Mar 21, 2024 at 3:44 AM CDT, Peter Eisentraut wrote:\n>> Here is an updated version of this patch set.\n> \n> You should add 'disabler: true' to the git find_program in Meson. If Git \n> doesn't exist on the system with the way your patch is currently \n> written, the targets would be defined, even though they would never \n> succeed.\n\nOk, added. (I had it in there in an earlier version, but I think I \nmisread one of your earlier messages and removed it.)\n\n> You may also want to make sure that we are actually in a Git repository. \n> I don't think git-archive works outside one.\n\nThen git archive will print an error. 
That seems ok.\n\n> Re the autoclrf, is this something we could throw in a .gitattributes \n> files?\n\nWe don't want to apply it to all git commands, just this one in this \ncontext.\n\n> I would suggest poisoning `meson dist` in the following way:\n> \n> if not meson.is_subproject()\n[...]\n>     meson.add_dist_script(perl, '-e', 'exit 1')\n> endif\n\nGood idea, added that.\n\n>> I have extracted the freebsd CI script fix into a separate patch \n>> (0002).   I think this is useful even if we don't take the full CI \n>> patch (0003).\n> \n> 0002 looks pretty reasonable to me.\n\nCommitted that one in the meantime.", "msg_date": "Sun, 24 Mar 2024 13:03:40 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "3 comments left that are inconsequential. Feel free to ignore.\n\n> +# Meson has its own distribution building command (meson dist), but we\n> +# are not using that at this point. The main problem is that the way\n> +# they have implemented it, it is not deterministic. Also, we want it\n> +# to be equivalent to the \"make\" version for the time being. But the\n> +# target name \"dist\" in meson is reserved for that reason, so we call\n> +# the custom target \"pgdist\".\n\nThe second sentence is a run-on.\n\n> +if bzip2.found()\n> + tar_bz2 = custom_target('tar.bz2',\n> + build_always_stale: true,\n> + command: [git, '-C', '@SOURCE_ROOT@',\n> + '-c', 'core.autocrlf=false',\n> + '-c', 'tar.tar.bz2.command=\"' + bzip2.path() + '\" -c',\n> + 'archive',\n> + '--format', 'tar.bz2',\n> + '--prefix', distdir + '/',\n> + '-o', join_paths(meson.build_root(), '@OUTPUT@'),\n> + 'HEAD', '.'],\n> + install: false,\n> + output: distdir + '.tar.bz2',\n> + )\n\nYou might find Meson's string formatting syntax creates a more readable \ncommand string:\n\n'tar.tar.bz2.command=@0@ -c'.format(bzip2.path())\n\nAnd then 'install: false' is the default if you feel like leaving it \nout.\n\nOtherwise, let's get this in!\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Sun, 24 Mar 2024 10:42:53 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On 24.03.24 16:42, Tristan Partin wrote:\n> You might find Meson's string formatting syntax creates a more readable \n> command string:\n> \n> 'tar.tar.bz2.command=@0@ -c'.format(bzip2.path())\n> \n> And then 'install: false' is the default if you feel like leaving it out.\n> \n> Otherwise, let's get this in!\n\nDone and committed.\n\n\n\n", "msg_date": "Mon, 25 Mar 2024 06:44:33 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "Hi,\n\nOn 2024-03-25 06:44:33 +0100, Peter Eisentraut wrote:\n> Done and committed.\n\nThis triggered a new warning for me:\n\n../../../../../home/andres/src/postgresql/meson.build:3422: WARNING: Project targets '>=0.54' but uses feature introduced in '0.55.0': Passing executable/found program object to script parameter of add_dist_script.\n\nGreetings,\n\nAndres\n\n\n", "msg_date": "Mon, 25 Mar 2024 17:23:09 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On 26.03.24 01:23, Andres Freund wrote:\n> On 2024-03-25 06:44:33 +0100, Peter Eisentraut wrote:\n>> Done and committed.\n> \n> This triggered a new warning for me:\n> \n> 
../../../../../home/andres/src/postgresql/meson.build:3422: WARNING: Project targets '>=0.54' but uses feature introduced in '0.55.0': Passing executable/found program object to script parameter of add_dist_script.\n\nHmm, I don't see that. Is there another version dependency that \ncontrols when you see version dependency warnings? ;-)\n\nWe could trivially remove this particular line, or perhaps put a\n\nif meson.version().version_compare('>=0.55')\n\naround it. (But would that still warn?)\n\n\n\n", "msg_date": "Tue, 26 Mar 2024 08:36:58 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "Hi,\n\nOn 2024-03-26 08:36:58 +0100, Peter Eisentraut wrote:\n> On 26.03.24 01:23, Andres Freund wrote:\n> > On 2024-03-25 06:44:33 +0100, Peter Eisentraut wrote:\n> > > Done and committed.\n> > \n> > This triggered a new warning for me:\n> > \n> > ../../../../../home/andres/src/postgresql/meson.build:3422: WARNING: Project targets '>=0.54' but uses feature introduced in '0.55.0': Passing executable/found program object to script parameter of add_dist_script.\n> \n> Hmm, I don't see that. Is there another version dependency that controls\n> when you see version dependency warnings? ;-)\n\nSometimes an incompatibility is later noticed and a warning is introduced at\nthat point.\n\n> We could trivially remove this particular line, or perhaps put a\n> \n> if meson.version().version_compare('>=0.55')\n> \n> around it. (But would that still warn?)\n\nIt shouldn't, no. As long as the code is actually executed within the check,\nit avoids the warning. However if you just set a variable inside the version\ngated block and then later use the variable outside that, it will\nwarn. Probably hard to avoid...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 26 Mar 2024 00:56:32 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On Tue Mar 26, 2024 at 2:56 AM CDT, Andres Freund wrote:\n> Hi,\n>\n> On 2024-03-26 08:36:58 +0100, Peter Eisentraut wrote:\n> > On 26.03.24 01:23, Andres Freund wrote:\n> > > On 2024-03-25 06:44:33 +0100, Peter Eisentraut wrote:\n> > > > Done and committed.\n> > > \n> > > This triggered a new warning for me:\n> > > \n> > > ../../../../../home/andres/src/postgresql/meson.build:3422: WARNING: Project targets '>=0.54' but uses feature introduced in '0.55.0': Passing executable/found program object to script parameter of add_dist_script.\n> > \n> > Hmm, I don't see that. Is there another version dependency that controls\n> > when you see version dependency warnings? ;-)\n>\n> Sometimes an incompatibility is later noticed and a warning is introduced at\n> that point.\n>\n> > We could trivially remove this particular line, or perhaps put a\n> > \n> > if meson.version().version_compare('>=0.55')\n> > \n> > around it. (But would that still warn?)\n>\n> It shouldn't, no. As long as the code is actually executed within the check,\n> it avoids the warning. However if you just set a variable inside the version\n> gated block and then later use the variable outside that, it will\n> warn. Probably hard to avoid...\n\nThe following change also makes the warning go away, but the version \ncomparison seems better to me due to how we choose not to use machine \nfiles for overriding programs[0]. 
:(\n\n- meson.add_dist_script(perl, ...)\n+ meson.add_dist_script('perl', ...)\n\nAside, but I think since we dropped AIX, we can bump the required Meson \nversion. My last analysis of the situation told me that the AIX \nbuildfarm animals were the only machines which didn't have a Python \nversion capable of running a newer version. I would need to look at the \nsituation again though.\n\n[0]: If someone wants to make a plea here: \n https://github.com/mesonbuild/meson/pull/12623\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Tue, 26 Mar 2024 09:26:08 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" }, { "msg_contents": "On Wed Jan 24, 2024 at 11:57 AM CST, Tristan Partin wrote:\n> On Wed Jan 24, 2024 at 10:18 AM CST, Tristan Partin wrote:\n> > On Tue Jan 23, 2024 at 3:30 AM CST, Peter Eisentraut wrote:\n> > > On 22.01.24 21:04, Tristan Partin wrote:\n> > > 3. Meson does not support tar.bz2 archives.\n>\n> Submitted https://github.com/mesonbuild/meson/pull/12770.\n\nThis has now been merged. It will be in 1.5, so we will probably see it \nin RHEL in a decade :P.\n\n> > > 4. Meson uses git archive internally, but then unpacks and repacks the \n> > > archive, which loses the ability to use git get-tar-commit-id.\n>\n> Because Meson allows projects to distribute arbitrary files via \n> meson.add_dist_script(), and can include subprojects via `meson dist \n> --include-subprojects`, this doesn't seem like an easily solvable \n> problem.\n>\n> > Thanks Peter. I will bring these up with upstream!\n>\n> I think the solution to point 4 is to not unpack/repack if there are no \n> dist scripts and/or subprojects to distribute. I can take a look at \n> this later. I think this would also solve points 1, 2, 5, and 6 because \n> at that point meson is just calling git-archive.\n\nI think implementing a solution to point 4 is a little bit more pressing \ngiven that reproducible tarballs are more important after the xz \ndebaucle. I will try to give it some effort soon.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 15 Apr 2024 16:14:10 -0500", "msg_from": "\"Tristan Partin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: make dist using git archive" } ]
[ { "msg_contents": "Hi,\nI'm learning faster COPY of PG16. I have some questions about extension lock improvement.\nFrom ./src/backend/storage/buffer/bufmgr.c:1901 (ExtendBufferedRelShared)\n```\n /*\n * Lock relation against concurrent extensions, unless requested not to.\n *\n * We use the same extension lock for all forks. That's unnecessarily\n * restrictive, but currently extensions for forks don't happen often\n * enough to make it worth locking more granularly.\n *\n * Note that another backend might have extended the relation by the time\n * we get the lock.\n */\n if (!(flags & EB_SKIP_EXTENSION_LOCK))\n {\n LockRelationForExtension(bmr.rel, ExclusiveLock);\n if (bmr.rel)\n bmr.smgr = RelationGetSmgr(bmr.rel);\n }\n ...\n smgrzeroextend(bmr.smgr, fork, first_block, extend_by, false);\n```\nDuring concurrent extension, when we obtain the extension lock, we use smgrzeroextend() to extend relation files instead of searching fsm through GetPageWithFreeSpace(). Is this approach reasonable?\nDuring concurrent extensions, one backend bulk extend successfully, meaning that other backends waiting on extension lock have free pages to use.\nIf all concurrent extend backends extend the relation file after getting the extension lock, the extension lock will be held (extention backends * smgrzeroextend() executing time).\n\nAny feedback is welcome.\n\n--\nBest Regards\nKewen He\n\nfrom 阿里邮箱 macOS\nHi,I'm learning faster COPY of PG16. I have some questions about extension lock improvement.From ./src/backend/storage/buffer/bufmgr.c:1901 (ExtendBufferedRelShared)``` /* * Lock relation against concurrent extensions, unless requested not to. * * We use the same extension lock for all forks. That's unnecessarily * restrictive, but currently extensions for forks don't happen often * enough to make it worth locking more granularly. * * Note that another backend might have extended the relation by the time * we get the lock. */ if (!(flags & EB_SKIP_EXTENSION_LOCK)) { LockRelationForExtension(bmr.rel, ExclusiveLock); if (bmr.rel) bmr.smgr = RelationGetSmgr(bmr.rel); } ... smgrzeroextend(bmr.smgr, fork, first_block, extend_by, false);```During concurrent extension, when we obtain the extension lock, we use smgrzeroextend() to extend relation files instead of searching fsm through GetPageWithFreeSpace(). Is this approach reasonable?During concurrent extensions, one backend bulk extend successfully, meaning that other backends waiting on extension lock have free pages to use.If all concurrent extend backends extend the relation file after getting the extension lock, the extension lock will be held (extention backends * smgrzeroextend() executing time).Any feedback is welcome.--Best RegardsKewen Hefrom 阿里邮箱 macOS", "msg_date": "Mon, 22 Jan 2024 19:54:00 +0800", "msg_from": "\"=?UTF-8?B?5L2V5p+v5paHKOa4iuS6kSk=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?RG9lcyByZWR1bmRhbnQgZXh0ZW5zaW9uIGV4aXN0IER1cmluZyBmYXN0ZXIgQ09QWSBpbiBQ?=\n =?UTF-8?B?RzE2IA==?=" }, { "msg_contents": "Hi,\n\nOn 2024-01-22 19:54:00 +0800, 何柯文(渊云) wrote:\n> I'm learning faster COPY of PG16. I have some questions about extension lock improvement.\n> From ./src/backend/storage/buffer/bufmgr.c:1901 (ExtendBufferedRelShared)\n> ```\n> /*\n> * Lock relation against concurrent extensions, unless requested not to.\n> *\n> * We use the same extension lock for all forks. 
That's unnecessarily\n> * restrictive, but currently extensions for forks don't happen often\n> * enough to make it worth locking more granularly.\n> *\n> * Note that another backend might have extended the relation by the time\n> * we get the lock.\n> */\n> if (!(flags & EB_SKIP_EXTENSION_LOCK))\n> {\n> LockRelationForExtension(bmr.rel, ExclusiveLock);\n> if (bmr.rel)\n> bmr.smgr = RelationGetSmgr(bmr.rel);\n> }\n> ...\n> smgrzeroextend(bmr.smgr, fork, first_block, extend_by, false);\n> ```\n> During concurrent extension, when we obtain the extension lock, we use\n> smgrzeroextend() to extend relation files instead of searching fsm through\n> GetPageWithFreeSpace(). Is this approach reasonable?\n\nI think so, yes.\n\n\n> During concurrent extensions, one backend bulk extend successfully, meaning\n> that other backends waiting on extension lock have free pages to use. If\n> all concurrent extend backends extend the relation file after getting the\n> extension lock, the extension lock will be held (extention backends *\n> smgrzeroextend() executing time).\n\nIf there's this much contention on the extension lock, there's no harm in\nextending more - the space will be used soon. The alternatives would be a) to\nsearch the FSM with the extension lock held, making contention worse, b) to\nrelease the extension lock again if we couldn't immediately acquire it, search\nthe fsm, and retry if we couldn't find any free space, which would\nsubstantially increase contention.\n\nThe FSM is the source of substantial contention, disabling it actually results\nin substantial throughput increases. Vastly increasing the number of lookups\nin the FSM would make that considerably worse, without a meaningful gain in\ndensity.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Jan 2024 12:09:21 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does redundant extension exist During faster COPY in PG16" } ]
[ { "msg_contents": "Assuming a SELECT statement reading from a single table, it is quite an\neffort to transform that statement to an UPDATE statement on that table,\nperhaps to fix a typo that the user has spotted in the query result.\n\nFirst, the general syntax is not the same with the order of syntax\nelements changed. Then the row in question needs to be pinned down by\nthe primary key, requiring cut-and-paste of the PK columns. Furthermore,\nthe value to be updated needs to be put into the command, with proper\nquoting. If the original value spans multiple line, copy-and-pasting it\nfor editing is especially tedious.\n\nSuppose the following query where we spot a typo in the 2nd message:\n\n=# select id, language, message from messages where language = 'en';\nid | language | message\n 1 | en | Good morning\n 2 | en | Hello warld\n\nThe query needs to be transformed into this update:\n\n=# update messages set message = 'Hello world' where id = 2;\n\nThis patch automates the tedious parts by opening the query result in a\neditor in JSON format, where the user can edit the data. On closing the\neditor, the JSON data is read back, and the differences are sent as\nUPDATE commands. New rows are INSERTed, and deleted rows are DELETEd.\n\n=# select id, language, message from messages where language = 'en' \\gedit\n\nAn editor opens:\n[\n{ \"id\": 1, \"language\": \"en\", \"message\": \"Good morning\" },\n{ \"id\": 2, \"language\": \"en\", \"message\": \"Hello warld\" }\n]\n\nLet's fix the typo and save the file:\n[\n{ \"id\": 1, \"language\": \"en\", \"message\": \"Good morning\" },\n{ \"id\": 2, \"language\": \"en\", \"message\": \"Hello world\" }\n]\nUPDATE messages SET message = 'Hello world' WHERE id = '2';\nUPDATE 1\n\nIn this example, typing \"WHERE id = 2\" would not be too hard, but the\nprimary key might be a composite key, with complex non-numeric values.\nThis is supported as well.\n\nIf expanded mode (\\x) is enabled, \\gedit will use the expanded JSON\nformat, best suitable for long values.\n\n\nThis patch requires the \"psql JSON output format\" patch.\n\nChristoph", "msg_date": "Mon, 22 Jan 2024 16:06:37 +0100", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": true, "msg_subject": "psql: Allow editing query results with \\gedit" }, { "msg_contents": "Hi\n\npo 22. 1. 2024 v 16:06 odesílatel Christoph Berg <[email protected]> napsal:\n\n> Assuming a SELECT statement reading from a single table, it is quite an\n> effort to transform that statement to an UPDATE statement on that table,\n> perhaps to fix a typo that the user has spotted in the query result.\n>\n> First, the general syntax is not the same with the order of syntax\n> elements changed. Then the row in question needs to be pinned down by\n> the primary key, requiring cut-and-paste of the PK columns. Furthermore,\n> the value to be updated needs to be put into the command, with proper\n> quoting. If the original value spans multiple line, copy-and-pasting it\n> for editing is especially tedious.\n>\n> Suppose the following query where we spot a typo in the 2nd message:\n>\n> =# select id, language, message from messages where language = 'en';\n> id | language | message\n> 1 | en | Good morning\n> 2 | en | Hello warld\n>\n> The query needs to be transformed into this update:\n>\n> =# update messages set message = 'Hello world' where id = 2;\n>\n> This patch automates the tedious parts by opening the query result in a\n> editor in JSON format, where the user can edit the data. 
On closing the\n> editor, the JSON data is read back, and the differences are sent as\n> UPDATE commands. New rows are INSERTed, and deleted rows are DELETEd.\n>\n> =# select id, language, message from messages where language = 'en' \\gedit\n>\n> An editor opens:\n> [\n> { \"id\": 1, \"language\": \"en\", \"message\": \"Good morning\" },\n> { \"id\": 2, \"language\": \"en\", \"message\": \"Hello warld\" }\n> ]\n>\n> Let's fix the typo and save the file:\n> [\n> { \"id\": 1, \"language\": \"en\", \"message\": \"Good morning\" },\n> { \"id\": 2, \"language\": \"en\", \"message\": \"Hello world\" }\n> ]\n> UPDATE messages SET message = 'Hello world' WHERE id = '2';\n> UPDATE 1\n>\n> In this example, typing \"WHERE id = 2\" would not be too hard, but the\n> primary key might be a composite key, with complex non-numeric values.\n> This is supported as well.\n>\n> If expanded mode (\\x) is enabled, \\gedit will use the expanded JSON\n> format, best suitable for long values.\n>\n>\n> This patch requires the \"psql JSON output format\" patch.\n>\n\nIntroduction of \\gedit is interesting idea, but in this form it looks too\nmagic\n\na) why the data are in JSON format, that is not native for psql (minimally\nnow)\n\nb) the implicit transformation to UPDATEs and the next evaluation can be\npretty dangerous.\n\nThe concept of proposed feature is interesting, but the name \\gedit is too\ngeneric, maybe too less descriptive for this purpose\n\nMaybe \\geditupdates can be better - but still it can be dangerous and slow\n(without limits)\n\nIn the end I am not sure if I like it or dislike it. Looks dangerous. I can\nimagine possible damage when some people will see vi first time and will\ntry to finish vi, but in this command, it will be transformed to executed\nUPDATEs. More generating UPDATEs without knowledge of table structure\n(knowledge of PK) can be issue (and possibly dangerous too), and you cannot\nto recognize PK from result of SELECT (Everywhere PK is not \"id\" and it is\nnot one column).\n\nRegards\n\nPavel\n\n\n> Christoph\n>\n\nHipo 22. 1. 2024 v 16:06 odesílatel Christoph Berg <[email protected]> napsal:Assuming a SELECT statement reading from a single table, it is quite an\neffort to transform that statement to an UPDATE statement on that table,\nperhaps to fix a typo that the user has spotted in the query result.\n\nFirst, the general syntax is not the same with the order of syntax\nelements changed. Then the row in question needs to be pinned down by\nthe primary key, requiring cut-and-paste of the PK columns. Furthermore,\nthe value to be updated needs to be put into the command, with proper\nquoting. If the original value spans multiple line, copy-and-pasting it\nfor editing is especially tedious.\n\nSuppose the following query where we spot a typo in the 2nd message:\n\n=# select id, language, message from messages where language = 'en';\nid | language | message\n 1 | en       | Good morning\n 2 | en       | Hello warld\n\nThe query needs to be transformed into this update:\n\n=# update messages set message = 'Hello world' where id = 2;\n\nThis patch automates the tedious parts by opening the query result in a\neditor in JSON format, where the user can edit the data. On closing the\neditor, the JSON data is read back, and the differences are sent as\nUPDATE commands. 
New rows are INSERTed, and deleted rows are DELETEd.\n\n=# select id, language, message from messages where language = 'en' \\gedit\n\nAn editor opens:\n[\n{ \"id\": 1, \"language\": \"en\", \"message\": \"Good morning\" },\n{ \"id\": 2, \"language\": \"en\", \"message\": \"Hello warld\" }\n]\n\nLet's fix the typo and save the file:\n[\n{ \"id\": 1, \"language\": \"en\", \"message\": \"Good morning\" },\n{ \"id\": 2, \"language\": \"en\", \"message\": \"Hello world\" }\n]\nUPDATE messages SET message = 'Hello world' WHERE id = '2';\nUPDATE 1\n\nIn this example, typing \"WHERE id = 2\" would not be too hard, but the\nprimary key might be a composite key, with complex non-numeric values.\nThis is supported as well.\n\nIf expanded mode (\\x) is enabled, \\gedit will use the expanded JSON\nformat, best suitable for long values.\n\n\nThis patch requires the \"psql JSON output format\" patch.Introduction of \\gedit is interesting idea, but in this form it looks too magica) why the data are in JSON format, that is not native for psql (minimally now)b) the implicit transformation to UPDATEs and the next evaluation can be pretty dangerous.The concept of proposed feature is interesting, but the name \\gedit is too generic, maybe too less descriptive for this purposeMaybe \\geditupdates can be better - but still it can be dangerous and slow (without limits)In the end I am not sure if I like it or dislike it. Looks dangerous. I can imagine possible damage when some people will see vi first time and will try to finish vi, but in this command, it will be transformed to executed UPDATEs. More generating UPDATEs without knowledge of table structure (knowledge of PK) can be issue (and possibly dangerous too), and you cannot to recognize PK from result of SELECT (Everywhere PK is not \"id\" and it is not one column).RegardsPavel\n\nChristoph", "msg_date": "Mon, 22 Jan 2024 16:43:25 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Allow editing query results with \\gedit" }, { "msg_contents": "Pavel Stehule <[email protected]> writes:\n> po 22. 1. 2024 v 16:06 odesílatel Christoph Berg <[email protected]> napsal:\n>> This patch automates the tedious parts by opening the query result in a\n>> editor in JSON format, where the user can edit the data. On closing the\n>> editor, the JSON data is read back, and the differences are sent as\n>> UPDATE commands. New rows are INSERTed, and deleted rows are DELETEd.\n\n> Introduction of \\gedit is interesting idea, but in this form it looks too\n> magic\n\nYeah, I don't like it either --- it feels like something that belongs\nin an ETL tool not psql. The sheer size of the patch shows how far\nafield it is from anything that psql already does, necessitating\nwriting tons of stuff that was not there before. The bits that try\nto parse the query to get necessary information seem particularly\nhalf-baked.\n\n> In the end I am not sure if I like it or dislike it. Looks dangerous. I can\n> imagine possible damage when some people will see vi first time and will\n> try to finish vi, but in this command, it will be transformed to executed\n> UPDATEs.\n\nYup -- you'd certainly want some way of confirming that you actually\nwant the changes applied. Our existing edit commands load the edited\nstring back into the query buffer, where you can \\r it if you don't\nwant to run it. 
But I fear that the results of this operation would\nbe too long for that to be workable.\n\n> More generating UPDATEs without knowledge of table structure\n> (knowledge of PK) can be issue (and possibly dangerous too), and you cannot\n> to recognize PK from result of SELECT (Everywhere PK is not \"id\" and it is\n> not one column).\n\nIt does look like it's trying to find out the pkey from the system\ncatalogs ... however, it's also accepting unique constraints without\na lot of thought about the implications of NULLs. Even if you have\na pkey, it's not exactly clear to me what should happen if the user\nchanges the contents of a pkey field. That could be interpreted as\neither an INSERT or UPDATE, I think.\n\nAlso, while I didn't read too closely, it's not clear to me how the\ncode could reliably distinguish INSERT vs UPDATE vs DELETE cases ---\nsurely we're not going to try to put a \"diff\" engine into this, and\neven if we did, diff is well known for producing somewhat surprising\ndecisions about exactly which old lines match which new ones. That's\npart of the reason why I really don't like the idea that the deduced\nchange commands will be summarily applied without the user even\nseeing them.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jan 2024 11:15:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Allow editing query results with \\gedit" }, { "msg_contents": "On Mon, Jan 22, 2024 at 8:06 AM Christoph Berg <[email protected]> wrote:\n\n> Assuming a SELECT statement reading from a single table, it is quite an\n> effort to transform that statement to an UPDATE statement on that table,\n> perhaps to fix a typo that the user has spotted in the query result.\n>\n>\nBuilding off the other comments, I'd suggest trying to get rid of the\nintermediate JSOn format and also just focus on a single row at any given\ntime.\n\nFor an update the first argument to the metacommand could be the unique key\nvalue present in the previous result. The resultant UPDATE would just put\nthat into the where clause and every other column in the result would be a\nSET clause column with the thing being set the current value, ready to be\nedited.\n\nDELETE would be similar but without the need for a SET clause.\n\nINSERT can produce a template INSERT (cols) VALUES ... command with some\nsmarts regarding auto incrementing keys and placeholder values.\n\nDavid J.\n\nOn Mon, Jan 22, 2024 at 8:06 AM Christoph Berg <[email protected]> wrote:Assuming a SELECT statement reading from a single table, it is quite an\neffort to transform that statement to an UPDATE statement on that table,\nperhaps to fix a typo that the user has spotted in the query result.Building off the other comments, I'd suggest trying to get rid of the intermediate JSOn format and also just focus on a single row at any given time.For an update the first argument to the metacommand could be the unique key value present in the previous result.  The resultant UPDATE would just put that into the where clause and every other column in the result would be a SET clause column with the thing being set the current value, ready to be edited.DELETE would be similar but without the need for a SET clause.INSERT can produce a template INSERT (cols) VALUES ... command with some smarts regarding auto incrementing keys and placeholder values.David J.", "msg_date": "Mon, 22 Jan 2024 09:33:18 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Allow editing query results with \\gedit" }, { "msg_contents": "po 22. 1. 2024 v 17:34 odesílatel David G. Johnston <\[email protected]> napsal:\n\n> On Mon, Jan 22, 2024 at 8:06 AM Christoph Berg <[email protected]> wrote:\n>\n>> Assuming a SELECT statement reading from a single table, it is quite an\n>> effort to transform that statement to an UPDATE statement on that table,\n>> perhaps to fix a typo that the user has spotted in the query result.\n>>\n>>\n> Building off the other comments, I'd suggest trying to get rid of the\n> intermediate JSOn format and also just focus on a single row at any given\n> time.\n>\n> For an update the first argument to the metacommand could be the unique\n> key value present in the previous result. The resultant UPDATE would just\n> put that into the where clause and every other column in the result would\n> be a SET clause column with the thing being set the current value, ready to\n> be edited.\n>\n> DELETE would be similar but without the need for a SET clause.\n>\n> INSERT can produce a template INSERT (cols) VALUES ... command with some\n> smarts regarding auto incrementing keys and placeholder values.\n>\n> David J.\n>\n\nCan you imagine using it? I like psql, I like term applications, my first\ndatabase was FoxPro, but for dataeditation I am almost sure so I don't want\nto use psql.\n\nI can imagine enhancing the current \\gexec command because it is executed\ndirectly without possibility to edit. I see valuable some special clause\n\"edit\"\n\nlike\n\n\\gexec_edit or \\gexec(edit) or \\gexec edit\n\nThis is like current gexec but with possibility to edit the result in\neditor and with possibility to finish without saving.\n\nThen we can introduce SQL functions UpdateTemplate(cols text[], rowvalue),\nInsertTemplate, ...\n\nand then you can write\n\nSELECT UpdateTemplace(ARRAY['a','b','c'], foo) FROM foo WHERE id IN (1,2)\n\\gexec_with_edit\n\nBut still looks strange to me - like we try reintroduce of necessity sed or\nawk to SQL and psql\n\nI would have forms like FoxPro, I would have a grid like FoxPro, but not in\npsql, and I would not develop it :-)\n\npo 22. 1. 2024 v 17:34 odesílatel David G. Johnston <[email protected]> napsal:On Mon, Jan 22, 2024 at 8:06 AM Christoph Berg <[email protected]> wrote:Assuming a SELECT statement reading from a single table, it is quite an\neffort to transform that statement to an UPDATE statement on that table,\nperhaps to fix a typo that the user has spotted in the query result.Building off the other comments, I'd suggest trying to get rid of the intermediate JSOn format and also just focus on a single row at any given time.For an update the first argument to the metacommand could be the unique key value present in the previous result.  The resultant UPDATE would just put that into the where clause and every other column in the result would be a SET clause column with the thing being set the current value, ready to be edited.DELETE would be similar but without the need for a SET clause.INSERT can produce a template INSERT (cols) VALUES ... command with some smarts regarding auto incrementing keys and placeholder values.David J.Can you imagine using it?  I like psql, I like term applications, my first database was FoxPro, but for dataeditation I am almost sure so I don't want to use psql. I can imagine enhancing the current \\gexec command because it is executed directly without possibility to edit. 
I see valuable some special clause \"edit\"like\\gexec_edit or \\gexec(edit) or \\gexec editThis is like current gexec but with possibility to edit the result in editor and with possibility to finish without saving.Then we can introduce SQL functions UpdateTemplate(cols text[], rowvalue), InsertTemplate, ...and then you can writeSELECT UpdateTemplace(ARRAY['a','b','c'],  foo) FROM foo WHERE id IN (1,2) \\gexec_with_editBut still looks strange to me - like we try reintroduce of necessity sed or awk to SQL and psqlI would have forms like FoxPro, I would have a grid like FoxPro, but not in psql, and I would not develop it :-)", "msg_date": "Mon, 22 Jan 2024 18:15:52 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Allow editing query results with \\gedit" }, { "msg_contents": "Pavel Stehule <[email protected]> writes:\n> I would have forms like FoxPro, I would have a grid like FoxPro, but not in\n> psql, and I would not develop it :-)\n\nYeah, that's something that was also bothering me, but I failed to\nput my finger on it. \"Here's some JSON, edit it, and don't forget\nto keep the quoting correct\" does not strike me as a user-friendly\nway to adjust data content. A spreadsheet-like display where you\ncan change data within cells seems like a far better API, although\nI don't want to write that either.\n\nThis kind of API would not readily support INSERT or DELETE cases, but\nTBH I think that's better anyway --- you're adding too much ambiguity\nin pursuit of a very secondary use-case. The stated complaint was\n\"it's too hard to build UPDATE commands\", which I can sympathize with.\n\n(BTW, I wonder how much of this already exists in pgAdmin.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Jan 2024 13:04:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Allow editing query results with \\gedit" }, { "msg_contents": "Re: Pavel Stehule\n> Introduction of \\gedit is interesting idea, but in this form it looks too\n> magic\n> \n> a) why the data are in JSON format, that is not native for psql (minimally\n> now)\n\nBecause we need something machine-readable. CSV would be an\nalternative, but that is hardly human-readable.\n\n> b) the implicit transformation to UPDATEs and the next evaluation can be\n> pretty dangerous.\n\nYou can always run it in a transaction:\n\nbegin;\nselect * from tbl \\gedit\ncommit;\n\nAlternatively, we could try to make it not send the commands right away\n- either by putting them into the query buffer (that currently work\nonly for single commands without any ';') or by opening another\neditor. Doable, but perhaps the transaction Just Works?\n\n> The concept of proposed feature is interesting, but the name \\gedit is too\n> generic, maybe too less descriptive for this purpose\n> \n> Maybe \\geditupdates can be better - but still it can be dangerous and slow\n> (without limits)\n\nAn alternative I was considering was \\edittable, but that would put\nmore emphasis on \"table\" when it's actually more powerful than that,\nit works on a query result. (Similar in scope to updatable views.)\n\n> In the end I am not sure if I like it or dislike it. Looks dangerous. I can\n> imagine possible damage when some people will see vi first time and will\n> try to finish vi, but in this command, it will be transformed to executed\n> UPDATEs.\n\nIf you close the editor with touching the file, nothing will be sent.\nAnd if you mess up the file, it will complain. 
I think it's unlikely\nthat people who end up in an editor they can't operate would be able\nto modify JSON in a way that is still valid.\n\nAlso, \\e has the same problem. Please don't let \"there are users who\ndon't know what they are doing\" spoil the idea.\n\n> More generating UPDATEs without knowledge of table structure\n> (knowledge of PK) can be issue (and possibly dangerous too), and you cannot\n> to recognize PK from result of SELECT (Everywhere PK is not \"id\" and it is\n> not one column).\n\nIt *does* retrieve the proper PK from the table. All updates are based\non the PK.\n\n\nRe: Tom Lane\n> > Introduction of \\gedit is interesting idea, but in this form it looks too\n> > magic\n> \n> Yeah, I don't like it either --- it feels like something that belongs\n> in an ETL tool not psql.\n\nI tried to put it elsewhere first, starting with pspg:\nhttps://github.com/okbob/pspg/issues/200\nThe problem is that external programs like the pager neither have\naccess to the query string/table name, nor the database connection.\n\nETL would also not quite fit, this is meant for interactive use.\n\n> The sheer size of the patch shows how far\n> afield it is from anything that psql already does, necessitating\n> writing tons of stuff that was not there before.\n\nI've been working on this for several months, so it's already larger\nthan the MVP would be. It does have features like (key=col1,col2) that\ncould be omitted.\n\n> The bits that try\n> to parse the query to get necessary information seem particularly\n> half-baked.\n\nYes, that's not pretty, and I'd be open for suggestions on how to\nimprove that. I was considering:\n\n1) this (dumb query parsing)\n2) EXPLAIN the query to get the table\n3) use libpq's PQftable\n\nThe problem with 2+3 is that on views and partitioned tables, this\nwould yield the base table name, not the table name used in the query.\n1 turned out to be the most practical, and worked for me so far.\n\nIf the parse tree would be available, using that would be much better.\nShould we perhaps add something like \"explain (parse) select...\", or\nadd pg_parsetree(query) function?\n\n> Yup -- you'd certainly want some way of confirming that you actually\n> want the changes applied. Our existing edit commands load the edited\n> string back into the query buffer, where you can \\r it if you don't\n> want to run it. But I fear that the results of this operation would\n> be too long for that to be workable.\n\nThe results are as long as you like. The intended use case would be to\nchange just a few rows.\n\nAs said above, I was already thinking of making it user-confirmable,\njust the current version doesn't have it.\n\n> It does look like it's trying to find out the pkey from the system\n> catalogs ... however, it's also accepting unique constraints without\n> a lot of thought about the implications of NULLs.\n\nRight, the UNIQUE part doesn't take care of NULLs yet. Will fix that.\n(Probably by erroring out if any key column is NULL.)\n\n> Even if you have\n> a pkey, it's not exactly clear to me what should happen if the user\n> changes the contents of a pkey field. That could be interpreted as\n> either an INSERT or UPDATE, I think.\n\nA changed PK will be interpreted as DELETE + INSERT. 
(I shall make\nthat more clear in the documentation.)\n\n> Also, while I didn't read too closely, it's not clear to me how the\n> code could reliably distinguish INSERT vs UPDATE vs DELETE cases ---\n> surely we're not going to try to put a \"diff\" engine into this, and\n> even if we did, diff is well known for producing somewhat surprising\n> decisions about exactly which old lines match which new ones. That's\n> part of the reason why I really don't like the idea that the deduced\n> change commands will be summarily applied without the user even\n> seeing them.\n\nThe \"diff\" is purely based on \"after editing, is there still a row\nwith this key identity\". If the PK columns are not changed, the UPDATE\nwill hook on the PK value.\n\nIf you changed the PK columns, that will be a DELETE and an INSERT.\n\nDuring development, I was considering to forbid (error out) changing\nthe PK columns. But then, simply deleting rows (= DELETE) and adding\nnew rows (= INSERT) seemed like a nice feature by itself, so I left\nthat in. Perhaps that should be reconsidered?\n\nChristoph\n\n\n", "msg_date": "Mon, 22 Jan 2024 20:48:40 +0100", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql: Allow editing query results with \\gedit" }, { "msg_contents": "Re: David G. Johnston\n> Building off the other comments, I'd suggest trying to get rid of the\n> intermediate JSOn format and also just focus on a single row at any given\n> time.\n\nWe need *some* machine-readable format. It doesn't have to be JSON,\nbut JSON is actually pretty nice to read - and if values are too long,\nor there are too many values, switch to extended mode:\n\nselect * from messages \\gedit (expanded)\n\n[{\n \"id\": \"1\",\n \"language\": \"en\",\n \"message\": \"This is a very long test text with little actual meaning.\"\n},{\n \"id\": \"2\",\n \"language\": \"en\",\n \"message\": \"Another one, a bit shorter.\"\n}]\n\nI tweaked the indentation in the psql JSON output patch specifically\nto make it readable.\n\nRestricting to a single line might make sense if it helps editing, but\nI don't think it does.\n\n> For an update the first argument to the metacommand could be the unique key\n> value present in the previous result. The resultant UPDATE would just put\n> that into the where clause and every other column in the result would be a\n> SET clause column with the thing being set the current value, ready to be\n> edited.\n\nHmm, then you would still have to cut-and-paste the PK value. If that\nthat's a multi-column non-numeric key, you are basically back to the\noriginal problem.\n\n\nRe: Tom Lane\n> Yeah, that's something that was also bothering me, but I failed to\n> put my finger on it. \"Here's some JSON, edit it, and don't forget\n> to keep the quoting correct\" does not strike me as a user-friendly\n> way to adjust data content. A spreadsheet-like display where you\n> can change data within cells seems like a far better API, although\n> I don't want to write that either.\n\nRight. I wouldn't want a form editing system in there either. But\nperhaps this middle ground of using a well-established format that is\neasy to generate and to parse (it's using the JSON parser from\npgcommon) makes it fit into psql.\n\nIf parsing the editor result fails, the user is asked if they want to\nre-edit with a parser error message, and if they go to the editor\nagain, the cursor is placed in the line where the error is. 
(Also,\nwhat's wrong with having to strictly adhere to some syntax, we are\ntalking about SQL here.)\n\nIt's admittedly larger than the average \\backslash command, but it\ndoes fit into psql's interactive usage. \\crosstabview is perhaps a\nsimilar thing - it doesn't really fit into a simple \"send query and\ndisplay result\" client, but since it solves an actual problem, it\nmakes well sense to spend the extra code on it.\n\n> This kind of API would not readily support INSERT or DELETE cases, but\n> TBH I think that's better anyway --- you're adding too much ambiguity\n> in pursuit of a very secondary use-case. The stated complaint was\n> \"it's too hard to build UPDATE commands\", which I can sympathize with.\n\nI've been using the feature already for some time, and it's a real\nrelief. In my actual use case here, I use it on my ham radio logbook:\n\n=# select start, call, qrg, name from log where cty = 'CE9' order by start;\n start │ call │ qrg │ name\n────────────────────────┼────────┼─────────────┼───────\n 2019-03-12 20:34:00+00 │ RI1ANL │ 7.076253 │ ∅\n 2021-03-16 21:24:00+00 │ DP0GVN │ 2400.395 │ Felix\n 2022-01-15 17:19:00+00 │ DP0GVN │ 2400.01 │ Felix\n 2022-10-23 19:17:15+00 │ DP0GVN │ 2400.041597 │ ∅\n 2023-10-01 14:05:00+00 │ 8J1RL │ 28.182575 │ ∅\n 2024-01-22 21:15:15+00 │ DP1POL │ 10.138821 │ ∅\n(6 Zeilen)\n\nThe primary key is (start, call).\n\nIf I now want to note that the last contact with Antarctica there was\nalso with Felix, I'd have to transform that into\n\nupdate log set name = 'Felix' where start = '2024-01-22 21:15:15+00' and call = 'DP1POL';\n\n\\gedit is just so much easier.\n\nUPDATE is the core feature. If we want to say INSERT and DELETE aren't\nsupported, but UPDATE support can go in, that'd be fine with me.\n\n> (BTW, I wonder how much of this already exists in pgAdmin.)\n\npgadmin seems to support it. (Most other clients don't.)\n\nObviously, I would want to do the updating using the client I also use\nfor querying.\n\nChristoph\n\n\n", "msg_date": "Mon, 22 Jan 2024 23:54:36 +0100", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql: Allow editing query results with \\gedit" }, { "msg_contents": "po 22. 1. 2024 v 23:54 odesílatel Christoph Berg <[email protected]> napsal:\n\n> Re: David G. Johnston\n> > Building off the other comments, I'd suggest trying to get rid of the\n> > intermediate JSOn format and also just focus on a single row at any given\n> > time.\n>\n> We need *some* machine-readable format. It doesn't have to be JSON,\n> but JSON is actually pretty nice to read - and if values are too long,\n> or there are too many values, switch to extended mode:\n>\n> select * from messages \\gedit (expanded)\n>\n> [{\n> \"id\": \"1\",\n> \"language\": \"en\",\n> \"message\": \"This is a very long test text with little actual meaning.\"\n> },{\n> \"id\": \"2\",\n> \"language\": \"en\",\n> \"message\": \"Another one, a bit shorter.\"\n> }]\n>\n> I tweaked the indentation in the psql JSON output patch specifically\n> to make it readable.\n>\n> Restricting to a single line might make sense if it helps editing, but\n> I don't think it does.\n>\n> > For an update the first argument to the metacommand could be the unique\n> key\n> > value present in the previous result. 
The resultant UPDATE would just\n> put\n> > that into the where clause and every other column in the result would be\n> a\n> > SET clause column with the thing being set the current value, ready to be\n> > edited.\n>\n> Hmm, then you would still have to cut-and-paste the PK value. If that\n> that's a multi-column non-numeric key, you are basically back to the\n> original problem.\n>\n>\n> Re: Tom Lane\n> > Yeah, that's something that was also bothering me, but I failed to\n> > put my finger on it. \"Here's some JSON, edit it, and don't forget\n> > to keep the quoting correct\" does not strike me as a user-friendly\n> > way to adjust data content. A spreadsheet-like display where you\n> > can change data within cells seems like a far better API, although\n> > I don't want to write that either.\n>\n> Right. I wouldn't want a form editing system in there either. But\n> perhaps this middle ground of using a well-established format that is\n> easy to generate and to parse (it's using the JSON parser from\n> pgcommon) makes it fit into psql.\n>\n> If parsing the editor result fails, the user is asked if they want to\n> re-edit with a parser error message, and if they go to the editor\n> again, the cursor is placed in the line where the error is. (Also,\n> what's wrong with having to strictly adhere to some syntax, we are\n> talking about SQL here.)\n>\n> It's admittedly larger than the average \\backslash command, but it\n> does fit into psql's interactive usage. \\crosstabview is perhaps a\n> similar thing - it doesn't really fit into a simple \"send query and\n> display result\" client, but since it solves an actual problem, it\n> makes well sense to spend the extra code on it.\n>\n\n\\crosstabview is read only\n\n\n>\n> > This kind of API would not readily support INSERT or DELETE cases, but\n> > TBH I think that's better anyway --- you're adding too much ambiguity\n> > in pursuit of a very secondary use-case. The stated complaint was\n> > \"it's too hard to build UPDATE commands\", which I can sympathize with.\n>\n> I've been using the feature already for some time, and it's a real\n> relief. 
In my actual use case here, I use it on my ham radio logbook:\n>\n> =# select start, call, qrg, name from log where cty = 'CE9' order by start;\n> start │ call │ qrg │ name\n> ────────────────────────┼────────┼─────────────┼───────\n> 2019-03-12 20:34:00+00 │ RI1ANL │ 7.076253 │ ∅\n> 2021-03-16 21:24:00+00 │ DP0GVN │ 2400.395 │ Felix\n> 2022-01-15 17:19:00+00 │ DP0GVN │ 2400.01 │ Felix\n> 2022-10-23 19:17:15+00 │ DP0GVN │ 2400.041597 │ ∅\n> 2023-10-01 14:05:00+00 │ 8J1RL │ 28.182575 │ ∅\n> 2024-01-22 21:15:15+00 │ DP1POL │ 10.138821 │ ∅\n> (6 Zeilen)\n>\n> The primary key is (start, call).\n>\n> If I now want to note that the last contact with Antarctica there was\n> also with Felix, I'd have to transform that into\n>\n> update log set name = 'Felix' where start = '2024-01-22 21:15:15+00' and\n> call = 'DP1POL';\n>\n> \\gedit is just so much easier.\n>\n\nIt looks great for simple queries, but if somebody uses it like SELECT *\nFROM pg_proc \\gedit\n\nI almost sure so \\gedit is wrong name for this feature.\n\nCan be nice if we are able:\n\na) export data set in some readable format\n\nb) be possible to use more command in pipes\n\nsome like\n\nselect start, call, qrg, name from log where cty = 'CE9' order by start\n\\gpipexec(tsv) mypipe | bash update_pattern.sh > tmpfile; vi tmpfile; cat\ntmpfile > mypipe\n\nI understand your motivation well, but I don't like your proposal because\ntoo many different things are pushed to one feature, and it is designed for\na single purpose.\n\n\nUPDATE is the core feature. If we want to say INSERT and DELETE aren't\n> supported, but UPDATE support can go in, that'd be fine with me.\n>\n> > (BTW, I wonder how much of this already exists in pgAdmin.)\n>\n> pgadmin seems to support it. (Most other clients don't.)\n>\n> Obviously, I would want to do the updating using the client I also use\n> for querying.\n>\n> Christoph\n>\n\npo 22. 1. 2024 v 23:54 odesílatel Christoph Berg <[email protected]> napsal:Re: David G. Johnston\n> Building off the other comments, I'd suggest trying to get rid of the\n> intermediate JSOn format and also just focus on a single row at any given\n> time.\n\nWe need *some* machine-readable format. It doesn't have to be JSON,\nbut JSON is actually pretty nice to read - and if values are too long,\nor there are too many values, switch to extended mode:\n\nselect * from messages \\gedit (expanded)\n\n[{\n  \"id\": \"1\",\n  \"language\": \"en\",\n  \"message\": \"This is a very long test text with little actual meaning.\"\n},{\n  \"id\": \"2\",\n  \"language\": \"en\",\n  \"message\": \"Another one, a bit shorter.\"\n}]\n\nI tweaked the indentation in the psql JSON output patch specifically\nto make it readable.\n\nRestricting to a single line might make sense if it helps editing, but\nI don't think it does.\n\n> For an update the first argument to the metacommand could be the unique key\n> value present in the previous result.  The resultant UPDATE would just put\n> that into the where clause and every other column in the result would be a\n> SET clause column with the thing being set the current value, ready to be\n> edited.\n\nHmm, then you would still have to cut-and-paste the PK value. If that\nthat's a multi-column non-numeric key, you are basically back to the\noriginal problem.\n\n\nRe: Tom Lane\n> Yeah, that's something that was also bothering me, but I failed to\n> put my finger on it.  
\"Here's some JSON, edit it, and don't forget\n> to keep the quoting correct\" does not strike me as a user-friendly\n> way to adjust data content.  A spreadsheet-like display where you\n> can change data within cells seems like a far better API, although\n> I don't want to write that either.\n\nRight. I wouldn't want a form editing system in there either. But\nperhaps this middle ground of using a well-established format that is\neasy to generate and to parse (it's using the JSON parser from\npgcommon) makes it fit into psql.\n\nIf parsing the editor result fails, the user is asked if they want to\nre-edit with a parser error message, and if they go to the editor\nagain, the cursor is placed in the line where the error is. (Also,\nwhat's wrong with having to strictly adhere to some syntax, we are\ntalking about SQL here.)\n\nIt's admittedly larger than the average \\backslash command, but it\ndoes fit into psql's interactive usage. \\crosstabview is perhaps a\nsimilar thing - it doesn't really fit into a simple \"send query and\ndisplay result\" client, but since it solves an actual problem, it\nmakes well sense to spend the extra code on it.\\crosstabview is read only \n\n> This kind of API would not readily support INSERT or DELETE cases, but\n> TBH I think that's better anyway --- you're adding too much ambiguity\n> in pursuit of a very secondary use-case.  The stated complaint was\n> \"it's too hard to build UPDATE commands\", which I can sympathize with.\n\nI've been using the feature already for some time, and it's a real\nrelief. In my actual use case here, I use it on my ham radio logbook:\n\n=# select start, call, qrg, name from log where cty = 'CE9' order by start;\n         start          │  call  │     qrg     │ name\n────────────────────────┼────────┼─────────────┼───────\n 2019-03-12 20:34:00+00 │ RI1ANL │    7.076253 │ ∅\n 2021-03-16 21:24:00+00 │ DP0GVN │    2400.395 │ Felix\n 2022-01-15 17:19:00+00 │ DP0GVN │     2400.01 │ Felix\n 2022-10-23 19:17:15+00 │ DP0GVN │ 2400.041597 │ ∅\n 2023-10-01 14:05:00+00 │ 8J1RL  │   28.182575 │ ∅\n 2024-01-22 21:15:15+00 │ DP1POL │   10.138821 │ ∅\n(6 Zeilen)\n\nThe primary key is (start, call).\n\nIf I now want to note that the last contact with Antarctica there was\nalso with Felix, I'd have to transform that into\n\nupdate log set name = 'Felix' where start = '2024-01-22 21:15:15+00' and call = 'DP1POL';\n\n\\gedit is just so much easier.It looks great for simple queries, but if somebody uses it like SELECT * FROM pg_proc \\geditI almost sure so \\gedit is wrong name for this feature. Can be nice if we are able:a) export data set in some readable formatb) be possible to use more command in pipessome likeselect start, call, qrg, name from log where cty = 'CE9' order by start \\gpipexec(tsv) mypipe | bash update_pattern.sh > tmpfile; vi tmpfile; cat tmpfile > mypipeI understand your motivation well, but I don't like your proposal because too many different things are pushed to one feature, and it is designed for a single purpose.\nUPDATE is the core feature. If we want to say INSERT and DELETE aren't\nsupported, but UPDATE support can go in, that'd be fine with me.\n\n> (BTW, I wonder how much of this already exists in pgAdmin.)\n\npgadmin seems to support it. 
(Most other clients don't.)\n\nObviously, I would want to do the updating using the client I also use\nfor querying.\n\nChristoph", "msg_date": "Tue, 23 Jan 2024 05:41:25 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Allow editing query results with \\gedit" }, { "msg_contents": "Re: Pavel Stehule\n> It looks great for simple queries, but if somebody uses it like SELECT *\n> FROM pg_proc \\gedit\n\nWhat's wrong with that? If the pager can handle the amount of data,\nthe editor can do that as well. (If not, the fix is to just not run\nthe command, and not blame the feature.)\n\n> I almost sure so \\gedit is wrong name for this feature.\n\nI'm open for suggestions.\n\n> Can be nice if we are able:\n> \n> a) export data set in some readable format\n> \n> b) be possible to use more command in pipes\n> \n> some like\n> \n> select start, call, qrg, name from log where cty = 'CE9' order by start\n> \\gpipexec(tsv) mypipe | bash update_pattern.sh > tmpfile; vi tmpfile; cat\n> tmpfile > mypipe\n\nWell yeah, that's still a lot of typing.\n\n> I understand your motivation well, but I don't like your proposal because\n> too many different things are pushed to one feature, and it is designed for\n> a single purpose.\n\nIt's one feature for one purpose. And the patch isn't *that* huge. Did\nI make the mistake of adding documentation, extra command options, and\ntab completion in v1?\n\nChristoph\n\n\n", "msg_date": "Tue, 23 Jan 2024 11:38:22 +0100", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql: Allow editing query results with \\gedit" }, { "msg_contents": "út 23. 1. 2024 v 11:38 odesílatel Christoph Berg <[email protected]> napsal:\n\n> Re: Pavel Stehule\n> > It looks great for simple queries, but if somebody uses it like SELECT *\n> > FROM pg_proc \\gedit\n>\n> What's wrong with that? If the pager can handle the amount of data,\n> the editor can do that as well. (If not, the fix is to just not run\n> the command, and not blame the feature.)\n>\n\njust editing wide or long command or extra long strings can be unfriendly\n\n\n>\n> > I almost sure so \\gedit is wrong name for this feature.\n>\n> I'm open for suggestions.\n>\n\nI have not too many ideas. The problem is in missing relation between\n\"edit\" and \"update and execute\"\n\n\n>\n> > Can be nice if we are able:\n> >\n> > a) export data set in some readable format\n> >\n> > b) be possible to use more command in pipes\n> >\n> > some like\n> >\n> > select start, call, qrg, name from log where cty = 'CE9' order by start\n> > \\gpipexec(tsv) mypipe | bash update_pattern.sh > tmpfile; vi tmpfile; cat\n> > tmpfile > mypipe\n>\n> Well yeah, that's still a lot of typing.\n>\n\nit should not be problem. You can hide long strings to psql variables\n\n\n\n>\n> > I understand your motivation well, but I don't like your proposal because\n> > too many different things are pushed to one feature, and it is designed\n> for\n> > a single purpose.\n>\n> It's one feature for one purpose. And the patch isn't *that* huge. Did\n> I make the mistake of adding documentation, extra command options, and\n> tab completion in v1?\n>\n\nno - I have problem so one command does editing, generating write SQL\ncommands (updates) and their execution\n\n\n>\n> Christoph\n>\n\nút 23. 1. 
2024 v 11:38 odesílatel Christoph Berg <[email protected]> napsal:Re: Pavel Stehule\n> It looks great for simple queries, but if somebody uses it like SELECT *\n> FROM pg_proc \\gedit\n\nWhat's wrong with that? If the pager can handle the amount of data,\nthe editor can do that as well. (If not, the fix is to just not run\nthe command, and not blame the feature.)just editing wide or long command or extra long strings can be unfriendly \n\n> I almost sure so \\gedit is wrong name for this feature.\n\nI'm open for suggestions.I have not too many ideas. The problem is in missing relation between \"edit\" and \"update and execute\" \n\n> Can be nice if we are able:\n> \n> a) export data set in some readable format\n> \n> b) be possible to use more command in pipes\n> \n> some like\n> \n> select start, call, qrg, name from log where cty = 'CE9' order by start\n> \\gpipexec(tsv) mypipe | bash update_pattern.sh > tmpfile; vi tmpfile; cat\n> tmpfile > mypipe\n\nWell yeah, that's still a lot of typing.it should not be problem. You can hide long strings to psql variables \n\n> I understand your motivation well, but I don't like your proposal because\n> too many different things are pushed to one feature, and it is designed for\n> a single purpose.\n\nIt's one feature for one purpose. And the patch isn't *that* huge. Did\nI make the mistake of adding documentation, extra command options, and\ntab completion in v1?no - I have problem so one command does editing, generating write SQL commands (updates)  and their execution \n\nChristoph", "msg_date": "Tue, 23 Jan 2024 18:13:32 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Allow editing query results with \\gedit" }, { "msg_contents": "On Mon, Jan 22, 2024 at 11:15 AM Tom Lane <[email protected]> wrote:\n> > Introduction of \\gedit is interesting idea, but in this form it looks too\n> > magic\n>\n> Yeah, I don't like it either --- it feels like something that belongs\n> in an ETL tool not psql. The sheer size of the patch shows how far\n> afield it is from anything that psql already does, necessitating\n> writing tons of stuff that was not there before. The bits that try\n> to parse the query to get necessary information seem particularly\n> half-baked.\n\nBased on these comments and the one from David Johnston, I think there\nis a consensus that we do not want this patch, so I'm marking it as\nRejected in the CommitFest application. If I've misunderstood the\nsituation, then please feel free to change the status accordingly.\n\nI feel slightly bad about rejecting this not only because rejecting\npatches that people have put work into sucks but also because (1) I do\nunderstand why it could be useful to have something like this and (2)\nI think in many ways the patch is quite well-considered, e.g. it has\noptions like table and key to work around cases where the naive logic\ndoesn't get the right answer. But I also do understand why the\nreactions thus far have been skeptical: there's a lot of pretty\nmagical stuff in this patch. 
That's a reliability concern: when you\ntype \\gedit and it works, that's cool, but sometimes it isn't going to\nwork, and you're not always going to understand why, and you can\nprobably fix a lot of those cases by using the \"table\" or \"key\"\noptions, but you have to know they exist, and you have to realize that\nthey're needed, and the whole thing is suddenly a lot less convenient.\nI think if we add this feature, a bunch of people won't notice, but\namong those who do, I bet there will be a pretty good chance of people\ncomplaining about cases that don't work, and perhaps not understanding\nwhy they don't work, and I suspect fixing some of those complaints may\nrequire something pretty close to solving the halting problem. :-(\n\nNow maybe that's all wrong and we should adopt this patch with\nenthusiasm, but if so, we need the patch to have significantly more +1\nvotes than -1 votes, and right now the reverse seems to be the case.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 May 2024 12:57:21 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Allow editing query results with \\gedit" }, { "msg_contents": "Re: Robert Haas\n> Based on these comments and the one from David Johnston, I think there\n> is a consensus that we do not want this patch, so I'm marking it as\n> Rejected in the CommitFest application. If I've misunderstood the\n> situation, then please feel free to change the status accordingly.\n\nHi Robert,\n\nthanks for looking at the patch.\n\n> I feel slightly bad about rejecting this not only because rejecting\n> patches that people have put work into sucks but also because (1) I do\n> understand why it could be useful to have something like this and (2)\n> I think in many ways the patch is quite well-considered, e.g. it has\n> options like table and key to work around cases where the naive logic\n> doesn't get the right answer. But I also do understand why the\n> reactions thus far have been skeptical: there's a lot of pretty\n> magical stuff in this patch. That's a reliability concern: when you\n> type \\gedit and it works, that's cool, but sometimes it isn't going to\n> work, and you're not always going to understand why, and you can\n> probably fix a lot of those cases by using the \"table\" or \"key\"\n> options, but you have to know they exist, and you have to realize that\n> they're needed, and the whole thing is suddenly a lot less convenient.\n> I think if we add this feature, a bunch of people won't notice, but\n> among those who do, I bet there will be a pretty good chance of people\n> complaining about cases that don't work, and perhaps not understanding\n> why they don't work, and I suspect fixing some of those complaints may\n> require something pretty close to solving the halting problem. :-(\n\nThat's a good summary of the situation, thanks.\n\nI still think the feature would be cool to have, but admittedly, in\nthe meantime I've had cases myself where the automatism went into the\nwrong direction (updating key columns results in DELETE-INSERT cycles\nthat aren't doing the right thing if you didn't select all columns\noriginally), so I now understand the objections and agree people were\nright about that. 
This could be fixed by feeding the resulting\ncommands through another editor round, but that just adds complexity\ninstead of removing confusion.\n\nI think there is a pretty straightforward way to address the problems,\nthough: instead of letting the user edit JSON, format the query result\nin the form of UPDATE commands and let the user edit them. As Tom said\nupthread:\n\nTom> The stated complaint was \"it's too hard to build UPDATE commands\",\nTom> which I can sympathize with.\n\n... which this would perfectly address - it's building the commands.\n\nThe editor will have a bit more clutter (the UPDATE SET WHERE\nboilerplate), and the fields are somewhat out of order (the key at the\nend), but SQL commands are what people understand, and there is\nabsolutely no ambiguity on what is going to be executed since the\ncommands are exactly what is leaving the editor.\n\n> Now maybe that's all wrong and we should adopt this patch with\n> enthusiasm, but if so, we need the patch to have significantly more +1\n> votes than -1 votes, and right now the reverse seems to be the case.\n\nI'll update the patch and ask here. Thanks!\n\nChristoph\n\n\n", "msg_date": "Fri, 17 May 2024 15:24:28 +0200", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql: Allow editing query results with \\gedit" }, { "msg_contents": "On Fri, May 17, 2024 at 9:24 AM Christoph Berg <[email protected]> wrote:\n> Tom> The stated complaint was \"it's too hard to build UPDATE commands\",\n> Tom> which I can sympathize with.\n>\n> ... which this would perfectly address - it's building the commands.\n>\n> The editor will have a bit more clutter (the UPDATE SET WHERE\n> boilerplate), and the fields are somewhat out of order (the key at the\n> end), but SQL commands are what people understand, and there is\n> absolutely no ambiguity on what is going to be executed since the\n> commands are exactly what is leaving the editor.\n\nA point to consider is that the user can also do this in the query\nitself, if desired. It'd just be a matter of assembling the query\nstring with appropriate calls to quote_literal() and quote_ident(),\nwhich is not actually all that hard and I suspect many of us have done\nthat at one point or another. And then you know that you'll get the\nright set of key columns and update the right table (as opposed to,\nsay, a view over the right table, or the wrong one out of several\ntables that you joined).\n\nNow you might say, well, that's not terribly convenient, which is\nprobably true, but this might be a case of -- convenient, reliable,\nworks with arbitrary queries -- pick two. I don't know. I wouldn't be\nall that surprised if there's something clever and useful we could do\nin this area, but I sure don't know what it is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 09:51:50 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Allow editing query results with \\gedit" } ]
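The mechanism argued over in this thread, diffing the edited rows against the originals keyed on the primary key and emitting UPDATE for changed rows and DELETE plus INSERT when the key itself changes, can be sketched outside psql. The following Python sketch is illustrative only and is not the patch's implementation: diff_to_sql, quote_ident and quote_literal are hypothetical helpers, the key columns are assumed to be known up front, and the literal quoting is deliberately naive.

```python
# Minimal sketch of the key-based diff described in the thread: rows are dicts,
# `key_cols` identifies a row, and the output is a list of SQL statements.
# Illustrative only, not the \gedit patch itself; quoting is simplified.

def quote_ident(name: str) -> str:
    return '"' + name.replace('"', '""') + '"'

def quote_literal(value) -> str:
    if value is None:
        return "NULL"
    return "'" + str(value).replace("'", "''") + "'"

def diff_to_sql(table, key_cols, before, after):
    key = lambda row: tuple(row[c] for c in key_cols)
    old = {key(r): r for r in before}
    new = {key(r): r for r in after}
    stmts = []
    for k, row in new.items():
        if k not in old:                              # new key -> INSERT
            cols = list(row)
            stmts.append("INSERT INTO %s (%s) VALUES (%s);" % (
                quote_ident(table),
                ", ".join(quote_ident(c) for c in cols),
                ", ".join(quote_literal(row[c]) for c in cols)))
        elif row != old[k]:                           # same key, changed row -> UPDATE
            sets = ", ".join(
                "%s = %s" % (quote_ident(c), quote_literal(v))
                for c, v in row.items()
                if c not in key_cols and v != old[k].get(c))
            where = " AND ".join(
                "%s = %s" % (quote_ident(c), quote_literal(row[c])) for c in key_cols)
            stmts.append("UPDATE %s SET %s WHERE %s;" % (quote_ident(table), sets, where))
    for k, row in old.items():
        if k not in new:                              # key disappeared -> DELETE
            where = " AND ".join(
                "%s = %s" % (quote_ident(c), quote_literal(row[c])) for c in key_cols)
            stmts.append("DELETE FROM %s WHERE %s;" % (quote_ident(table), where))
    return stmts

before = [{"id": 1, "language": "en", "message": "Good morning"},
          {"id": 2, "language": "en", "message": "Hello warld"}]
after  = [{"id": 1, "language": "en", "message": "Good morning"},
          {"id": 2, "language": "en", "message": "Hello world"}]
print("\n".join(diff_to_sql("messages", ["id"], before, after)))
# UPDATE "messages" SET "message" = 'Hello world' WHERE "id" = '2';
```

Run on the thread's example (fixing "Hello warld" in the row with id 2) it prints the same single UPDATE the \gedit demo shows; removing a row from `after` would instead yield a DELETE keyed on "id", which is exactly the DELETE-plus-INSERT behaviour for changed keys that the later messages flag as the risky part.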
[ { "msg_contents": "Hi,\n\nHere's a quick status report after the third week:\nStatus summary:\nstatus | w1 | w2 | w3\n-----------------------------------+-----------+--------+----------\nNeeds review: | 238 | 213 | 181\nWaiting on Author: | 44 | 46 | 52\nReady for Committer: | 27 | 27 | 26\nCommitted: | 36 | 46 | 57\nMoved to next CF | 1 | 3 | 4\nWithdrawn: | 2 | 4 | 12\nReturned with Feedback: | 3 | 12 | 18\nRejected: | 1 | 1 | 2\nTotal: | 352 | 352 | 352\n\nIf you have submitted a patch and it's in \"Waiting for author\" state,\nplease aim to get it to \"Needs review\" state soon if you can, as\nthat's where people are most likely to be looking for things to\nreview.\nI have pinged most threads that are in \"Needs review\" state and don't\napply, compile warning-free, or pass check-world. I'll do some more\nof that sort of thing.\nI have sent a private mail through commitfest to patch owners who have\nsubmitted one or more patches but have not picked any of the patches\nfor review.\nI have sent out mails for which there has been no activity for a long\ntime, please respond to the mails if you are planning to continue to\nwork on it or if you are planning to work on it in the next\ncommitfest, please move it to the next commitfest. If there is no\nresponse I'm planning to return these patches and it can be added\nagain when it will be worked upon actively.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 22 Jan 2024 22:48:47 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": true, "msg_subject": "Commitfest 2024-01 third week update" } ]
[ { "msg_contents": "Hello,\n\nA question about protocol design - would it be possible to extend the\nprotocol, so it can handle multiple startup / authentication messages over\na single connection? Are there any serious obstacles? (possible issues with\nre-initialization of backends, I guess?)\nIf that is possible, it could improve one important edge case - where you\nhave to talk to multiple databases on a single host currently, you need to\nopen a separate connection to each of them. In some cases (multitenancy for\nexample), you may have thousands of databases on a host, which leads to\ninefficient connection utilization on clients (on the db side too). A lot\nof other RDBMSes don't have this limitation.\n\nthank you,\n-Vladimir Churyukin\n\nHello,A question about protocol design - would it be possible to extend the protocol, so it can handle multiple startup / authentication messages over a single connection? Are there any serious obstacles? (possible issues with re-initialization of backends, I guess?)If that is possible, it could improve one important edge case - where you have to talk to multiple databases on a single host currently, you need to open a separate connection to each of them. In some cases (multitenancy for example), you may have thousands of databases on a host, which leads to inefficient connection utilization on clients (on the db side too). A lot of other RDBMSes  don't have this limitation.thank you,-Vladimir Churyukin", "msg_date": "Mon, 22 Jan 2024 11:58:36 -0800", "msg_from": "Vladimir Churyukin <[email protected]>", "msg_from_op": true, "msg_subject": "Multiple startup messages over the same connection" }, { "msg_contents": "On 22/01/2024 21:58, Vladimir Churyukin wrote:\n> A question about protocol design - would it be possible to extend the \n> protocol, so it can handle multiple startup / authentication messages \n> over a single connection? Are there any serious obstacles? (possible \n> issues with re-initialization of backends, I guess?)\n> If that is possible, it could improve one important edge case - where \n> you have to talk to multiple databases on a single host currently, you \n> need to open a separate connection to each of them. In some cases \n> (multitenancy for example), you may have thousands of databases on a \n> host, which leads to inefficient connection utilization on clients (on \n> the db side too). A lot of other RDBMSes  don't have this limitation.\n\nThe protocol and the startup message are the least of your problems. \nYeah, it would be nice if you could switch between databases, but the \nassumption that one backend operates on one database is pretty deeply \ningrained in the code.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 23 Jan 2024 09:43:50 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple startup messages over the same connection" }, { "msg_contents": "On Mon, Jan 22, 2024 at 11:43 PM Heikki Linnakangas <[email protected]> wrote:\n\n> On 22/01/2024 21:58, Vladimir Churyukin wrote:\n> > A question about protocol design - would it be possible to extend the\n> > protocol, so it can handle multiple startup / authentication messages\n> > over a single connection? Are there any serious obstacles? 
(possible\n> > issues with re-initialization of backends, I guess?)\n> > If that is possible, it could improve one important edge case - where\n> > you have to talk to multiple databases on a single host currently, you\n> > need to open a separate connection to each of them. In some cases\n> > (multitenancy for example), you may have thousands of databases on a\n> > host, which leads to inefficient connection utilization on clients (on\n> > the db side too). A lot of other RDBMSes don't have this limitation.\n>\n> The protocol and the startup message are the least of your problems.\n> Yeah, it would be nice if you could switch between databases, but the\n> assumption that one backend operates on one database is pretty deeply\n> ingrained in the code.\n\n\nYes, I suspected that's the reason why it was not implemented so far,\nbut what's the main problem there?\nIs the issue with the global data cleanup / re-initialization after the\ndatabase is changed?\nIs it in 3rd party extensions that assume the same and may break?\nAnything else?\n\n-Vladimir Churyukin\n\nOn Mon, Jan 22, 2024 at 11:43 PM Heikki Linnakangas <[email protected]> wrote:On 22/01/2024 21:58, Vladimir Churyukin wrote:\n> A question about protocol design - would it be possible to extend the \n> protocol, so it can handle multiple startup / authentication messages \n> over a single connection? Are there any serious obstacles? (possible \n> issues with re-initialization of backends, I guess?)\n> If that is possible, it could improve one important edge case - where \n> you have to talk to multiple databases on a single host currently, you \n> need to open a separate connection to each of them. In some cases \n> (multitenancy for example), you may have thousands of databases on a \n> host, which leads to inefficient connection utilization on clients (on \n> the db side too). A lot of other RDBMSes  don't have this limitation.\n\nThe protocol and the startup message are the least of your problems. \nYeah, it would be nice if you could switch between databases, but the \nassumption that one backend operates on one database is pretty deeply \ningrained in the code.  Yes, I suspected that's the reason why it was not implemented so far,but what's the main problem there?Is the issue with the global data cleanup / re-initialization after the database is changed?Is it in 3rd party extensions that assume the same and may break?Anything else?-Vladimir Churyukin", "msg_date": "Tue, 23 Jan 2024 13:51:04 -0800", "msg_from": "Vladimir Churyukin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multiple startup messages over the same connection" }, { "msg_contents": "On Mon, Jan 22, 2024 at 11:43 PM Heikki Linnakangas <[email protected]> wrote:\n\n> On 22/01/2024 21:58, Vladimir Churyukin wrote:\n> > A question about protocol design - would it be possible to extend the\n> > protocol, so it can handle multiple startup / authentication messages\n> > over a single connection? Are there any serious obstacles? (possible\n> > issues with re-initialization of backends, I guess?)\n> > If that is possible, it could improve one important edge case - where\n> > you have to talk to multiple databases on a single host currently, you\n> > need to open a separate connection to each of them. In some cases\n> > (multitenancy for example), you may have thousands of databases on a\n> > host, which leads to inefficient connection utilization on clients (on\n> > the db side too). 
A lot of other RDBMSes don't have this limitation.\n>\n> The protocol and the startup message are the least of your problems.\n> Yeah, it would be nice if you could switch between databases, but the\n> assumption that one backend operates on one database is pretty deeply\n> ingrained in the code.\n>\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n>\n>\nSorry to revive this old thread, just want to check on one thing:\nLet's say we keep one database per backend rule, I understand at this point\nit would be really hard to change.\nWhat if on a new startup message we just signal the postmaster about it, so\nit takes over the socket and spawns a new backend.\nAfter that we terminate the old one. How does it sound like in terms of\nimplementation complexity?\nI guess the process of passing control from child processes to the parent\ncould be a bit tricky for that one, but doable?\nIs there anything I'm missing that can be a no-go for this?\nThe end goal is to minimize a large overhead for clients having to deal\nwith a large number of connections on multi-tenant systems (say, one client\ndeals with thousands of databases on the same database server).\n\n-Vladimir Churyukin\n\nOn Mon, Jan 22, 2024 at 11:43 PM Heikki Linnakangas <[email protected]> wrote:On 22/01/2024 21:58, Vladimir Churyukin wrote:\n> A question about protocol design - would it be possible to extend the \n> protocol, so it can handle multiple startup / authentication messages \n> over a single connection? Are there any serious obstacles? (possible \n> issues with re-initialization of backends, I guess?)\n> If that is possible, it could improve one important edge case - where \n> you have to talk to multiple databases on a single host currently, you \n> need to open a separate connection to each of them. In some cases \n> (multitenancy for example), you may have thousands of databases on a \n> host, which leads to inefficient connection utilization on clients (on \n> the db side too). A lot of other RDBMSes  don't have this limitation.\n\nThe protocol and the startup message are the least of your problems. \nYeah, it would be nice if you could switch between databases, but the \nassumption that one backend operates on one database is pretty deeply \ningrained in the code.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\nSorry to revive this old thread, just want to check on one thing:Let's say we keep one database per backend rule, I understand at this point it would be really hard to change.What if on a new startup message we just signal the postmaster about it, so it takes over the socket and spawns a new backend.After that we terminate the old one. How does it sound like in terms of implementation complexity?I guess the process of passing control from child processes to the parent could be a bit tricky for that one, but doable?Is there anything I'm missing that can be a no-go for this?The end goal is to minimize a large overhead for clients having to deal with a large number of connections on multi-tenant systems (say, one client deals with thousands of databases on the same database server). 
-Vladimir Churyukin", "msg_date": "Sat, 18 May 2024 14:09:37 -0700", "msg_from": "Vladimir Churyukin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multiple startup messages over the same connection" }, { "msg_contents": "On Sat, 18 May 2024 at 23:10, Vladimir Churyukin <[email protected]> wrote:\n> I guess the process of passing control from child processes to the parent could be a bit tricky for that one, but doable?\n> Is there anything I'm missing that can be a no-go for this?\n\nOne seriously difficult/possibly impossible thing is passing SSL\nsession state between processes using shared memory.\n\n\n", "msg_date": "Sun, 19 May 2024 08:54:39 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple startup messages over the same connection" } ]
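To make the client-side overhead discussed in this thread concrete, here is a minimal libpq sketch of the status quo; the host, user, and per-tenant database names are invented for illustration. Each target database costs its own socket, startup packet, and authentication exchange, and the connection cannot later be pointed at a different database:

/* Illustrative only: connection parameters below are hypothetical. */
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    const char *dbnames[] = {"tenant_0001", "tenant_0002", "tenant_0003"};
    char        conninfo[256];

    for (int i = 0; i < 3; i++)
    {
        PGconn     *conn;
        PGresult   *res;

        /* Every database requires a brand-new connection and startup/auth exchange. */
        snprintf(conninfo, sizeof(conninfo),
                 "host=localhost dbname=%s user=app", dbnames[i]);
        conn = PQconnectdb(conninfo);
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            continue;
        }

        res = PQexec(conn, "SELECT current_database()");
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            printf("connected to %s\n", PQgetvalue(res, 0, 0));
        PQclear(res);

        /* No way to reuse this socket for another database; tear it down. */
        PQfinish(conn);
    }
    return 0;
}

With thousands of per-tenant databases, a client or pooler ends up holding one such connection per database, which is the inefficiency the original post is about.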
[ { "msg_contents": "Hi,\n\nI noticed that I was getting core dumps while executing the tests, without the\ntests failing. Backtraces are vriations of this:\n\n#0 0x0000000000ca29cd in pg_atomic_read_u32_impl (ptr=0x7fe13497a004) at ../../../../../home/andres/src/postgresql/src/include/port/atomics/generic.h:48\n#1 0x0000000000ca2b08 in pg_atomic_read_u32 (ptr=0x7fe13497a004) at ../../../../../home/andres/src/postgresql/src/include/port/atomics.h:239\n#2 0x0000000000ca3c3d in LWLockAttemptLock (lock=0x7fe13497a000, mode=LW_EXCLUSIVE)\n at ../../../../../home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:825\n#3 0x0000000000ca440c in LWLockAcquire (lock=0x7fe13497a000, mode=LW_EXCLUSIVE)\n at ../../../../../home/andres/src/postgresql/src/backend/storage/lmgr/lwlock.c:1264\n#4 0x00007fe130204ab4 in apw_detach_shmem (code=0, arg=0) at ../../../../../home/andres/src/postgresql/contrib/pg_prewarm/autoprewarm.c:788\n#5 0x0000000000c81c99 in shmem_exit (code=0) at ../../../../../home/andres/src/postgresql/src/backend/storage/ipc/ipc.c:276\n#6 0x0000000000c81a7c in proc_exit_prepare (code=0) at ../../../../../home/andres/src/postgresql/src/backend/storage/ipc/ipc.c:198\n#7 0x0000000000c819a8 in proc_exit (code=0) at ../../../../../home/andres/src/postgresql/src/backend/storage/ipc/ipc.c:111\n#8 0x0000000000bdd0b3 in BackgroundWorkerMain () at ../../../../../home/andres/src/postgresql/src/backend/postmaster/bgworker.c:841\n#9 0x0000000000be861d in do_start_bgworker (rw=0x341ff20) at ../../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:5756\n#10 0x0000000000be8a34 in maybe_start_bgworkers () at ../../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:5980\n#11 0x0000000000be4f9f in process_pm_child_exit () at ../../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:3039\n#12 0x0000000000be2de4 in ServerLoop () at ../../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1765\n#13 0x0000000000be27bf in PostmasterMain (argc=4, argv=0x33dbba0) at ../../../../../home/andres/src/postgresql/src/backend/postmaster/postmaster.c:1475\n#14 0x0000000000aca326 in main (argc=4, argv=0x33dbba0) at ../../../../../home/andres/src/postgresql/src/backend/main/main.c:198\n\nThe most likely culprit seems to be:\nhttps://postgr.es/m/E1rQvjC-002Chd-Cd%40gemulon.postgresql.org\n\nThe test encountering this is pg_prewarm/001_basic:\n(gdb) p DataDir\n$12 = 0x33ef8a0 \"/srv/dev/build/postgres/m-dev-assert/testrun/pg_prewarm/001_basic/data/t_001_basic_main_data/pgdata\"\n\n\nA secondary issue is that the tests suceed despite two segfaults. 
The problem\nhere likely is that we don't have sufficient error handling during shutdowns:\n\n2024-01-22 12:31:34.133 PST [3054823] LOG: background worker \"logical replication launcher\" (PID 3054836) exited with exit code 1\n2024-01-22 12:31:34.443 PST [3054823] LOG: background worker \"autoprewarm leader\" (PID 3054835) was terminated by signal 11: Segmentation fault\n2024-01-22 12:31:34.443 PST [3054823] LOG: terminating any other active server processes\n2024-01-22 12:31:34.444 PST [3054823] LOG: abnormal database system shutdown\n2024-01-22 12:31:34.469 PST [3054823] LOG: database system is shut down\n\n2024-01-22 12:31:34.555 PST [3055133] LOG: starting PostgreSQL 17devel on x86_64-linux, compiled by gcc-14.0.0, 64-bit\n2024-01-22 12:31:34.555 PST [3055133] LOG: listening on Unix socket \"/tmp/p6XG0kQW9w/.s.PGSQL.60402\"\n2024-01-22 12:31:34.557 PST [3055148] LOG: database system was interrupted; last known up at 2024-01-22 12:31:33 PST\n2024-01-22 12:31:34.557 PST [3055148] LOG: database system was not properly shut down; automatic recovery in progress\n2024-01-22 12:31:34.558 PST [3055148] LOG: redo starts at 0/1531090\n2024-01-22 12:31:34.559 PST [3055148] LOG: invalid record length at 0/1555F60: expected at least 24, got 0\n2024-01-22 12:31:34.559 PST [3055148] LOG: redo done at 0/1555F38 system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n2024-01-22 12:31:34.559 PST [3055146] LOG: checkpoint starting: end-of-recovery immediate wait\n2024-01-22 12:31:34.570 PST [3055146] LOG: checkpoint complete: wrote 42 buffers (0.3%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.001 s, total=0.011 s; sync files=0, longest=0.000 s, average=0.000 s; distance=147 kB, estimate=147 kB; lsn=0/1555F60, redo lsn=0/1555F60\n2024-01-22 12:31:34.573 PST [3055133] LOG: database system is ready to accept connections\n\nISTM that we shouldn't basically silently overlook shutdowns due to crashes in\nthe tests. How to not do so is unfortunately not immediately obvious to me...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Jan 2024 12:41:17 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "core dumps in auto_prewarm, tests succeed" }, { "msg_contents": "On Mon, Jan 22, 2024 at 12:41:17PM -0800, Andres Freund wrote:\n> I noticed that I was getting core dumps while executing the tests, without the\n> tests failing. Backtraces are vriations of this:\n\nLooking, thanks for the heads-up.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Jan 2024 14:44:57 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: core dumps in auto_prewarm, tests succeed" }, { "msg_contents": "On Mon, Jan 22, 2024 at 02:44:57PM -0600, Nathan Bossart wrote:\n> On Mon, Jan 22, 2024 at 12:41:17PM -0800, Andres Freund wrote:\n>> I noticed that I was getting core dumps while executing the tests, without the\n>> tests failing. Backtraces are vriations of this:\n> \n> Looking, thanks for the heads-up.\n\nI think this is because the autoprewarm state was moved to a DSM segment,\nand DSM segments are detached before the on_shmem_exit callbacks are called\nduring process exit. 
Moving apw_detach_shmem to the before_shmem_exit list\nseems to resolve the crashes.\n\ndiff --git a/contrib/pg_prewarm/autoprewarm.c b/contrib/pg_prewarm/autoprewarm.c\nindex 9ea6c2252a..88c3a04109 100644\n--- a/contrib/pg_prewarm/autoprewarm.c\n+++ b/contrib/pg_prewarm/autoprewarm.c\n@@ -165,7 +165,7 @@ autoprewarm_main(Datum main_arg)\n first_time = false;\n \n /* Set on-detach hook so that our PID will be cleared on exit. */\n- on_shmem_exit(apw_detach_shmem, 0);\n+ before_shmem_exit(apw_detach_shmem, 0);\n \n /*\n * Store our PID in the shared memory area --- unless there's already\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Jan 2024 15:19:36 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: core dumps in auto_prewarm, tests succeed" }, { "msg_contents": "Hi,\n\nOn 2024-01-22 15:19:36 -0600, Nathan Bossart wrote:\n> On Mon, Jan 22, 2024 at 02:44:57PM -0600, Nathan Bossart wrote:\n> > On Mon, Jan 22, 2024 at 12:41:17PM -0800, Andres Freund wrote:\n> >> I noticed that I was getting core dumps while executing the tests, without the\n> >> tests failing. Backtraces are vriations of this:\n> > \n> > Looking, thanks for the heads-up.\n> \n> I think this is because the autoprewarm state was moved to a DSM segment,\n> and DSM segments are detached before the on_shmem_exit callbacks are called\n> during process exit. Moving apw_detach_shmem to the before_shmem_exit list\n> seems to resolve the crashes.\n\nThat seems plausible. Would still be nice to have at least this test ensure\nthat the shutdown code works. Perhaps just a check of the control file after\nshutdown, ensuring that the state is \"shutdowned\" vs crashed?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 22 Jan 2024 13:24:54 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: core dumps in auto_prewarm, tests succeed" }, { "msg_contents": "On Mon, Jan 22, 2024 at 01:24:54PM -0800, Andres Freund wrote:\n> On 2024-01-22 15:19:36 -0600, Nathan Bossart wrote:\n>> I think this is because the autoprewarm state was moved to a DSM segment,\n>> and DSM segments are detached before the on_shmem_exit callbacks are called\n>> during process exit. Moving apw_detach_shmem to the before_shmem_exit list\n>> seems to resolve the crashes.\n> \n> That seems plausible. Would still be nice to have at least this test ensure\n> that the shutdown code works. Perhaps just a check of the control file after\n> shutdown, ensuring that the state is \"shutdowned\" vs crashed?\n\nI'll give that a try. I'll also expand the comment above the\nbefore_shmem_exit() call.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Jan 2024 15:38:15 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: core dumps in auto_prewarm, tests succeed" }, { "msg_contents": "On Mon, Jan 22, 2024 at 03:38:15PM -0600, Nathan Bossart wrote:\n> On Mon, Jan 22, 2024 at 01:24:54PM -0800, Andres Freund wrote:\n>> On 2024-01-22 15:19:36 -0600, Nathan Bossart wrote:\n>>> I think this is because the autoprewarm state was moved to a DSM segment,\n>>> and DSM segments are detached before the on_shmem_exit callbacks are called\n>>> during process exit. Moving apw_detach_shmem to the before_shmem_exit list\n>>> seems to resolve the crashes.\n>> \n>> That seems plausible. 
Would still be nice to have at least this test ensure\n>> that the shutdown code works. Perhaps just a check of the control file after\n>> shutdown, ensuring that the state is \"shutdowned\" vs crashed?\n> \n> I'll give that a try. I'll also expand the comment above the\n> before_shmem_exit() call.\n\nHere is a patch.\n\nThis might be a topic for another thread, but I do wonder whether we could\nput a generic pg_controldata check in node->stop that would at least make\nsure that the state is some sort of expected shut-down state. My first\nthought is that it could be a tad expensive, but... maybe it wouldn't be\ntoo bad.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 22 Jan 2024 16:27:43 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: core dumps in auto_prewarm, tests succeed" }, { "msg_contents": "Hello Andres,\n\n22.01.2024 23:41, Andres Freund wrote:\n> Hi,\n>\n> I noticed that I was getting core dumps while executing the tests, without the\n> tests failing. Backtraces are vriations of this:\n> ...\n>\n> ISTM that we shouldn't basically silently overlook shutdowns due to crashes in\n> the tests. How to not do so is unfortunately not immediately obvious to me...\n>\n\nFWIW, I encountered this behavior as well (with pg_stat):\nhttps://www.postgresql.org/message-id/[email protected]\n\nand proposed a way to detect such shutdowns for a discussion:\nhttps://www.postgresql.org/message-id/flat/290b9ae3-98a2-0896-a957-18d3b60b6260%40gmail.com\n\nwhere Shveta referenced a previous thread started by Tom Lane:\nhttps://www.postgresql.org/message-id/flat/[email protected]\n\nWhat do you think about leaving postmaster.pid on disk in case of an\nabnormal shutdown?\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 23 Jan 2024 08:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: core dumps in auto_prewarm, tests succeed" }, { "msg_contents": "On Mon, Jan 22, 2024 at 04:27:43PM -0600, Nathan Bossart wrote:\n> Here is a patch.\n\nI'd like to fix these crashes sooner than later, so I will plan on\ncommitting this tonight (barring objections or feedback). If this needs to\nbe revisited later for some reason, I'm happy to do so.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 23 Jan 2024 10:03:13 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: core dumps in auto_prewarm, tests succeed" }, { "msg_contents": "Hi,\n\nOn 2024-01-22 16:27:43 -0600, Nathan Bossart wrote:\n> Here is a patch.\n\nLGTM.\n\n\n> This might be a topic for another thread, but I do wonder whether we could\n> put a generic pg_controldata check in node->stop that would at least make\n> sure that the state is some sort of expected shut-down state. My first\n> thought is that it could be a tad expensive, but... 
maybe it wouldn't be\n> too bad.\n\nI think that'd probably would be a good idea - I suspect there'd need to be a\nfair number of exceptions, but that it'd be easier to change uses of ->stop()\nto the exception case where needed, than to add a new function doing checking\nand converting most things to that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 Jan 2024 09:28:18 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: core dumps in auto_prewarm, tests succeed" }, { "msg_contents": "Hi,\n\nOn 2024-01-23 08:00:00 +0300, Alexander Lakhin wrote:\n> 22.01.2024 23:41, Andres Freund wrote:\n> > ISTM that we shouldn't basically silently overlook shutdowns due to crashes in\n> > the tests. How to not do so is unfortunately not immediately obvious to me...\n> > \n> \n> FWIW, I encountered this behavior as well (with pg_stat):\n> https://www.postgresql.org/message-id/[email protected]\n> \n> and proposed a way to detect such shutdowns for a discussion:\n> https://www.postgresql.org/message-id/flat/290b9ae3-98a2-0896-a957-18d3b60b6260%40gmail.com\n> \n> where Shveta referenced a previous thread started by Tom Lane:\n> https://www.postgresql.org/message-id/flat/[email protected]\n> \n> What do you think about leaving postmaster.pid on disk in case of an\n> abnormal shutdown?\n\nI don't think that's viable and would cause more problems than it solves, it'd\nmake us think that we might have an old postgres process hanging around that\nneeds to be terminted before we can start up. And I simply don't see the point\n- we already record whether we crashed in the control file, no?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 Jan 2024 09:30:05 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: core dumps in auto_prewarm, tests succeed" }, { "msg_contents": "On 2024-Jan-22, Nathan Bossart wrote:\n\n> Here is a patch.\n\nLooks reasonable.\n\n> This might be a topic for another thread, but I do wonder whether we could\n> put a generic pg_controldata check in node->stop that would at least make\n> sure that the state is some sort of expected shut-down state. My first\n> thought is that it could be a tad expensive, but... maybe it wouldn't be\n> too bad.\n\nDoes this actually detect a problem if you take out the fix? I think\nwhat's going to happen is that postmaster is going to crash, then do the\nrecovery cycle, then stop as instructed by the test; so pg_controldata\nwould report that it was correctly shut down.\n\nIf we had a restart-cycle-counter to check maybe we could verify that it\nhasn't changed, but I don't think we have that.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Cada quien es cada cual y baja las escaleras como quiere\" (JMSerrat)\n\n\n", "msg_date": "Tue, 23 Jan 2024 18:33:25 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: core dumps in auto_prewarm, tests succeed" }, { "msg_contents": "On Tue, Jan 23, 2024 at 06:33:25PM +0100, Alvaro Herrera wrote:\n> On 2024-Jan-22, Nathan Bossart wrote:\n>> This might be a topic for another thread, but I do wonder whether we could\n>> put a generic pg_controldata check in node->stop that would at least make\n>> sure that the state is some sort of expected shut-down state. My first\n>> thought is that it could be a tad expensive, but... maybe it wouldn't be\n>> too bad.\n> \n> Does this actually detect a problem if you take out the fix? 
I think\n> what's going to happen is that postmaster is going to crash, then do the\n> recovery cycle, then stop as instructed by the test; so pg_controldata\n> would report that it was correctly shut down.\n\nYes, the control data shows \"in production\" without it. The segfault\nhappens within the shut-down path, and the test logs indicate that the\nserver continues shutting down without doing a recovery cycle:\n\n2024-01-23 12:14:49.254 CST [2376301] LOG: received fast shutdown request\n2024-01-23 12:14:49.254 CST [2376301] LOG: aborting any active transactions\n2024-01-23 12:14:49.255 CST [2376301] LOG: background worker \"logical replication launcher\" (PID 2376308) exited with exit code 1\n2024-01-23 12:14:49.256 CST [2376301] LOG: background worker \"autoprewarm leader\" (PID 2376307) was terminated by signal 11: Segmentation fault\n2024-01-23 12:14:49.256 CST [2376301] LOG: terminating any other active server processes\n2024-01-23 12:14:49.257 CST [2376301] LOG: abnormal database system shutdown\n2024-01-23 12:14:49.261 CST [2376301] LOG: database system is shut down\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 23 Jan 2024 12:22:58 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: core dumps in auto_prewarm, tests succeed" }, { "msg_contents": "On Tue, Jan 23, 2024 at 12:22:58PM -0600, Nathan Bossart wrote:\n> On Tue, Jan 23, 2024 at 06:33:25PM +0100, Alvaro Herrera wrote:\n>> Does this actually detect a problem if you take out the fix? I think\n>> what's going to happen is that postmaster is going to crash, then do the\n>> recovery cycle, then stop as instructed by the test; so pg_controldata\n>> would report that it was correctly shut down.\n> \n> Yes, the control data shows \"in production\" without it. The segfault\n> happens within the shut-down path, and the test logs indicate that the\n> server continues shutting down without doing a recovery cycle:\n\nI see that ->init turns off restart_after_crash, which might be why it's\nnot doing a recovery cycle.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 23 Jan 2024 12:26:17 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: core dumps in auto_prewarm, tests succeed" }, { "msg_contents": "23.01.2024 20:30, Andres Freund wrote:\n> I don't think that's viable and would cause more problems than it solves, it'd\n> make us think that we might have an old postgres process hanging around that\n> needs to be terminted before we can start up. 
And I simply don't see the point\n> - we already record whether we crashed in the control file, no?\n\nWith an Assert injected in walsender.c (as in [1]) and test\n012_subtransactions.pl modified to finish just after the first\n\"$node_primary->stop;\", I see:\npg_controldata -D src/test/recovery/tmp_check/t_012_subtransactions_primary_data/pgdata/\nDatabase cluster state:               shut down\n\nBut the assertion undoubtedly failed:\ngrep TRAP src/test/recovery/tmp_check/log/*\nsrc/test/recovery/tmp_check/log/012_subtransactions_primary.log:TRAP: failed Assert(\"0\"), File: \"walsender.c\", Line: \n2688, PID: 142201\n\nAs to the need to terminate a process, which is supposedly hanging around,\nI think, this situation doesn't differ in general from what we have after\nkill -9...\n\nSo my point was to let 'pg_ctl stop' know about an error occurred during\nthe server stop.\n\n[1] https://www.postgresql.org/message-id/290b9ae3-98a2-0896-a957-18d3b60b6260%40gmail.com\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 23 Jan 2024 22:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: core dumps in auto_prewarm, tests succeed" }, { "msg_contents": "On Tue, Jan 23, 2024 at 06:33:25PM +0100, Alvaro Herrera wrote:\n> On 2024-Jan-22, Nathan Bossart wrote:\n> \n>> Here is a patch.\n> \n> Looks reasonable.\n\nCommitted. Thank you for the report and the reviews.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 23 Jan 2024 14:25:20 -0600", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: core dumps in auto_prewarm, tests succeed" }, { "msg_contents": "Hi,\n\nOn 2024-01-23 22:00:01 +0300, Alexander Lakhin wrote:\n> 23.01.2024 20:30, Andres Freund wrote:\n> > I don't think that's viable and would cause more problems than it solves, it'd\n> > make us think that we might have an old postgres process hanging around that\n> > needs to be terminted before we can start up. And I simply don't see the point\n> > - we already record whether we crashed in the control file, no?\n> \n> With an Assert injected in walsender.c (as in [1]) and test\n> 012_subtransactions.pl modified to finish just after the first\n> \"$node_primary->stop;\", I see:\n> pg_controldata -D src/test/recovery/tmp_check/t_012_subtransactions_primary_data/pgdata/\n> Database cluster state:�������������� shut down\n> \n> But the assertion undoubtedly failed:\n> grep TRAP src/test/recovery/tmp_check/log/*\n> src/test/recovery/tmp_check/log/012_subtransactions_primary.log:TRAP: failed\n> Assert(\"0\"), File: \"walsender.c\", Line: 2688, PID: 142201\n\nYea, because it's after checkpointer has changed the state to \"shutdowned\". I\nthink we could add additional states, to be set by postmaster, instead of\ncheckpointer, for this purpose.\n\n\n> As to the need to terminate a process, which is supposedly hanging around,\n> I think, this situation doesn't differ in general from what we have after\n> kill -9...\n\nSo? Making it more likely for postgres failing to restart successfully,\nbecause the pid has been reused, is bad.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 Jan 2024 16:46:14 -0800", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: core dumps in auto_prewarm, tests succeed" } ]
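The ordering that bites in this thread is worth spelling out: during process exit, the before_shmem_exit callbacks run first, then all dynamic shared memory segments are detached, and only afterwards do the on_shmem_exit callbacks run, so an on_shmem_exit callback must not touch DSM-resident state. Below is a hedged sketch of the pattern the fix relies on; the struct and function names are illustrative, not the actual autoprewarm code:

/* Illustrative sketch only, not the real autoprewarm implementation. */
#include "postgres.h"
#include "miscadmin.h"
#include "storage/dsm.h"
#include "storage/ipc.h"
#include "storage/lwlock.h"

typedef struct MySharedState
{
    LWLock      lock;           /* protects worker_pid */
    pid_t       worker_pid;     /* lives inside a DSM segment */
} MySharedState;

static MySharedState *my_state = NULL;

/* Runs before DSM segments are detached, so my_state is still mapped. */
static void
my_detach_shmem(int code, Datum arg)
{
    if (my_state == NULL)
        return;
    LWLockAcquire(&my_state->lock, LW_EXCLUSIVE);
    if (my_state->worker_pid == MyProcPid)
        my_state->worker_pid = InvalidPid;
    LWLockRelease(&my_state->lock);
}

static void
my_worker_attach(dsm_segment *seg)
{
    my_state = (MySharedState *) dsm_segment_address(seg);

    /*
     * Register with before_shmem_exit(), not on_shmem_exit(): shmem_exit()
     * runs the before_shmem_exit callbacks, then detaches DSM segments, and
     * only then runs on_shmem_exit callbacks, so an on_shmem_exit callback
     * here would dereference already-unmapped memory -- the segfault seen
     * in the report above.
     */
    before_shmem_exit(my_detach_shmem, 0);
}

State kept in the main (non-dynamic) shared memory area does not have this hazard, which is why the problem only appeared after the autoprewarm state moved into a DSM segment.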
[ { "msg_contents": "Hi, hackers\n\nI find heapam_relation_copy_data() and index_copy_data() have the following code:\n\n\tdstrel = smgropen(*newrlocator, rel->rd_backend);\n\n\t...\n\n\tRelationCreateStorage(*newrlocator, rel->rd_rel->relpersistence, true);\n\nThe smgropen() is also called by RelationCreateStorage(), why should we call\nsmgropen() explicitly here?\n\nI try to remove the smgropen(), and all tests passed.", "msg_date": "Tue, 23 Jan 2024 12:51:45 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Unnecessary smgropen in {heapam_relation,index}_copy_data?" }, { "msg_contents": "Hi,\n\n> I find heapam_relation_copy_data() and index_copy_data() have the following code:\n>\n> dstrel = smgropen(*newrlocator, rel->rd_backend);\n>\n> ...\n>\n> RelationCreateStorage(*newrlocator, rel->rd_rel->relpersistence, true);\n>\n> The smgropen() is also called by RelationCreateStorage(), why should we call\n> smgropen() explicitly here?\n>\n> I try to remove the smgropen(), and all tests passed.\n\nThat's a very good question. Note that the second argument of\nsmgropen() used to create dstrel changes after applying your patch.\nI'm not 100% sure whether this is significant or not.\n\nI added your patch to the nearest open commitfest so that we will not lose it:\n\nhttps://commitfest.postgresql.org/47/4794/\n\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 25 Jan 2024 16:43:09 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unnecessary smgropen in {heapam_relation,index}_copy_data?" }, { "msg_contents": "\nOn Thu, 25 Jan 2024 at 21:43, Aleksander Alekseev <[email protected]> wrote:\n> Hi,\n>\n>> I find heapam_relation_copy_data() and index_copy_data() have the following code:\n>>\n>> dstrel = smgropen(*newrlocator, rel->rd_backend);\n>>\n>> ...\n>>\n>> RelationCreateStorage(*newrlocator, rel->rd_rel->relpersistence, true);\n>>\n>> The smgropen() is also called by RelationCreateStorage(), why should we call\n>> smgropen() explicitly here?\n>>\n>> I try to remove the smgropen(), and all tests passed.\n>\n> That's a very good question. Note that the second argument of\n> smgropen() used to create dstrel changes after applying your patch.\n> I'm not 100% sure whether this is significant or not.\n>\n\nThanks for the review.\n\nAccording the comments of RelationData->rd_backend, it is the backend id, if\nthe relation is temporary. The differnece is RelationCreateStorage() uses\nrelpersistence to determinate the backend id.\n\n> I added your patch to the nearest open commitfest so that we will not lose it:\n>\n> https://commitfest.postgresql.org/47/4794/\n\nThank you.\n\n\n", "msg_date": "Thu, 25 Jan 2024 23:22:41 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unnecessary smgropen in {heapam_relation,index}_copy_data?" }, { "msg_contents": "On 25/01/2024 17:22, Japin Li wrote:\n> On Thu, 25 Jan 2024 at 21:43, Aleksander Alekseev <[email protected]> wrote:\n>>> I find heapam_relation_copy_data() and index_copy_data() have the following code:\n>>>\n>>> dstrel = smgropen(*newrlocator, rel->rd_backend);\n>>>\n>>> ...\n>>>\n>>> RelationCreateStorage(*newrlocator, rel->rd_rel->relpersistence, true);\n>>>\n>>> The smgropen() is also called by RelationCreateStorage(), why should we call\n>>> smgropen() explicitly here?\n>>>\n>>> I try to remove the smgropen(), and all tests passed.\n>>\n>> That's a very good question. 
Note that the second argument of\n>> smgropen() used to create dstrel changes after applying your patch.\n>> I'm not 100% sure whether this is significant or not.\n> \n> Thanks for the review.\n> \n> According the comments of RelationData->rd_backend, it is the backend id, if\n> the relation is temporary. The differnece is RelationCreateStorage() uses\n> relpersistence to determinate the backend id.\n\nCommitted, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 12 Feb 2024 11:13:26 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unnecessary smgropen in {heapam_relation,index}_copy_data?" } ]
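For readers skimming this thread, the shape of the simplification is roughly the following; this is a sketch based on the snippet quoted above, not necessarily the committed diff. RelationCreateStorage() calls smgropen() internally and, in recent branches, returns the resulting SMgrRelation, so the caller can take that return value instead of opening the relation a second time:

/*
 * Sketch only: instead of a separate
 *     dstrel = smgropen(*newrlocator, rel->rd_backend);
 * followed by RelationCreateStorage(), use the SMgrRelation that
 * RelationCreateStorage() itself returns.  The backend is then derived
 * from relpersistence rather than rel->rd_backend, which is the point
 * Aleksander raises above.
 */
dstrel = RelationCreateStorage(*newrlocator, rel->rd_rel->relpersistence, true);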
[ { "msg_contents": "Add better handling of redundant IS [NOT] NULL quals\n\nUntil now PostgreSQL has not been very smart about optimizing away IS\nNOT NULL base quals on columns defined as NOT NULL. The evaluation of\nthese needless quals adds overhead. Ordinarily, anyone who came\ncomplaining about that would likely just have been told to not include\nthe qual in their query if it's not required. However, a recent bug\nreport indicates this might not always be possible.\n\nBug 17540 highlighted that when we optimize Min/Max aggregates the IS NOT\nNULL qual that the planner adds to make the rewritten plan ignore NULLs\ncan cause issues with poor index choice. That particular case\ndemonstrated that other quals, especially ones where no statistics are\navailable to allow the planner a chance at estimating an approximate\nselectivity for can result in poor index choice due to cheap startup paths\nbeing prefered with LIMIT 1.\n\nHere we take generic approach to fixing this by having the planner check\nfor NOT NULL columns and just have the planner remove these quals (when\nthey're not needed) for all queries, not just when optimizing Min/Max\naggregates.\n\nAdditionally, here we also detect IS NULL quals on a NOT NULL column and\ntransform that into a gating qual so that we don't have to perform the\nscan at all. This also works for join relations when the Var is not\nnullable by any outer join.\n\nThis also helps with the self-join removal work as it must replace\nstrict join quals with IS NOT NULL quals to ensure equivalence with the\noriginal query.\n\nAuthor: David Rowley, Richard Guo, Andy Fan\nReviewed-by: Richard Guo, David Rowley\nDiscussion: https://postgr.es/m/CAApHDvqg6XZDhYRPz0zgOcevSMo0d3vxA9DvHrZtKfqO30WTnw@mail.gmail.com\nDiscussion: https://postgr.es/m/17540-7aa1855ad5ec18b4%40postgresql.org\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/b262ad440edecda0b1aba81d967ab560a83acb8a\n\nModified Files\n--------------\ncontrib/postgres_fdw/expected/postgres_fdw.out | 16 +-\ncontrib/postgres_fdw/sql/postgres_fdw.sql | 4 +-\nsrc/backend/optimizer/plan/initsplan.c | 197 +++++++++++++++++++-\nsrc/backend/optimizer/util/joininfo.c | 28 +++\nsrc/backend/optimizer/util/plancat.c | 19 ++\nsrc/backend/optimizer/util/relnode.c | 3 +\nsrc/include/nodes/pathnodes.h | 7 +-\nsrc/include/optimizer/planmain.h | 4 +\nsrc/test/regress/expected/equivclass.out | 18 +-\nsrc/test/regress/expected/join.out | 67 ++++---\nsrc/test/regress/expected/predicate.out | 244 +++++++++++++++++++++++++\nsrc/test/regress/parallel_schedule | 2 +-\nsrc/test/regress/sql/predicate.sql | 122 +++++++++++++\n13 files changed, 664 insertions(+), 67 deletions(-)", "msg_date": "Tue, 23 Jan 2024 05:09:43 +0000", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql: Add better handling of redundant IS [NOT] NULL quals" }, { "msg_contents": "On 2024-Jan-23, David Rowley wrote:\n\n> Add better handling of redundant IS [NOT] NULL quals\n> \n> Until now PostgreSQL has not been very smart about optimizing away IS\n> NOT NULL base quals on columns defined as NOT NULL.\n\nHmm, what happens if a NOT NULL constraint is dropped and you have such\na plan in plancache? As I recall, lack of a mechanism to invalidate\nsuch plans was the main reason for Postgres not to have this. One of\nthe motivations for adding catalogued NOT NULL constraints was precisely\nto have an OID that you could use to cause plancache to invalidate such\na plan. 
Does this new code add something like that?\n\nAdmittedly I didn't read the threads or the patch, just skimmed for some\nclues, so I may have failed to notice it. But in the tests you added I\ndon't see any ALTER TABLE DROP CONSTRAINT.\n\n\n(Similarly, allowing GROUP BY to ignore columns not in the GROUP BY,\nwhen a UNIQUE constraint exists and all columns are NOT NULL; currently\nwe allow that for PRIMARY KEY, but if you have the NOT NULL constraint\nOIDs to cue the plan invalidation would let that case to be implemented\nas well.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Por suerte hoy explotó el califont porque si no me habría muerto\n de aburrido\" (Papelucho)\n\n\n", "msg_date": "Tue, 23 Jan 2024 20:15:38 +0100", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add better handling of redundant IS [NOT] NULL quals" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2024-Jan-23, David Rowley wrote:\n>> Until now PostgreSQL has not been very smart about optimizing away IS\n>> NOT NULL base quals on columns defined as NOT NULL.\n\n> Hmm, what happens if a NOT NULL constraint is dropped and you have such\n> a plan in plancache? As I recall, lack of a mechanism to invalidate\n> such plans was the main reason for Postgres not to have this.\n\nIIRC, we realized that that concern was bogus. Removal of such\nconstraints would cause pg_attribute.attnotnull to change, leading\nto a relcache invalidation on the table, forcing replan. If anyone\ntried to get rid of attnotnull or make it incompletely reliable,\nthen we'd have problems; but AFAIK that's not being contemplated.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Jan 2024 14:38:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add better handling of redundant IS [NOT] NULL quals" }, { "msg_contents": "On Wed, 24 Jan 2024 at 08:15, Alvaro Herrera <[email protected]> wrote:\n> (Similarly, allowing GROUP BY to ignore columns not in the GROUP BY,\n> when a UNIQUE constraint exists and all columns are NOT NULL; currently\n> we allow that for PRIMARY KEY, but if you have the NOT NULL constraint\n> OIDs to cue the plan invalidation would let that case to be implemented\n> as well.)\n\nI recall some discussion about the GROUP BY case. I think at the time\nthere might have been some confusion with plan cache invalidation and\ninvalidating views that have been created with columns in the target\nlist which are functionally dependent on columns in the GROUP BY.\n\ni.e, given:\n\ncreate table ab (a int primary key, b int not null unique);\n\nthe following works:\n\ncreate view v_ab1 as select a,b from ab group by a; -- works\n\nbut this one does not:\n\ncreate view v_ab2 as select a,b from ab group by b; -- does not work\nERROR: column \"ab.a\" must appear in the GROUP BY clause or be used in\nan aggregate function\nLINE 1: create view v_ab2 as select a,b from ab group by b;\n\nI think thanks to your work on adding pg_constraint records for NOT\nNULL conditions, the latter case could now be made to work.\n\nAs for the plan optimisation, I agree with Tom about the relcache\ninvalidation triggering a replan. Maybe it's worth adding a test to\nensure the replan is done after a ALTER TABLE ... 
DROP NOT NULL,\nhowever.\n\nDavid\n\n\n", "msg_date": "Wed, 24 Jan 2024 10:02:16 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add better handling of redundant IS [NOT] NULL quals" } ]
[ { "msg_contents": "Similar to what we did to GROUP BY keys in 0452b461bc, I think we can do\nthe same to DISTINCT keys, i.e. reordering DISTINCT keys to match input\npath's pathkeys, which can help reduce cost by avoiding unnecessary\nre-sort, or allowing us to use incremental-sort to save efforts. For\ninstance,\n\ncreate table t (a int, b int);\ncreate index on t (a, b);\n\nexplain (costs off) select distinct b, a from t limit 10;\n QUERY PLAN\n--------------------------------------------------\n Limit\n -> Unique\n -> Index Only Scan using t_a_b_idx on t\n(3 rows)\n\n\nPlease note that the parser has ensured that the DISTINCT pathkeys\nmatches the order of ORDER BY clauses. So there is no need to do this\npart again.\n\nIn principle, we can perform such reordering for DISTINCT ON too, but we\nneed to make sure that the resulting pathkeys matches initial ORDER BY\nkeys, which seems not trivial. So it doesn't seem worth the effort.\n\nAttached is a patch for this optimization. Any thoughts?\n\nThanks\nRichard", "msg_date": "Tue, 23 Jan 2024 13:55:54 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Reordering DISTINCT keys to match input path's pathkeys" }, { "msg_contents": "On Tue, 2024-01-23 at 13:55 +0800, Richard Guo wrote:\n> Similar to what we did to GROUP BY keys in 0452b461bc, I think we can do\n> the same to DISTINCT keys, i.e. reordering DISTINCT keys to match input\n> path's pathkeys, which can help reduce cost by avoiding unnecessary\n> re-sort, or allowing us to use incremental-sort to save efforts.\n> \n> Attached is a patch for this optimization.  Any thoughts?\n\nI didn't scrutinize the code, but that sounds like a fine optimization.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 23 Jan 2024 09:04:22 +0100", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reordering DISTINCT keys to match input path's pathkeys" }, { "msg_contents": "On Tue, 23 Jan 2024 at 18:56, Richard Guo <[email protected]> wrote:\n>\n> Similar to what we did to GROUP BY keys in 0452b461bc, I think we can do\n> the same to DISTINCT keys, i.e. reordering DISTINCT keys to match input\n> path's pathkeys, which can help reduce cost by avoiding unnecessary\n> re-sort, or allowing us to use incremental-sort to save efforts. For\n> instance,\n\nI've not caught up on the specifics of 0452b461b, but I just wanted to\nhighlight that there was some work done in [1] in this area. It seems\nAnkit didn't ever add that to a CF, so that might explain why it's\nbeen lost.\n\nAnyway, just pointing it out as there may be useful code or discussion\nin the corresponding threads.\n\nDavid\n\n[1] https://postgr.es/m/da9425ae-8ff7-33d9-23b3-2a3eb605e106%40gmail.com\n\n\n", "msg_date": "Tue, 23 Jan 2024 22:03:23 +1300", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reordering DISTINCT keys to match input path's pathkeys" }, { "msg_contents": "On Tue, Jan 23, 2024 at 5:03 PM David Rowley <[email protected]> wrote:\n\n> I've not caught up on the specifics of 0452b461b, but I just wanted to\n> highlight that there was some work done in [1] in this area. It seems\n> Ankit didn't ever add that to a CF, so that might explain why it's\n> been lost.\n>\n> Anyway, just pointing it out as there may be useful code or discussion\n> in the corresponding threads.\n\n\nThanks for pointing it out. 
I looked at the patch there and noticed\nseveral problems with it.\n\n* That patch is incomplete and does not work as expected. It at least\nneeds to modify truncate_useless_pathkeys() to account for DISTINCT\nclause (I think this has been mentioned in that thread).\n\n* That patch would not consider the origin DISTINCT pathkeys if it could\ndo some reordering, which is not great and can generate inefficient\nplans. For instance (after fixing the first problem)\n\ncreate table t (a int, b int);\ncreate index on t(a);\n\nset enable_hashagg to off;\nset enable_incremental_sort to off;\nset enable_seqscan to off;\n\nexplain (costs off) select distinct b, a from t order by b, a;\n QUERY PLAN\n-------------------------------------------------\n Sort\n Sort Key: b, a\n -> Unique\n -> Sort\n Sort Key: a, b\n -> Index Scan using t_a_idx on t\n(6 rows)\n\nUsing DISTINCT pathkeys {b, a} is more efficient for this plan, because\nonly one Sort would be required. But that patch is not able to do that,\nbecause it does not consider the origin DISTINCT pathkeys after\nreordering.\n\nThanks\nRichard\n\nOn Tue, Jan 23, 2024 at 5:03 PM David Rowley <[email protected]> wrote:\nI've not caught up on the specifics of 0452b461b, but I just wanted to\nhighlight that there was some work done in [1] in this area.  It seems\nAnkit didn't ever add that to a CF, so that might explain why it's\nbeen lost.\n\nAnyway, just pointing it out as there may be useful code or discussion\nin the corresponding threads.Thanks for pointing it out.  I looked at the patch there and noticedseveral problems with it.* That patch is incomplete and does not work as expected.  It at leastneeds to modify truncate_useless_pathkeys() to account for DISTINCTclause (I think this has been mentioned in that thread).* That patch would not consider the origin DISTINCT pathkeys if it coulddo some reordering, which is not great and can generate inefficientplans.  For instance (after fixing the first problem)create table t (a int, b int);create index on t(a);set enable_hashagg to off;set enable_incremental_sort to off;set enable_seqscan to off;explain (costs off) select distinct b, a from t order by b, a;                   QUERY PLAN------------------------------------------------- Sort   Sort Key: b, a   ->  Unique         ->  Sort               Sort Key: a, b               ->  Index Scan using t_a_idx on t(6 rows)Using DISTINCT pathkeys {b, a} is more efficient for this plan, becauseonly one Sort would be required.  But that patch is not able to do that,because it does not consider the origin DISTINCT pathkeys afterreordering.ThanksRichard", "msg_date": "Fri, 26 Jan 2024 18:48:39 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reordering DISTINCT keys to match input path's pathkeys" }, { "msg_contents": "cfbot reminds that this patch does not apply any more. So I've rebased\nit on master, and also adjusted the test cases a bit.\n\nThanks\nRichard", "msg_date": "Mon, 5 Feb 2024 11:18:18 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reordering DISTINCT keys to match input path's pathkeys" }, { "msg_contents": "On Mon, Feb 5, 2024 at 11:18 AM Richard Guo <[email protected]> wrote:\n> cfbot reminds that this patch does not apply any more. 
So I've rebased\n> it on master, and also adjusted the test cases a bit.\n\nThis patch does not apply any more, so here is a new rebase, with some\ntweaks to the comments.\n\nThanks\nRichard", "msg_date": "Fri, 7 Jun 2024 17:46:54 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Reordering DISTINCT keys to match input path's pathkeys" } ]
[ { "msg_contents": "We recently made corrections to the capitalization of DETAIL messages.\nFor example;\n\n-\t\tGUC_check_errdetail(\"invalid list syntax in parameter %s\",\n+\t\tGUC_check_errdetail(\"Invalid list syntax in parameter %s\",\n\nBut still it is missing the period at the end.\n\nThere are several patterns to this issue, but this time, I have only\nfixed the ones that are simple and obvious as follows:\n\na. GUC_check_errdetail(\"LITERAL\"), errdetail(\"LITERAL\") without a period.\nb. psprintf()'ed string that is passed to errdetail_internal()\n\nI didn't touch the following patterns:\n\nc. errdetail_internal(\"%s\")\nd. errdetail(\"Whatever: %s\")\ne. errdetail(\"%s versus %s\") and alikes\nf. errdetail_internal(\"%s\", pchomp(PQerrorMessage()))\ng. complex message compilation\n\n\nThe attached patch contains the following fix:\n\n-\t\t\t\tGUC_check_errdetail(\"timestamp out of range: \\\"%s\\\"\", str);\n+\t\t\t\tGUC_check_errdetail(\"Timestamp out of range: \\\"%s\\\".\", str);\n\nBut I'm not quite confident about whether capitalizing the type name\nhere is correct.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 23 Jan 2024 16:33:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Fix some errdetail's message format" } ]