[ { "msg_contents": "I noticed that buildfarm member batfish has been complaining like\nthis for awhile:\n\nhmac_openssl.c:90:1: warning: unused function 'ResourceOwnerRememberHMAC' [-Wunused-function]\nhmac_openssl.c:95:1: warning: unused function 'ResourceOwnerForgetHMAC' [-Wunused-function]\n\nLooking at the code, this is all from commit e6bdfd970, and apparently\nbatfish is our only animal that doesn't HAVE_HMAC_CTX_NEW. I tried to\nunderstand the #if nesting and soon got very confused. I don't think\nit is helpful to put the resource owner manipulations inside #ifdef\nHAVE_HMAC_CTX_NEW and HAVE_HMAC_CTX_FREE --- probably, it would never\nbe the case that only one of those is defined, but it just seems\nmessy. What do you think of rearranging it as attached?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 01 Apr 2024 20:01:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Confusing #if nesting in hmac_openssl.c" }, { "msg_contents": "> On 2 Apr 2024, at 02:01, Tom Lane <[email protected]> wrote:\n\n> hmac_openssl.c:90:1: warning: unused function 'ResourceOwnerRememberHMAC' [-Wunused-function]\n> hmac_openssl.c:95:1: warning: unused function 'ResourceOwnerForgetHMAC' [-Wunused-function]\n> \n> Looking at the code, this is all from commit e6bdfd970, and apparently\n> batfish is our only animal that doesn't HAVE_HMAC_CTX_NEW.\n\nThanks for looking at this, it's been on my TODO for some time. It's a warning\nwhich only shows up when building against 1.0.2, the functions are present in\n1.1.0 and onwards (while deprecated in 3.0).\n\n> I don't think\n> it is helpful to put the resource owner manipulations inside #ifdef\n> HAVE_HMAC_CTX_NEW and HAVE_HMAC_CTX_FREE --- probably, it would never\n> be the case that only one of those is defined,\n\nCorrect, no version of OpenSSL has only one of them defined.\n\n> What do you think of rearranging it as attached?\n\n+1 on this patch, it makes the #ifdef soup more readable. We could go even\nfurther and remove the HAVE_HMAC defines completely with USE_RESOWNER_FOR_HMAC\nbeing set by autoconf/meson? I've attached an untested sketch diff to\nillustrate.\n\nA related tangent. If we assembled the data to calculate on ourselves rather\nthan rely on OpenSSL to do it with subsequent _update calls we could instead\nuse the simpler HMAC() API from OpenSSL. That would remove the need for the\nHMAC_CTX and resource owner tracking entirely and just have our pg_hmac_ctx.\nThats clearly not for this patch though, just thinking out loud that we set up\nOpenSSL infrastructure that we don't really use.\n\n--\nDaniel Gustafsson", "msg_date": "Tue, 2 Apr 2024 14:18:10 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confusing #if nesting in hmac_openssl.c" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 2 Apr 2024, at 02:01, Tom Lane <[email protected]> wrote:\n>> I don't think\n>> it is helpful to put the resource owner manipulations inside #ifdef\n>> HAVE_HMAC_CTX_NEW and HAVE_HMAC_CTX_FREE --- ...\n>> What do you think of rearranging it as attached?\n\n> +1 on this patch, it makes the #ifdef soup more readable.\n\nThanks for looking at it.\n\n> We could go even\n> further and remove the HAVE_HMAC defines completely with USE_RESOWNER_FOR_HMAC\n> being set by autoconf/meson? 
I've attached an untested sketch diff to\n> illustrate.\n\nI'm inclined to think that won't work, because we need the HAVE_\nmacros separately to compile correct frontend code.\n\n> A related tangent. If we assembled the data to calculate on ourselves rather\n> than rely on OpenSSL to do it with subsequent _update calls we could instead\n> use the simpler HMAC() API from OpenSSL. That would remove the need for the\n> HMAC_CTX and resource owner tracking entirely and just have our pg_hmac_ctx.\n> Thats clearly not for this patch though, just thinking out loud that we set up\n> OpenSSL infrastructure that we don't really use.\n\nSimplifying like that could be good, but I'm not volunteering.\nFor the moment I'd just like to silence the buildfarm warning,\nso I'll go ahead with what I have.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Apr 2024 09:50:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Confusing #if nesting in hmac_openssl.c" }, { "msg_contents": "> On 2 Apr 2024, at 15:50, Tom Lane <[email protected]> wrote:\n\n> I'll go ahead with what I have.\n\n+1\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 2 Apr 2024 15:56:13 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confusing #if nesting in hmac_openssl.c" }, { "msg_contents": "On Tue, Apr 02, 2024 at 03:56:13PM +0200, Daniel Gustafsson wrote:\n> > On 2 Apr 2024, at 15:50, Tom Lane <[email protected]> wrote:\n> \n> > I'll go ahead with what I have.\n> \n> +1\n\n+#ifdef USE_RESOWNER_FOR_HMAC \n\nWhy not, that's cleaner. Thanks for the commit. The interactions\nbetween this code and b8bff07da are interesting.\n--\nMichael", "msg_date": "Wed, 3 Apr 2024 15:18:10 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Confusing #if nesting in hmac_openssl.c" } ]
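A minimal sketch of the #if flattening discussed in this thread, under the assumption that the committed fix derives a single USE_RESOWNER_FOR_HMAC symbol from the two OpenSSL feature probes (that symbol and the two helper names appear in the messages above; everything else here, including the stub bodies and the main() driver, is an illustrative stand-in rather than the real hmac_openssl.c code):

#include <stdio.h>

/* Assume configure/meson probes of the OpenSSL headers set these. */
#define HAVE_HMAC_CTX_NEW 1
#define HAVE_HMAC_CTX_FREE 1

/* Derive one switch rather than testing both HAVE_ macros at every site. */
#if defined(HAVE_HMAC_CTX_NEW) && defined(HAVE_HMAC_CTX_FREE)
#define USE_RESOWNER_FOR_HMAC
#endif

#ifdef USE_RESOWNER_FOR_HMAC
/*
 * Only compiled when the HMAC context is heap-allocated and must be tracked,
 * so a build against OpenSSL 1.0.2 never even sees these static helpers.
 */
static void ResourceOwnerRememberHMAC(void *ctx)
{
    printf("remember HMAC context %p\n", ctx);
}

static void ResourceOwnerForgetHMAC(void *ctx)
{
    printf("forget HMAC context %p\n", ctx);
}
#endif

int main(void)
{
#ifdef USE_RESOWNER_FOR_HMAC
    int ctx;    /* stand-in for a context allocated with HMAC_CTX_new() */

    ResourceOwnerRememberHMAC(&ctx);
    ResourceOwnerForgetHMAC(&ctx);
#else
    printf("HMAC_CTX lives in a plain struct, nothing to track\n");
#endif
    return 0;
}

With only one symbol to test, a build that lacks HMAC_CTX_new/HMAC_CTX_free simply compiles both helpers out, which is what silences the -Wunused-function warnings batfish reported.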
[ { "msg_contents": "Hello all.\nRecently, when working with the hstore and json formats, I came across the\nfact that PostgreSQL has a cast of hstore to json, but there is no reverse\ncast. I thought it might make it more difficult to work with these formats.\nAnd I decided to make a cast json in the hstore. I used the built-in jsonb\nstructure to create it and may have introduced methods to increase\nefficiency by 25% than converting the form jsonb->text->hstore. Which of\ncourse is a good fact. I also wrote regression tests to check the\nperformance. I think this extension will improve the work with jsonb and\nhstore in PostgreSQL.\nIf you've read this far, thank you for your interest, and I hope you enjoy\nthis extension!\n---- Antoine", "msg_date": "Tue, 2 Apr 2024 18:07:47 +0700", "msg_from": "ShadowGhost <[email protected]>", "msg_from_op": true, "msg_subject": "Extension for PostgreSQL cast jsonb to hstore WIP" }, { "msg_contents": "\nOn 2024-04-02 Tu 07:07, ShadowGhost wrote:\n> Hello all.\n> Recently, when working with the hstore and json formats, I came across \n> the fact that PostgreSQL has a cast of hstore to json, but there is no \n> reverse cast. I thought it might make it more difficult to work with \n> these formats. And I decided to make a cast json in the hstore. I used \n> the built-in jsonb structure to create it and may have introduced \n> methods to increase efficiency by 25% than converting the form \n> jsonb->text->hstore. Which of course is a good fact. I also wrote \n> regression tests to check the performance. I think this extension will \n> improve the work with jsonb and hstore in PostgreSQL.\n> If you've read this far, thank you for your interest, and I hope you \n> enjoy this extension!\n>\n\nOne reason we don't have such a cast is that hstore has a flat \nstructure, while json is tree structured, and it's not always an object \n/ hash. Thus it's easy to reliably cast hstore to json but far less easy \nto cast json to hstore in the general case.\n\nWhat do you propose to do in the case or json consisting of scalars, or \narrays, or with nested elements?\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 2 Apr 2024 08:47:57 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension for PostgreSQL cast jsonb to hstore WIP" }, { "msg_contents": "At the moment, this cast supports only these structures, as it was enough\nfor my tasks:\n{str:numeric}\n{str:str}\n{str:bool}\n{str:null}\nBut it's a great idea and I'll think about implementing it.\n\nвт, 2 апр. 2024 г. в 19:48, Andrew Dunstan <[email protected]>:\n\n>\n> On 2024-04-02 Tu 07:07, ShadowGhost wrote:\n> > Hello all.\n> > Recently, when working with the hstore and json formats, I came across\n> > the fact that PostgreSQL has a cast of hstore to json, but there is no\n> > reverse cast. I thought it might make it more difficult to work with\n> > these formats. And I decided to make a cast json in the hstore. I used\n> > the built-in jsonb structure to create it and may have introduced\n> > methods to increase efficiency by 25% than converting the form\n> > jsonb->text->hstore. Which of course is a good fact. I also wrote\n> > regression tests to check the performance. 
I think this extension will\n> > improve the work with jsonb and hstore in PostgreSQL.\n> > If you've read this far, thank you for your interest, and I hope you\n> > enjoy this extension!\n> >\n>\n> One reason we don't have such a cast is that hstore has a flat\n> structure, while json is tree structured, and it's not always an object\n> / hash. Thus it's easy to reliably cast hstore to json but far less easy\n> to cast json to hstore in the general case.\n>\n> What do you propose to do in the case or json consisting of scalars, or\n> arrays, or with nested elements?\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n\nAt the moment, this cast supports only these structures, as it was enough for my tasks: {str:numeric}{str:str}{str:bool}{str:null}But it's a great idea and I'll think about implementing it.вт, 2 апр. 2024 г. в 19:48, Andrew Dunstan <[email protected]>:\nOn 2024-04-02 Tu 07:07, ShadowGhost wrote:\n> Hello all.\n> Recently, when working with the hstore and json formats, I came across \n> the fact that PostgreSQL has a cast of hstore to json, but there is no \n> reverse cast. I thought it might make it more difficult to work with \n> these formats. And I decided to make a cast json in the hstore. I used \n> the built-in jsonb structure to create it and may have introduced \n> methods to increase efficiency by 25% than converting the form \n> jsonb->text->hstore. Which of course is a good fact. I also wrote \n> regression tests to check the performance. I think this extension will \n> improve the work with jsonb and hstore in PostgreSQL.\n> If you've read this far, thank you for your interest, and I hope you \n> enjoy this extension!\n>\n\nOne reason we don't have such a cast is that hstore has a flat \nstructure, while json is tree structured, and it's not always an object \n/ hash. Thus it's easy to reliably cast hstore to json but far less easy \nto cast json to hstore in the general case.\n\nWhat do you propose to do in the case or json consisting of scalars, or \narrays, or with nested elements?\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 2 Apr 2024 22:43:09 +0700", "msg_from": "ShadowGhost <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extension for PostgreSQL cast jsonb to hstore WIP" }, { "msg_contents": "On 2024-04-02 Tu 11:43, ShadowGhost wrote:\n> At the moment, this cast supports only these structures, as it was \n> enough for my tasks:\n> {str:numeric}\n> {str:str}\n> {str:bool}\n> {str:null}\n> But it's a great idea and I'll think about implementing it.\n\n\nPlease don't top-post on the PostgreSQL lists. See \n<https://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics>\n\nI don't think a cast that doesn't cater for all the forms json can take \nis going to work very well. At the very least you would need to error \nout in cases you didn't want to cover, and have tests for all of those \nerrors. But the above is only a tiny fraction of those. 
If the error \ncases are going to be so much more than the cases that work it seems a \nbit pointless.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-04-02 Tu 11:43, ShadowGhost\n wrote:\n\n\n\nAt\nthe moment, this cast supports only these structures, as it was enough for my tasks: \n {str:numeric}\n{str:str}\n\n{str:bool}\n\n{str:null}\n\nBut it's a great idea and I'll think about implementing it.\n\n\n\n\n\nPlease don't top-post on the PostgreSQL lists. See\n<https://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics>\nI don't think a cast that doesn't cater for all the forms json\n can take is going to work very well. At the very least you would\n need to error out in cases you didn't want to cover, and have\n tests for all of those errors. But the above is only a tiny\n fraction of those. If the error cases are going to be so much more\n than the cases that work it seems a bit pointless.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 2 Apr 2024 17:21:18 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension for PostgreSQL cast jsonb to hstore WIP" }, { "msg_contents": "On 2024-04-03 Wn 04:21, Andrew Dunstan\n\n> I don't think a cast that doesn't cater for all the forms json can take is\n> going to work very well. At the very least you would need to error out in\n> cases you didn't want to cover, and have tests for all of those errors. But\n> the above is only a tiny fraction of those. If the error cases are going to\n> be so much more than the cases that work it seems a bit pointless.\n>\nHi everyone\nI changed my mail account to be officially displayed in the correspondence.\nI also made an error conclusion if we are given an incorrect value. I\nbelieve that such a cast is needed by PostgreSQL since we already have\nseveral incomplete casts, but they perform their duties well and help in\nthe right situations.\n\ncheers\nAntoine Violin\n\nAntoine\n\nOn Mon, Jul 15, 2024 at 12:42 PM Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 2024-04-02 Tu 11:43, ShadowGhost wrote:\n>\n> At the moment, this cast supports only these structures, as it was enough\n> for my tasks:\n> {str:numeric}\n> {str:str}\n> {str:bool}\n> {str:null}\n> But it's a great idea and I'll think about implementing it.\n>\n>\n> Please don't top-post on the PostgreSQL lists. See\n> <https://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics>\n> <https://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics>\n>\n> I don't think a cast that doesn't cater for all the forms json can take is\n> going to work very well. At the very least you would need to error out in\n> cases you didn't want to cover, and have tests for all of those errors. But\n> the above is only a tiny fraction of those. If the error cases are going to\n> be so much more than the cases that work it seems a bit pointless.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n\nOn 2024-04-03 Wn 04:21, Andrew DunstanI don't think a cast that doesn't cater for all the forms json can take is going to work very well. At the very least you would need to error out in cases you didn't want to cover, and have tests for all of those errors. But the above is only a tiny fraction of those. 
If the error cases are going to be so much more than the cases that work it seems a bit pointless.Hi everyoneI changed my mail account to be officially displayed in the correspondence.I also made an error conclusion if we are given an incorrect value. Ibelieve that such a cast is needed by PostgreSQL since we already haveseveral incomplete casts, but they perform their duties well and help inthe right situations.\ncheers\nAntoine Violin\nAntoineOn Mon, Jul 15, 2024 at 12:42 PM Andrew Dunstan <[email protected]> wrote:\n\n\n\nOn 2024-04-02 Tu 11:43, ShadowGhost\n wrote:\n\n\nAt\nthe moment, this cast supports only these structures, as it was enough for my tasks: \n {str:numeric}\n{str:str}\n\n{str:bool}\n\n{str:null}\n\nBut it's a great idea and I'll think about implementing it.\n\n\n\n\n\nPlease don't top-post on the PostgreSQL lists. See\n<https://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics>\nI don't think a cast that doesn't cater for all the forms json\n can take is going to work very well. At the very least you would\n need to error out in cases you didn't want to cover, and have\n tests for all of those errors. But the above is only a tiny\n fraction of those. If the error cases are going to be so much more\n than the cases that work it seems a bit pointless.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 15 Jul 2024 12:43:08 +0700", "msg_from": "=?UTF-8?B?0JDQvdGC0YPQsNC9INCS0LjQvtC70LjQvQ==?=\n <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension for PostgreSQL cast jsonb to hstore WIP" }, { "msg_contents": "On Mon, Jul 15, 2024 at 12:44 PM Антуан Виолин <[email protected]>\nwrote:\n\n> On 2024-04-03 Wn 04:21, Andrew Dunstan\n>\n>> I don't think a cast that doesn't cater for all the forms json can take\n>> is going to work very well. At the very least you would need to error out\n>> in cases you didn't want to cover, and have tests for all of those errors.\n>> But the above is only a tiny fraction of those. If the error cases are\n>> going to be so much more than the cases that work it seems a bit pointless.\n>>\n> Hi everyone\n> I changed my mail account to be officially displayed in the correspondence.\n> I also made an error conclusion if we are given an incorrect value. I\n> believe that such a cast is needed by PostgreSQL since we already have\n> several incomplete casts, but they perform their duties well and help in\n> the right situations.\n>\n> cheers\n> Antoine Violin\n>\n> Antoine\n>\n> On Mon, Jul 15, 2024 at 12:42 PM Andrew Dunstan <[email protected]>\n> wrote:\n>\n>>\n>> On 2024-04-02 Tu 11:43, ShadowGhost wrote:\n>>\n>> At the moment, this cast supports only these structures, as it was enough\n>> for my tasks:\n>> {str:numeric}\n>> {str:str}\n>> {str:bool}\n>> {str:null}\n>> But it's a great idea and I'll think about implementing it.\n>>\n>>\n>> Please don't top-post on the PostgreSQL lists. See\n>> <https://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics>\n>> <https://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics>\n>>\n>> I don't think a cast that doesn't cater for all the forms json can take\n>> is going to work very well. At the very least you would need to error out\n>> in cases you didn't want to cover, and have tests for all of those errors.\n>> But the above is only a tiny fraction of those. 
If the error cases are\n>> going to be so much more than the cases that work it seems a bit pointless.\n>>\n>>\n>> cheers\n>>\n>>\n>> andrew\n>>\n>> --\n>> Andrew Dunstan\n>> EDB: https://www.enterprisedb.com\n>>\n>>\n\nHi! I agree in some cases this cast can be useful.\nI Have several comments about the patch:\n1)I think we should call pfree on pairs(now we call palloc, but not pfree)\n2)I think we should add error handling of load_external_function or maybe\nrewrite using of DirectFunctionCall\n3)i think we need replace all strdup occurences to pstrdup\n4)why such a complex system , you first make global variables there to load\na link to functions there, and then wrap this pointer to a function through\na define?\n5) postgres=# SELECT '{\"aaa\": \"first_value\", \"aaa\":\n\"second_value\"}'::jsonb::hstore;\n hstore\n-----------------------\n \"aaa\"=>\"second_value\"\n(1 row)\nis it documented behaviour?\n\nOn Mon, Jul 15, 2024 at 12:44 PM Антуан Виолин <[email protected]> wrote:On 2024-04-03 Wn 04:21, Andrew DunstanI don't think a cast that doesn't cater for all the forms json can take is going to work very well. At the very least you would need to error out in cases you didn't want to cover, and have tests for all of those errors. But the above is only a tiny fraction of those. If the error cases are going to be so much more than the cases that work it seems a bit pointless.Hi everyoneI changed my mail account to be officially displayed in the correspondence.I also made an error conclusion if we are given an incorrect value. Ibelieve that such a cast is needed by PostgreSQL since we already haveseveral incomplete casts, but they perform their duties well and help inthe right situations.\ncheers\nAntoine Violin\nAntoineOn Mon, Jul 15, 2024 at 12:42 PM Andrew Dunstan <[email protected]> wrote:\n\n\n\nOn 2024-04-02 Tu 11:43, ShadowGhost\n wrote:\n\n\nAt\nthe moment, this cast supports only these structures, as it was enough for my tasks: \n {str:numeric}\n{str:str}\n\n{str:bool}\n\n{str:null}\n\nBut it's a great idea and I'll think about implementing it.\n\n\n\n\n\nPlease don't top-post on the PostgreSQL lists. See\n<https://wiki.postgresql.org/wiki/Mailing_Lists#Email_etiquette_mechanics>\nI don't think a cast that doesn't cater for all the forms json\n can take is going to work very well. At the very least you would\n need to error out in cases you didn't want to cover, and have\n tests for all of those errors. But the above is only a tiny\n fraction of those. If the error cases are going to be so much more\n than the cases that work it seems a bit pointless.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.comHi! 
I agree in some cases this cast can be useful.I Have several comments about the patch:1)I think we should call pfree on pairs(now we call palloc, but not pfree)2)I think we should add error handling of load_external_function or maybe rewrite using of DirectFunctionCall3)i think we need replace all strdup occurences to pstrdup4)why such a complex system\n, you first make global variables there to load a link to functions there, and then wrap this pointer to a function through a define?5) postgres=# SELECT '{\"aaa\": \"first_value\", \"aaa\": \"second_value\"}'::jsonb::hstore;        hstore         ----------------------- \"aaa\"=>\"second_value\"(1 row)is it documented behaviour?", "msg_date": "Mon, 15 Jul 2024 14:54:10 +0700", "msg_from": "Stepan Neretin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extension for PostgreSQL cast jsonb to hstore WIP" } ]
[ { "msg_contents": "meson.build has this code\n\n     ldopts = run_command(perl, '-MExtUtils::Embed', '-e', 'ldopts',\n check: true).stdout().strip()     undesired =\n run_command(perl_conf_cmd, 'ccdlflags', check:\n true).stdout().split()     undesired += run_command(perl_conf_cmd,\n 'ldflags', check: true).stdout().split()     perl_ldopts = []    \n foreach ldopt : ldopts.split(' ')       if ldopt == '' or ldopt in\n undesired         continue       endif       perl_ldopts +=\n ldopt.strip('\"')     endforeach     message('LDFLAGS recommended by\n perl: \"@0@\"'.format(ldopts))     message('LDFLAGS for embedding\n perl: \"@0@\"'.format(' '.join(perl_ldopts)))\n\n\nThis code is seriously broken if perl reports items including spaces, \nwhen a) removing the quotes is quite wrong, and b) splitting on spaces \nis also wrong.\n\nHere's an example from one of my colleagues:\n\n\n C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional>perl.EXE -MExtUtils::Embed -e ldopts\n -nologo -nodefaultlib -debug -opt:ref,icf -ltcg -libpath:\"C:\\edb\\languagepack\\v4\\Perl-5.38\\lib\\CORE\"\n -machine:AMD64 -subsystem:console,\"5.02\" \"C:\\edb\\languagepack\\v4\\Perl-5.38\\lib\\CORE\\perl538.lib\"\n \"C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.38.33130\\lib\\x64\\oldnames.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\kernel32.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\user32.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\gdi32.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\winspool.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\comdlg32.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\advapi32.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\shell32.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\ole32.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\oleaut32.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\netapi32.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\uuid.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\ws2_32.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\mpr.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\winmm.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\version.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\odbc32.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\odbccp32.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\comctl32.lib\"\n \"C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.38.33130\\lib\\x64\\msvcrt.lib\"\n \"C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.38.33130\\lib\\x64\\vcruntime.lib\"\n \"C:\\Program Files (x86)\\Windows Kits\\10\\lib\\10.0.22621.0\\ucrt\\x64\\ucrt.lib\"\n\nAnd with that we get errors like\n\n cl : Command line warning D9024 : unrecognized source file type 'C:\\Program', object file assumed\n cl : Command line warning D9024 : unrecognized source file type 
'Files\\Microsoft', object file assumed\n cl : Command line warning D9024 : unrecognized source file type 'Visual', object file assumed\n cl : Command line warning D9024 : unrecognized source file type 'C:\\Program', object file assumed\n cl : Command line warning D9024 : unrecognized source file type 'Files', object file assumed\n cl : Command line warning D9024 : unrecognized source file type '(x86)\\Windows', object file assumed\n\n\nIt looks like we need to get smarter about how we process the ldopts and strip out the ccdlflags and ldflags\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\nmeson.build has this code\n\n    ldopts = run_command(perl, '-MExtUtils::Embed', '-e', 'ldopts', check: true).stdout().strip()\n    undesired = run_command(perl_conf_cmd, 'ccdlflags', check: true).stdout().split()\n    undesired += run_command(perl_conf_cmd, 'ldflags', check: true).stdout().split()\n\n    perl_ldopts = []\n    foreach ldopt : ldopts.split(' ')\n      if ldopt == '' or ldopt in undesired\n        continue\n      endif\n\n      perl_ldopts += ldopt.strip('\"')\n    endforeach\n\n    message('LDFLAGS recommended by perl: \"@0@\"'.format(ldopts))\n    message('LDFLAGS for embedding perl: \"@0@\"'.format(' '.join(perl_ldopts)))\n\n\n\n\nThis code is seriously broken if perl\n reports items including spaces, when a) removing the quotes is\n quite wrong, and b) splitting on spaces is also wrong.\nHere's an example from one of my colleagues:\n\n\n\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Professional>perl.EXE -MExtUtils::Embed -e ldopts\n -nologo -nodefaultlib -debug -opt:ref,icf -ltcg -libpath:\"C:\\edb\\languagepack\\v4\\Perl-5.38\\lib\\CORE\" \n-machine:AMD64 -subsystem:console,\"5.02\" \"C:\\edb\\languagepack\\v4\\Perl-5.38\\lib\\CORE\\perl538.lib\" \n\"C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.38.33130\\lib\\x64\\oldnames.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\kernel32.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\user32.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\gdi32.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\winspool.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\comdlg32.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\advapi32.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\shell32.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\ole32.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\oleaut32.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\netapi32.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\uuid.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\ws2_32.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\mpr.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\winmm.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\version.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\odbc32.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\odbccp32.lib\" \n\"C:\\Program Files 
(x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\comctl32.lib\" \n\"C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.38.33130\\lib\\x64\\msvcrt.lib\" \n\"C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.38.33130\\lib\\x64\\vcruntime.lib\" \n\"C:\\Program Files (x86)\\Windows Kits\\10\\lib\\10.0.22621.0\\ucrt\\x64\\ucrt.lib\"\n\n\n\nAnd with that we get errors like\n\n\n\ncl : Command line warning D9024 : unrecognized source file type 'C:\\Program', object file assumed\ncl : Command line warning D9024 : unrecognized source file type 'Files\\Microsoft', object file assumed\ncl : Command line warning D9024 : unrecognized source file type 'Visual', object file assumed\ncl : Command line warning D9024 : unrecognized source file type 'C:\\Program', object file assumed\ncl : Command line warning D9024 : unrecognized source file type 'Files', object file assumed\ncl : Command line warning D9024 : unrecognized source file type '(x86)\\Windows', object file assumed\n\n\n\n\nIt looks like we need to get smarter about how we process the ldopts and strip out the ccdlflags and ldflags\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 2 Apr 2024 09:34:08 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "meson vs windows perl" }, { "msg_contents": "On 2024-04-02 Tu 09:34, Andrew Dunstan wrote:\n>\n> meson.build has this code\n>\n>     ldopts = run_command(perl, '-MExtUtils::Embed', '-e',\n> 'ldopts', check: true).stdout().strip()     undesired =\n> run_command(perl_conf_cmd, 'ccdlflags', check:\n> true).stdout().split()     undesired += run_command(perl_conf_cmd,\n> 'ldflags', check: true).stdout().split()     perl_ldopts = []    \n> foreach ldopt : ldopts.split(' ')       if ldopt == '' or ldopt in\n> undesired         continue       endif       perl_ldopts +=\n> ldopt.strip('\"')     endforeach     message('LDFLAGS recommended\n> by perl: \"@0@\"'.format(ldopts))     message('LDFLAGS for embedding\n> perl: \"@0@\"'.format(' '.join(perl_ldopts)))\n>\n>\n> This code is seriously broken if perl reports items including spaces, \n> when a) removing the quotes is quite wrong, and b) splitting on spaces \n> is also wrong.\n>\n> Here's an example from one of my colleagues:\n>\n>\n> C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional>perl.EXE -MExtUtils::Embed -e ldopts\n> -nologo -nodefaultlib -debug -opt:ref,icf -ltcg -libpath:\"C:\\edb\\languagepack\\v4\\Perl-5.38\\lib\\CORE\"\n> -machine:AMD64 -subsystem:console,\"5.02\" \"C:\\edb\\languagepack\\v4\\Perl-5.38\\lib\\CORE\\perl538.lib\"\n> \"C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.38.33130\\lib\\x64\\oldnames.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\kernel32.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\user32.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\gdi32.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\winspool.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\comdlg32.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\advapi32.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\shell32.lib\"\n> \"C:\\Program Files (x86)\\Windows 
Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\ole32.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\oleaut32.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\netapi32.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\uuid.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\ws2_32.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\mpr.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\winmm.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\version.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\odbc32.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\odbccp32.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\\\lib\\10.0.22621.0\\\\um\\x64\\comctl32.lib\"\n> \"C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.38.33130\\lib\\x64\\msvcrt.lib\"\n> \"C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.38.33130\\lib\\x64\\vcruntime.lib\"\n> \"C:\\Program Files (x86)\\Windows Kits\\10\\lib\\10.0.22621.0\\ucrt\\x64\\ucrt.lib\"\n>\n> And with that we get errors like\n>\n> cl : Command line warning D9024 : unrecognized source file type 'C:\\Program', object file assumed\n> cl : Command line warning D9024 : unrecognized source file type 'Files\\Microsoft', object file assumed\n> cl : Command line warning D9024 : unrecognized source file type 'Visual', object file assumed\n> cl : Command line warning D9024 : unrecognized source file type 'C:\\Program', object file assumed\n> cl : Command line warning D9024 : unrecognized source file type 'Files', object file assumed\n> cl : Command line warning D9024 : unrecognized source file type '(x86)\\Windows', object file assumed\n>\n>\n> It looks like we need to get smarter about how we process the ldopts and strip out the ccdlflags and ldflags\n>\n\nHere is an attempt to fix all that. It's ugly, but I think it's more \nprincipled.\n\nFirst, instead of getting the ldopts and then trying to filter out the \nldflags and ccdlflags, it tells perl not to include those in the first \nplace, by overriding a couple of routines in ExtUtils::Embed. And \nsecond, it's smarter about splitting what's left, so that it doesn't \nsplit on a space that's in a quoted item. The perl that's used to do \nthat second bit is not pretty, but it has been tested on the system \nwhere the problem arose and apparently cures the problem. (No doubt some \nperl guru could improve it.) It also works on my Ubuntu system, so I \ndon't think we'll be breaking anything (famous last words).\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Fri, 5 Apr 2024 08:25:53 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson vs windows perl" }, { "msg_contents": "\nOn 2024-04-05 Fr 08:25, Andrew Dunstan wrote:\n>\n>\n>\n> Here is an attempt to fix all that. It's ugly, but I think it's more \n> principled.\n>\n> First, instead of getting the ldopts and then trying to filter out the \n> ldflags and ccdlflags, it tells perl not to include those in the first \n> place, by overriding a couple of routines in ExtUtils::Embed. 
And \n> second, it's smarter about splitting what's left, so that it doesn't \n> split on a space that's in a quoted item. The perl that's used to do \n> that second bit is not pretty, but it has been tested on the system \n> where the problem arose and apparently cures the problem. (No doubt \n> some perl guru could improve it.) It also works on my Ubuntu system, \n> so I don't think we'll be breaking anything (famous last words).\n>\n>\n\nApparently I spoke too soon. Please ignore the above for now.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 5 Apr 2024 10:12:46 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson vs windows perl" }, { "msg_contents": "On 2024-04-05 Fr 10:12, Andrew Dunstan wrote:\n>\n> On 2024-04-05 Fr 08:25, Andrew Dunstan wrote:\n>>\n>>\n>>\n>> Here is an attempt to fix all that. It's ugly, but I think it's more \n>> principled.\n>>\n>> First, instead of getting the ldopts and then trying to filter out \n>> the ldflags and ccdlflags, it tells perl not to include those in the \n>> first place, by overriding a couple of routines in ExtUtils::Embed. \n>> And second, it's smarter about splitting what's left, so that it \n>> doesn't split on a space that's in a quoted item. The perl that's \n>> used to do that second bit is not pretty, but it has been tested on \n>> the system where the problem arose and apparently cures the problem. \n>> (No doubt some perl guru could improve it.) It also works on my \n>> Ubuntu system, so I don't think we'll be breaking anything (famous \n>> last words).\n>>\n>>\n>\n> Apparently I spoke too soon. Please ignore the above for now.\n>\n>\n>\n\nOK, this has been fixed and checked. The attached is what I propose.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 5 Apr 2024 16:12:12 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson vs windows perl" }, { "msg_contents": "Hi,\n\nOn 2024-04-05 16:12:12 -0400, Andrew Dunstan wrote:\n> OK, this has been fixed and checked. The attached is what I propose.\n\nThe perl command is pretty hard to read. What about using python's shlex\nmodule instead? Rough draft attached. Still not very pretty, but seems easier\nto read?\n\nIt'd be even better if we could just get perl to print out the flags in an\neasier to parse way, but I couldn't immediately see a way.\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 28 May 2024 15:13:50 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: meson vs windows perl" }, { "msg_contents": "On 2024-05-28 Tu 6:13 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2024-04-05 16:12:12 -0400, Andrew Dunstan wrote:\n>> OK, this has been fixed and checked. The attached is what I propose.\n> The perl command is pretty hard to read. What about using python's shlex\n> module instead? Rough draft attached. Still not very pretty, but seems easier\n> to read?\n>\n> It'd be even better if we could just get perl to print out the flags in an\n> easier to parse way, but I couldn't immediately see a way.\n>\n>\n\nThanks for looking.\n\nThe attached should be easier to read. 
The perl package similar to shlex \nis Text::ParseWords, which is already a requirement.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sat, 20 Jul 2024 09:41:27 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson vs windows perl" }, { "msg_contents": "On 2024-07-20 Sa 9:41 AM, Andrew Dunstan wrote:\n>\n> On 2024-05-28 Tu 6:13 PM, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2024-04-05 16:12:12 -0400, Andrew Dunstan wrote:\n>>> OK, this has been fixed and checked. The attached is what I propose.\n>> The perl command is pretty hard to read. What about using python's shlex\n>> module instead? Rough draft attached.  Still not very pretty, but \n>> seems easier\n>> to read?\n>>\n>> It'd be even better if we could just get perl to print out the flags \n>> in an\n>> easier to parse way, but I couldn't immediately see a way.\n>>\n>>\n>\n> Thanks for looking.\n>\n> The attached should be easier to read. The perl package similar to \n> shlex is Text::ParseWords, which is already a requirement.\n\n\nIt turns out that shellwords eats backslashes, so we would need \nsomething like this version, which I have tested on Windows. I will \nprobably commit this in the next few days unless there's an objection.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 30 Jul 2024 15:47:16 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson vs windows perl" }, { "msg_contents": "\nOn 2024-07-30 Tu 3:47 PM, Andrew Dunstan wrote:\n>\n> On 2024-07-20 Sa 9:41 AM, Andrew Dunstan wrote:\n>>\n>> On 2024-05-28 Tu 6:13 PM, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> On 2024-04-05 16:12:12 -0400, Andrew Dunstan wrote:\n>>>> OK, this has been fixed and checked. The attached is what I propose.\n>>> The perl command is pretty hard to read. What about using python's \n>>> shlex\n>>> module instead? Rough draft attached.  Still not very pretty, but \n>>> seems easier\n>>> to read?\n>>>\n>>> It'd be even better if we could just get perl to print out the flags \n>>> in an\n>>> easier to parse way, but I couldn't immediately see a way.\n>>>\n>>>\n>>\n>> Thanks for looking.\n>>\n>> The attached should be easier to read. The perl package similar to \n>> shlex is Text::ParseWords, which is already a requirement.\n>\n>\n> It turns out that shellwords eats backslashes, so we would need \n> something like this version, which I have tested on Windows. I will \n> probably commit this in the next few days unless there's an objection.\n>\n\npushed\n\n\ncheers\n\n\nandrew\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 14 Sep 2024 10:39:11 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: meson vs windows perl" } ]
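The underlying problem in this thread is splitting perl's ldopts output on spaces when individual items are quoted paths that themselves contain spaces. The committed change handles this on the Perl side with Text::ParseWords; the sketch below is only a toy quote-aware tokenizer in C, with a shortened made-up sample string standing in for the real ldopts output:

#include <stdio.h>
#include <string.h>

/*
 * Split s on spaces, but keep double-quoted runs (which may contain spaces)
 * together as single tokens and drop the quote characters themselves.
 */
static void print_tokens(const char *s)
{
    char        token[512];
    size_t      len = 0;
    int         in_quotes = 0;

    for (const char *p = s;; p++)
    {
        char        c = *p;

        if (c == '"')
            in_quotes = !in_quotes;
        else if ((c == ' ' && !in_quotes) || c == '\0')
        {
            if (len > 0)
            {
                token[len] = '\0';
                printf("[%s]\n", token);
                len = 0;
            }
            if (c == '\0')
                break;
        }
        else if (len < sizeof(token) - 1)
            token[len++] = c;
    }
    if (in_quotes)
        fprintf(stderr, "warning: unbalanced quote in input\n");
}

int main(void)
{
    print_tokens("-nologo -libpath:\"C:\\perl\\lib\\CORE\" "
                 "\"C:\\Program Files (x86)\\Windows Kits\\10\\um\\x64\\kernel32.lib\"");
    return 0;
}

Run on the sample string, this keeps "C:\Program Files (x86)\..." as a single token instead of shattering it at every space, which is exactly the failure behind the cl D9024 "unrecognized source file type" warnings quoted above.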
[ { "msg_contents": "Hi,\n\nPer Coverity.\n\nCoverity reported a resource leak at the function\nrun_ssl_passphrase_command.\n7. alloc_fn: Storage is returned from allocation function\nwait_result_to_str.[\"show details\"]\n\n8. noescape: Assuming resource wait_result_to_str(pclose_rc) is not freed\nor pointed-to as ellipsis argument to errdetail_internal.\n\nCID 1533043: (#1 of 1): Resource leak (RESOURCE_LEAK)\n9. leaked_storage: Failing to save or free storage allocated by\nwait_result_to_str(pclose_rc) leaks it.\n\nI think that Coverity is right.\n\nFix by freeing the pointer, like pclose_check (src/common/exec.c) similar\ncase.\n\nPatch attached.\n\nbest regards,", "msg_date": "Tue, 2 Apr 2024 15:13:19 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Fix resource leak (src/backend/libpq/be-secure-common.c)" }, { "msg_contents": "> On 2 Apr 2024, at 20:13, Ranier Vilela <[email protected]> wrote:\n\n> Fix by freeing the pointer, like pclose_check (src/common/exec.c) similar case.\n\nOff the cuff, seems reasonable when loglevel is LOG.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 2 Apr 2024 20:31:48 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix resource leak (src/backend/libpq/be-secure-common.c)" }, { "msg_contents": "Em ter., 2 de abr. de 2024 às 15:31, Daniel Gustafsson <[email protected]>\nescreveu:\n\n> > On 2 Apr 2024, at 20:13, Ranier Vilela <[email protected]> wrote:\n>\n> > Fix by freeing the pointer, like pclose_check (src/common/exec.c)\n> similar case.\n>\n> Off the cuff, seems reasonable when loglevel is LOG.\n>\n\nPer Coverity.\n\nAnother case of resource leak, when loglevel is LOG.\nIn the function shell_archive_file (src/backend/archive/shell_archive.c)\nThe pointer *xlogarchcmd* is not freed.\n\nbest regards,\nRanier Vilela", "msg_date": "Wed, 10 Apr 2024 15:31:02 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix resource leak (src/backend/libpq/be-secure-common.c)" }, { "msg_contents": "> On 10 Apr 2024, at 20:31, Ranier Vilela <[email protected]> wrote:\n> \n> Em ter., 2 de abr. de 2024 às 15:31, Daniel Gustafsson <[email protected] <mailto:[email protected]>> escreveu:\n>> > On 2 Apr 2024, at 20:13, Ranier Vilela <[email protected] <mailto:[email protected]>> wrote:\n>> \n>> > Fix by freeing the pointer, like pclose_check (src/common/exec.c) similar case.\n>> \n>> Off the cuff, seems reasonable when loglevel is LOG.\n> \n> Per Coverity.\n> \n> Another case of resource leak, when loglevel is LOG.\n> In the function shell_archive_file (src/backend/archive/shell_archive.c)\n> The pointer *xlogarchcmd* is not freed.\n\nThanks, I'll have a look. I've left this for post-freeze on purpose to not\ncause unnecessary rebasing. Will take a look over the next few days unless\nbeaten to it.\n\n--\nDaniel Gustafsson\n\n\nOn 10 Apr 2024, at 20:31, Ranier Vilela <[email protected]> wrote:Em ter., 2 de abr. de 2024 às 15:31, Daniel Gustafsson <[email protected]> escreveu:> On 2 Apr 2024, at 20:13, Ranier Vilela <[email protected]> wrote:\n\n> Fix by freeing the pointer, like pclose_check (src/common/exec.c) similar case.\n\nOff the cuff, seems reasonable when loglevel is LOG.Per Coverity.\nAnother case of resource leak, when loglevel is LOG.\nIn the function shell_archive_file (src/backend/archive/shell_archive.c)The pointer *xlogarchcmd*  is not freed.Thanks, I'll have a look.  
I've left this for post-freeze on purpose to notcause unnecessary rebasing.  Will take a look over the next few days unlessbeaten to it.\n--Daniel Gustafsson", "msg_date": "Wed, 10 Apr 2024 20:33:03 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix resource leak (src/backend/libpq/be-secure-common.c)" }, { "msg_contents": "Em qua., 10 de abr. de 2024 às 15:33, Daniel Gustafsson <[email protected]>\nescreveu:\n\n> On 10 Apr 2024, at 20:31, Ranier Vilela <[email protected]> wrote:\n>\n> Em ter., 2 de abr. de 2024 às 15:31, Daniel Gustafsson <[email protected]>\n> escreveu:\n>\n>> > On 2 Apr 2024, at 20:13, Ranier Vilela <[email protected]> wrote:\n>>\n>> > Fix by freeing the pointer, like pclose_check (src/common/exec.c)\n>> similar case.\n>>\n>> Off the cuff, seems reasonable when loglevel is LOG.\n>>\n>\n> Per Coverity.\n>\n> Another case of resource leak, when loglevel is LOG.\n> In the function shell_archive_file (src/backend/archive/shell_archive.c)\n> The pointer *xlogarchcmd* is not freed.\n>\n>\n> Thanks, I'll have a look. I've left this for post-freeze on purpose to not\n> cause unnecessary rebasing. Will take a look over the next few days unless\n> beaten to it.\n>\nAny chance we'll have these fixes in v17?\n\nbest regards,\nRanier Vilela\n\nEm qua., 10 de abr. de 2024 às 15:33, Daniel Gustafsson <[email protected]> escreveu:On 10 Apr 2024, at 20:31, Ranier Vilela <[email protected]> wrote:Em ter., 2 de abr. de 2024 às 15:31, Daniel Gustafsson <[email protected]> escreveu:> On 2 Apr 2024, at 20:13, Ranier Vilela <[email protected]> wrote:\n\n> Fix by freeing the pointer, like pclose_check (src/common/exec.c) similar case.\n\nOff the cuff, seems reasonable when loglevel is LOG.Per Coverity.\nAnother case of resource leak, when loglevel is LOG.\nIn the function shell_archive_file (src/backend/archive/shell_archive.c)The pointer *xlogarchcmd*  is not freed.Thanks, I'll have a look.  I've left this for post-freeze on purpose to notcause unnecessary rebasing.  Will take a look over the next few days unlessbeaten to it.Any chance we'll have these fixes in v17?best regards,Ranier Vilela", "msg_date": "Mon, 13 May 2024 15:05:32 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix resource leak (src/backend/libpq/be-secure-common.c)" }, { "msg_contents": "> On 13 May 2024, at 20:05, Ranier Vilela <[email protected]> wrote:\n> Em qua., 10 de abr. de 2024 às 15:33, Daniel Gustafsson <[email protected]> escreveu:\n\n> Thanks, I'll have a look. I've left this for post-freeze on purpose to not\n> cause unnecessary rebasing. Will take a look over the next few days unless\n> beaten to it.\n\n> Any chance we'll have these fixes in v17?\n\nNice timing, I was actually rebasing them today to get them committed.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 13 May 2024 20:06:57 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix resource leak (src/backend/libpq/be-secure-common.c)" }, { "msg_contents": "On Mon, May 13, 2024 at 08:06:57PM +0200, Daniel Gustafsson wrote:\n>> Any chance we'll have these fixes in v17?\n> \n> Nice timing, I was actually rebasing them today to get them committed.\n\nLooks sensible seen from here, as these paths could use a LOG or rely\non a memory context permanent to the backend causing junk to be\naccumulated. 
It's not that much, still that would accumulate.\n--\nMichael", "msg_date": "Tue, 14 May 2024 15:03:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix resource leak (src/backend/libpq/be-secure-common.c)" }, { "msg_contents": "> On 14 May 2024, at 08:03, Michael Paquier <[email protected]> wrote:\n> \n> On Mon, May 13, 2024 at 08:06:57PM +0200, Daniel Gustafsson wrote:\n>>> Any chance we'll have these fixes in v17?\n>> \n>> Nice timing, I was actually rebasing them today to get them committed.\n> \n> Looks sensible seen from here, as these paths could use a LOG or rely\n> on a memory context permanent to the backend causing junk to be\n> accumulated. It's not that much, still that would accumulate.\n\nPushed.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 14 May 2024 12:21:31 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix resource leak (src/backend/libpq/be-secure-common.c)" }, { "msg_contents": "Em ter., 14 de mai. de 2024 às 07:21, Daniel Gustafsson <[email protected]>\nescreveu:\n\n> > On 14 May 2024, at 08:03, Michael Paquier <[email protected]> wrote:\n> >\n> > On Mon, May 13, 2024 at 08:06:57PM +0200, Daniel Gustafsson wrote:\n> >>> Any chance we'll have these fixes in v17?\n> >>\n> >> Nice timing, I was actually rebasing them today to get them committed.\n> >\n> > Looks sensible seen from here, as these paths could use a LOG or rely\n> > on a memory context permanent to the backend causing junk to be\n> > accumulated. It's not that much, still that would accumulate.\n>\n> Pushed.\n>\nThanks Daniel.\n\nbest regards,\nRanier Vilela\n\nEm ter., 14 de mai. de 2024 às 07:21, Daniel Gustafsson <[email protected]> escreveu:> On 14 May 2024, at 08:03, Michael Paquier <[email protected]> wrote:\n> \n> On Mon, May 13, 2024 at 08:06:57PM +0200, Daniel Gustafsson wrote:\n>>> Any chance we'll have these fixes in v17?\n>> \n>> Nice timing, I was actually rebasing them today to get them committed.\n> \n> Looks sensible seen from here, as these paths could use a LOG or rely\n> on a memory context permanent to the backend causing junk to be\n> accumulated.  It's not that much, still that would accumulate.\n\nPushed.Thanks Daniel.best regards,Ranier Vilela", "msg_date": "Tue, 14 May 2024 08:03:57 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix resource leak (src/backend/libpq/be-secure-common.c)" } ]
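The leak itself is small: wait_result_to_str() returns allocated storage, and passing that call straight into the LOG-level errdetail_internal() means nothing ever frees it. Below is a reduced, hedged illustration in plain C; describe_exit_status() is a made-up stand-in for wait_result_to_str(), and fprintf stands in for the ereport(LOG, ...) call:

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for wait_result_to_str(): always returns allocated storage. */
static char *describe_exit_status(int status)
{
    char       *buf = malloc(64);

    if (buf != NULL)
        snprintf(buf, 64, "child process exited with exit code %d", status);
    return buf;
}

int main(void)
{
    int         pclose_rc = 1;
    char       *reason;

    /*
     * Leaky shape: fprintf(stderr, "%s\n", describe_exit_status(pclose_rc))
     * would drop the only pointer to the allocation.
     *
     * Fixed shape: keep the pointer so it can be released after logging,
     * the same way pclose_check() in src/common/exec.c handles it.
     */
    reason = describe_exit_status(pclose_rc);
    if (reason != NULL)
    {
        fprintf(stderr, "ssl_passphrase_command failed: %s\n", reason);
        free(reason);
    }
    return 0;
}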
[ { "msg_contents": "Hackers,\n\nIn the Security lessons from liblzma thread[1], walther broached the subject of an extension directory path[1]:\n\n> Also a configurable directoy to look up extensions, possibly even to be \n> changed at run-time like [2]. The patch says this:\n> \n>> This directory is prepended to paths when loading extensions (control and SQL files), and to the '$libdir' directive when loading modules that back functions. The location is made configurable to allow build-time testing of extensions that do not have been installed to their proper location yet.\n> \n> This seems like a great thing to have. This might also be relevant in \n> light of recent discussions in the ecosystem around extension management.\n\n\nThat quotation comes from this Debian patch[2] maintained by Christoph Berg. I’d like to formally propose integrating this patch into the core. And not only because it’s overhead for package maintainers like Christoph, but because a number of use cases have emerged since we originally discussed something like this back in 2013[3]:\n\nDocker Immutability\n-------------------\n\nDocker images must be immutable. In order for users of a Docker image to install extensions that persist, they must create a persistent volume, map it to SHAREDIR/extensions, and copy over all the core extensions (or muck with symlink magic[4]). This makes upgrades trickier, because the core extensions are mixed in with third party extensions. \n\nBy supporting a second directory pretended to the list of directories to search, as the Debian patch does, users of Docker images can keep extensions they install separate from core extensions, in a directory mounted to a persistent volume with none of the core extensions. Along with tweaking dynamic_library_path to support additional directories for shared object libraries, which can also be mounted to a separate path, we can have a persistent and clean separation of immutable core extensions and extensions installed at runtime.\n\nPostgres.app\n------------\n\nThe Postgres.app project also supports installing extensions. However, because they must go into the SHAREDIR/extensions, once a user installs one the package has been modified and the Apple bundle signature will be broken. The OS will no longer be able to validate that the app is legit.\n\nIf the core supported an additional extension (and PKGLIBDIR), it would allow an immutable PostgreSQL base package and still allow extensions to be installed into directories outside the app bundle, and thus preserve bundle signing on macOS (and presumably other systems --- is this the nix issue, too?)\n\nRFC\n---\n\nI know there was some objection to changes like this in the past, but the support I’m seeing in the liblzma thread for making pkglibdir configurable me optimistic that this might be the right time to support additional configuration for the extension directory, as well, starting with the Debian patch, perhaps.\n\nThoughts?\n\nI would be happy to submit a clean version of the Debian patch[2].\n\nBest,\n\nDavid\n\n[1] https://www.postgresql.org/message-id/99c41b46-616e-49d0-9ffd-a29432cec818%40technowledgy.de\n[2] https://salsa.debian.org/postgresql/postgresql/-/blob/17/debian/patches/extension_destdir?ref_type=heads\n[3] https://www.postgresql.org/message-id/flat/[email protected]\n[4] https://speakerdeck.com/ongres/postgres-extensions-in-kubernetes?slide=14\n\n\n\n", "msg_date": "Tue, 2 Apr 2024 14:38:56 -0400", "msg_from": "\"David E. 
Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "RFC: Additional Directory for Extensions" }, { "msg_contents": "On 2024-Apr-02, David E. Wheeler wrote:\n\n> That quotation comes from this Debian patch[2] maintained by Christoph\n> Berg. I’d like to formally propose integrating this patch into the\n> core. And not only because it’s overhead for package maintainers like\n> Christoph, but because a number of use cases have emerged since we\n> originally discussed something like this back in 2013[3]:\n\nI support the idea of there being a second location from where to load\nshared libraries ... but I don't like the idea of making it\nruntime-configurable. If we want to continue to tighten up what\nsuperuser can do, then one of the things that has to go away is the\nability to load shared libraries from arbitrary locations\n(dynamic_library_path). I think we should instead look at making those\nlocations hardcoded at compile time. The packager can then decide where\nthose things go, and the superuser no longer has the ability to load\narbitrary code from arbitrary locations.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nAl principio era UNIX, y UNIX habló y dijo: \"Hello world\\n\".\nNo dijo \"Hello New Jersey\\n\", ni \"Hello USA\\n\".\n\n\n", "msg_date": "Wed, 3 Apr 2024 09:13:01 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "Alvaro Herrera:\n> I support the idea of there being a second location from where to load\n> shared libraries ... but I don't like the idea of making it\n> runtime-configurable. If we want to continue to tighten up what\n> superuser can do, then one of the things that has to go away is the\n> ability to load shared libraries from arbitrary locations\n> (dynamic_library_path). I think we should instead look at making those\n> locations hardcoded at compile time. The packager can then decide where\n> those things go, and the superuser no longer has the ability to load\n> arbitrary code from arbitrary locations.\n\nThe use-case for runtime configuration of this seems to be build-time \ntesting of extensions against an already installed server. For this \npurpose it should be enough to be able to set this directory at startup \n- it doesn't need to be changed while the server is actually running. \nThen you could spin up a temporary postgres instance with the extension \ndirectory pointing a the build directory and test.\n\nWould startup-configurable be better than runtime-configurable regarding \nyour concerns?\n\nI can also imagine that it would be very helpful in a container setup to \nbe able to set an environment variable with this path instead of having \nto recompile all of postgres to change it.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Wed, 3 Apr 2024 09:57:12 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "> On 3 Apr 2024, at 09:13, Alvaro Herrera <[email protected]> wrote:\n> \n> On 2024-Apr-02, David E. Wheeler wrote:\n> \n>> That quotation comes from this Debian patch[2] maintained by Christoph\n>> Berg. I’d like to formally propose integrating this patch into the\n>> core. 
And not only because it’s overhead for package maintainers like\n>> Christoph, but because a number of use cases have emerged since we\n>> originally discussed something like this back in 2013[3]:\n> \n> I support the idea of there being a second location from where to load\n> shared libraries\n\nAgreed, the case made upthread that installing an extension breaks the app\nsigning seems like a compelling reason to do this.\n\nThe implementation of this need to make sure the directory is properly set up\nhowever to avoid similar problems that CVE 2019-10211 showed.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 3 Apr 2024 10:33:10 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Apr 3, 2024, at 3:57 AM, [email protected] wrote:\n\n> I can also imagine that it would be very helpful in a container setup to be able to set an environment variable with this path instead of having to recompile all of postgres to change it.\n\nYes, I like the suggestion to make it require a restart, which lets the sysadmin control it and not limited to whatever the person who compiled it thought would make sense.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Wed, 3 Apr 2024 08:54:31 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Apr 3, 2024, at 8:54 AM, David E. Wheeler <[email protected]> wrote:\n\n> Yes, I like the suggestion to make it require a restart, which lets the sysadmin control it and not limited to whatever the person who compiled it thought would make sense.\n\nOr SIGHUP?\n\nD\n\n", "msg_date": "Wed, 3 Apr 2024 09:06:08 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Apr 3, 2024, at 8:54 AM, David E. Wheeler <[email protected]> wrote:\n\n> Yes, I like the suggestion to make it require a restart, which lets the sysadmin control it and not limited to whatever the person who compiled it thought would make sense.\n\nHere’s a revision of the Debian patch that requires a server start.\n\nHowever, in studying the patch, it appears that the `extension_directory` is searched for *all* shared libraries, not just those being loaded for an extension. Am I reading the `expand_dynamic_library_name()` function right?\n\nIf so, this seems like a good way for a bad actor to muck with things, by putting an exploited libpgtypes library into the extension directory, where it would be loaded in preference to the core libpgtypes library, if they couldn’t exploit the original.\n\nI’m thinking it would be better to have the dynamic library lookup for extension libraries (and LOAD libraries?) separate, so that the `extension_directory` would not be used for core libraries.\n\nThis would also allow the lookup of extension libraries prefixed by the directory field from the control file, which would enable much tidier extension installation: The control file, SQL scripts, and DSOs could all be in a single directory for an extension.\n\nThoughts?\n\nBest,\n\nDavid", "msg_date": "Wed, 3 Apr 2024 09:40:29 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "Re: David E. 
Wheeler\n> > Yes, I like the suggestion to make it require a restart, which lets the sysadmin control it and not limited to whatever the person who compiled it thought would make sense.\n> \n> Here’s a revision of the Debian patch that requires a server start.\n\nThanks for bringing this up, I should have submitted this years ago.\n(The patch is originally from September 2020.)\n\nI designed the patch to require a superuser to set it, so it doesn't\nmatter very much by which mechanism it gets updated. There should be\nlittle reason to vary it at run-time, so I'd be fine with requiring a\nrestart, but otoh, why restrict the superuser from reloading it if\nthey know what they are doing?\n\n> However, in studying the patch, it appears that the `extension_directory` is searched for *all* shared libraries, not just those being loaded for an extension. Am I reading the `expand_dynamic_library_name()` function right?\n> \n> If so, this seems like a good way for a bad actor to muck with things, by putting an exploited libpgtypes library into the extension directory, where it would be loaded in preference to the core libpgtypes library, if they couldn’t exploit the original.\n> \n> I’m thinking it would be better to have the dynamic library lookup for extension libraries (and LOAD libraries?) separate, so that the `extension_directory` would not be used for core libraries.\n\nI'm not sure the concept of \"core libraries\" exists. PG happens to\ndlopen things at run time, and it doesn't know/care if they were\ninstalled by users or by the original PG server. Also, an exploited\nlibpgtypes library is not worse than any other untrusted \"user\"\nlibrary, so you really don't want to allow users to provide their own\n.so files, no matter by what mechanism.\n\n> This would also allow the lookup of extension libraries prefixed by the directory field from the control file, which would enable much tidier extension installation: The control file, SQL scripts, and DSOs could all be in a single directory for an extension.\n\nNice idea, but that would mean installing .so files into PGSHAREDIR.\nPerhaps the whole extension stuff would have to move to PKGLIBDIR\ninstead.\n\n\nFwiw, I wrote this patch to solve the problem of testing extensions at\nbuild-time where the build process does not have write access to\nPGSHAREDIR. It solves that problem quite well, almost all PG\nextensions have build-time test coverage now (where there was\nbasically 0 before).\n\nSecurity is not a concern at this point as everything is running as\nthe same user, and the test cluster will be wiped right after the\ntest. I figured marking the setting as \"super user\" only was enough\nsecurity at that point, but I would recommend another audit before\nusing it together with \"trusted\" extensions and other things in\nproduction.\n\nChristoph\n\n\n", "msg_date": "Wed, 3 Apr 2024 18:46:43 +0200", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "> Fwiw, I wrote this patch to solve the problem of testing extensions at\n> build-time where the build process does not have write access to\n> PGSHAREDIR. It solves that problem quite well, almost all PG\n> extensions have build-time test coverage now (where there was\n> basically 0 before).\n\nAlso, it's called extension \"destdir\" because it behaves like DESTDIR\nin Makefiles: It prepends the given path to the path that PG is trying\nto open when set. 
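As a rough illustration (the destdir value and the extension name here are\nonly examples), setting\n\n    extension_destdir = '/home/build/foo-1/debian/foo'\n\nmakes a lookup of /usr/share/postgresql/17/extension/foo.control try\n/home/build/foo-1/debian/foo/usr/share/postgresql/17/extension/foo.control\nfirst, falling back to the original path if that file does not exist.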
So it doesn't allow arbitrary new locations as of\nnow, just /home/build/foo-1/debian/foo/usr/share/postgresql/17/extension\nin addition to /usr/share/postgresql/17/extension. (That is what the\nDebian package build process needs, so that restriction/design choice\nmade sense.)\n\n> Security is not a concern at this point as everything is running as\n> the same user, and the test cluster will be wiped right after the\n> test. I figured marking the setting as \"super user\" only was enough\n> security at that point, but I would recommend another audit before\n> using it together with \"trusted\" extensions and other things in\n> production.\n\nThat's also included in the current GUC description:\n\n This directory is prepended to paths when loading extensions\n (control and SQL files), and to the '$libdir' directive when\n loading modules that back functions. The location is made\n configurable to allow build-time testing of extensions that do not\n have been installed to their proper location yet.\n\nPerhaps I should have included a more verbose \"NOT FOR PRODUCTION\"\nthere.\n\nAs for compatibility, the patch has been part of the PG 9.5..17 now\nfor several years, and I'm very happy with extra test coverage it\nprovides, especially on the Debian architectures that don't have\n\"autopkgtest\" runners yet.\n\nChristoph\n\n\n", "msg_date": "Wed, 3 Apr 2024 19:03:46 +0200", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Apr 3, 2024, at 12:46 PM, Christoph Berg <[email protected]> wrote:\n\n> Thanks for bringing this up, I should have submitted this years ago.\n> (The patch is originally from September 2020.)\n\nThat’s okay, it’s still 2020 in some ways. 😂\n\n> I designed the patch to require a superuser to set it, so it doesn't\n> matter very much by which mechanism it gets updated. There should be\n> little reason to vary it at run-time, so I'd be fine with requiring a\n> restart, but otoh, why restrict the superuser from reloading it if\n> they know what they are doing?\n\nI think that’s fair. I’ll keep it requiring a restart now, on the theory it would be easier to loosen it later than have to tighten it later.\n\n> I'm not sure the concept of \"core libraries\" exists. PG happens to\n> dlopen things at run time, and it doesn't know/care if they were\n> installed by users or by the original PG server. Also, an exploited\n> libpgtypes library is not worse than any other untrusted \"user\"\n> library, so you really don't want to allow users to provide their own\n> .so files, no matter by what mechanism.\n\nYes, I guess my concern is whether it could be used to “shadow” core libraries. Maybe it’s no different, really.\n\n>> This would also allow the lookup of extension libraries prefixed by the directory field from the control file, which would enable much tidier extension installation: The control file, SQL scripts, and DSOs could all be in a single directory for an extension.\n> \n> Nice idea, but that would mean installing .so files into PGSHAREDIR.\n> Perhaps the whole extension stuff would have to move to PKGLIBDIR\n> instead.\n\nYes, I was just poking around the code, and realized that, when extension functions are created they may or may not not use `MODULE_PATHNAME`, but in any event, there is nothing different about loading an extension DSO than any other DSO. 
I was hoping to find a path where it knows it’s opening a DSO for the purpose of an extension, so we could limit the lookup there. But that does not (currently) exist.\n\nMaybe we could add an `$extensiondir` variable to complement `$libdir`?\n\nOr is PGKLIBDIR is the way to go? I’m not familiar with it. It looks like extension JIT files are put there already.\n\n> Fwiw, I wrote this patch to solve the problem of testing extensions at\n> build-time where the build process does not have write access to\n> PGSHAREDIR. It solves that problem quite well, almost all PG\n> extensions have build-time test coverage now (where there was\n> basically 0 before).\n\nYeah, good additional use case.\n\nOn Apr 3, 2024, at 1:03 PM, Christoph Berg <[email protected]> wrote:\n\n> Also, it's called extension \"destdir\" because it behaves like DESTDIR\n> in Makefiles: It prepends the given path to the path that PG is trying\n> to open when set. So it doesn't allow arbitrary new locations as of\n> now, just /home/build/foo-1/debian/foo/usr/share/postgresql/17/extension\n> in addition to /usr/share/postgresql/17/extension. (That is what the\n> Debian package build process needs, so that restriction/design choice\n> made sense.\n\nRight, this makes perfect sense, in that you don’t have to copy all the extension files from the destdir to the SHAREDIR to test them, which I imagine could be a PITA.\n\n> That's also included in the current GUC description:\n> \n> This directory is prepended to paths when loading extensions\n> (control and SQL files), and to the '$libdir' directive when\n> loading modules that back functions. The location is made\n> configurable to allow build-time testing of extensions that do not\n> have been installed to their proper location yet.\n> \n> Perhaps I should have included a more verbose \"NOT FOR PRODUCTION\"\n> there.\n\nThe use cases I described upthread are very much production use cases. Do you think it’s not for production just because we need to really think it through?\n\nI’ve added some docs based on your GUC description; updated patch attached.\n\nBest,\n\nDavid", "msg_date": "Thu, 4 Apr 2024 13:20:11 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Apr 4, 2024, at 1:20 PM, David E. Wheeler <[email protected]> wrote:\n\n> I’ve added some docs based on your GUC description; updated patch attached.\n\nHere’s a rebase.\n\nI realize this probably isn’t going to happen for 17, given the freeze, but I would very much welcome feedback and pointers to address concerns about providing a second directory for extensions and DSOs. Quite a few people have talked about the need for this in the Extension Mini Summits[1], so I’m sure I could get some collaborators to make improvements or look at a different approach.\n\nBest,\n\nDavid\n\n[1] https://justatheory.com/2024/02/extension-ecosystem-summit/#extension-ecosystem-mini-summit", "msg_date": "Thu, 11 Apr 2024 13:52:26 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Thu, Apr 11, 2024 at 01:52:26PM -0400, David E. Wheeler wrote:\n> I realize this probably isn�t going to happen for 17, given the freeze,\n> but I would very much welcome feedback and pointers to address concerns\n> about providing a second directory for extensions and DSOs. 
Quite a few\n> people have talked about the need for this in the Extension Mini\n> Summits[1], so I�m sure I could get some collaborators to make\n> improvements or look at a different approach.\n\nAt first glance, the general idea seems reasonable to me. I'm wondering\nwhether there is a requirement for this directory to be prepended or if it\ncould be appended to the end. That way, the existing ones would take\npriority, which might be desirable from a security standpoint.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 24 Jun 2024 11:11:42 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Wed, Apr 3, 2024 at 3:13 AM Alvaro Herrera <[email protected]> wrote:\n> I support the idea of there being a second location from where to load\n> shared libraries ... but I don't like the idea of making it\n> runtime-configurable. If we want to continue to tighten up what\n> superuser can do, then one of the things that has to go away is the\n> ability to load shared libraries from arbitrary locations\n> (dynamic_library_path). I think we should instead look at making those\n> locations hardcoded at compile time. The packager can then decide where\n> those things go, and the superuser no longer has the ability to load\n> arbitrary code from arbitrary locations.\n\nIs \"tighten up what the superuser can do\" on our list of objectives?\nPersonally, I think we should be focusing mostly, and maybe entirely,\non letting non-superusers do more things, with appropriate security\ncontrols. The superuser can ultimately do anything, since they can\ncause shell commands to be run and load arbitrary code into the\nbackend and write code in untrusted procedural languages and mutilate\nthe system catalogs and lots of other terrible things.\n\nNow, I think there are environments where people have used things like\ncontainers to try to lock down the superuser, and while I'm not sure\nthat can ever be particularly water-tight, if it were the case that\nthis patch would make it a whole lot easier for a superuser to bypass\nthe kinds of controls that people are imposing today, that might be an\nargument against this patch. But ... off-hand, I'm not seeing such an\nexposure.\n\nOn the patch itself, I find the documentation for this to be fairly\nhard to understand. I think it could benefit from an example. I'm\nconfused about whether this is intended to let me search for\nextensions in /my/temp/root/usr/lib/postgresql/... by setting\nextension_directory=/my/temp/dir, or whether it's intended me to\nsearch both /usr/lib/postgresql as I normally would and also\n/some/other/place. If the latter, I wonder why we don't handle shared\nlibraries by setting dynamic_library_path and then just have an\nanalogue of that for control files.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 Jun 2024 13:53:25 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Jun 24, 2024, at 1:53 PM, Robert Haas <[email protected]> wrote:\n\n> Is \"tighten up what the superuser can do\" on our list of objectives?\n> Personally, I think we should be focusing mostly, and maybe entirely,\n> on letting non-superusers do more things, with appropriate security\n> controls. 
The superuser can ultimately do anything, since they can\n> cause shell commands to be run and load arbitrary code into the\n> backend and write code in untrusted procedural languages and mutilate\n> the system catalogs and lots of other terrible things.\n\nI guess the question then is what security controls are appropriate for this feature, which after all tells the postmaster what directories to read files from. It feels a little outside the scope of a regular user to even be aware of the file system undergirding the service. But perhaps there’s a non-superuser role for whom it is appropriate?\n\n> Now, I think there are environments where people have used things like\n> containers to try to lock down the superuser, and while I'm not sure\n> that can ever be particularly water-tight, if it were the case that\n> this patch would make it a whole lot easier for a superuser to bypass\n> the kinds of controls that people are imposing today, that might be an\n> argument against this patch. But ... off-hand, I'm not seeing such an\n> exposure.\n\nYeah I’m not even sure I follow. Containers are immutable, other than mutable mounted volumes --- which is one use case this patch is attempting to enable.\n\n> On the patch itself, I find the documentation for this to be fairly\n> hard to understand. I think it could benefit from an example. I'm\n> confused about whether this is intended to let me search for\n> extensions in /my/temp/root/usr/lib/postgresql/... by setting\n> extension_directory=/my/temp/dir, or whether it's intended me to\n> search both /usr/lib/postgresql as I normally would and also\n> /some/other/place.\n\nI sketched them quickly, so agree they can be better. Reading the code, I now see that it appears to be the former case. I’d like to advocate for the latter. \n\n> If the latter, I wonder why we don't handle shared\n> libraries by setting dynamic_library_path and then just have an\n> analogue of that for control files.\n\nThe challenge is that it applies not just to shared object libraries and control files, but also extension SQL files and any other SHAREDIR files an extension might include. But also, I think it should support all the pg_config installation targets that extensions might use, including:\n\nBINDIR\nDOCDIR\nHTMLDIR\nPKGINCLUDEDIR\nLOCALEDIR\nMANDIR\n\nI can imagine an extension wanting or needing to use any and all of these.\n\nBest,\n\nDavid\n\n\n\n\n\n", "msg_date": "Mon, 24 Jun 2024 15:37:28 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Mon, Jun 24, 2024 at 3:37 PM David E. Wheeler <[email protected]> wrote:\n> I guess the question then is what security controls are appropriate for this feature, which after all tells the postmaster what directories to read files from. It feels a little outside the scope of a regular user to even be aware of the file system undergirding the service. But perhaps there’s a non-superuser role for whom it is appropriate?\n\nAs long as the GUC is superuser-only, I'm not sure what else there is\nto do here. The only question is whether there's some reason to\ndisallow this even from the superuser, but I'm not quite seeing such a\nreason.\n\n> > On the patch itself, I find the documentation for this to be fairly\n> > hard to understand. I think it could benefit from an example. I'm\n> > confused about whether this is intended to let me search for\n> > extensions in /my/temp/root/usr/lib/postgresql/... 
by setting\n> > extension_directory=/my/temp/dir, or whether it's intended me to\n> > search both /usr/lib/postgresql as I normally would and also\n> > /some/other/place.\n>\n> I sketched them quickly, so agree they can be better. Reading the code, I now see that it appears to be the former case. I’d like to advocate for the latter.\n\nSounds good.\n\n> > If the latter, I wonder why we don't handle shared\n> > libraries by setting dynamic_library_path and then just have an\n> > analogue of that for control files.\n>\n> The challenge is that it applies not just to shared object libraries and control files, but also extension SQL files and any other SHAREDIR files an extension might include. But also, I think it should support all the pg_config installation targets that extensions might use, including:\n>\n> BINDIR\n> DOCDIR\n> HTMLDIR\n> PKGINCLUDEDIR\n> LOCALEDIR\n> MANDIR\n>\n> I can imagine an extension wanting or needing to use any and all of these.\n\nAre these really all relevant to backend code?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 24 Jun 2024 16:28:28 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Jun 24, 2024, at 4:28 PM, Robert Haas <[email protected]> wrote:\n\n> As long as the GUC is superuser-only, I'm not sure what else there is\n> to do here. The only question is whether there's some reason to\n> disallow this even from the superuser, but I'm not quite seeing such a\n> reason.\n\nI can switch it back from requiring a restart to allowing a superuser to set it.\n\n>> I sketched them quickly, so agree they can be better. Reading the code, I now see that it appears to be the former case. I’d like to advocate for the latter.\n> \n> Sounds good.\n\nYeah, though then I have a harder time deciding how it should work. pg_config’s paths are absolute. With your first example, we just use them exactly as they are, but prefix them with the destination directory. So if it’s set to `/my/temp/root/`, then files go into\n\n/my/temp/root/$(pg_conifg --sharedir)\n/my/temp/root/$(pg_conifg --pkglibdir)\n/my/temp/root/$(pg_conifg --bindir)\n# etc.\n\nWhich is exactly how RPM and Apt packages are built, but seems like an odd configuration for general use.\n\n>> BINDIR\n>> DOCDIR\n>> HTMLDIR\n>> PKGINCLUDEDIR\n>> LOCALEDIR\n>> MANDIR\n>> \n>> I can imagine an extension wanting or needing to use any and all of these.\n> \n> Are these really all relevant to backend code?\n\nOh I think so. Especially BINDIR; lots of extensions ship with binary applications. And most ship with docs, too (PGXS puts items listed in DOCS into DOCDIR). Some might also produce man pages (for their binaries), HTML docs, and other stuff. Maybe an FTE extension would include locale files?\n\nI find it pretty easy to imagine use cases for all of them. So much so that I wrote an extension binary distribution RFC[1] and its POC[2] around them.\n\nBest,\n\nDavid\n\n[1]: https://github.com/orgs/pgxn/discussions/2\n[1]: https://justatheory.com/2024/06/trunk-poc/\n\n\n\n", "msg_date": "Mon, 24 Jun 2024 16:42:00 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Thu, 11 Apr 2024 at 19:52, David E. 
Wheeler <[email protected]> wrote:\n> I realize this probably isn’t going to happen for 17, given the freeze, but I would very much welcome feedback and pointers to address concerns about providing a second directory for extensions and DSOs. Quite a few people have talked about the need for this in the Extension Mini Summits[1], so I’m sure I could get some collaborators to make improvements or look at a different approach.\n\nOverall +1 for the idea. We're running into this same limitation (only\na single place to put extension files) at Microsoft at the moment.\n\n+ and to the '$libdir' directive when loading modules\n+ that back functions.\n\nI feel like this is a bit strange. Either its impact is too wide, or\nit's not wide enough depending on your intent.\n\nIf you want to only change $libdir during CREATE EXTENSION (or ALTER\nEXTENSION UPDATE), then why not just change it there. And really you'd\nonly want to change it when creating an extension from which the\ncontrol file is coming from extension_destdir.\n\nHowever, I can also see a case for really always changing $libdir.\nBecause some extensions in shared_preload_libraries, might want to\ntrigger loading other libraries that they ship with dynamically. And\nthese libraries are probably also in extension_destdir.\n\n\n", "msg_date": "Mon, 24 Jun 2024 23:17:14 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Mon, 24 Jun 2024 at 18:11, Nathan Bossart <[email protected]> wrote:\n> At first glance, the general idea seems reasonable to me. I'm wondering\n> whether there is a requirement for this directory to be prepended or if it\n> could be appended to the end. That way, the existing ones would take\n> priority, which might be desirable from a security standpoint.\n\nCitus does ship with some override library for pgoutput to make\nlogical replication/CDC work correctly with sharded tables. Right now\nusing this override library requires changing dynamic_library_path. It\nwould be nice if that wasn't necessary. But this is obviously a small\nthing. And I definitely agree that there's a security angle to this as\nwell, but honestly that seems rather small too. If an attacker can put\nshared libraries into the extension_destdir, I'm pretty sure you've\nlost already, no matter if extension_destdir is prepended or appended\nto the existing $libdir.\n\n\n", "msg_date": "Mon, 24 Jun 2024 23:23:41 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Jun 24, 2024, at 17:17, Jelte Fennema-Nio <[email protected]> wrote:\n\n> If you want to only change $libdir during CREATE EXTENSION (or ALTER\n> EXTENSION UPDATE), then why not just change it there. And really you'd\n> only want to change it when creating an extension from which the\n> control file is coming from extension_destdir.\n\nIIUC, the postmaster needs to load an extension on first use in every session unless it’s in shared_preload_libraries.\n\n> However, I can also see a case for really always changing $libdir.\n> Because some extensions in shared_preload_libraries, might want to\n> trigger loading other libraries that they ship with dynamically. 
And\n> these libraries are probably also in extension_destdir.\n\nRight, it can be more than just the DSOs for the extension itself.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Mon, 24 Jun 2024 17:31:53 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Mon, 24 Jun 2024 at 22:42, David E. Wheeler <[email protected]> wrote:\n> >> BINDIR\n> >> DOCDIR\n> >> HTMLDIR\n> >> PKGINCLUDEDIR\n> >> LOCALEDIR\n> >> MANDIR\n> >>\n> >> I can imagine an extension wanting or needing to use any and all of these.\n> >\n> > Are these really all relevant to backend code?\n>\n> Oh I think so. Especially BINDIR; lots of extensions ship with binary applications. And most ship with docs, too (PGXS puts items listed in DOCS into DOCDIR). Some might also produce man pages (for their binaries), HTML docs, and other stuff. Maybe an FTE extension would include locale files?\n>\n> I find it pretty easy to imagine use cases for all of them. So much so that I wrote an extension binary distribution RFC[1] and its POC[2] around them.\n\nDefinitely agreed on BINDIR needing to be supported.\n\nAnd while lots of extensions ship with docs, I expect this feature to\nmostly be used in production environments to make deploying extensions\neasier. And I'm not sure that many people care about deploying docs to\nproduction (honestly lots of people would probably want to strip\nthem).\n\nStill, for the sake of completeness it might make sense to support\nthis whole list in extension_destdir. (assuming it's easy to do)\n\n\n", "msg_date": "Mon, 24 Jun 2024 23:32:20 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "Re: Nathan Bossart\n> At first glance, the general idea seems reasonable to me. I'm wondering\n> whether there is a requirement for this directory to be prepended or if it\n> could be appended to the end. That way, the existing ones would take\n> priority, which might be desirable from a security standpoint.\n\nMy use case for this is to test things at compile time (where I can't\nwrite to /usr/share/postgresql/). If installed things would take\npriority over the things that I'm trying to test, I'd be disappointed.\n\nChristoph\n\n\n", "msg_date": "Tue, 25 Jun 2024 11:18:25 +0200", "msg_from": "Christoph Berg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On 2024-Jun-24, Robert Haas wrote:\n\n> Is \"tighten up what the superuser can do\" on our list of objectives?\n> Personally, I think we should be focusing mostly, and maybe entirely,\n> on letting non-superusers do more things, with appropriate security\n> controls. The superuser can ultimately do anything, since they can\n> cause shell commands to be run and load arbitrary code into the\n> backend and write code in untrusted procedural languages and mutilate\n> the system catalogs and lots of other terrible things.\n\nI don't agree that we should focus _solely_ on allowing non-superusers\nto do more things. Sure, it's a good thing to do -- but we shouldn't\ncompletely close the option of securing superuser itself. I think it's\nnot completely impossible to have a future where superuser is just so\nwithin the database, i.e. that it can't escape to the operating system.\nI'm sure that would be useful in many environments. 
On this list, many\npeople frequently make the argument that it is impossible to secure, but\nI'm not convinced.\n\nThey can mutilate the system catalogs: yes, they can TRUNCATE pg_type.\nSo what? They've just destroyed their own ability to do anything else.\nThe real issue here is that they can edit pg_proc to cause SQL function\ncalls to call arbitrary code. But what if we limit functions so that\nthe C code that they can call is located in specific places that are\nknown to only contain secure code? This is easy: make sure the\nOS-installation only contains safe code in $libdir.\n\nI hear you say: ah, but they can modify dynamic_library_path, which is a\nGUC, to load code from anywhere -- especially /tmp, where the newest\nbitcoin-mining library was just written. This is true. I suggest, to\nsolve this problem, that we should make dynamic_library_path no longer a\nGUC. It should be a setting that comes from a different origin, one\nthat even superuser cannot write to. Only the OS-installation can\nmodify that file; that way, superuser cannot load arbitrary code that\nway.\n\nThis is where the new GUC setting being proposed in this thread rubs me\nthe wrong way: it's adding yet another avenue for this to be exploited.\nI would like this new directory not to be a GUC either, just like\ndynamic_library_path.\n\nI hear you argue: ah, but they can use COPY to write a new file to\n$libdir. Yes, they can, and I think that's foolish. We could have\nanother non-GUC setting which takes a list of directories where COPY can\nwrite files into. Problem solved. Do people really need the ability to\nwrite files on arbitrary locations?\n\nUntrusted extensions: well, just don't have those in the OS-installation\nand you'll be fine. I'm okay with saying that a superuser-restricted\nsystem is incompatible with plpython.\n\narchive_command and so on: we could disable these too. Nathan did some\nwork to implement those using dynamic libraries, so it shouldn't be too\nmuch of a loss; anything that is done with a shell script can also be\ndone with a small library. Those libraries can be made safe.\nIf there are other ways to invoke shell commands from GUCs, let's add\nthe ability to use libraries for those too.\n\nWhat other exploits do we know about? How can we close them?\n\n\nNow, I'm not saying that this is an easy journey. But if we don't\nstart, we're not going to get there.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Doing what he did amounts to sticking his fingers under the hood of the\nimplementation; if he gets his fingers burnt, it's his problem.\" (Tom Lane)\n\n\n", "msg_date": "Tue, 25 Jun 2024 12:12:33 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Tue, 25 Jun 2024 at 12:12, Alvaro Herrera <[email protected]> wrote:\n> They can mutilate the system catalogs: yes, they can TRUNCATE pg_type.\n> So what? They've just destroyed their own ability to do anything else.\n> The real issue here is that they can edit pg_proc to cause SQL function\n> calls to call arbitrary code. But what if we limit functions so that\n> the C code that they can call is located in specific places that are\n> known to only contain secure code? 
This is easy: make sure the\n> OS-installation only contains safe code in $libdir.\n\nI wouldn't call it \"easy\" but I totally agree that changing pg_proc is\nthe main security issue that we have no easy way to tackle.\n\n> I hear you say: ah, but they can modify dynamic_library_path, which is a\n> GUC, to load code from anywhere -- especially /tmp, where the newest\n> bitcoin-mining library was just written. This is true. I suggest, to\n> solve this problem, that we should make dynamic_library_path no longer a\n> GUC. It should be a setting that comes from a different origin, one\n> that even superuser cannot write to. Only the OS-installation can\n> modify that file; that way, superuser cannot load arbitrary code that\n> way.\n\nI don't think that needs a whole new file. Making this GUC be\nPGC_SIGHUP/PGC_POSTMASTER + GUC_DISALLOW_IN_AUTO_FILE should be\nenough. Just like was done for the new allow_alter_system GUC in PG17.\n\n> This is where the new GUC setting being proposed in this thread rubs me\n> the wrong way: it's adding yet another avenue for this to be exploited.\n> I would like this new directory not to be a GUC either, just like\n> dynamic_library_path.\n\nWe can make it PGC_SIGHUP/PGC_POSTMASTER + GUC_DISALLOW_IN_AUTO_FILE\ntoo, either now or in the future.\n\n> Now, I'm not saying that this is an easy journey. But if we don't\n> start, we're not going to get there.\n\nSure, but it sounds like you're suggesting that you want to \"start\" by\nnot adding new features that have equivalent security holes as the\nones that we already have. I don't think that is a very helpful way to\nget to a better place. It seems much more useful to tackle the current\nproblems that we have first, and almost certainly the same solutions\nto those problems can be applied to any new features with security\nissues.\n\nIt at least definitely seems the case for the proposal in this thread:\ni.e. we already have a GUC that allows loading libraries from an\narbitrary location. This proposal adds another such GUC. If we solve\nthe security problem in that first GUC, either by\nGUC_DISALLOW_IN_AUTO_FILE, or by creating a whole new mechanism for\nthe setting, then I see no reason why we cannot use that exact same\nsolution for the newly proposed GUC. So the required work to secure\npostgres will not be meaningfully harder by adding this GUC.\n\n\n", "msg_date": "Tue, 25 Jun 2024 13:45:06 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Tue, Jun 25, 2024 at 6:12 AM Alvaro Herrera <[email protected]> wrote:\n> Now, I'm not saying that this is an easy journey. But if we don't\n> start, we're not going to get there.\n\nI actually kind of agree with you. I think I've said something similar\nin a previous email to the list somewhere. But I don't agree that this\npatch should be burdened with taking the first step. We seem to often\nfind reasons why patches that packagers for prominent distributions\nare carrying shouldn't be put into core, and I think that's a bad\nhabit. They're not going to stop applying those packages because we\nrefuse to put suitable functionality in core; we're just creating a\nsituation where lots of people are running slightly patched versions\nof PostgreSQL instead of straight-up PostgreSQL. That's not improving\nanything. If we want to work on making the sorts of changes that\nyou're proposing, let's do it on a separate thread. 
It's not going to\nbe meaningfully harder to move in that direction after some patch like\nthis than it is today.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 25 Jun 2024 10:43:52 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Jun 24, 2024, at 5:32 PM, Jelte Fennema-Nio <[email protected]> wrote:\n\n> Still, for the sake of completeness it might make sense to support\n> this whole list in extension_destdir. (assuming it's easy to do)\n\nIt should be with the current patch, which just uses a prefix to paths in `pg_config`. So if SHAREDIR is set to /usr/share/postgresql/16 and extension_destdir is set to /mount/ext, then Postgres will look for files in /mount/ext/usr/share/postgresql/16. The same rule applies (or should apply) for all other pg_config directory configs and where the postmaster looks for specific files. And PGXS already supports installing files in these locations, thanks to its DESTDIR param.\n\n(I don’t know how it works on Windows, though.)\n\nThat said, this is very much a pattern designed for RPM and Debian package management patterns, and not for actually installing and managing extensions. And maybe that’s fine for now, as it can still be used to address the immutability problems descried in the original post in this thread.\n\nUltimately, I’d like to figure out a way to more tidily organize installed extension files, but I think that, too, might be a separate thread.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Tue, 25 Jun 2024 13:33:08 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Jun 25, 2024, at 10:43 AM, Robert Haas <[email protected]> wrote:\n\n> If we want to work on making the sorts of changes that\n> you're proposing, let's do it on a separate thread. It's not going to\n> be meaningfully harder to move in that direction after some patch like\n> this than it is today.\n\nI appreciate this separation of concerns, Robert.\n\nIn other news, here’s an updated patch that expands the documentation to record that the destination directory is a prefix, and full paths should be used under it. Also take the opportunity to document the PGXS DESTDIR variable as the thing to use to install files under the destination directory.\n\nIt still requires a server restart; I can change it back to superuser-only if that’s the consensus.\n\nFor those who prefer a GitHub patch review experience, see this PR:\n\n https://github.com/theory/postgres/pull/3/files\n\nBest,\n\nDavid", "msg_date": "Tue, 25 Jun 2024 18:31:46 -0400", "msg_from": "David E. Wheeler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Wed, 26 Jun 2024 at 00:32, David E. Wheeler <[email protected]> wrote:\n> In other news, here’s an updated patch that expands the documentation to record that the destination directory is a prefix, and full paths should be used under it. 
Also take the opportunity to document the PGXS DESTDIR variable as the thing to use to install files under the destination directory.\n\nDocs are much clearer now thanks.\n\n full = substitute_libpath_macro(name);\n+ /*\n+ * If extension_destdir is set, try to find the file there first\n+ */\n+ if (*extension_destdir != '\\0')\n+ {\n+ full2 = psprintf(\"%s%s\", extension_destdir, full);\n+ if (pg_file_exists(full2))\n+ {\n+ pfree(full);\n+ return full2;\n+ }\n+ pfree(full2);\n+ }\n\nI think this should be done differently. For two reasons:\n1. I don't think extension_destdir should be searched when $libdir is\nnot part of the name.\n2. find_in_dynamic_libpath currently doesn't use extension_destdir at\nall, so if there is no slash in the filename we do not search\nextension_destdir.\n\nI feel like changing the substitute_libpath_macro function a bit (or\nadding a new similar function) is probably the best way to achieve\nthat.\n\nWe should also check somewhere (probably GUC check hook) that\nextension_destdir is an absolute path.\n\n> It still requires a server restart;\n\nWhen reading the code I see no reason why this cannot be PGC_SIGHUP.\nEven though it's probably not needed to change on a running server, I\nthink it's better to allow that. Even just so people can disable it if\nnecessary for some reason without restarting the process.\n\n> I can change it back to superuser-only if that’s the consensus.\n\nIt still is GUC_SUPERUSER_ONLY, right?\n\n> For those who prefer a GitHub patch review experience, see this PR:\n>\n> https://github.com/theory/postgres/pull/3/files\n\nSidenote: The \"D\" link for each patch on cfbot[1] now gives a similar\nlink for all commitfest entries[2].\n\n[1]: http://cfbot.cputube.org/\n[2]: https://github.com/postgresql-cfbot/postgresql/compare/cf/4913~1...cf/4913\n\n\n", "msg_date": "Wed, 26 Jun 2024 11:30:39 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Tue, 25 Jun 2024 at 19:33, David E. Wheeler <[email protected]> wrote:\n>\n> On Jun 24, 2024, at 5:32 PM, Jelte Fennema-Nio <[email protected]> wrote:\n>\n> > Still, for the sake of completeness it might make sense to support\n> > this whole list in extension_destdir. (assuming it's easy to do)\n>\n> It should be with the current patch, which just uses a prefix to paths in `pg_config`.\n\nAh alright, I think it confused me because I never saw bindir being\nused. But as it turns out the current backend code never uses bindir.\nSo that makes sense. I guess to actually use the binaries from the\nextension_destdir/$BINDIR the operator needs to set PATH accordingly,\nor the extension needs to be changed to support extension_destdir.\n\nIt might be nice to add a helper function to find binaries in BINDIR,\nnow that the resolution logic is more complex. Even if postgres itself\ndoesn't use it. That would make it easier for extensions to be\nmodified to support extension_distdir. Something like\nfind_bindir_executable(char *name)\n\n\n", "msg_date": "Wed, 26 Jun 2024 11:36:27 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Jun 25, 2024, at 18:31, David E. 
Wheeler <[email protected]> wrote:\n\n> For those who prefer a GitHub patch review experience, see this PR:\n> \n> https://github.com/theory/postgres/pull/3/files\n\nRebased and restored PGC_SUSET in the attached v5 patch, plus noted the required privileges in the docs.\n\nBest,\n\nDavid", "msg_date": "Mon, 8 Jul 2024 12:01:57 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "Hi everyone,\n\nApologies for only starting to look into this now. Thanks, David, for\npushing this forward.\n\nI want to emphasize the importance of this patch for the broader adoption\nof extensions in immutable container environments, such as those used by\nthe CloudNativePG operator in Kubernetes.\n\nTo provide some context, one of the key principles of CloudNativePG is that\ncontainers, once started, cannot be modified—this includes the installation\nof Postgres extensions and their libraries. This restriction prevents us\nfrom adding extensions on the fly, requiring them to be included in the\nmain PostgreSQL operand image. As a result, users who need specific\nextensions must build custom images through automated pipelines (see:\nhttps://cloudnative-pg.io/blog/creating-container-images/).\n\nWe’ve been considering ways to improve this process for some time. The\ndirection we're exploring involves mounting an ephemeral volume that\ncontains the necessary extensions (namely $sharedir and $pkglibdir from\npg_config). These volumes would be created and populated with the required\nextensions when the container starts and destroyed when it shuts down. To\nmake this work, each extension must be independently packaged as a\ncontainer image containing the appropriate files for a specific extension\nversion, tailored to the architecture, distribution, OS version, and\nPostgres version.\n\nI’m committed to thoroughly reviewing this patch, testing it with\nCloudNativePG and a few extensions, and providing feedback as soon as\npossible.\n\nBest,\nGabriele\n\nOn Mon, 8 Jul 2024 at 18:02, David E. Wheeler <[email protected]> wrote:\n\n> On Jun 25, 2024, at 18:31, David E. Wheeler <[email protected]> wrote:\n>\n> > For those who prefer a GitHub patch review experience, see this PR:\n> >\n> > https://github.com/theory/postgres/pull/3/files\n>\n> Rebased and restored PGC_SUSET in the attached v5 patch, plus noted the\n> required privileges in the docs.\n>\n> Best,\n>\n> David\n>\n>\n>\n>\n\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com\n\nHi everyone,Apologies for only starting to look into this now. Thanks, David, for pushing this forward.I want to emphasize the importance of this patch for the broader adoption of extensions in immutable container environments, such as those used by the CloudNativePG operator in Kubernetes.To provide some context, one of the key principles of CloudNativePG is that containers, once started, cannot be modified—this includes the installation of Postgres extensions and their libraries. This restriction prevents us from adding extensions on the fly, requiring them to be included in the main PostgreSQL operand image. As a result, users who need specific extensions must build custom images through automated pipelines (see: https://cloudnative-pg.io/blog/creating-container-images/).We’ve been considering ways to improve this process for some time. 
The direction we're exploring involves mounting an ephemeral volume that contains the necessary extensions (namely $sharedir and $pkglibdir from pg_config). These volumes would be created and populated with the required extensions when the container starts and destroyed when it shuts down. To make this work, each extension must be independently packaged as a container image containing the appropriate files for a specific extension version, tailored to the architecture, distribution, OS version, and Postgres version.I’m committed to thoroughly reviewing this patch, testing it with CloudNativePG and a few extensions, and providing feedback as soon as possible.Best,GabrieleOn Mon, 8 Jul 2024 at 18:02, David E. Wheeler <[email protected]> wrote:On Jun 25, 2024, at 18:31, David E. Wheeler <[email protected]> wrote:\n\n> For those who prefer a GitHub patch review experience, see this PR:\n> \n>  https://github.com/theory/postgres/pull/3/files\n\nRebased and restored PGC_SUSET in the attached v5 patch, plus noted the required privileges in the docs.\n\nBest,\n\nDavid\n\n\n\n-- Gabriele BartoliniVice President, Cloud Native at EDBenterprisedb.com", "msg_date": "Wed, 21 Aug 2024 21:59:46 +0200", "msg_from": "Gabriele Bartolini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Tue, Jun 25, 2024 at 12:12:33PM +0200, Alvaro Herrera wrote:\n> archive_command and so on: we could disable these too. Nathan did some\n> work to implement those using dynamic libraries, so it shouldn't be too\n> much of a loss; anything that is done with a shell script can also be\n> done with a small library. Those libraries can be made safe.\n> If there are other ways to invoke shell commands from GUCs, let's add\n> the ability to use libraries for those too.\n\nSorry, I just noticed this message. I recently withdrew my patch set [0]\nfor using a library instead of shell commands for restore_command,\narchive_cleanup_command, and recovery_end_command, as it had sat idle for a\nvery long time. If/when there's interest, I'd be happy to pick it up\nagain.\n\n[0] https://postgr.es/m/ZkwLqichtySV5kF3%40nathan-air.lan\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 21 Aug 2024 16:03:31 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Thu, 22 Aug 2024 at 08:00, Gabriele Bartolini\n<[email protected]> wrote:\n>\n> Hi everyone,\n>\n> Apologies for only starting to look into this now. Thanks, David, for pushing this forward.\n\n\n 100%. I've wanted this for some time but never had time to cook up a patch.\n\n> I want to emphasize the importance of this patch for the broader adoption of extensions in immutable container environments, such as those used by the CloudNativePG operator in Kubernetes.\n\n\nIt's also very relevant for local development and testing.\n\nRight now postgres makes it impossible to locally compile and install\nan extension for a distro-packaged postgres (whether from upstream\nrepos or PGDG repos) without dirtying the distro-managed filesystem\nsubtrees with local changes under /usr etc, because it cannot be\nconfigured to look for locally installed extensions on non-default\npaths.\n\n> To provide some context, one of the key principles of CloudNativePG is that containers, once started, cannot be modified—this includes the installation of Postgres extensions and their libraries. 
This restriction prevents us from adding extensions on the fly, requiring them to be included in the main PostgreSQL operand image. As a result, users who need specific extensions must build custom images through automated pipelines (see: https://cloudnative-pg.io/blog/creating-container-images/).\n\n\nIt may be possible to weaken this restriction somewhat thanks to the\nupcoming https://kubernetes.io/blog/2024/08/16/kubernetes-1-31-image-volume-source/\nfeature that permits additional OCI images to be mounted as read-only\nvolumes on a workload. This would still only permit mounting at\nPod-creation time, not runtime mounting and unmonuting, but means the\nbase postgres image could be supplemented by mounting additional\nimages for extensions.\n\nFor example, one might mount image \"postgis-vX.Y.Z\" image onto base\nimage \"postgresql-16\" if support for PostGIS is desired, without then\nhaving to bake every possible extension anyone might ever want into\nthe base image. This solves all sorts of messy issues with upgrades\nand new version releases.\n\nBut for it to work, it must be possible to tell postgres to look in\n_multiple places_ for extension .sql scripts and control files. This\nis presently possible for modules (dynamic libraries, .so / .dylib /\n.dll) but without a way to also configure the path for extensions it's\nof very limited utility.\n\n> We’ve been considering ways to improve this process for some time. The direction we're exploring involves mounting an ephemeral volume that contains the necessary extensions (namely $sharedir and $pkglibdir from pg_config). These volumes would be created and populated with the required extensions when the container starts and destroyed when it shuts down. To make this work, each extension must be independently packaged as a container image containing the appropriate files for a specific extension version, tailored to the architecture, distribution, OS version, and Postgres version.\n\n\nRight. And there might be more than one of them.\n\nSo IMO this should be a _path_ to search for extension control files\nand SQL scripts.\n\nIf the current built-in default extension dir was exposed as a var\n$extdir like we do for $libdir, this might look something like this\nfor local development and testing while working with a packaged\npostgres build:\n\n SET extension_search_path = $extsdir, /opt/myapp/extensions,\n/usr/local/postgres/my-custom-extension/extensions;\n SET dynamic_library_path = $libdir, /opt/myapp/lib,\n/usr/local/postgres/my-custom-extension/lib\n\nor in the container extensions case, something like:\n\n SET extension_search_path = $extsdir,\n/mnt/extensions/pg16/postgis-vX.Y/extensions,\n/mnt/extensions/pg16/gosuperfast/extensions;\n SET dynamic_library_path = $libdir,\n/mnt/extensions/pg16/postgis-vX.Y/lib,\n/mnt/extensions/pg16/gosuperfast/lib;\n\nFor safety, it might make sense to impose the restriction that if an\nextension control file is found in a given directory, SQL scripts will\nalso only be looked for in that same directory. That way there's no\nchance of accidentally mixing and matching SQL scripts from different\nversions of an extension if it appears twice on the extension search\npath in different places. 
The rule for loading SQL scripts would be:\n\n* locate first directory on path contianing matching extension control file\n* use this directory as the extension directory for all subsequent SQL\nscript loading and running actions\n\n--\nCraig Ringer\nEnterpriseDB\n\n\n", "msg_date": "Thu, 22 Aug 2024 11:07:17 +1200", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Thu, 22 Aug 2024 at 01:08, Craig Ringer\n<[email protected]> wrote:\n> SET extension_search_path = $extsdir,\n> /mnt/extensions/pg16/postgis-vX.Y/extensions,\n> /mnt/extensions/pg16/gosuperfast/extensions;\n\nIt looks like you want one directory per extension, so that list would\nget pretty long if you have multiple extensions. Maybe (as a follow up\nchange), we should start to support a * as a wildcard in both of these\nGUCs. So you could say:\n\nSET extension_search_path = /mnt/extensions/pg16/*\n\nTo mean effectively the same as you're describing above.\n\n\n", "msg_date": "Thu, 22 Aug 2024 09:31:54 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "Hi Craig,\n\nOn Thu, 22 Aug 2024 at 01:07, Craig Ringer <[email protected]>\nwrote:\n\n> It's also very relevant for local development and testing.\n>\n\nYep, which is the original goal of Christoph IIRC.\n\n\n> It may be possible to weaken this restriction somewhat thanks to the\n> upcoming\n> https://kubernetes.io/blog/2024/08/16/kubernetes-1-31-image-volume-source/\n> feature that permits additional OCI images to be mounted as read-only\n> volumes on a workload. This would still only permit mounting at\n> Pod-creation time, not runtime mounting and unmonuting, but means the\n> base postgres image could be supplemented by mounting additional\n> images for extensions.\n>\n\nI'm really excited about that feature, but it's still in the alpha stage.\nHowever, I don't anticipate any issues for the future general availability\n(GA) release. Regardless, we may need to consider a temporary solution that\nis compatible with existing Kubernetes and possibly Postgres versions (but\nthat's beyond the purpose of this thread).\n\nFor example, one might mount image \"postgis-vX.Y.Z\" image onto base\n> image \"postgresql-16\" if support for PostGIS is desired, without then\n> having to bake every possible extension anyone might ever want into\n> the base image. This solves all sorts of messy issues with upgrades\n> and new version releases.\n>\n\nYep.\n\n\n> But for it to work, it must be possible to tell postgres to look in\n> _multiple places_ for extension .sql scripts and control files. This\n> is presently possible for modules (dynamic libraries, .so / .dylib /\n> .dll) but without a way to also configure the path for extensions it's\n> of very limited utility.\n>\n\nAgree.\n\n\n> So IMO this should be a _path_ to search for extension control files\n> and SQL scripts.\n>\n\nI like this. I also prefer the name `extension_search_path`.\n\nFor safety, it might make sense to impose the restriction that if an\n> extension control file is found in a given directory, SQL scripts will\n> also only be looked for in that same directory. That way there's no\n> chance of accidentally mixing and matching SQL scripts from different\n> versions of an extension if it appears twice on the extension search\n> path in different places. 
The rule for loading SQL scripts would be:\n>\n> * locate first directory on path contianing matching extension control file\n> * use this directory as the extension directory for all subsequent SQL\n> script loading and running actions\n>\n\nIt could work, but it requires some prototyping and exploration. I'm\nwilling to participate and use CloudNativePG as a test bed.\n\nCheers,\nGabriele\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com\n\nHi Craig,On Thu, 22 Aug 2024 at 01:07, Craig Ringer <[email protected]> wrote:It's also very relevant for local development and testing.Yep, which is the original goal of Christoph IIRC. It may be possible to weaken this restriction somewhat thanks to the\nupcoming https://kubernetes.io/blog/2024/08/16/kubernetes-1-31-image-volume-source/\nfeature that permits additional OCI images to be mounted as read-only\nvolumes on a workload. This would still only permit mounting at\nPod-creation time, not runtime mounting and unmonuting, but means the\nbase postgres image could be supplemented by mounting additional\nimages for extensions.I'm really excited about that feature, but it's still in the alpha stage. However, I don't anticipate any issues for the future general availability (GA) release. Regardless, we may need to consider a temporary solution that is compatible with existing Kubernetes and possibly Postgres versions (but that's beyond the purpose of this thread).For example, one might mount image \"postgis-vX.Y.Z\" image onto base\nimage \"postgresql-16\" if support for PostGIS is desired, without then\nhaving to bake every possible extension anyone might ever want into\nthe base image. This solves all sorts of messy issues with upgrades\nand new version releases.Yep. But for it to work, it must be possible to tell postgres to look in\n_multiple places_ for extension .sql scripts and control files. This\nis presently possible for modules (dynamic libraries, .so / .dylib /\n.dll) but without a way to also configure the path for extensions it's\nof very limited utility.Agree. So IMO this should be a _path_ to search for extension control files\nand SQL scripts.I like this. I also prefer the name `extension_search_path`. For safety, it might make sense to impose the restriction that if an\nextension control file is found in a given directory, SQL scripts will\nalso only be looked for in that same directory. That way there's no\nchance of accidentally mixing and matching SQL scripts from different\nversions of an extension if it appears twice on the extension search\npath in different places. The rule for loading SQL scripts would be:\n\n* locate first directory on path contianing matching extension control file\n* use this directory as the extension directory for all subsequent SQL\nscript loading and running actionsIt could work, but it requires some prototyping and exploration. I'm willing to participate and use CloudNativePG as a test bed.Cheers,Gabriele-- Gabriele BartoliniVice President, Cloud Native at EDBenterprisedb.com", "msg_date": "Thu, 22 Aug 2024 10:58:45 +0200", "msg_from": "Gabriele Bartolini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "Hi Jelte,\n\nOn Thu, 22 Aug 2024 at 09:32, Jelte Fennema-Nio <[email protected]> wrote:\n\n> It looks like you want one directory per extension, so that list would\n> get pretty long if you have multiple extensions. 
Maybe (as a follow up\n> change), we should start to support a * as a wildcard in both of these\n> GUCs. So you could say:\n>\n> SET extension_search_path = /mnt/extensions/pg16/*\n>\n> To mean effectively the same as you're describing above.\n>\n\nThat'd be great. +1.\n\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com\n\nHi Jelte,On Thu, 22 Aug 2024 at 09:32, Jelte Fennema-Nio <[email protected]> wrote:It looks like you want one directory per extension, so that list would\nget pretty long if you have multiple extensions. Maybe (as a follow up\nchange), we should start to support a * as a wildcard in both of these\nGUCs. So you could say:\n\nSET extension_search_path = /mnt/extensions/pg16/*\n\nTo mean effectively the same as you're describing above.\nThat'd be great. +1.-- Gabriele BartoliniVice President, Cloud Native at EDBenterprisedb.com", "msg_date": "Thu, 22 Aug 2024 10:59:47 +0200", "msg_from": "Gabriele Bartolini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Thu, 22 Aug 2024 at 21:00, Gabriele Bartolini\n<[email protected]> wrote:\n> On Thu, 22 Aug 2024 at 09:32, Jelte Fennema-Nio <[email protected]> wrote:\n>> SET extension_search_path = /mnt/extensions/pg16/*\n>\n> That'd be great. +1.\n\nAgreed, that'd be handy, but not worth blocking the underlying capability for.\n\nExcept possibly to the degree that the feature should reserve wildcard\ncharacters and require them to be escaped if they appear on a path, so\nthere's no BC break if it's added later.\n\nOn Thu, 22 Aug 2024 at 21:00, Gabriele Bartolini\n<[email protected]> wrote:\n>\n> Hi Jelte,\n>\n> On Thu, 22 Aug 2024 at 09:32, Jelte Fennema-Nio <[email protected]> wrote:\n>>\n>> It looks like you want one directory per extension, so that list would\n>> get pretty long if you have multiple extensions. Maybe (as a follow up\n>> change), we should start to support a * as a wildcard in both of these\n>> GUCs. So you could say:\n>>\n>> SET extension_search_path = /mnt/extensions/pg16/*\n>>\n>> To mean effectively the same as you're describing above.\n>\n>\n> That'd be great. +1.\n>\n> --\n> Gabriele Bartolini\n> Vice President, Cloud Native at EDB\n> enterprisedb.com\n\n\n", "msg_date": "Fri, 23 Aug 2024 10:14:20 +1200", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Fri, 23 Aug 2024 at 10:14, Craig Ringer\n<[email protected]> wrote:\n> On Thu, 22 Aug 2024 at 21:00, Gabriele Bartolini\n> <[email protected]> wrote:\n> > On Thu, 22 Aug 2024 at 09:32, Jelte Fennema-Nio <[email protected]> wrote:\n> >> SET extension_search_path = /mnt/extensions/pg16/*\n> >\n> > That'd be great. +1.\n>\n> Agreed, that'd be handy, but not worth blocking the underlying capability for.\n>\n> Except possibly to the degree that the feature should reserve wildcard\n> characters and require them to be escaped if they appear on a path, so\n> there's no BC break if it's added later.\n\n... though on second thoughts, it might make more sense to just\nrecursively search directories found under each path entry. 
Rules like\n'search if a redundant trailing / is present' can be an option.\n\nThat way there's no icky path escaping needed for normal configuration.\n\n\n", "msg_date": "Fri, 23 Aug 2024 10:16:00 +1200", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "Hi Hackers,\n\nApologies for the delay in reply; I’ve been at the XOXO Festival and almost completely unplugged for the first time in ages. Happy to see this thread coming alive, though. Thank you Gabriele, Craig, and Jelte!\n\nOn Aug 21, 2024, at 19:07, Craig Ringer <[email protected]> wrote:\n\n> So IMO this should be a _path_ to search for extension control files\n> and SQL scripts.\n> \n> If the current built-in default extension dir was exposed as a var\n> $extdir like we do for $libdir, this might look something like this\n> for local development and testing while working with a packaged\n> postgres build:\n> \n> SET extension_search_path = $extsdir, /opt/myapp/extensions,\n> /usr/local/postgres/my-custom-extension/extensions;\n> SET dynamic_library_path = $libdir, /opt/myapp/lib,\n> /usr/local/postgres/my-custom-extension/lib\n\nI would very much like something like this, but I’m not sure how feasible it is for a few reasons. The first, and most important, is that extensions are not limited to just a control file and SQL file. They also very often include:\n\n* one or more shared library files\n* documentation files\n* binary files\n\nAnd maybe more? How many of these directories might an extension install files into:\n\n✦ ❯ pg_config | grep DIR | awk '{print $1}'\nBINDIR\nDOCDIR\nHTMLDIR\nINCLUDEDIR\nPKGINCLUDEDIR\nINCLUDEDIR-SERVER\nLIBDIR\nPKGLIBDIR\nLOCALEDIR\nMANDIR\nSHAREDIR\nSYSCONFDIR\n\nI would assume BINDIR, DOCDIR, HTMLDIR, PKGLIBDIR, MANDIR, SHAREDIR, and perhaps LOCALEDIR.\n\nBut even if it’s just one or two, the only proper way an extension directory would work, IME, is to define a directory-based structure for extensions, where every file for an extension is in a directory named for the extension, and subdirectories are defined for each of the above requisite file types. Something like:\n\nextension_name\n├── control.ini\n├── bin\n├── doc\n├── html\n├── lib\n├── local\n├── man\n└── share\n\nThis would allow multiple paths to work and keep all the files for an extension bundled together. It could also potentially allow for multiple versions of an extension to be installed at once, if we required the version to be part of the directory name.\n\nI think this would be a much nicer layout for packaging, installing, and managing extensions versus the current method of strewing files around to a slew of different directories. But it would come at some cost, in terms of backward with the existing layout (or migration to it), significant modification of the server to use the new layout (and extension_search_path), and other annoyances like PATH and MANPATH management.\n\nLong term I think it would be worthwhile, but the current patch feels like a decent interim step we could live with, solving most of the integration problems (immutable servers, packaging testing, etc.) at the cost of a slightly unexpected directory layout. 
What I mean by that is that the current patch is pretty much just using extension_destdir as a prefix to all of those directories from pg_config, so they never have to change, but it does mean that you end up installing extensions in something like\n\n/mnt/extensions/pg16/usr/share/postgresql/16\n/mnt/extensions/pg16/usr/include/postgresql\n\netc.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Mon, 26 Aug 2024 10:06:59 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Tue, 27 Aug 2024 at 02:07, David E. Wheeler <[email protected]> wrote:\n> On Aug 21, 2024, at 19:07, Craig Ringer <[email protected]> wrote:\n\n> But even if it’s just one or two, the only proper way an extension directory would work, IME, is to define a directory-based structure for extensions, where every file for an extension is in a directory named for the extension, and subdirectories are defined for each of the above requisite file types.\n> [...]\n> I think this would be a much nicer layout for packaging, installing, and managing extensions versus the current method of strewing files around to a slew of different directories.\n\nThis looks like a good suggestion to me, it would make the packaging,\ndistribution and integration of 3rd party extensions significantly\neasier without any obvious large or long term cost.\n\n> But it would come at some cost, in terms of backward with the existing layout (or migration to it), significant modification of the server to use the new layout (and extension_search_path), and other annoyances like PATH and MANPATH management.\n\nAlso PGXS, the windows extension build support, and 3rd party cmake\nbuilds etc. But not by the looks a drastic change.\n\n> Long term I think it would be worthwhile, but the current patch feels like a decent interim step we could live with, solving most of the integration problems (immutable servers, packaging testing, etc.) at the cost of a slightly unexpected directory layout. What I mean by that is that the current patch is pretty much just using extension_destdir as a prefix to all of those directories from pg_config, so they never have to change, but it does mean that you end up installing extensions in something like:\n>\n> /mnt/extensions/pg16/usr/share/postgresql/16\n> /mnt/extensions/pg16/usr/include/postgresql\n\nMy only real concern with the current patch is that it limits\nsearching for extensions to one additional configurable location,\nwhich is inconsistent with how things like the dynamic_library_path\nworks. Once in, it'll be difficult to change or extend for BC, and if\nsomeone wants to add a search path capability it'll break existing\nconfigurations.\n\nWould it be feasible to define its configuration syntax as accepting a\nlist of paths, but only implement the semantics for single-entry lists\nand ERROR on multiple paths? That way it could be extended w/o\nbreaking existing configurations later.\n\nWith that said, I'm not the one doing the work at the moment, and the\nfunctionality would definitely be helpful. 
If there's agreement on\nsupporting a search-path or recursing into subdirectories I'd be\nwilling to have a go at it, but I'm a bit stale on Pg's codebase now\nso I'd want to be fairly confident the work wouldn't just be thrown\nout.\n\n--\nCraig Ringer\n\n\n", "msg_date": "Tue, 27 Aug 2024 09:35:43 +1200", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "Hi David,\n\nThanks for your email.\n\nOn Mon, 26 Aug 2024 at 16:07, David E. Wheeler <[email protected]>\nwrote:\n\n> I would assume BINDIR, DOCDIR, HTMLDIR, PKGLIBDIR, MANDIR, SHAREDIR, and\n> perhaps LOCALEDIR.\n>\n> But even if it’s just one or two, the only proper way an extension\n> directory would work, IME, is to define a directory-based structure for\n> extensions, where every file for an extension is in a directory named for\n> the extension, and subdirectories are defined for each of the above\n> requisite file types. Something like:\n>\n> extension_name\n> ├── control.ini\n> ├── bin\n> ├── doc\n> ├── html\n> ├── lib\n> ├── local\n> ├── man\n> └── share\n>\n\nI'm really glad you proposed this publicly. I reached the same conclusion\nthe other day when digging deeper into the problem with a few folks from\nCloudNativePG. Setting aside multi-arch images for now, if we could\nreorganize the content of a single image (identified by OS distro,\nPostgreSQL major version, and extension version) with a top-level directory\nstructure as you described, we could easily mount each image as a separate\nvolume.\n\nThe extension image could follow a naming convention like this (order can\nbe adjusted): `<extension name>-<pg major>-<extension\nversion>-<distro>(-<seq>)`. For example, `pgvector-16-0.7.4-bookworm-1`\nwould represent the first image built in a repository for pgvector 0.7.4\nfor PostgreSQL 16 on Debian Bookworm. If multi-arch images aren't desired,\nwe could incorporate the architecture somewhere in the naming convention.\n\nThis would allow multiple paths to work and keep all the files for an\n> extension bundled together. It could also potentially allow for multiple\n> versions of an extension to be installed at once, if we required the\n> version to be part of the directory name.\n>\n\nIf we wanted to install multiple versions of an extension, we could mount\nthem in different directories, with the version included in the folder\nname—for example, `pgvector-0.7.4` instead of just `pgvector`. However, I'm\na bit rusty with the extensions framework, so I'll need to check if this\napproach is feasible and makes sense.\n\nThanks,\nGabriele\n-- \nGabriele Bartolini\nVice President, Cloud Native at EDB\nenterprisedb.com\n\nHi David,Thanks for your email.On Mon, 26 Aug 2024 at 16:07, David E. Wheeler <[email protected]> wrote:I would assume BINDIR, DOCDIR, HTMLDIR, PKGLIBDIR, MANDIR, SHAREDIR, and perhaps LOCALEDIR.\n\nBut even if it’s just one or two, the only proper way an extension directory would work, IME, is to define a directory-based structure for extensions, where every file for an extension is in a directory named for the extension, and subdirectories are defined for each of the above requisite file types. Something like:\n\nextension_name\n├── control.ini\n├── bin\n├── doc\n├── html\n├── lib\n├── local\n├── man\n└── shareI'm really glad you proposed this publicly. I reached the same conclusion the other day when digging deeper into the problem with a few folks from CloudNativePG. 
Setting aside multi-arch images for now, if we could reorganize the content of a single image (identified by OS distro, PostgreSQL major version, and extension version) with a top-level directory structure as you described, we could easily mount each image as a separate volume. The extension image could follow a naming convention like this (order can be adjusted): `<extension name>-<pg major>-<extension version>-<distro>(-<seq>)`. For example, `pgvector-16-0.7.4-bookworm-1` would represent the first image built in a repository for pgvector 0.7.4 for PostgreSQL 16 on Debian Bookworm. If multi-arch images aren't desired, we could incorporate the architecture somewhere in the naming convention.This would allow multiple paths to work and keep all the files for an extension bundled together. It could also potentially allow for multiple versions of an extension to be installed at once, if we required the version to be part of the directory name.If we wanted to install multiple versions of an extension, we could mount them in different directories, with the version included in the folder name—for example, `pgvector-0.7.4` instead of just `pgvector`. However, I'm a bit rusty with the extensions framework, so I'll need to check if this approach is feasible and makes sense.Thanks,Gabriele-- Gabriele BartoliniVice President, Cloud Native at EDBenterprisedb.com", "msg_date": "Tue, 27 Aug 2024 10:56:44 +0200", "msg_from": "Gabriele Bartolini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Aug 27, 2024, at 04:56, Gabriele Bartolini <[email protected]> wrote:\n\n> The extension image could follow a naming convention like this (order can be adjusted): `<extension name>-<pg major>-<extension version>-<distro>(-<seq>)`. For example, `pgvector-16-0.7.4-bookworm-1` would represent the first image built in a repository for pgvector 0.7.4 for PostgreSQL 16 on Debian Bookworm. If multi-arch images aren't desired, we could incorporate the architecture somewhere in the naming convention.\n\nWell now you’re just describing the binary distribution format RFC[1] (POC[2]) and multi-platform OCI distribution POC[3] :-)\n\n> If we wanted to install multiple versions of an extension, we could mount them in different directories, with the version included in the folder name—for example, `pgvector-0.7.4` instead of just `pgvector`. However, I'm a bit rusty with the extensions framework, so I'll need to check if this approach is feasible and makes sense.\n\nRight, if we decided to adopt this proposal, it might make sense to include the “default version” as part of the directory name. But there’s quite a lot of work between here and there.\n\nBest,\n\nDavid\n\n[1]: https://github.com/pgxn/rfcs/pull/2\n[2]: https://justatheory.com/2024/06/trunk-poc/\n[3]: https://justatheory.com/2024/06/trunk-oci-poc/\n\n\n\n\n", "msg_date": "Tue, 27 Aug 2024 11:19:48 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Aug 26, 2024, at 17:35, Craig Ringer <[email protected]> wrote:\n\n> This looks like a good suggestion to me, it would make the packaging,\n> distribution and integration of 3rd party extensions significantly\n> easier without any obvious large or long term cost.\n\nYes!\n\n> Also PGXS, the windows extension build support, and 3rd party cmake\n> builds etc. But not by the looks a drastic change.\n\nRight. 
ISTM it could complicate PGXS quite a bit. If we set, say, \n\nSET extension_search_path = $extsdir, /mnt/extensions/pg16, /mnt/extensions/pg16/gosuperfast/extensions;\n\nWhat should be the output of `pg_config --sharedir`?\n\n> My only real concern with the current patch is that it limits\n> searching for extensions to one additional configurable location,\n> which is inconsistent with how things like the dynamic_library_path\n> works. Once in, it'll be difficult to change or extend for BC, and if\n> someone wants to add a search path capability it'll break existing\n> configurations.\n\nAgreed.\n\n> Would it be feasible to define its configuration syntax as accepting a\n> list of paths, but only implement the semantics for single-entry lists\n> and ERROR on multiple paths? That way it could be extended w/o\n> breaking existing configurations later.\n\nI imagine it’s a simple matter of programming :-) But that leaves the issue of directory organization. The current patch is just a prefix for various PGXS/pg_config directories; the longer-term proposal I’ve made here is not a prefix for sharedir, mandir, etc., but a directory that contains directories named for extensions. So even if we were to take this approach, the directory structure would vary.\n\nI suspect we’d have to name it differently and support both long-term. That, too me, is the main issue with this patch.\n\nOTOH, we have this patch now, and this other stuff is just a proposal. Actual code trumps ideas in my mind.\n\n> With that said, I'm not the one doing the work at the moment, and the\n> functionality would definitely be helpful. If there's agreement on\n> supporting a search-path or recursing into subdirectories I'd be\n> willing to have a go at it, but I'm a bit stale on Pg's codebase now\n> so I'd want to be fairly confident the work wouldn't just be thrown\n> out.\n\nI think we should get some clarity on the proposal, and then consensus, as you say. I say “get some clarity” because my proposal doesn’t require recursing, and I’m not sure why it’d be needed.\n\nBest,\n\nDavid\n\n\n\n\n", "msg_date": "Tue, 27 Aug 2024 11:26:15 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Wed, 28 Aug 2024 at 03:26, David E. Wheeler <[email protected]> wrote:\n> Right. ISTM it could complicate PGXS quite a bit. If we set, say,\n>\n> SET extension_search_path = $extsdir, /mnt/extensions/pg16, /mnt/extensions/pg16/gosuperfast/extensions;\n>\n> What should be the output of `pg_config --sharedir`?\n\n`pg_config` only cares about compile-time settings, so I would not\nexpect its output to change.\n\nI suspect we'd have to add PGXS extension-path awareness if going for\nper-extension subdir support. I'm not sure it makes sense to teach\n`pg_config` about this, since it'd need to have a different mode like\n\n pg_config --extension myextname --extension-sharedir\n\nsince the extension's \"sharedir\" is\n$basedir/extensions/myextension/share or whatever.\n\nSupporting this looks to be a bit intrusive in the makefiles,\nrequiring them to differentiate between \"share dir for extensions\" and\n\"share dir for !extensions\", etc. 
I'm not immediately sure if it can\nbe done in a way that transparently converts unmodified extension PGXS\nmakefiles to target the new paths; it might require an additional\ndefine, or use of new variables and an ifdef block to add\nbackwards-compat to the extension makefile for building on older\npostgres.\n\n> But that leaves the issue of directory organization. The current patch is just a prefix for various PGXS/pg_config directories; the longer-term proposal I’ve made here is not a prefix for sharedir, mandir, etc., but a directory that contains directories named for extensions. So even if we were to take this approach, the directory structure would vary.\n\nRight. The proposed structure is rather a bigger change than I was\nthinking when I suggested supporting an extension search path not just\na single extra path. But it's also a cleaner proposal; the\nper-extension directory would make it easier to ensure that the\nextension control file, sql scripts, and dynamic library all match the\nsame extension and version if multiple ones are on the path. Which is\ndesirable when doing things like testing a new version of an in-core\nextension.\n\n> OTOH, we have this patch now, and this other stuff is just a proposal. Actual code trumps ideas in my mind.\n\nRight. And I've been on the receiving end of having a small, focused\npatch derailed by others jumping in and scope-exploding it into\nsomething completely different to solve a much wider but related\nproblem.\n\nI'm definitely not trying to stand in the way of progress with this; I\njust want to make sure that it doesn't paint us into a corner that\nprevents a more general solution from being adopted later. That's why\nI'm suggesting making the config a multi-value string (list of paths)\nand raising a runtime \"ERROR: searching multiple paths for extensions\nnot yet supported\" or something if >1 path configured.\n\nIf that doesn't work, no problem.\n\n> I think we should get some clarity on the proposal, and then consensus, as you say. I say “get some clarity” because my proposal doesn’t require recursing, and I’m not sure why it’d be needed.\n\n From what you and Gabriele are discussing (which I agree with), it wouldn't.\n\n\n", "msg_date": "Wed, 28 Aug 2024 14:24:45 +1200", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RFC: Additional Directory for Extensions" }, { "msg_contents": "On Aug 27, 2024, at 22:24, Craig Ringer <[email protected]> wrote:\n\n> `pg_config` only cares about compile-time settings, so I would not\n> expect its output to change.\n\nRight, of course that’s its original purpose, but extensions depend on it to determine where to install extensions. Not just PGXS, but also pgrx and various Makefile customizations I’ve seen in the wild.\n\n> I suspect we'd have to add PGXS extension-path awareness if going for\n> per-extension subdir support. I'm not sure it makes sense to teach\n> `pg_config` about this, since it'd need to have a different mode like\n> \n> pg_config --extension myextname --extension-sharedir\n> \n> since the extension's \"sharedir\" is\n> $basedir/extensions/myextension/share or whatever.\n\nRight. PGXS would just need to know where to put the directory for an extension. 
There should be a default for the project, and then it can be overridden with something like DESTDIR (but without full paths under that prefix).\n\n> Supporting this looks to be a bit intrusive in the makefiles,\n> requiring them to differentiate between \"share dir for extensions\" and\n> \"share dir for !extensions\", etc. I'm not immediately sure if it can\n> be done in a way that transparently converts unmodified extension PGXS\n> makefiles to target the new paths; it might require an additional\n> define, or use of new variables and an ifdef block to add\n> backwards-compat to the extension makefile for building on older\n> postgres.\n\nYeah, might just have to be an entirely new thing, though it sure would be nice for existing PGXS-using Makefiles to do the right thing. Maybe for the new version of the server with the proposed new pattern it would dispatch to the new thing somehow without modifying all the rest of its logic.\n\n> Right. The proposed structure is rather a bigger change than I was\n> thinking when I suggested supporting an extension search path not just\n> a single extra path. But it's also a cleaner proposal; the\n> per-extension directory would make it easier to ensure that the\n> extension control file, sql scripts, and dynamic library all match the\n> same extension and version if multiple ones are on the path. Which is\n> desirable when doing things like testing a new version of an in-core\n> extension.\n\n💯\n\n> Right. And I've been on the receiving end of having a small, focused\n> patch derailed by others jumping in and scope-exploding it into\n> something completely different to solve a much wider but related\n> problem.\n\nI’m not complaining, I would definitely prefer to see consensus on a cleaner proposal along the lines we’ve discussed and a commitment to time from parties able to get it done in time for v18. I’m willing to help where I can with my baby C! Failing that, we can fall back on the destdir patch.\n\n> I'm definitely not trying to stand in the way of progress with this; I\n> just want to make sure that it doesn't paint us into a corner that\n> prevents a more general solution from being adopted later. That's why\n> I'm suggesting making the config a multi-value string (list of paths)\n> and raising a runtime \"ERROR: searching multiple paths for extensions\n> not yet supported\" or something if >1 path configured.\n> \n> If that doesn't work, no problem.\n\nI think the logic would have to be different, so they’d be different GUCs with their own semantics. But if the core team and committers are on board with the general idea of search paths and per-extension directory organization, it would be best to avoid getting stuck with maintaining the current patch’s GUC.\n\nOTOH, we could get it committed now and revert it later if we get the better thing done and committed.\n\n>> I think we should get some clarity on the proposal, and then consensus, as you say. I say “get some clarity” because my proposal doesn’t require recursing, and I’m not sure why it’d be needed.\n> \n> From what you and Gabriele are discussing (which I agree with), it wouldn’t.\n\nAh, great.\n\nI’ll try to put some thought into a more formal proposal in a new thread next week. Unless your Gabriele beats me to it 😂.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Thu, 29 Aug 2024 11:55:27 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RFC: Additional Directory for Extensions" } ]
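
To make the lookup rule discussed in the thread above concrete -- the first directory on the search path that contains a matching control file wins, and that directory is then used for all of the extension's SQL scripts -- a minimal standalone sketch might look like the following. The function name, the fixed-size buffer, and the example paths are illustrative assumptions only and are not taken from any posted patch.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>

/*
 * Return a malloc'd copy of the first search-path entry containing
 * "<extname>.control", or NULL if none does.  All later SQL-script
 * lookups for this extension would be confined to that directory.
 */
static char *
find_extension_directory(const char *extname, char **search_path, int npaths)
{
	for (int i = 0; i < npaths; i++)
	{
		char		control[4096];
		struct stat st;

		snprintf(control, sizeof(control), "%s/%s.control",
				 search_path[i], extname);
		if (stat(control, &st) == 0 && S_ISREG(st.st_mode))
			return strdup(search_path[i]);	/* first match wins */
	}
	return NULL;				/* caller reports "extension not found" */
}

int
main(void)
{
	char	   *path[] = {
		"/usr/share/postgresql/extension",
		"/mnt/extensions/pg16/postgis/extension"
	};
	char	   *dir = find_extension_directory("postgis", path, 2);

	printf("control directory: %s\n", dir ? dir : "(not found)");
	free(dir);
	return 0;
}
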
[ { "msg_contents": "Greetings,\n\nThe command I am using is\nmeson setup --wipe build_dir -D\nextra_include_dirs=c:/Users/davec/projects/postgres/libxml2-vc140-static-32_64.2.9.4.1\\lib\\native\\include\\libxml\n-D\nextra_lib_dirs=c:/Users/davec/projects/postgres/libxml2-vc140-static-32_64.2.9.4.1\\lib\\native\\libs\\x64\\static\\Release\n--prefix=c:\\postgresql\n\nI've tried other libraries and no joy.\n\nLogs attached.\nThanks in advance\nDave Cramer", "msg_date": "Tue, 2 Apr 2024 18:12:51 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "building postgres on windows using meson libxml2 not found" } ]
[ { "msg_contents": "Hello,\n\nWhen implementing a GiST consistent function I found the need to cache pre-processed query across invocations.\nI am not sure if it is safe to do (or I need to perform some steps to make sure cached info is not leaked between rescans).\n\nThe comment in gistrescan says:\n\n\t\t/*\n\t\t * If this isn't the first time through, preserve the fn_extra\n\t\t * pointers, so that if the consistentFns are using them to cache\n\t\t * data, that data is not leaked across a rescan.\n\t\t */\n\nwhich seems to me self-contradictory as fn_extra is preserved between rescans (so leaks are indeed possible).\n\nAm I missing something?\n\nThanks,\nMichal\n\n", "msg_date": "Wed, 3 Apr 2024 08:43:17 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Is it safe to cache data by GiST consistent function" }, { "msg_contents": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]> writes:\n> When implementing a GiST consistent function I found the need to cache pre-processed query across invocations.\n> I am not sure if it is safe to do (or I need to perform some steps to make sure cached info is not leaked between rescans).\n\nAFAIK it works. I don't see any of the in-core ones doing so,\nbut at least range_gist_consistent and multirange_gist_consistent\nare missing a bet by repeating their cache search every time.\n\n> The comment in gistrescan says:\n\n> \t\t/*\n> \t\t * If this isn't the first time through, preserve the fn_extra\n> \t\t * pointers, so that if the consistentFns are using them to cache\n> \t\t * data, that data is not leaked across a rescan.\n> \t\t */\n\n> which seems to me self-contradictory as fn_extra is preserved between rescans (so leaks are indeed possible).\n\nI think you're reading it wrong. If we cleared fn_extra during\nrescan, access to the old extra value would be lost so a new one\nwould have to be created, leaking the old value for the rest of\nthe query.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Apr 2024 10:27:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it safe to cache data by GiST consistent function" }, { "msg_contents": "Thanks for taking your time to answer. Not sure if I understand though.\n\n> On 3 Apr 2024, at 16:27, Tom Lane <[email protected]> wrote:\n> \n> =?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]> writes:\n>> When implementing a GiST consistent function I found the need to cache pre-processed query across invocations.\n>> I am not sure if it is safe to do (or I need to perform some steps to make sure cached info is not leaked between rescans).\n> \n> AFAIK it works. 
I don't see any of the in-core ones doing so,\n> but at least range_gist_consistent and multirange_gist_consistent\n> are missing a bet by repeating their cache search every time.\n\npg_trgm consistent caches tigrams but it has some logic to make sure cached values are recalculated:\n\n\tcache = (gtrgm_consistent_cache *) fcinfo->flinfo->fn_extra;\n\tif (cache == NULL ||\n\t\tcache->strategy != strategy ||\n\t\tVARSIZE(cache->query) != querysize ||\n\t\tmemcmp((char *) cache->query, (char *) query, querysize) != 0)\n\nWhat I don’t understand is if it is necessary or it is enough to check fn_extra==NULL.\n\n> \n>> The comment in gistrescan says:\n> \n>> \t\t/*\n>> \t\t * If this isn't the first time through, preserve the fn_extra\n>> \t\t * pointers, so that if the consistentFns are using them to cache\n>> \t\t * data, that data is not leaked across a rescan.\n>> \t\t */\n> \n>> which seems to me self-contradictory as fn_extra is preserved between rescans (so leaks are indeed possible).\n> \n> I think you're reading it wrong. If we cleared fn_extra during\n> rescan, access to the old extra value would be lost so a new one\n> would have to be created, leaking the old value for the rest of\n> the query.\n\nI understand that but not sure what “that data is not leaked across a rescan” means.\n\n—\nMichal\n\n", "msg_date": "Wed, 3 Apr 2024 18:47:03 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is it safe to cache data by GiST consistent function" }, { "msg_contents": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]> writes:\n> On 3 Apr 2024, at 16:27, Tom Lane <[email protected]> wrote:\n>> AFAIK it works. I don't see any of the in-core ones doing so,\n>> but at least range_gist_consistent and multirange_gist_consistent\n>> are missing a bet by repeating their cache search every time.\n\n> pg_trgm consistent caches tigrams but it has some logic to make sure cached values are recalculated:\n\n> \tcache = (gtrgm_consistent_cache *) fcinfo->flinfo->fn_extra;\n> \tif (cache == NULL ||\n> \t\tcache->strategy != strategy ||\n> \t\tVARSIZE(cache->query) != querysize ||\n> \t\tmemcmp((char *) cache->query, (char *) query, querysize) != 0)\n\n> What I don’t understand is if it is necessary or it is enough to check fn_extra==NULL.\n\nAh, I didn't think to search contrib. Yes, you need to validate the\ncache entry. In this example, a rescan could insert a new query\nvalue. In general, an opclass support function could get called using\na pretty long-lived FunctionCallInfo (e.g. 
one in the index's relcache\nentry), so it's unwise to assume that cached data is relevant to the\ncurrent call without checking.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 Apr 2024 13:02:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it safe to cache data by GiST consistent function" }, { "msg_contents": "\n> On 3 Apr 2024, at 19:02, Tom Lane <[email protected]> wrote:\n> \n> =?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]> writes:\n> \n>> pg_trgm consistent caches tigrams but it has some logic to make sure cached values are recalculated:\n> \n>> \tcache = (gtrgm_consistent_cache *) fcinfo->flinfo->fn_extra;\n>> \tif (cache == NULL ||\n>> \t\tcache->strategy != strategy ||\n>> \t\tVARSIZE(cache->query) != querysize ||\n>> \t\tmemcmp((char *) cache->query, (char *) query, querysize) != 0)\n> \n>> What I don’t understand is if it is necessary or it is enough to check fn_extra==NULL.\n> \n> Ah, I didn't think to search contrib. Yes, you need to validate the\n> cache entry. In this example, a rescan could insert a new query\n> value. In general, an opclass support function could get called using\n> a pretty long-lived FunctionCallInfo (e.g. one in the index's relcache\n> entry), so it's unwise to assume that cached data is relevant to the\n> current call without checking.\n\nThis actually sounds scary - looks like there is no way to perform cache clean-up after rescan then?\n\nDo you think it might be useful to introduce a way for per-rescan caching (ie. setting up a dedicated memory context in gistrescan and passing it to support functions)?\n\n—\nMichal\n\n", "msg_date": "Thu, 4 Apr 2024 05:20:59 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is it safe to cache data by GiST consistent function" } ]
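
For anyone landing on this thread later, the pattern that falls out of the discussion above can be sketched roughly as below: keep the preprocessed query in fn_extra, allocate it in fn_mcxt so it survives across calls, and validate it on every call instead of trusting fn_extra == NULL, because the same FunctionCallInfo can outlive a rescan. Only the fmgr/GiST plumbing here is real; the struct name, the function name, and the "preprocessing" placeholder are assumptions made up for illustration.

#include "postgres.h"

#include "access/gist.h"
#include "access/stratnum.h"
#include "fmgr.h"
#include "varatt.h"				/* VARSIZE; separate header on recent servers */

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(my_gist_consistent);

typedef struct my_consistent_cache
{
	StrategyNumber strategy;	/* strategy the cache was built for */
	text	   *query;			/* verbatim copy of the query datum */
	/* ... preprocessed form of the query would live here as well ... */
} my_consistent_cache;

Datum
my_gist_consistent(PG_FUNCTION_ARGS)
{
	GISTENTRY  *entry = (GISTENTRY *) PG_GETARG_POINTER(0);
	text	   *query = PG_GETARG_TEXT_P(1);
	StrategyNumber strategy = (StrategyNumber) PG_GETARG_UINT16(2);
	bool	   *recheck = (bool *) PG_GETARG_POINTER(4);
	my_consistent_cache *cache;

	cache = (my_consistent_cache *) fcinfo->flinfo->fn_extra;

	/* Rebuild the cache if it is missing or was built for another query. */
	if (cache == NULL ||
		cache->strategy != strategy ||
		VARSIZE(cache->query) != VARSIZE(query) ||
		memcmp(cache->query, query, VARSIZE(query)) != 0)
	{
		/* Allocate in fn_mcxt so the cache survives across calls. */
		MemoryContext oldcxt = MemoryContextSwitchTo(fcinfo->flinfo->fn_mcxt);

		if (cache != NULL)
		{
			pfree(cache->query);
			pfree(cache);
		}
		cache = palloc(sizeof(my_consistent_cache));
		cache->strategy = strategy;
		cache->query = palloc(VARSIZE(query));
		memcpy(cache->query, query, VARSIZE(query));
		/* ... expensive preprocessing of the query would go here ... */
		MemoryContextSwitchTo(oldcxt);

		fcinfo->flinfo->fn_extra = cache;
	}

	/* A real opclass would now test "entry" against the cached form. */
	if (GIST_LEAF(entry))
		*recheck = true;		/* leaf test in this sketch is lossy */
	else
		*recheck = false;		/* ignored for internal entries anyway */

	PG_RETURN_BOOL(true);
}
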
[ { "msg_contents": "Dear PostgreSQL Hackers,\n\nI am submitting a patch to modify pg_ctl to detect the presence of a geek\nuser on the system and adapt its behavior accordingly. This patch\nintroduces the following changes:\n\n 1.\n\n *Detection of geek user*: The modified pg_ctl now checks user created on\n the computer.\n 2.\n\n *No documentation or tests*: Please note that I have not included new\n documentation or tests in this patch submission. However, I am open to\n adding them based on the community's feedback.\n 3.\n\n *Performance impact*: The performance impact of these changes is\n minimal, with an expected delay of 500ms in specific scenarios only.\n\n\nPlease review the patch and provide your feedback. I am open to making any\nnecessary improvements based on the community's suggestions.\n\nThank you for considering my contribution.\n\nBest regards,", "msg_date": "Wed, 3 Apr 2024 16:17:21 +0300", "msg_from": "Panda Developpeur <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] Modify pg_ctl to detect presence of geek user" }, { "msg_contents": "On Wed, Apr 3, 2024 at 04:17:21PM +0300, Panda Developpeur wrote:\n> Dear PostgreSQL Hackers,\n> \n> I am submitting a patch to modify pg_ctl to detect the presence of a geek user\n> on the system and adapt its behavior accordingly. This patch introduces the\n> following changes:\n> \n> 1. Detection of geek user: The modified pg_ctl now checks user created on the\n> computer.\n> \n> 2. No documentation or tests: Please note that I have not included new\n> documentation or tests in this patch submission. However, I am open to\n> adding them based on the community's feedback.\n> \n> 3. Performance impact: The performance impact of these changes is minimal,\n> with an expected delay of 500ms in specific scenarios only.\n> \n> \n> Please review the patch and provide your feedback. I am open to making any\n> necessary improvements based on the community's suggestions.\n> \n> Thank you for considering my contribution.\n\nAside from an extra newline in the patch, I think this is ready to go!\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 3 Apr 2024 09:25:02 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Modify pg_ctl to detect presence of geek user" }, { "msg_contents": "On Wed, Apr 3, 2024 at 09:25:02AM -0400, Bruce Momjian wrote:\n> On Wed, Apr 3, 2024 at 04:17:21PM +0300, Panda Developpeur wrote:\n> > Dear PostgreSQL Hackers,\n> > \n> > I am submitting a patch to modify pg_ctl to detect the presence of a geek user\n> > on the system and adapt its behavior accordingly. This patch introduces the\n> > following changes:\n> > \n> > 1. Detection of geek user: The modified pg_ctl now checks user created on the\n> > computer.\n> > \n> > 2. No documentation or tests: Please note that I have not included new\n> > documentation or tests in this patch submission. However, I am open to\n> > adding them based on the community's feedback.\n> > \n> > 3. Performance impact: The performance impact of these changes is minimal,\n> > with an expected delay of 500ms in specific scenarios only.\n> > \n> > \n> > Please review the patch and provide your feedback. 
I am open to making any\n> > necessary improvements based on the community's suggestions.\n> > \n> > Thank you for considering my contribution.\n> \n> Aside from an extra newline in the patch, I think this is ready to go!\n\nAlso, it feels like the deadline for this patch was two days ago. ;-)\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Wed, 3 Apr 2024 09:26:34 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Modify pg_ctl to detect presence of geek user" }, { "msg_contents": "Yeah sorry for the delay, it took me some time to understood how build,\nmodify and test the modification\n\nYeah sorry for the delay, it took me some time to understood how build, modify and test the modification", "msg_date": "Wed, 3 Apr 2024 18:49:41 +0300", "msg_from": "Panda Developpeur <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Modify pg_ctl to detect presence of geek user" }, { "msg_contents": "\n\n> On 3 Apr 2024, at 18:17, Panda Developpeur <[email protected]> wrote:\n> \n> Thank you for considering my contribution.\n\nLooks interesting!\n\n+\t\t\t\t\tusleep(500000);\n\nDon't we need to make system 500ms faster instead? Let's change it to\n\n+\t\t\t\t\tusleep(-500000);\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Wed, 3 Apr 2024 21:31:29 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Modify pg_ctl to detect presence of geek user" } ]
[ { "msg_contents": "Over at [1] we have a complaint of postgres_fdw failing with\na remote-server error\n\n> ERROR: invalid value for parameter \"TimeZone\": \"UTC\"\n\nI am not quite clear on how broken an installation needs to be to\nreject \"UTC\" as a time zone setting, except that the breakage cannot\nbe subtle. However, I notice that our code in pgtz.c and other\nplaces treats \"GMT\" as a hard-wired special case ... but not \"UTC\".\nI wonder if we ought to modify those places to force \"UTC\" down the\nsame hard-wired paths. If we acted like that, this would have worked\nno matter how misconfigured the installation was.\n\nAn alternative answer could be to change postgres_fdw to send \"GMT\"\nnot \"UTC\". That's ugly from a standards-compliance viewpoint, but\nit would fix this problem even with a non-updated remote server,\nand I think postgres_fdw is generally intended to work with even\nvery old remote servers.\n\nOr we could do both.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/5DF49366-10D1-42A4-99BF-F9A7DC3AB0F4%40mailbox.org\n\n\n", "msg_date": "Thu, 04 Apr 2024 02:19:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "postgres_fdw fails because GMT != UTC" }, { "msg_contents": "On Thu, 2024-04-04 at 02:19 -0400, Tom Lane wrote:\n> > ERROR:  invalid value for parameter \"TimeZone\": \"UTC\"\n> \n> I am not quite clear on how broken an installation needs to be to\n> reject \"UTC\" as a time zone setting, except that the breakage cannot\n> be subtle.  However, I notice that our code in pgtz.c and other\n> places treats \"GMT\" as a hard-wired special case ... but not \"UTC\".\n> I wonder if we ought to modify those places to force \"UTC\" down the\n> same hard-wired paths.  If we acted like that, this would have worked\n> no matter how misconfigured the installation was.\n> \n> An alternative answer could be to change postgres_fdw to send \"GMT\"\n> not \"UTC\".  That's ugly from a standards-compliance viewpoint, but\n> it would fix this problem even with a non-updated remote server,\n> and I think postgres_fdw is generally intended to work with even\n> very old remote servers.\n> \n> Or we could do both.\n\nI think the first is desirable for reasons of general sanity, and the\nsecond for best compatibility with old versions.\n\nSo I vote for \"both\".\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 04 Apr 2024 08:48:57 +0200", "msg_from": "Laurenz Albe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw fails because GMT != UTC" }, { "msg_contents": "> On 4 Apr 2024, at 08:19, Tom Lane <[email protected]> wrote:\n> \n> Over at [1] we have a complaint of postgres_fdw failing with\n> a remote-server error\n> \n>> ERROR: invalid value for parameter \"TimeZone\": \"UTC\"\n> \n> I am not quite clear on how broken an installation needs to be to\n> reject \"UTC\" as a time zone setting, except that the breakage cannot\n> be subtle. However, I notice that our code in pgtz.c and other\n> places treats \"GMT\" as a hard-wired special case ... but not \"UTC\".\n> I wonder if we ought to modify those places to force \"UTC\" down the\n> same hard-wired paths. If we acted like that, this would have worked\n> no matter how misconfigured the installation was.\n\n+1. It makes little sense to support GMT like that but not UTC.\n\n> An alternative answer could be to change postgres_fdw to send \"GMT\"\n> not \"UTC\". 
That's ugly from a standards-compliance viewpoint, but\n> it would fix this problem even with a non-updated remote server,\n> and I think postgres_fdw is generally intended to work with even\n> very old remote servers.\n\nThere is always a risk in accomodating broken installations that it might hide\nother subtle bugs, but off the cuff that risk seems quite low in this case.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 4 Apr 2024 11:08:32 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw fails because GMT != UTC" }, { "msg_contents": "On Thu, Apr 4, 2024 at 3:49 PM Laurenz Albe <[email protected]> wrote:\n> On Thu, 2024-04-04 at 02:19 -0400, Tom Lane wrote:\n> > > ERROR: invalid value for parameter \"TimeZone\": \"UTC\"\n> >\n> > I am not quite clear on how broken an installation needs to be to\n> > reject \"UTC\" as a time zone setting, except that the breakage cannot\n> > be subtle. However, I notice that our code in pgtz.c and other\n> > places treats \"GMT\" as a hard-wired special case ... but not \"UTC\".\n> > I wonder if we ought to modify those places to force \"UTC\" down the\n> > same hard-wired paths. If we acted like that, this would have worked\n> > no matter how misconfigured the installation was.\n> >\n> > An alternative answer could be to change postgres_fdw to send \"GMT\"\n> > not \"UTC\". That's ugly from a standards-compliance viewpoint, but\n> > it would fix this problem even with a non-updated remote server,\n> > and I think postgres_fdw is generally intended to work with even\n> > very old remote servers.\n> >\n> > Or we could do both.\n>\n> I think the first is desirable for reasons of general sanity, and the\n> second for best compatibility with old versions.\n>\n> So I vote for \"both\".\n\n+1 for both (assuming that the latter does not make the postgres_fdw\ncode complicated).\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Wed, 10 Apr 2024 20:30:59 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw fails because GMT != UTC" }, { "msg_contents": "Etsuro Fujita <[email protected]> writes:\n> On Thu, Apr 4, 2024 at 3:49 PM Laurenz Albe <[email protected]> wrote:\n>> On Thu, 2024-04-04 at 02:19 -0400, Tom Lane wrote:\n>>> I am not quite clear on how broken an installation needs to be to\n>>> reject \"UTC\" as a time zone setting, except that the breakage cannot\n>>> be subtle. However, I notice that our code in pgtz.c and other\n>>> places treats \"GMT\" as a hard-wired special case ... but not \"UTC\".\n>>> I wonder if we ought to modify those places to force \"UTC\" down the\n>>> same hard-wired paths. If we acted like that, this would have worked\n>>> no matter how misconfigured the installation was.\n>>> \n>>> An alternative answer could be to change postgres_fdw to send \"GMT\"\n>>> not \"UTC\". That's ugly from a standards-compliance viewpoint, but\n>>> it would fix this problem even with a non-updated remote server,\n>>> and I think postgres_fdw is generally intended to work with even\n>>> very old remote servers.\n>>> \n>>> Or we could do both.\n\n> +1 for both (assuming that the latter does not make the postgres_fdw\n> code complicated).\n\nI looked briefly at changing the server like this, and decided that\nit would be a little invasive, if only because there would be\ndocumentation and such to update. 
Example question: should we change\nthe boot-time default value of the timezone GUC from \"GMT\" to \"UTC\"?\nProbably, but I doubt we want to back-patch that, nor does it seem\nlike something to be messing with post-feature-freeze. So I'm\nin favor of working on that when the tree opens for v18, but not\nright now.\n\nHowever, we can change postgres_fdw at basically no cost AFAICS.\nThat's the more important part anyway I think. If your own server\nburps because it's got a bad timezone database, you are probably in\na position to do something about that, while you may have no control\nover a remote server. (As indeed the original complainant didn't.)\n\nSo I propose to apply and back-patch the attached, and leave\nit at that for now.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 19 Apr 2024 16:14:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw fails because GMT != UTC" } ]
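
For readers following along, the postgres_fdw side of this appears to boil down to a one-line change in the remote-session setup (configure_remote_session() in contrib/postgres_fdw/connection.c). The fragment below is a sketch of that change rather than the committed diff, and the comment wording is paraphrased; the underlying point is simply that "GMT" is hard-wired in the backend's timezone code and is therefore accepted even by a remote server whose IANA timezone database is broken or missing, while "UTC" is not special-cased the same way on older servers.

	/*
	 * Set remote timezone; this is basically just cosmetic, since all
	 * transmitted and returned timestamptzs should carry an explicit
	 * zone anyway.  Use "GMT" rather than "UTC" because it is the one
	 * zone name guaranteed to be recognized even by old or
	 * misconfigured remote servers.
	 */
	do_sql_command(conn, "SET timezone = 'GMT'");
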
[ { "msg_contents": "Hi,\n\nHere are some vectored writeback patches I worked on in the 17 cycle\nand posted as part of various patch sets, but didn't get into a good\nenough shape to take further. They \"push\" vectored writes out, but I\nthink what they need is to be turned inside out and converted into\nusers of a new hypothetical write_stream.c, so that we have a model\nthat will survive contact with asynchronous I/O and would \"pull\"\nwrites from a stream that controls I/O concurrency. That all seemed a\nlot less urgent to work on than reads, hence leaving on ice for now.\nThere is a lot of code that reads, and a small finite amount that\nwrites. I think the patches show some aspects of the problem-space\nthough, and they certainly make checkpointing faster. They cover 2\nout of 5ish ways we write relation data: checkpointing, and strategies\nAKA ring buffers.\n\nThey make checkpoints look like this, respecting io_combine_limit,\ninstead of lots of 8kB writes:\n\npwritev(9,[...],2,0x0) = 131072 (0x20000)\npwrite(9,...,131072,0x20000) = 131072 (0x20000)\npwrite(9,...,131072,0x40000) = 131072 (0x20000)\npwrite(9,...,131072,0x60000) = 131072 (0x20000)\npwrite(9,...,131072,0x80000) = 131072 (0x20000)\n...\n\nTwo more ways data gets written back are: bgwriter and regular\nBAS_NORMAL buffer eviction, but they are not such natural candidates\nfor write combining. Well, if you know you're going to write out a\nbuffer, *maybe* it's worth probing the buffer pool to see if adjacent\nblock numbers are also present and dirty? I don't know. Before and\nafter? Or maybe it's better to wait for the tree-based mapping table\nof legend first so it becomes cheaper to navigate in block number\norder.\n\nThe 5th way is raw file copy that doesn't go through the buffer pool,\nsuch as CREATE DATABASE ... STRATEGY=FILE_COPY, which already works\nwith big writes, and CREATE INDEX via bulk_write.c which is easily\nconverted to vectored writes, and I plan to push the patches for that\nshortly. I think those should ultimately become stream-based too.\n\nAnyway, I wanted to share these uncommitfest patches, having rebased\nthem over relevant recent commits, so I could leave them in working\nstate in case anyone is interested in this file I/O-level stuff...", "msg_date": "Thu, 4 Apr 2024 19:29:44 +1300", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "WIP: Vectored writeback" }, { "msg_contents": "Here is a new straw-man patch set. I'd already shown the basic\ntechniques for vectored writes from the buffer pool (FlushBuffers(),\nnote the \"s\"), but that was sort of kludged into place while I was\nhacking on the lower level bits and pieces, and now I'm building\nlayers further up. The main idea is: you can clean buffers with a\n\"WriteStream\", and here are a bunch of example users to show that\nworking.\n\nA WriteStream is approximately the opposite of a ReadStream (committed\nin v17). You push pinned dirty buffers into it (well they don't have\nto be dirty, and it's OK if someone else cleans the buffer\nconcurrently, the point is that you recently dirtied them). It\ncombines buffers up to io_combine_limit, but defers writing as long as\npossible within some limits to avoid flushing the WAL, and tries to\ncoordinate with the WAL writer. The WAL writer interaction is a very\ntricky problem, and that aspect is only a toy for now, but it's at\nleast partially successful (see problems at end).\n\nThe CHECKPOINT code uses the WriteStream API directly. 
It creates one\nstream per tablespace, so that the existing load balancing algorithm\ndoesn't defeat the I/O combining algorithm. Unsurprisingly, it looks\nlike this:\n\npostgres=# checkpoint;\n\n...\npwritev(18,...,2,0x1499e000) = 131072 (0x20000)\npwrite(18,...,131072,0x149be000) = 131072 (0x20000)\npwrite(18,...,131072,0x149de000) = 131072 (0x20000)\n...\n\nSometimes you'll see it signalling the WAL writer. It builds up a\nqueue of writes that it doesn't want to perform yet, in the hope of\ngetting a free ride WRT WAL.\n\nOther places can benefit from a more centrally placed write stream,\nindirectly. Our BAS_BULKWRITE and BAS_VACUUM buffer access strategies\nalready perform \"write-behind\". That's a name I borrowed from some OS\nstuff, where the kernel has clues that bulk data (for example a big\nfile copy) will not likely be needed again soon so you want to get it\nout of the way soon before it trashes your whole buffer pool (AKA\n\"scan resistance\"), but you want to defer just a little bit to perform\nI/O combining. That applies directly here, but we have the additional\nconcern of delaying the referenced WAL write in the hope that someone\nelse will do it for us.\n\nIn this experiment, I am trying to give that pre-existing behaviour an\nexplicit name (better names welcome!), and optimise it. If you're\ndirtying buffers in a ring, you'll soon crash into your own tail and\nhave to write it out, and it is very often sequential blocks due to\nthe scan-like nature of many bulk I/O jobs, so I/O combining is very\neffective. The main problem is that you'll often have to flush WAL\nfirst, which this patch set tries to address to some extent. In the\nstrategy write-behind case you don't really need a LSN reordering\nqueue, just a plain FIFO queue would do, but hopefully that doesn't\ncost much. (Cf CHECKPOINT, which sorts blocks by buffer tag, but\nexpects LSNs in random order, so it does seem to need reordering.)\n\nWith this patch set, instead of calling ReleaseBuffer() after you've\ndirtied a buffer in one of those bulk writing code paths, you can use\nStrategyReleaseBuffer(), and the strategy will fire it into the stream\nto get I/O combining and LSN reordering; it'll be unpinned later, and\ncertainly before you get the same buffer back for a new block. So\nthose write-behind user patches are very short, they just do\ns/ReleaseBuffer/StrategyReleaseBuffer/ plus minor details.\nUnsurprisingly, it looks like this:\n\npostgres=# copy t from program 'seq -f %1.0f 1 10000000';\n\n...\npwrite(44,...,131072,0x2f986000) = 131072 (0x20000) <-- streaming write-behind!\npwrite(44,...,131072,0x2f966000) = 131072 (0x20000)\npwrite(44,...,131072,0x2f946000) = 131072 (0x20000)\n...\n\npostgres=# vacuum t;\n\n...\npwrite(35,...,131072,0x3fb3e000) = 131072 (0x20000) <-- streaming write-behind!\npreadv(35,...,122880}],2,0x3fb7a000) = 131072 (0x20000) <-- from Melanie's patch\npwritev(35,...,2,0x3fb5e000) = 131072 (0x20000)\npread(35,...,131072,0x3fb9a000) = 131072 (0x20000)\n...\n\nNext I considered how to get INSERT, UPDATE, DELETE to participate.\nThe problem is that they use BAS_BULKREAD, even though they might\ndirty buffers. In master, BAS_BULKREAD doesn't do write-behind,\ninstead it uses the \"reject\" mechanism: as soon as it smells a dirty\nbuffer, it escapes the ring and abandons all hope of scan resistance.\nAs buffer/README says in parentheses:\n\n Bulk writes work similarly to VACUUM. Currently this applies only to\n COPY IN and CREATE TABLE AS SELECT. 
(Might it be interesting to make\n seqscan UPDATE and DELETE use the bulkwrite strategy?) For bulk writes\n we use a ring size of 16MB (but not more than 1/8th of shared_buffers).\n\nHmm... what I'm now thinking is that the distinction might be a little\nbogus. Who knows how much scanned data will finish up being dirtied?\nI wonder if it would make more sense to abandon\nBAS_BULKREAD/BAS_BULKWRITE, and instead make an adaptive strategy. A\nring that starts small, and grows/shrinks in response to dirty data\n(instead of \"rejecting\"). That would have at least superficial\nsimilarities to the ARC algorithm, the \"adaptive\" bit that controls\nring size (it's interested in recency vs frequency, but here it's more\nlike \"we're willing to waste more memory on dirty data, because we\nneed to keep it around longer, to avoid flushing the WAL, but not\nlonger than that\" which may be a different dimension to value cached\ndata on, I'm not sure).\n\nOf course there must be some workloads/machines where using a strategy\n(instead of BAS_BULKREAD when it degrades to BAS_NORMAL behaviour)\nwill be slower because of WAL flushes, but that's not a fair fight:\nthe flip side of that coin is that you've trashed the buffer pool,\nwhich is an external cost paid by someone else, ie it's anti-social,\nBufferAccessStrategy's very raison d'être.\n\nAnyway, in the meantime, I hacked heapam.c to use BAS_BULKWRITE just\nto see how it would work with this patch set. (This causes an\nassertion to fail in some test, something about the stats for\ndifferent IO contexts that was upset by IOCONTEXT_BULKWRITE, which I\ndidn't bother to debug, it's only a demo hack.) Unsurprisingly, it\nlooks like this:\n\npostgres=# delete from t;\n\n...\npread(25,...,131072,0xc89e000) = 131072 (0x20000) <-- already committed\npread(25,...,131072,0xc8be000) = 131072 (0x20000) read-stream behaviour\nkill(75954,SIGURG) = 0 (0x0) <-- hey WAL writer!\npread(25,...,131072,0xc8de000) = 131072 (0x20000)\npread(25,...,131072,0xc8fe000) = 131072 (0x20000)\n...\npwrite(25,...,131072,0x15200000) = 131072 (0x20000) <-- write-behind!\npwrite(25,...,131072,0x151e0000) = 131072 (0x20000)\npwrite(25,...,131072,0x151c0000) = 131072 (0x20000)\n...\n\nUPDATE and INSERT conceptually work too, but they suffer from other\nstupid page-at-a-time problems around extension so it's more fun to\nlook at DELETE first.\n\nThe whole write-behind notion, and the realisation that we already\nhave it and should just make it into a \"real thing\", jumped out at me\nwhile studying Melanie's VACUUM pass 1 and VACUUM pass 2 patches for\nadding read streams. Rebased and attached here. That required\nhacking on the new tidstore.c stuff a bit. (We failed to get the\nVACUUM read stream bits into v17, but the study of that led to the\ndefault BAS_VACUUM size being cranked up to reflect modern realities,\nand generally sent me down this rabbit hole for a while.)\n\nSome problems:\n* If you wake the WAL writer more often, throughput might actually go\ndown on high latency storage due to serialisation of WAL flushes. So\nfar I have declined to try to write an adaptive algorithm to figure\nout whether to do it, and where the threshold should be. I suspect it\nmight involve measuring time and hill-climbing... One option is to\nabandon this part (ie just do no worse than master at WAL flushing),\nor at least consider that a separate project.\n* This might hold too many pins! 
It does respect the limit mechanism,\nbut that can let you have a lot of pins (it's a bit TOCTOU-racy too,\nwe might need something smarter). One idea would be to release pins\nwhile writes are in the LSN queue, and reacquire them with\nReadRecentBuffer() as required, since we don't really care if someone\nelse evicts them in the meantime.\n* It seems a bit weird that we *also* have the WritebackContext\nmachinery. I could probably subsume that whole mechanism into\nwrite_stream.c. If you squint, sync_file_range() is a sort of dual of\nPOSIX_FADV_WILLNEED, which the read counterpart looks after.\n* I would like to merge Heikki's bulk write stuff into this somehow,\nnot yet thought about it much.\n\nThe patches are POC-quality only and certainly have bugs/missed edge\ncases/etc. Thoughts, better ideas, references to writing about this\nproblem space, etc, welcome.", "msg_date": "Sat, 27 Apr 2024 16:26:14 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: WIP: Vectored writeback" } ]
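[Editorial sketch] To make the WriteStream shape described in this thread a little more concrete, here is a minimal sketch of how a checkpoint-style caller might drive it. Everything in it -- write_stream_begin(), write_stream_write_buffer(), write_stream_end() and their signatures -- is an assumption inferred from the description above and from the naming of the committed read_stream.c; the WIP patches attached to the thread may define the API quite differently.

/*
 * Hypothetical sketch only: the names and signatures below are
 * assumptions, not the actual API from the WIP patches.
 */
#include "postgres.h"
#include "storage/bufmgr.h"

typedef struct WriteStream WriteStream;

extern WriteStream *write_stream_begin(int flags, BufferAccessStrategy strategy);
extern void write_stream_write_buffer(WriteStream *stream, Buffer buffer);
extern void write_stream_end(WriteStream *stream);

/*
 * Push a caller-pinned run of dirty buffers into a stream, letting it
 * combine neighbouring blocks up to io_combine_limit and defer writes
 * whose LSNs would otherwise force an immediate WAL flush.
 */
static void
clean_buffers(Buffer *buffers, int nbuffers)
{
	WriteStream *stream = write_stream_begin(0, NULL);

	for (int i = 0; i < nbuffers; i++)
		write_stream_write_buffer(stream, buffers[i]);

	/* Write out anything still queued and release the pins. */
	write_stream_end(stream);
}

Per the thread, CHECKPOINT would use one such stream per tablespace so the existing load-balancing algorithm does not defeat I/O combining, while the strategy-based write-behind callers only swap ReleaseBuffer() for StrategyReleaseBuffer() and let the ring feed the stream.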
[ { "msg_contents": "Hi,\n\nI encounter a new behaviour in Postgres 16: DROP OWNED BY drops\nmemberships when dropping grantor of memberhsip. A call on REASSIGN\nOWNED BY does not reassign membership from grantor to target owner.\n\nAttached is a script my-reassign.sql which reproduce the behaviour.\nJust run it with psql -f to reproduce. If you get *MEMBERSHIP LOST!!*\nmessage, then the DROP OWNED BY \"unexpectedly\" dropped the membership.\n\nThis script run smoothly on Postgres 15. Membership is survives drop of\ngrantor.\n\nWhat do you think of this ? How to change grantor of memberships before\ndropping the grantor role ? Should we fix REASSIGN to change grantor in\npg_auth_members ?\n\nRegards,\nÉtienne BERSAC\nDalibo", "msg_date": "Thu, 04 Apr 2024 09:09:19 +0200", "msg_from": "=?ISO-8859-1?Q?=C9tienne?= BERSAC <[email protected]>", "msg_from_op": true, "msg_subject": "REASSIGN pg_auth_members" } ]
[ { "msg_contents": "Hello hackers,\n\nCurrently, promotion related handling is missing in the slot sync SQL\nfunction pg_sync_replication_slots(). Here is the background on how\nit is done in slot sync worker:\nDuring promotion, the startup process in order to shut down the\nslot-sync worker, sets the 'stopSignaled' flag, sends the shut-down\nsignal, and waits for slot sync worker to exit. Meanwhile if the\npostmaster has not noticed the promotion yet, it may end up restarting\nslot sync worker. In such a case, the worker exits if 'stopSignaled'\nis set.\n\nSince there is a chance that the user (or any of his scripts/tools)\nmay execute SQL function pg_sync_replication_slots() in parallel to\npromotion, such handling is needed in this SQL function as well, The\nattached patch attempts to implement the same. Changes are:\n\n1) If pg_sync_replication_slots() is already running when the\npromotion is triggered, ShutDownSlotSync() checks the\n'SlotSyncCtx->syncing' flag as well and waits for it to become false\ni.e. waits till parallel running SQL function is finished.\n\n2) If pg_sync_replication_slots() is invoked when promotion is\nalready in progress, pg_sync_replication_slots() respects the\n'stopSignaled' flag set by the startup process and becomes a no-op.\n\nthanks\nShveta", "msg_date": "Thu, 4 Apr 2024 17:05:28 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Thu, Apr 4, 2024 at 5:05 PM shveta malik <[email protected]> wrote:\n>\n> Hello hackers,\n>\n> Currently, promotion related handling is missing in the slot sync SQL\n> function pg_sync_replication_slots(). Here is the background on how\n> it is done in slot sync worker:\n> During promotion, the startup process in order to shut down the\n> slot-sync worker, sets the 'stopSignaled' flag, sends the shut-down\n> signal, and waits for slot sync worker to exit. Meanwhile if the\n> postmaster has not noticed the promotion yet, it may end up restarting\n> slot sync worker. In such a case, the worker exits if 'stopSignaled'\n> is set.\n>\n> Since there is a chance that the user (or any of his scripts/tools)\n> may execute SQL function pg_sync_replication_slots() in parallel to\n> promotion, such handling is needed in this SQL function as well, The\n> attached patch attempts to implement the same.\n>\n\nThanks for the report and patch. I'll look into it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 4 Apr 2024 17:16:49 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Thu, Apr 4, 2024 at 5:17 PM Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Apr 4, 2024 at 5:05 PM shveta malik <[email protected]> wrote:\n> >\n> > Hello hackers,\n> >\n> > Currently, promotion related handling is missing in the slot sync SQL\n> > function pg_sync_replication_slots(). Here is the background on how\n> > it is done in slot sync worker:\n> > During promotion, the startup process in order to shut down the\n> > slot-sync worker, sets the 'stopSignaled' flag, sends the shut-down\n> > signal, and waits for slot sync worker to exit. Meanwhile if the\n> > postmaster has not noticed the promotion yet, it may end up restarting\n> > slot sync worker. 
In such a case, the worker exits if 'stopSignaled'\n> > is set.\n> >\n> > Since there is a chance that the user (or any of his scripts/tools)\n> > may execute SQL function pg_sync_replication_slots() in parallel to\n> > promotion, such handling is needed in this SQL function as well, The\n> > attached patch attempts to implement the same.\n> >\n>\n> Thanks for the report and patch. I'll look into it.\n>\n\nPlease find v2. Changes are:\n1) Rebased the patch as there was conflict due to recent commit 6f132ed.\n2) Added an Assert in update_synced_slots_inactive_since() to ensure\nthat the slot does not have active_pid.\n3) Improved commit msg and comments.\n\n\nthanks\nShveta", "msg_date": "Fri, 5 Apr 2024 10:31:15 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Fri, Apr 5, 2024 at 10:31 AM shveta malik <[email protected]> wrote:\n>\n> Please find v2. Changes are:\n> 1) Rebased the patch as there was conflict due to recent commit 6f132ed.\n> 2) Added an Assert in update_synced_slots_inactive_since() to ensure\n> that the slot does not have active_pid.\n> 3) Improved commit msg and comments.\n>\n\nFew comments:\n==============\n1.\n void\n SyncReplicationSlots(WalReceiverConn *wrconn)\n {\n+ /*\n+ * Startup process signaled the slot sync to stop, so if meanwhile user\n+ * has invoked slot sync SQL function, simply return.\n+ */\n+ SpinLockAcquire(&SlotSyncCtx->mutex);\n+ if (SlotSyncCtx->stopSignaled)\n+ {\n+ ereport(LOG,\n+ errmsg(\"skipping slot synchronization as slot sync shutdown is\nsignaled during promotion\"));\n+\n+ SpinLockRelease(&SlotSyncCtx->mutex);\n+ return;\n+ }\n+ SpinLockRelease(&SlotSyncCtx->mutex);\n\nThere is a race condition with this code. Say during promotion\nShutDownSlotSync() is just before setting this flag and the user has\ninvoked pg_sync_replication_slots() and passed this check but still\ndidn't set the SlotSyncCtx->syncing flag. So, now, the promotion would\nrecognize that there is slot sync going on in parallel, and slot sync\nwouldn't know that the promotion is in progress.\n\nThe current coding for slot sync worker ensures there is no such race\nby checking the stopSignaled and setting the pid together under\nspinlock. I think we need to move the setting of the syncing flag\nsimilarly. Once we do that probably checking SlotSyncCtx->syncing\nshould be sufficient in ShutDownSlotSync(). If we change the location\nof setting the 'syncing' flag then please ensure its cleanup as we\ncurrently do in slotsync_failure_callback().\n\n2.\n@@ -1395,6 +1395,7 @@ update_synced_slots_inactive_since(void)\n if (s->in_use && s->data.synced)\n {\n Assert(SlotIsLogical(s));\n+ Assert(s->active_pid == 0);\n\nWe can add a comment like: \"The slot must not be acquired by any process\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 6 Apr 2024 11:49:19 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Fri, Apr 5, 2024 at 10:31 AM shveta malik <[email protected]> wrote:\n>\n> Please find v2. Changes are:\n\nThanks for the patch. Here are some comments.\n\n1. Can we have a clear saying in the shmem variable who's syncing at\nthe moment? 
Is it a slot sync worker or a backend via SQL function?\nPerhaps turn \"bool syncing;\" to \"SlotSyncSource sync_source;\"\n\ntypedef enum SlotSyncSource\n{\n SLOT_SYNC_NONE,\n SLOT_SYNC_WORKER,\n SLOT_SYNC_BACKEND,\n} SlotSyncSource;\n\nThen, the check in ShutDownSlotSync can be:\n\n+ /*\n+ * Return if neither the slot sync worker is running nor the function\n+ * pg_sync_replication_slots() is executing.\n+ */\n+ if ((SlotSyncCtx->pid == InvalidPid) &&\nSlotSyncCtx->sync_source != SLOT_SYNC_BACKEND)\n {\n\n2.\nSyncReplicationSlots(WalReceiverConn *wrconn)\n {\n+ /*\n+ * Startup process signaled the slot sync to stop, so if meanwhile user\n+ * has invoked slot sync SQL function, simply return.\n+ */\n+ SpinLockAcquire(&SlotSyncCtx->mutex);\n+ if (SlotSyncCtx->stopSignaled)\n+ {\n+ ereport(LOG,\n+ errmsg(\"skipping slot synchronization as slot sync\nshutdown is signaled during promotion\"));\n+\n\nUnless I'm missing something, I think this can't detect if the backend\nvia SQL function is already half-way through syncing in\nsynchronize_one_slot. So, better move this check to (or also have it\nthere) slot sync loop that calls synchronize_one_slot. To avoid\nspinlock acquisitions, we can perhaps do this check in when we acquire\nthe spinlock for synced flag.\n\n /* Search for the named slot */\n if ((slot = SearchNamedReplicationSlot(remote_slot->name, true)))\n {\n bool synced;\n\n SpinLockAcquire(&slot->mutex);\n synced = slot->data.synced;\n << get SlotSyncCtx->stopSignaled here >>\n SpinLockRelease(&slot->mutex);\n\n << do the slot sync skip check here if (stopSignaled) >>\n\n3. Can we have a test or steps at least to check the consequences\nmanually one can get if slot syncing via SQL function is happening\nduring the promotion?\n\nIIUC, we need to ensure there is no backend acquiring it and\nperforming sync while the slot sync worker is shutting down/standby\npromotion is occuring. Otherwise, some of the slots can get resynced\nand some are not while we are shutting down the slot sync worker as\npart of the standby promotion which might leave the slots in an\ninconsistent state.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 6 Apr 2024 12:25:26 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Sat, Apr 6, 2024 at 11:49 AM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Apr 5, 2024 at 10:31 AM shveta malik <[email protected]> wrote:\n> >\n> > Please find v2. Changes are:\n> > 1) Rebased the patch as there was conflict due to recent commit 6f132ed.\n> > 2) Added an Assert in update_synced_slots_inactive_since() to ensure\n> > that the slot does not have active_pid.\n> > 3) Improved commit msg and comments.\n> >\n>\n> Few comments:\n> ==============\n> 1.\n> void\n> SyncReplicationSlots(WalReceiverConn *wrconn)\n> {\n> + /*\n> + * Startup process signaled the slot sync to stop, so if meanwhile user\n> + * has invoked slot sync SQL function, simply return.\n> + */\n> + SpinLockAcquire(&SlotSyncCtx->mutex);\n> + if (SlotSyncCtx->stopSignaled)\n> + {\n> + ereport(LOG,\n> + errmsg(\"skipping slot synchronization as slot sync shutdown is\n> signaled during promotion\"));\n> +\n> + SpinLockRelease(&SlotSyncCtx->mutex);\n> + return;\n> + }\n> + SpinLockRelease(&SlotSyncCtx->mutex);\n>\n> There is a race condition with this code. 
Say during promotion\n> ShutDownSlotSync() is just before setting this flag and the user has\n> invoked pg_sync_replication_slots() and passed this check but still\n> didn't set the SlotSyncCtx->syncing flag. So, now, the promotion would\n> recognize that there is slot sync going on in parallel, and slot sync\n> wouldn't know that the promotion is in progress.\n\nDid you mean that now, the promotion *would not* recognize...\n\nI see, I will fix this.\n\n> The current coding for slot sync worker ensures there is no such race\n> by checking the stopSignaled and setting the pid together under\n> spinlock. I think we need to move the setting of the syncing flag\n> similarly. Once we do that probably checking SlotSyncCtx->syncing\n> should be sufficient in ShutDownSlotSync(). If we change the location\n> of setting the 'syncing' flag then please ensure its cleanup as we\n> currently do in slotsync_failure_callback().\n\nSure, let me review.\n\n> 2.\n> @@ -1395,6 +1395,7 @@ update_synced_slots_inactive_since(void)\n> if (s->in_use && s->data.synced)\n> {\n> Assert(SlotIsLogical(s));\n> + Assert(s->active_pid == 0);\n>\n> We can add a comment like: \"The slot must not be acquired by any process\"\n>\n> --\n> With Regards,\n> Amit Kapila.\n\n\n", "msg_date": "Fri, 12 Apr 2024 07:47:39 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Sat, Apr 6, 2024 at 12:25 PM Bharath Rupireddy\n<[email protected]> wrote:\n>\n> On Fri, Apr 5, 2024 at 10:31 AM shveta malik <[email protected]> wrote:\n> >\n> > Please find v2. Changes are:\n>\n> Thanks for the patch. Here are some comments.\n\nThanks for reviewing.\n>\n> 1. Can we have a clear saying in the shmem variable who's syncing at\n> the moment? Is it a slot sync worker or a backend via SQL function?\n> Perhaps turn \"bool syncing;\" to \"SlotSyncSource sync_source;\"\n>\n> typedef enum SlotSyncSource\n> {\n> SLOT_SYNC_NONE,\n> SLOT_SYNC_WORKER,\n> SLOT_SYNC_BACKEND,\n> } SlotSyncSource;\n>\n> Then, the check in ShutDownSlotSync can be:\n>\n> + /*\n> + * Return if neither the slot sync worker is running nor the function\n> + * pg_sync_replication_slots() is executing.\n> + */\n> + if ((SlotSyncCtx->pid == InvalidPid) &&\n> SlotSyncCtx->sync_source != SLOT_SYNC_BACKEND)\n> {\n>\n> 2.\n> SyncReplicationSlots(WalReceiverConn *wrconn)\n> {\n> + /*\n> + * Startup process signaled the slot sync to stop, so if meanwhile user\n> + * has invoked slot sync SQL function, simply return.\n> + */\n> + SpinLockAcquire(&SlotSyncCtx->mutex);\n> + if (SlotSyncCtx->stopSignaled)\n> + {\n> + ereport(LOG,\n> + errmsg(\"skipping slot synchronization as slot sync\n> shutdown is signaled during promotion\"));\n> +\n>\n> Unless I'm missing something, I think this can't detect if the backend\n> via SQL function is already half-way through syncing in\n> synchronize_one_slot. So, better move this check to (or also have it\n> there) slot sync loop that calls synchronize_one_slot. To avoid\n> spinlock acquisitions, we can perhaps do this check in when we acquire\n> the spinlock for synced flag.\n\nIf the sync via SQL function is already half-way, then promotion\nshould wait for it to finish. I don't think it is a good idea to move\nthe check to synchronize_one_slot(). The sync-call should either not\nstart (if it noticed the promotion) or finish the sync and then let\npromotion proceed. 
But I would like to know others' opinion on this.\n\n>\n> /* Search for the named slot */\n> if ((slot = SearchNamedReplicationSlot(remote_slot->name, true)))\n> {\n> bool synced;\n>\n> SpinLockAcquire(&slot->mutex);\n> synced = slot->data.synced;\n> << get SlotSyncCtx->stopSignaled here >>\n> SpinLockRelease(&slot->mutex);\n>\n> << do the slot sync skip check here if (stopSignaled) >>\n>\n> 3. Can we have a test or steps at least to check the consequences\n> manually one can get if slot syncing via SQL function is happening\n> during the promotion?\n>\n> IIUC, we need to ensure there is no backend acquiring it and\n> performing sync while the slot sync worker is shutting down/standby\n> promotion is occuring. Otherwise, some of the slots can get resynced\n> and some are not while we are shutting down the slot sync worker as\n> part of the standby promotion which might leave the slots in an\n> inconsistent state.\n\nI do not think that we can reach a state (exception is some error\nscenario) where some of the slots are synced while the rest are not\nduring a *particular* sync-cycle only because promotion is going in\nparallel. (And yes we need to fix the race-condition stated by Amit\nup-thread for this statement to be true.)\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 12 Apr 2024 07:57:39 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Fri, Apr 12, 2024 at 7:47 AM shveta malik <[email protected]> wrote:\n>\n> On Sat, Apr 6, 2024 at 11:49 AM Amit Kapila <[email protected]> wrote:\n> >\n> >\n> > Few comments:\n> > ==============\n> > 1.\n> > void\n> > SyncReplicationSlots(WalReceiverConn *wrconn)\n> > {\n> > + /*\n> > + * Startup process signaled the slot sync to stop, so if meanwhile user\n> > + * has invoked slot sync SQL function, simply return.\n> > + */\n> > + SpinLockAcquire(&SlotSyncCtx->mutex);\n> > + if (SlotSyncCtx->stopSignaled)\n> > + {\n> > + ereport(LOG,\n> > + errmsg(\"skipping slot synchronization as slot sync shutdown is\n> > signaled during promotion\"));\n> > +\n> > + SpinLockRelease(&SlotSyncCtx->mutex);\n> > + return;\n> > + }\n> > + SpinLockRelease(&SlotSyncCtx->mutex);\n> >\n> > There is a race condition with this code. Say during promotion\n> > ShutDownSlotSync() is just before setting this flag and the user has\n> > invoked pg_sync_replication_slots() and passed this check but still\n> > didn't set the SlotSyncCtx->syncing flag. So, now, the promotion would\n> > recognize that there is slot sync going on in parallel, and slot sync\n> > wouldn't know that the promotion is in progress.\n>\n> Did you mean that now, the promotion *would not* recognize...\n>\n\nRight.\n\n> I see, I will fix this.\n>\n\nThanks.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 12 Apr 2024 16:07:06 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Fri, Apr 12, 2024 at 7:57 AM shveta malik <[email protected]> wrote:\n>\n> On Sat, Apr 6, 2024 at 12:25 PM Bharath Rupireddy\n> <[email protected]> wrote:\n> >\n> > On Fri, Apr 5, 2024 at 10:31 AM shveta malik <[email protected]> wrote:\n> > >\n> > > Please find v2. Changes are:\n> >\n> > Thanks for the patch. Here are some comments.\n>\n> Thanks for reviewing.\n> >\n> > 1. Can we have a clear saying in the shmem variable who's syncing at\n> > the moment? 
Is it a slot sync worker or a backend via SQL function?\n> > Perhaps turn \"bool syncing;\" to \"SlotSyncSource sync_source;\"\n> >\n> > typedef enum SlotSyncSource\n> > {\n> > SLOT_SYNC_NONE,\n> > SLOT_SYNC_WORKER,\n> > SLOT_SYNC_BACKEND,\n> > } SlotSyncSource;\n> >\n> > Then, the check in ShutDownSlotSync can be:\n> >\n> > + /*\n> > + * Return if neither the slot sync worker is running nor the function\n> > + * pg_sync_replication_slots() is executing.\n> > + */\n> > + if ((SlotSyncCtx->pid == InvalidPid) &&\n> > SlotSyncCtx->sync_source != SLOT_SYNC_BACKEND)\n> > {\n> >\n\nI don't know if this will be help, especially after fixing the race\ncondition I mentioned. But otherwise, also, at this stage it doesn't\nseem helpful to add the source of sync explicitly.\n\n> > 2.\n> > SyncReplicationSlots(WalReceiverConn *wrconn)\n> > {\n> > + /*\n> > + * Startup process signaled the slot sync to stop, so if meanwhile user\n> > + * has invoked slot sync SQL function, simply return.\n> > + */\n> > + SpinLockAcquire(&SlotSyncCtx->mutex);\n> > + if (SlotSyncCtx->stopSignaled)\n> > + {\n> > + ereport(LOG,\n> > + errmsg(\"skipping slot synchronization as slot sync\n> > shutdown is signaled during promotion\"));\n> > +\n> >\n> > Unless I'm missing something, I think this can't detect if the backend\n> > via SQL function is already half-way through syncing in\n> > synchronize_one_slot. So, better move this check to (or also have it\n> > there) slot sync loop that calls synchronize_one_slot. To avoid\n> > spinlock acquisitions, we can perhaps do this check in when we acquire\n> > the spinlock for synced flag.\n>\n> If the sync via SQL function is already half-way, then promotion\n> should wait for it to finish. I don't think it is a good idea to move\n> the check to synchronize_one_slot(). The sync-call should either not\n> start (if it noticed the promotion) or finish the sync and then let\n> promotion proceed. But I would like to know others' opinion on this.\n>\n\nAgreed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 12 Apr 2024 16:17:53 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Fri, Apr 12, 2024 at 4:18 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Apr 12, 2024 at 7:57 AM shveta malik <[email protected]> wrote:\n> >\n> > On Sat, Apr 6, 2024 at 12:25 PM Bharath Rupireddy\n> > <[email protected]> wrote:\n> > >\n> > > On Fri, Apr 5, 2024 at 10:31 AM shveta malik <[email protected]> wrote:\n> > > >\n> > > > Please find v2. Changes are:\n> > >\n> > > Thanks for the patch. Here are some comments.\n> >\n> > Thanks for reviewing.\n> > >\n> > > 1. Can we have a clear saying in the shmem variable who's syncing at\n> > > the moment? Is it a slot sync worker or a backend via SQL function?\n> > > Perhaps turn \"bool syncing;\" to \"SlotSyncSource sync_source;\"\n> > >\n> > > typedef enum SlotSyncSource\n> > > {\n> > > SLOT_SYNC_NONE,\n> > > SLOT_SYNC_WORKER,\n> > > SLOT_SYNC_BACKEND,\n> > > } SlotSyncSource;\n> > >\n> > > Then, the check in ShutDownSlotSync can be:\n> > >\n> > > + /*\n> > > + * Return if neither the slot sync worker is running nor the function\n> > > + * pg_sync_replication_slots() is executing.\n> > > + */\n> > > + if ((SlotSyncCtx->pid == InvalidPid) &&\n> > > SlotSyncCtx->sync_source != SLOT_SYNC_BACKEND)\n> > > {\n> > >\n>\n> I don't know if this will be help, especially after fixing the race\n> condition I mentioned. 
But otherwise, also, at this stage it doesn't\n> seem helpful to add the source of sync explicitly.\n>\n\nAgreed.\n\nPlease find v3 addressing race-condition and one other comment.\n\nUp-thread it was suggested that, probably, checking\nSlotSyncCtx->syncing should be sufficient in ShutDownSlotSync(). On\nre-thinking, it might not be. Slot sync worker sets and resets\n'syncing' with each sync-cycle, and thus we need to rely on worker's\npid in ShutDownSlotSync(), as there could be a window where promotion\nis triggered and 'syncing' is not set for worker, while the worker is\nstill running. This implementation of setting and resetting syncing\nwith each sync-cycle looks better as compared to setting syncing\nduring the entire life-cycle of the worker. So, I did not change it.\n\nTo fix the race condition, I moved the setting of the 'syncing' flag\ntogether with the 'stopSignaled' check under the same spinLock for the\nSQL function. OTOH, for worker, I feel it is good to check\n'stopSignaled' at the beginning itself, while retaining the\nsetting/resetting of 'syncing' at a later stage during the actual sync\ncycle. This makes handling for SQL function and worker slightly\ndifferent. And thus to achieve this, I had to take the 'syncing' flag\nhandling out of synchronize_slots() and move it to both worker and SQL\nfunction by introducing 2 new functions check_and_set_syncing_flag()\nand reset_syncing_flag().\nI am analyzing if there are better ways to achieve this, any\nsuggestions are welcome.\n\nthanks\nShveta", "msg_date": "Fri, 12 Apr 2024 17:24:50 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Fri, Apr 12, 2024 at 5:25 PM shveta malik <[email protected]> wrote:\n>\n> Please find v3 addressing race-condition and one other comment.\n>\n> Up-thread it was suggested that, probably, checking\n> SlotSyncCtx->syncing should be sufficient in ShutDownSlotSync(). On\n> re-thinking, it might not be. Slot sync worker sets and resets\n> 'syncing' with each sync-cycle, and thus we need to rely on worker's\n> pid in ShutDownSlotSync(), as there could be a window where promotion\n> is triggered and 'syncing' is not set for worker, while the worker is\n> still running. This implementation of setting and resetting syncing\n> with each sync-cycle looks better as compared to setting syncing\n> during the entire life-cycle of the worker. So, I did not change it.\n>\n\nTo retain this we need to have different handling for 'syncing' for\nworkers and function which seems like more maintenance burden than the\nvalue it provides. Moreover, in SyncReplicationSlots(), we are calling\na function after acquiring spinlock which is not our usual coding\npractice.\n\nOne minor comment:\n * All the fields except 'syncing' are used only by slotsync worker.\n * 'syncing' is used both by worker and SQL function pg_sync_replication_slots.\n */\ntypedef struct SlotSyncCtxStruct\n{\npid_t pid;\nbool stopSignaled;\nbool syncing;\ntime_t last_start_time;\nslock_t mutex;\n} SlotSyncCtxStruct;\n\nI feel the above comment is no longer valid after this patch. 
We can\nprobably remove this altogether.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 15 Apr 2024 14:29:16 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Mon, Apr 15, 2024 at 2:29 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Apr 12, 2024 at 5:25 PM shveta malik <[email protected]> wrote:\n> >\n> > Please find v3 addressing race-condition and one other comment.\n> >\n> > Up-thread it was suggested that, probably, checking\n> > SlotSyncCtx->syncing should be sufficient in ShutDownSlotSync(). On\n> > re-thinking, it might not be. Slot sync worker sets and resets\n> > 'syncing' with each sync-cycle, and thus we need to rely on worker's\n> > pid in ShutDownSlotSync(), as there could be a window where promotion\n> > is triggered and 'syncing' is not set for worker, while the worker is\n> > still running. This implementation of setting and resetting syncing\n> > with each sync-cycle looks better as compared to setting syncing\n> > during the entire life-cycle of the worker. So, I did not change it.\n> >\n>\n> To retain this we need to have different handling for 'syncing' for\n> workers and function which seems like more maintenance burden than the\n> value it provides. Moreover, in SyncReplicationSlots(), we are calling\n> a function after acquiring spinlock which is not our usual coding\n> practice.\n\nOkay. Changed it to consistent handling. Now both worker and SQL\nfunction set 'syncing' when they start and reset it when they exit.\n\n> One minor comment:\n> * All the fields except 'syncing' are used only by slotsync worker.\n> * 'syncing' is used both by worker and SQL function pg_sync_replication_slots.\n> */\n> typedef struct SlotSyncCtxStruct\n> {\n> pid_t pid;\n> bool stopSignaled;\n> bool syncing;\n> time_t last_start_time;\n> slock_t mutex;\n> } SlotSyncCtxStruct;\n>\n> I feel the above comment is no longer valid after this patch. We can\n> probably remove this altogether.\n\nYes, changed.\n\nPlease find v4 addressing the above comments.\n\nthanks\nShveta", "msg_date": "Mon, 15 Apr 2024 15:38:53 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "Hi,\n\nOn Mon, Apr 15, 2024 at 03:38:53PM +0530, shveta malik wrote:\n> On Mon, Apr 15, 2024 at 2:29 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Fri, Apr 12, 2024 at 5:25 PM shveta malik <[email protected]> wrote:\n> > >\n> > > Please find v3 addressing race-condition and one other comment.\n> > >\n> > > Up-thread it was suggested that, probably, checking\n> > > SlotSyncCtx->syncing should be sufficient in ShutDownSlotSync(). On\n> > > re-thinking, it might not be. Slot sync worker sets and resets\n> > > 'syncing' with each sync-cycle, and thus we need to rely on worker's\n> > > pid in ShutDownSlotSync(), as there could be a window where promotion\n> > > is triggered and 'syncing' is not set for worker, while the worker is\n> > > still running. This implementation of setting and resetting syncing\n> > > with each sync-cycle looks better as compared to setting syncing\n> > > during the entire life-cycle of the worker. So, I did not change it.\n> > >\n> >\n> > To retain this we need to have different handling for 'syncing' for\n> > workers and function which seems like more maintenance burden than the\n> > value it provides. 
Moreover, in SyncReplicationSlots(), we are calling\n> > a function after acquiring spinlock which is not our usual coding\n> > practice.\n> \n> Okay. Changed it to consistent handling.\n\nThanks for the patch!\n\n> Now both worker and SQL\n> function set 'syncing' when they start and reset it when they exit.\n\nIt means that it's not possible anymore to trigger a manual sync if \nsync_replication_slots is on. Indeed that would trigger:\n\npostgres=# select pg_sync_replication_slots();\nERROR: cannot synchronize replication slots concurrently\n\nThat looks like an issue to me, thoughts?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 15 Apr 2024 12:51:05 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Mon, Apr 15, 2024 at 6:21 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Mon, Apr 15, 2024 at 03:38:53PM +0530, shveta malik wrote:\n> > On Mon, Apr 15, 2024 at 2:29 PM Amit Kapila <[email protected]> wrote:\n> > >\n> > > On Fri, Apr 12, 2024 at 5:25 PM shveta malik <[email protected]> wrote:\n> > > >\n> > > > Please find v3 addressing race-condition and one other comment.\n> > > >\n> > > > Up-thread it was suggested that, probably, checking\n> > > > SlotSyncCtx->syncing should be sufficient in ShutDownSlotSync(). On\n> > > > re-thinking, it might not be. Slot sync worker sets and resets\n> > > > 'syncing' with each sync-cycle, and thus we need to rely on worker's\n> > > > pid in ShutDownSlotSync(), as there could be a window where promotion\n> > > > is triggered and 'syncing' is not set for worker, while the worker is\n> > > > still running. This implementation of setting and resetting syncing\n> > > > with each sync-cycle looks better as compared to setting syncing\n> > > > during the entire life-cycle of the worker. So, I did not change it.\n> > > >\n> > >\n> > > To retain this we need to have different handling for 'syncing' for\n> > > workers and function which seems like more maintenance burden than the\n> > > value it provides. Moreover, in SyncReplicationSlots(), we are calling\n> > > a function after acquiring spinlock which is not our usual coding\n> > > practice.\n> >\n> > Okay. Changed it to consistent handling.\n>\n> Thanks for the patch!\n>\n> > Now both worker and SQL\n> > function set 'syncing' when they start and reset it when they exit.\n>\n> It means that it's not possible anymore to trigger a manual sync if\n> sync_replication_slots is on. Indeed that would trigger:\n>\n> postgres=# select pg_sync_replication_slots();\n> ERROR: cannot synchronize replication slots concurrently\n>\n> That looks like an issue to me, thoughts?\n>\n\nThis is intentional as of now for the sake of keeping\nimplementation/code simple. It is not difficult to allow them but I am\nnot sure whether we want to add another set of conditions allowing\nthem in parallel. 
And that too in an unpredictable way as the API will\nwork only for the time slot sync worker is not performing the sync.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 15 Apr 2024 18:29:49 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "Hi,\n\nOn Mon, Apr 15, 2024 at 06:29:49PM +0530, Amit Kapila wrote:\n> On Mon, Apr 15, 2024 at 6:21 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Mon, Apr 15, 2024 at 03:38:53PM +0530, shveta malik wrote:\n> > > Now both worker and SQL\n> > > function set 'syncing' when they start and reset it when they exit.\n> >\n> > It means that it's not possible anymore to trigger a manual sync if\n> > sync_replication_slots is on. Indeed that would trigger:\n> >\n> > postgres=# select pg_sync_replication_slots();\n> > ERROR: cannot synchronize replication slots concurrently\n> >\n> > That looks like an issue to me, thoughts?\n> >\n> \n> This is intentional as of now for the sake of keeping\n> implementation/code simple. It is not difficult to allow them but I am\n> not sure whether we want to add another set of conditions allowing\n> them in parallel.\n\nI think that the ability to launch a manual sync before a switchover would be\nmissed. Except for this case I don't think that's an issue to prevent them to\nrun in parallel.\n\n> And that too in an unpredictable way as the API will\n> work only for the time slot sync worker is not performing the sync.\n\nYeah but then at least you would know that there is \"really\" a sync in progress\n(which is not the case currently with v4, as the sync worker being started is\nenough to prevent a manual sync even if a sync is not in progress).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 15 Apr 2024 14:17:03 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Mon, Apr 15, 2024 at 7:47 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Mon, Apr 15, 2024 at 06:29:49PM +0530, Amit Kapila wrote:\n> > On Mon, Apr 15, 2024 at 6:21 PM Bertrand Drouvot\n> > <[email protected]> wrote:\n> > >\n> > > On Mon, Apr 15, 2024 at 03:38:53PM +0530, shveta malik wrote:\n> > > > Now both worker and SQL\n> > > > function set 'syncing' when they start and reset it when they exit.\n> > >\n> > > It means that it's not possible anymore to trigger a manual sync if\n> > > sync_replication_slots is on. Indeed that would trigger:\n> > >\n> > > postgres=# select pg_sync_replication_slots();\n> > > ERROR: cannot synchronize replication slots concurrently\n> > >\n> > > That looks like an issue to me, thoughts?\n> > >\n> >\n> > This is intentional as of now for the sake of keeping\n> > implementation/code simple. It is not difficult to allow them but I am\n> > not sure whether we want to add another set of conditions allowing\n> > them in parallel.\n>\n> I think that the ability to launch a manual sync before a switchover would be\n> missed. Except for this case I don't think that's an issue to prevent them to\n> run in parallel.\n>\n\nI think if the slotsync worker is available, it can do that as well.\nThere is no clear use case for allowing them in parallel and I feel it\nwould add more confusion when it can work sometimes but not other\ntimes. 
However, if we receive some report from the field where there\nis a real demand for such a thing, it should be easy to achieve. For\nexample, I can imagine that we can have sync_state that has values\n'started', 'in_progress' , and 'finished'. This should allow us to\nachieve what the current proposed patch is doing along with allowing\nthe API to work in parallel when the sync_state is not 'in_progress'.\n\nI think for now let's restrict their usage in parallel and make the\npromotion behavior consistent both for worker and API.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 16 Apr 2024 08:21:04 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Monday, April 15, 2024 6:09 PM shveta malik <[email protected]> wrote:\r\n> \r\n> Please find v4 addressing the above comments.\r\n\r\nThanks for the patch.\r\n\r\nHere are few comments:\r\n\r\n1.\r\n\r\n+\t\t\tereport(ERROR,\r\n+\t\t\t\t\terrmsg(\"promotion in progress, can not synchronize replication slots\"));\r\n+\t\t}\r\n\r\nI think an errcode is needed.\r\n\r\nThe style of the error message seems a bit unnatural to me. I suggest:\r\n\"cannot synchronize replication slots when standby promotion is ongoing\"\r\n\r\n\r\n2.\r\n\r\n+\tif (worker_pid != InvalidPid)\r\n+\t\tAssert(SlotSyncCtx->pid == InvalidPid);\r\n\r\nWe could merge the checks into one Assert().\r\nAssert(SlotSyncCtx->pid == InvalidPid || worker_pid == InvalidPid);\r\n\r\n\r\n3.\r\n\r\n-\tpqsignal(SIGINT, SignalHandlerForShutdownRequest);\r\n\r\nI realized that we should register this before setting SlotSyncCtx->pid,\r\notherwise if the standby is promoted after setting pid and before registering\r\nsignal handle function, the slotsync worker could miss to handle SIGINT sent by\r\nstartup process(ShutDownSlotSync). This is an existing issue for slotsync\r\nworker, but maybe we could fix it together with the patch.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Tue, 16 Apr 2024 03:57:44 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Tue, Apr 16, 2024 at 9:27 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Monday, April 15, 2024 6:09 PM shveta malik <[email protected]> wrote:\n> >\n> > Please find v4 addressing the above comments.\n>\n> Thanks for the patch.\n>\n> Here are few comments:\n\nThanks for reviewing the patch.\n\n>\n> 1.\n>\n> + ereport(ERROR,\n> + errmsg(\"promotion in progress, can not synchronize replication slots\"));\n> + }\n>\n> I think an errcode is needed.\n>\n> The style of the error message seems a bit unnatural to me. I suggest:\n> \"cannot synchronize replication slots when standby promotion is ongoing\"\n\nModified.\n\n>\n> 2.\n>\n> + if (worker_pid != InvalidPid)\n> + Assert(SlotSyncCtx->pid == InvalidPid);\n>\n> We could merge the checks into one Assert().\n> Assert(SlotSyncCtx->pid == InvalidPid || worker_pid == InvalidPid);\n\nModified.\n\n>\n> 3.\n>\n> - pqsignal(SIGINT, SignalHandlerForShutdownRequest);\n>\n> I realized that we should register this before setting SlotSyncCtx->pid,\n> otherwise if the standby is promoted after setting pid and before registering\n> signal handle function, the slotsync worker could miss to handle SIGINT sent by\n> startup process(ShutDownSlotSync). 
This is an existing issue for slotsync\n> worker, but maybe we could fix it together with the patch.\n\nYes, it seems like a problem. Fixed it. Also to be consistent, moved\nother signal handlers' registration as well before we set pid.\n\nPlease find v5 addressing above comments.\n\nthanks\nShveta", "msg_date": "Tue, 16 Apr 2024 10:00:04 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "Hi,\n\nOn Tue, Apr 16, 2024 at 08:21:04AM +0530, Amit Kapila wrote:\n> On Mon, Apr 15, 2024 at 7:47 PM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > On Mon, Apr 15, 2024 at 06:29:49PM +0530, Amit Kapila wrote:\n> > > On Mon, Apr 15, 2024 at 6:21 PM Bertrand Drouvot\n> > > <[email protected]> wrote:\n> > > >\n> > > > On Mon, Apr 15, 2024 at 03:38:53PM +0530, shveta malik wrote:\n> > > > > Now both worker and SQL\n> > > > > function set 'syncing' when they start and reset it when they exit.\n> > > >\n> > > > It means that it's not possible anymore to trigger a manual sync if\n> > > > sync_replication_slots is on. Indeed that would trigger:\n> > > >\n> > > > postgres=# select pg_sync_replication_slots();\n> > > > ERROR: cannot synchronize replication slots concurrently\n> > > >\n> > > > That looks like an issue to me, thoughts?\n> > > >\n> > >\n> > > This is intentional as of now for the sake of keeping\n> > > implementation/code simple. It is not difficult to allow them but I am\n> > > not sure whether we want to add another set of conditions allowing\n> > > them in parallel.\n> >\n> > I think that the ability to launch a manual sync before a switchover would be\n> > missed. Except for this case I don't think that's an issue to prevent them to\n> > run in parallel.\n> >\n> \n> I think if the slotsync worker is available, it can do that as well.\n\nRight, but one has no control as to when the sync is triggered. \n\n> There is no clear use case for allowing them in parallel and I feel it\n> would add more confusion when it can work sometimes but not other\n> times. However, if we receive some report from the field where there\n> is a real demand for such a thing, it should be easy to achieve. For\n> example, I can imagine that we can have sync_state that has values\n> 'started', 'in_progress' , and 'finished'. This should allow us to\n> achieve what the current proposed patch is doing along with allowing\n> the API to work in parallel when the sync_state is not 'in_progress'.\n> \n> I think for now let's restrict their usage in parallel and make the\n> promotion behavior consistent both for worker and API.\n\nOkay, let's do it that way. Is it worth to add a few words in the doc related to\npg_sync_replication_slots() though? (to mention it can not be used if the sync\nslot worker is running).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 16 Apr 2024 06:33:27 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Tue, Apr 16, 2024 at 12:03 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Tue, Apr 16, 2024 at 08:21:04AM +0530, Amit Kapila wrote:\n>\n> > There is no clear use case for allowing them in parallel and I feel it\n> > would add more confusion when it can work sometimes but not other\n> > times. 
However, if we receive some report from the field where there\n> > is a real demand for such a thing, it should be easy to achieve. For\n> > example, I can imagine that we can have sync_state that has values\n> > 'started', 'in_progress' , and 'finished'. This should allow us to\n> > achieve what the current proposed patch is doing along with allowing\n> > the API to work in parallel when the sync_state is not 'in_progress'.\n> >\n> > I think for now let's restrict their usage in parallel and make the\n> > promotion behavior consistent both for worker and API.\n>\n> Okay, let's do it that way. Is it worth to add a few words in the doc related to\n> pg_sync_replication_slots() though? (to mention it can not be used if the sync\n> slot worker is running).\n>\n\nYes, this makes sense to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 16 Apr 2024 12:08:26 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "Hi,\n\nOn Tue, Apr 16, 2024 at 10:00:04AM +0530, shveta malik wrote:\n> Please find v5 addressing above comments.\n\nThanks!\n\n@@ -1634,9 +1677,14 @@ SyncReplicationSlots(WalReceiverConn *wrconn)\n {\n PG_ENSURE_ERROR_CLEANUP(slotsync_failure_callback, PointerGetDatum(wrconn));\n {\n+ check_flags_and_set_sync_info(InvalidPid);\n+\n\nGiven the fact that if the sync worker is running it won't be possible to trigger\na manual sync with pg_sync_replication_slots(), what about also checking the \n\"sync_replication_slots\" value at the start of SyncReplicationSlots() and\nemmit an error if sync_replication_slots is set to on? (The message could explicitly\nstates that it's not possible to use the function if sync_replication_slots is\nset to on).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 16 Apr 2024 06:51:43 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Tue, Apr 16, 2024 at 12:03 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> > I think for now let's restrict their usage in parallel and make the\n> > promotion behavior consistent both for worker and API.\n>\n> Okay, let's do it that way. Is it worth to add a few words in the doc related to\n> pg_sync_replication_slots() though? (to mention it can not be used if the sync\n> slot worker is running).\n\n+1. 
Please find v6 having the suggested doc changes.\n\n\nthanks\nShveta", "msg_date": "Tue, 16 Apr 2024 13:52:39 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Tuesday, April 16, 2024 2:52 PM Bertrand Drouvot <[email protected]> wrote:\n\n\nHi,\n\n> On Tue, Apr 16, 2024 at 10:00:04AM +0530, shveta malik wrote:\n> > Please find v5 addressing above comments.\n> \n> Thanks!\n> \n> @@ -1634,9 +1677,14 @@ SyncReplicationSlots(WalReceiverConn *wrconn) {\n> PG_ENSURE_ERROR_CLEANUP(slotsync_failure_callback,\n> PointerGetDatum(wrconn));\n> {\n> + check_flags_and_set_sync_info(InvalidPid);\n> +\n> \n> Given the fact that if the sync worker is running it won't be possible to trigger a\n> manual sync with pg_sync_replication_slots(), what about also checking the\n> \"sync_replication_slots\" value at the start of SyncReplicationSlots() and emmit\n> an error if sync_replication_slots is set to on? (The message could explicitly\n> states that it's not possible to use the function if sync_replication_slots is set to\n> on).\n\nI personally feel adding the additional check for sync_replication_slots may\nnot improve the situation here. Because the GUC sync_replication_slots can\nchange at any point, the GUC could be false when performing this addition check\nand is set to true immediately after the check, so It could not simplify the logic\nanyway.\n\nBest Regards,\nHou zj\n\n\n", "msg_date": "Tue, 16 Apr 2024 08:25:32 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Tue, Apr 16, 2024 at 1:55 PM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Tuesday, April 16, 2024 2:52 PM Bertrand Drouvot <[email protected]> wrote:\n>\n>\n> Hi,\n>\n> > On Tue, Apr 16, 2024 at 10:00:04AM +0530, shveta malik wrote:\n> > > Please find v5 addressing above comments.\n> >\n> > Thanks!\n> >\n> > @@ -1634,9 +1677,14 @@ SyncReplicationSlots(WalReceiverConn *wrconn) {\n> > PG_ENSURE_ERROR_CLEANUP(slotsync_failure_callback,\n> > PointerGetDatum(wrconn));\n> > {\n> > + check_flags_and_set_sync_info(InvalidPid);\n> > +\n> >\n> > Given the fact that if the sync worker is running it won't be possible to trigger a\n> > manual sync with pg_sync_replication_slots(), what about also checking the\n> > \"sync_replication_slots\" value at the start of SyncReplicationSlots() and emmit\n> > an error if sync_replication_slots is set to on? (The message could explicitly\n> > states that it's not possible to use the function if sync_replication_slots is set to\n> > on).\n>\n> I personally feel adding the additional check for sync_replication_slots may\n> not improve the situation here. Because the GUC sync_replication_slots can\n> change at any point, the GUC could be false when performing this addition check\n> and is set to true immediately after the check, so It could not simplify the logic\n> anyway.\n\n+1.\nI feel doc and \"cannot synchronize replication slots concurrently\"\ncheck should suffice.\n\nIn the scenario which Hou-San pointed out, if after performing the\nGUC check in SQL function, this GUC is enabled immediately and say\nworker is started sooner than the function could get chance to sync,\nin that case as well, SQL function will ultimately get error \"cannot\nsynchronize replication slots concurrently\", even though GUC is\nenabled. 
Thus, I feel we should stick with samer error in all\nscenarios.\n\n\nthanks\nShveta\n\n\n", "msg_date": "Tue, 16 Apr 2024 14:06:45 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "Hi,\n\nOn Tue, Apr 16, 2024 at 02:06:45PM +0530, shveta malik wrote:\n> On Tue, Apr 16, 2024 at 1:55 PM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> > I personally feel adding the additional check for sync_replication_slots may\n> > not improve the situation here. Because the GUC sync_replication_slots can\n> > change at any point, the GUC could be false when performing this addition check\n> > and is set to true immediately after the check, so It could not simplify the logic\n> > anyway.\n> \n> +1.\n> I feel doc and \"cannot synchronize replication slots concurrently\"\n> check should suffice.\n> \n> In the scenario which Hou-San pointed out, if after performing the\n> GUC check in SQL function, this GUC is enabled immediately and say\n> worker is started sooner than the function could get chance to sync,\n> in that case as well, SQL function will ultimately get error \"cannot\n> synchronize replication slots concurrently\", even though GUC is\n> enabled. Thus, I feel we should stick with samer error in all\n> scenarios.\n\nOkay, fine by me, let's forget about checking sync_replication_slots then.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 16 Apr 2024 09:22:06 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Tue, Apr 16, 2024 at 2:52 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, Apr 16, 2024 at 02:06:45PM +0530, shveta malik wrote:\n> > On Tue, Apr 16, 2024 at 1:55 PM Zhijie Hou (Fujitsu)\n> > <[email protected]> wrote:\n> > > I personally feel adding the additional check for sync_replication_slots may\n> > > not improve the situation here. Because the GUC sync_replication_slots can\n> > > change at any point, the GUC could be false when performing this addition check\n> > > and is set to true immediately after the check, so It could not simplify the logic\n> > > anyway.\n> >\n> > +1.\n> > I feel doc and \"cannot synchronize replication slots concurrently\"\n> > check should suffice.\n> >\n> > In the scenario which Hou-San pointed out, if after performing the\n> > GUC check in SQL function, this GUC is enabled immediately and say\n> > worker is started sooner than the function could get chance to sync,\n> > in that case as well, SQL function will ultimately get error \"cannot\n> > synchronize replication slots concurrently\", even though GUC is\n> > enabled. Thus, I feel we should stick with samer error in all\n> > scenarios.\n>\n> Okay, fine by me, let's forget about checking sync_replication_slots then.\n\nThanks.\n\nWhile reviewing and testing this patch further, we encountered 2\nrace-conditions which needs to be handled:\n\n1) For slot sync worker, the order of cleanup execution was a) first\nreset 'syncing' flag (slotsync_failure_callback) b) then reset pid and\nsyncing (slotsync_worker_onexit). But in ShutDownSlotSync(), we rely\nonly on the 'syncing' flag for wait-exit logic. 
So it may so happen\nthat in the window between these two callbacks, ShutDownSlotSync()\nproceeds and calls update_synced_slots_inactive_since() which may then\nhit assert Assert((SlotSyncCtx->pid == InvalidPid).\n\n2) Another problem as described by Hou-San off-list:\nWhen the slotsync worker error out after acquiring a slot, it will\nfirst call slotsync_worker_onexit() and then\nReplicationSlotShmemExit(), so in the window between these two\ncallbacks, it's possible that the SlotSyncCtx->syncing\nSlotSyncCtx->pid has been reset but the slot->active_pid is still\nvalid. The Assert will be broken in this.\n@@ -1471,6 +1503,9 @@ update_synced_slots_inactive_since(void)\n {\n Assert(SlotIsLogical(s));\n\n+ /* The slot must not be acquired by any process */\n+ Assert(s->active_pid == 0);\n+\n\n\nTo fix above issues, these changes have been made in v7:\n1) For worker, replaced slotsync_failure_callback() with\nslotsync_worker_disconnect() so that the latter only disconnects and\nthus slotsync_worker_onexit() does pid cleanup followed by syncing\nflag cleanup. This will make ShutDownSlotSync()'s wait exit reliably.\n\n2) To fix second problem, changes are:\n\n2.1) For worker, moved slotsync_worker_onexit() registration before\nBaseInit() (BaseInit is the one doing ReplicationSlotShmemExit\nregistration). If we do this change in order of registration, then\norder of cleanup for worker will be a) slotsync_worker_disconnect() b)\nReplicationSlotShmemExit() c) slotsync_worker_onexit(). This order\nensures that the worker is actually done with slots release and\ncleanup before it marks itself as done syncing.\n\n2.2) For SQL function, did ReplicationSlotRelease() and\nReplicationSlotCleanup() as first step in slotsync_failure_callback().\n\nWhile doing change 2.2, it occurred to us, that it would be a clean\nsolution to do ReplicationSlotCleanup() even on successful execution\nof SQL function. It seems better that the temporary slots are\ncleaned-up when SQL function exists, as we do not know when the user\nwill run this SQL function again and thus leaving temp slots for\nlonger does not seem a good idea. Thoughts?\n\nthanks\nShveta", "msg_date": "Thu, 18 Apr 2024 12:35:14 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Thu, Apr 18, 2024 at 12:35 PM shveta malik <[email protected]> wrote:\n>\n> To fix above issues, these changes have been made in v7:\n\nPlease find v8 attached. Changes are:\n\n1) It fixes ShutDownSlotSync() issue, where we perform\nkill(SlotSyncCtx->pid). There are chances that after we release\nspin-lock and before we perform kill, slot-sync worker has error-ed\nout and has set SlotSyncCtx->pid to InvalidPid (-1) already. And thus\nkill(-1) could result in abnormal process kills on some platforms.\nNow, we get pid under spin-lock and then use it to perform kill to\navoid pid=-1 kill. This is on a similar line of how ShutdownWalRcv()\ndoes it.\n\n2) Improved comments in code.\n\n3) Updated commit message with new fixes. I had missed to update it in\nthe previous version.\n\nthanks\nShveta", "msg_date": "Thu, 18 Apr 2024 17:36:05 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "Hi,\n\nOn Thu, Apr 18, 2024 at 05:36:05PM +0530, shveta malik wrote:\n> Please find v8 attached. 
Changes are:\n\nThanks!\n\nA few comments:\n\n1 ===\n\n@@ -1440,7 +1461,7 @@ ReplSlotSyncWorkerMain(char *startup_data, size_t startup_data_len)\n * slotsync_worker_onexit() but that will need the connection to be made\n * global and we want to avoid introducing global for this purpose.\n */\n- before_shmem_exit(slotsync_failure_callback, PointerGetDatum(wrconn));\n+ before_shmem_exit(slotsync_worker_disconnect, PointerGetDatum(wrconn));\n\nThe comment above this change still states \"Register the failure callback once\nwe have the connection\", I think it has to be reworded a bit now that v8 is\nmaking use of slotsync_worker_disconnect().\n\n2 ===\n\n+ * Register slotsync_worker_onexit() before we register\n+ * ReplicationSlotShmemExit() in BaseInit(), to ensure that during exit of\n+ * slot sync worker, ReplicationSlotShmemExit() is called first, followed\n+ * by slotsync_worker_onexit(). Startup process during promotion waits for\n\nWorth to mention in shmem_exit() (where it \"while (--before_shmem_exit_index >= 0)\"\nor before the shmem_exit() definition) that ReplSlotSyncWorkerMain() relies on\nthis LIFO behavior? (not sure if there is other \"strong\" LIFO requirement in\nother part of the code).\n\n3 ===\n\n+ * Startup process during promotion waits for slot sync to finish and it\n+ * does that by checking the 'syncing' flag.\n\nworth to mention ShutDownSlotSync()?\n\n4 ===\n\nI did a few tests manually (launching ShutDownSlotSync() through gdb / with and\nwithout sync worker and with / without pg_sync_replication_slots() running\nconcurrently) and it looks like it works as designed.\n\nHaving said that, the logic that is in place to take care of the corner cases\ndescribed up-thread seems reasonable to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 19 Apr 2024 05:23:08 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Fri, Apr 19, 2024 at 10:53 AM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> Hi,\n>\n> On Thu, Apr 18, 2024 at 05:36:05PM +0530, shveta malik wrote:\n> > Please find v8 attached. Changes are:\n>\n> Thanks!\n>\n> A few comments:\n\nThanks for reviewing.\n\n> 1 ===\n>\n> @@ -1440,7 +1461,7 @@ ReplSlotSyncWorkerMain(char *startup_data, size_t startup_data_len)\n> * slotsync_worker_onexit() but that will need the connection to be made\n> * global and we want to avoid introducing global for this purpose.\n> */\n> - before_shmem_exit(slotsync_failure_callback, PointerGetDatum(wrconn));\n> + before_shmem_exit(slotsync_worker_disconnect, PointerGetDatum(wrconn));\n>\n> The comment above this change still states \"Register the failure callback once\n> we have the connection\", I think it has to be reworded a bit now that v8 is\n> making use of slotsync_worker_disconnect().\n>\n> 2 ===\n>\n> + * Register slotsync_worker_onexit() before we register\n> + * ReplicationSlotShmemExit() in BaseInit(), to ensure that during exit of\n> + * slot sync worker, ReplicationSlotShmemExit() is called first, followed\n> + * by slotsync_worker_onexit(). Startup process during promotion waits for\n>\n> Worth to mention in shmem_exit() (where it \"while (--before_shmem_exit_index >= 0)\"\n> or before the shmem_exit() definition) that ReplSlotSyncWorkerMain() relies on\n> this LIFO behavior? 
(not sure if there is other \"strong\" LIFO requirement in\n> other part of the code).\n\nI see other modules as well relying on LIFO behavior.\nPlease see applyparallelworker.c where\n'before_shmem_exit(pa_shutdown)' is needed to be done after\n'before_shmem_exit(logicalrep_worker_onexit)' (commit id 3d144c6).\nAlso in postinit.c, I see such comments atop\n'before_shmem_exit(ShutdownPostgres, 0)'.\nI feel we can skip adding this specific comment about\nReplSlotSyncWorkerMain() in ipc.c, as none of the other modules has\nalso not added any. I will address the rest of your comments in the\nnext version.\n\n> 3 ===\n>\n> + * Startup process during promotion waits for slot sync to finish and it\n> + * does that by checking the 'syncing' flag.\n>\n> worth to mention ShutDownSlotSync()?\n>\n> 4 ===\n>\n> I did a few tests manually (launching ShutDownSlotSync() through gdb / with and\n> without sync worker and with / without pg_sync_replication_slots() running\n> concurrently) and it looks like it works as designed.\n\nThanks for testing it.\n\n> Having said that, the logic that is in place to take care of the corner cases\n> described up-thread seems reasonable to me.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 19 Apr 2024 11:37:07 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Fri, Apr 19, 2024 at 11:37 AM shveta malik <[email protected]> wrote:\n>\n> On Fri, Apr 19, 2024 at 10:53 AM Bertrand Drouvot\n> <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > On Thu, Apr 18, 2024 at 05:36:05PM +0530, shveta malik wrote:\n> > > Please find v8 attached. Changes are:\n> >\n> > Thanks!\n> >\n> > A few comments:\n>\n> Thanks for reviewing.\n>\n> > 1 ===\n> >\n> > @@ -1440,7 +1461,7 @@ ReplSlotSyncWorkerMain(char *startup_data, size_t startup_data_len)\n> > * slotsync_worker_onexit() but that will need the connection to be made\n> > * global and we want to avoid introducing global for this purpose.\n> > */\n> > - before_shmem_exit(slotsync_failure_callback, PointerGetDatum(wrconn));\n> > + before_shmem_exit(slotsync_worker_disconnect, PointerGetDatum(wrconn));\n> >\n> > The comment above this change still states \"Register the failure callback once\n> > we have the connection\", I think it has to be reworded a bit now that v8 is\n> > making use of slotsync_worker_disconnect().\n> >\n> > 2 ===\n> >\n> > + * Register slotsync_worker_onexit() before we register\n> > + * ReplicationSlotShmemExit() in BaseInit(), to ensure that during exit of\n> > + * slot sync worker, ReplicationSlotShmemExit() is called first, followed\n> > + * by slotsync_worker_onexit(). Startup process during promotion waits for\n> >\n> > Worth to mention in shmem_exit() (where it \"while (--before_shmem_exit_index >= 0)\"\n> > or before the shmem_exit() definition) that ReplSlotSyncWorkerMain() relies on\n> > this LIFO behavior? (not sure if there is other \"strong\" LIFO requirement in\n> > other part of the code).\n>\n> I see other modules as well relying on LIFO behavior.\n> Please see applyparallelworker.c where\n> 'before_shmem_exit(pa_shutdown)' is needed to be done after\n> 'before_shmem_exit(logicalrep_worker_onexit)' (commit id 3d144c6).\n> Also in postinit.c, I see such comments atop\n> 'before_shmem_exit(ShutdownPostgres, 0)'.\n> I feel we can skip adding this specific comment about\n> ReplSlotSyncWorkerMain() in ipc.c, as none of the other modules has\n> also not added any. 
I will address the rest of your comments in the\n> next version.\n>\n> > 3 ===\n> >\n> > + * Startup process during promotion waits for slot sync to finish and it\n> > + * does that by checking the 'syncing' flag.\n> >\n> > worth to mention ShutDownSlotSync()?\n> >\n> > 4 ===\n> >\n> > I did a few tests manually (launching ShutDownSlotSync() through gdb / with and\n> > without sync worker and with / without pg_sync_replication_slots() running\n> > concurrently) and it looks like it works as designed.\n>\n> Thanks for testing it.\n>\n> > Having said that, the logic that is in place to take care of the corner cases\n> > described up-thread seems reasonable to me.\n\nPlease find v9 with the above comments addressed.\n\nthanks\nShveta", "msg_date": "Fri, 19 Apr 2024 13:52:18 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Friday, April 19, 2024 4:22 PM shveta malik <[email protected]> wrote:\r\n> On Fri, Apr 19, 2024 at 11:37 AM shveta malik <[email protected]> wrote:\r\n> >\r\n> > On Fri, Apr 19, 2024 at 10:53 AM Bertrand Drouvot\r\n> > <[email protected]> wrote:\r\n> > >\r\n> > > Hi,\r\n> > >\r\n> > > On Thu, Apr 18, 2024 at 05:36:05PM +0530, shveta malik wrote:\r\n> > > > Please find v8 attached. Changes are:\r\n> > >\r\n> > > Thanks!\r\n> > >\r\n> > > A few comments:\r\n> >\r\n> > Thanks for reviewing.\r\n> >\r\n> > > 1 ===\r\n> > >\r\n> > > @@ -1440,7 +1461,7 @@ ReplSlotSyncWorkerMain(char *startup_data,\r\n> size_t startup_data_len)\r\n> > > * slotsync_worker_onexit() but that will need the connection to be\r\n> made\r\n> > > * global and we want to avoid introducing global for this purpose.\r\n> > > */\r\n> > > - before_shmem_exit(slotsync_failure_callback,\r\n> PointerGetDatum(wrconn));\r\n> > > + before_shmem_exit(slotsync_worker_disconnect,\r\n> > > + PointerGetDatum(wrconn));\r\n> > >\r\n> > > The comment above this change still states \"Register the failure\r\n> > > callback once we have the connection\", I think it has to be reworded\r\n> > > a bit now that v8 is making use of slotsync_worker_disconnect().\r\n> > >\r\n> > > 2 ===\r\n> > >\r\n> > > + * Register slotsync_worker_onexit() before we register\r\n> > > + * ReplicationSlotShmemExit() in BaseInit(), to ensure that during\r\n> exit of\r\n> > > + * slot sync worker, ReplicationSlotShmemExit() is called first,\r\n> followed\r\n> > > + * by slotsync_worker_onexit(). Startup process during\r\n> > > + promotion waits for\r\n> > >\r\n> > > Worth to mention in shmem_exit() (where it \"while\r\n> (--before_shmem_exit_index >= 0)\"\r\n> > > or before the shmem_exit() definition) that ReplSlotSyncWorkerMain()\r\n> > > relies on this LIFO behavior? (not sure if there is other \"strong\"\r\n> > > LIFO requirement in other part of the code).\r\n> >\r\n> > I see other modules as well relying on LIFO behavior.\r\n> > Please see applyparallelworker.c where\r\n> > 'before_shmem_exit(pa_shutdown)' is needed to be done after\r\n> > 'before_shmem_exit(logicalrep_worker_onexit)' (commit id 3d144c6).\r\n> > Also in postinit.c, I see such comments atop\r\n> > 'before_shmem_exit(ShutdownPostgres, 0)'.\r\n> > I feel we can skip adding this specific comment about\r\n> > ReplSlotSyncWorkerMain() in ipc.c, as none of the other modules has\r\n> > also not added any. 
I will address the rest of your comments in the\r\n> > next version.\r\n> >\r\n> > > 3 ===\r\n> > >\r\n> > > + * Startup process during promotion waits for slot sync to finish\r\n> and it\r\n> > > + * does that by checking the 'syncing' flag.\r\n> > >\r\n> > > worth to mention ShutDownSlotSync()?\r\n> > >\r\n> > > 4 ===\r\n> > >\r\n> > > I did a few tests manually (launching ShutDownSlotSync() through gdb\r\n> > > / with and without sync worker and with / without\r\n> > > pg_sync_replication_slots() running\r\n> > > concurrently) and it looks like it works as designed.\r\n> >\r\n> > Thanks for testing it.\r\n> >\r\n> > > Having said that, the logic that is in place to take care of the\r\n> > > corner cases described up-thread seems reasonable to me.\r\n> \r\n> Please find v9 with the above comments addressed.\r\n\r\nThanks, the patch looks good to me. I also tested a few concurrent\r\npromotion/function execution cases and didn't find issues.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Mon, 22 Apr 2024 00:31:19 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Fri, Apr 19, 2024 at 1:52 PM shveta malik <[email protected]> wrote:\n>\n> Please find v9 with the above comments addressed.\n>\n\nI have made minor modifications in the comments and a function name.\nPlease see the attached top-up patch. Apart from this, the patch looks\ngood to me.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 22 Apr 2024 17:09:52 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Mon, Apr 22, 2024 at 5:10 PM Amit Kapila <[email protected]> wrote:\n>\n> On Fri, Apr 19, 2024 at 1:52 PM shveta malik <[email protected]> wrote:\n> >\n> > Please find v9 with the above comments addressed.\n> >\n>\n> I have made minor modifications in the comments and a function name.\n> Please see the attached top-up patch. Apart from this, the patch looks\n> good to me.\n\nThanks for the patch, the changes look good Amit. Please find the merged patch.\n\nthanks\nShveta", "msg_date": "Mon, 22 Apr 2024 17:32:08 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Mon, Apr 22, 2024 at 9:02 PM shveta malik <[email protected]> wrote:\n>\n> On Mon, Apr 22, 2024 at 5:10 PM Amit Kapila <[email protected]> wrote:\n> >\n> > On Fri, Apr 19, 2024 at 1:52 PM shveta malik <[email protected]> wrote:\n> > >\n> > > Please find v9 with the above comments addressed.\n> > >\n> >\n> > I have made minor modifications in the comments and a function name.\n> > Please see the attached top-up patch. Apart from this, the patch looks\n> > good to me.\n>\n> Thanks for the patch, the changes look good Amit. Please find the merged patch.\n>\n\nI've reviewed the patch and have some comments:\n\n---\n /*\n- * Early initialization.\n+ * Register slotsync_worker_onexit() before we register\n+ * ReplicationSlotShmemExit() in BaseInit(), to ensure that during the\n+ * exit of the slot sync worker, ReplicationSlotShmemExit() is called\n+ * first, followed by slotsync_worker_onexit(). The startup process during\n+ * promotion invokes ShutDownSlotSync() which waits for slot sync to\n+ * finish and it does that by checking the 'syncing' flag. 
Thus worker\n+ * must be done with the slots' release and cleanup before it marks itself\n+ * as finished syncing.\n */\n\nI'm slightly worried that we register the slotsync_worker_onexit()\ncallback before BaseInit(), because it could be a blocker when we want\nto add more work in the callback, for example sending the stats.\n\n---\n synchronize_slots(wrconn);\n+\n+ /* Cleanup the temporary slots */\n+ ReplicationSlotCleanup();\n+\n+ /* We are done with sync, so reset sync flag */\n+ reset_syncing_flag();\n\nI think it ends up removing other temp slots that are created by the\nsame backend process using\npg_create_{physical,logical_replication_slots() function, which could\nbe a large side effect of this function for users. Also, if users want\nto have a process periodically calling pg_sync_replication_slots()\ninstead of the slotsync worker, it doesn't support a case where we\ncreate a temp not-ready slot and turn it into a persistent slot if\nit's ready for sync.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Apr 2024 22:33:37 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Mon, Apr 22, 2024 at 7:04 PM Masahiko Sawada <[email protected]> wrote:\n>\n> On Mon, Apr 22, 2024 at 9:02 PM shveta malik <[email protected]> wrote:\n> >\n> > Thanks for the patch, the changes look good Amit. Please find the merged patch.\n> >\n>\n> I've reviewed the patch and have some comments:\n>\n> ---\n> /*\n> - * Early initialization.\n> + * Register slotsync_worker_onexit() before we register\n> + * ReplicationSlotShmemExit() in BaseInit(), to ensure that during the\n> + * exit of the slot sync worker, ReplicationSlotShmemExit() is called\n> + * first, followed by slotsync_worker_onexit(). The startup process during\n> + * promotion invokes ShutDownSlotSync() which waits for slot sync to\n> + * finish and it does that by checking the 'syncing' flag. Thus worker\n> + * must be done with the slots' release and cleanup before it marks itself\n> + * as finished syncing.\n> */\n>\n> I'm slightly worried that we register the slotsync_worker_onexit()\n> callback before BaseInit(), because it could be a blocker when we want\n> to add more work in the callback, for example sending the stats.\n>\n\nThe other possibility is that we do slot release/clean up in the\nslotsync_worker_onexit() call itself and then we can do it after\nBaseInit(). Do you have any other/better idea for this?\n\n> ---\n> synchronize_slots(wrconn);\n> +\n> + /* Cleanup the temporary slots */\n> + ReplicationSlotCleanup();\n> +\n> + /* We are done with sync, so reset sync flag */\n> + reset_syncing_flag();\n>\n> I think it ends up removing other temp slots that are created by the\n> same backend process using\n> pg_create_{physical,logical_replication_slots() function, which could\n> be a large side effect of this function for users.\n>\n\nTrue, I think here we should either remove only temporary and synced\nmarked slots. 
The other possibility is to create slots as RS_EPHEMERAL\ninitially when called from the SQL function but that doesn't sound\nlike a neat approach.\n\n>\n Also, if users want\n> to have a process periodically calling pg_sync_replication_slots()\n> instead of the slotsync worker, it doesn't support a case where we\n> create a temp not-ready slot and turn it into a persistent slot if\n> it's ready for sync.\n>\n\nTrue, but eventually the API should be able to directly create the\npersistent slots and anyway this can happen only for the first time\n(till the slots are created and marked persistent) and one who wants\nto use this function periodically should be able to see regular syncs.\nOTOH, leaving temp slots created via this API could remain as-is after\npromotion and we need to document for users to remove such slots. Now,\nwe can do that if we want but I think it is better to clean up such\nslots rather than putting the onus on users to remove them after\npromotion.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 23 Apr 2024 09:07:23 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Tue, Apr 23, 2024 at 9:07 AM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Apr 22, 2024 at 7:04 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Mon, Apr 22, 2024 at 9:02 PM shveta malik <[email protected]> wrote:\n> > >\n> > > Thanks for the patch, the changes look good Amit. Please find the merged patch.\n> > >\n> >\n> > I've reviewed the patch and have some comments:\n\nThanks for the comments.\n\n> > ---\n> > /*\n> > - * Early initialization.\n> > + * Register slotsync_worker_onexit() before we register\n> > + * ReplicationSlotShmemExit() in BaseInit(), to ensure that during the\n> > + * exit of the slot sync worker, ReplicationSlotShmemExit() is called\n> > + * first, followed by slotsync_worker_onexit(). The startup process during\n> > + * promotion invokes ShutDownSlotSync() which waits for slot sync to\n> > + * finish and it does that by checking the 'syncing' flag. Thus worker\n> > + * must be done with the slots' release and cleanup before it marks itself\n> > + * as finished syncing.\n> > */\n> >\n> > I'm slightly worried that we register the slotsync_worker_onexit()\n> > callback before BaseInit(), because it could be a blocker when we want\n> > to add more work in the callback, for example sending the stats.\n> >\n>\n> The other possibility is that we do slot release/clean up in the\n> slotsync_worker_onexit() call itself and then we can do it after\n> BaseInit(). Do you have any other/better idea for this?\n\nI have currently implemented it this way in v11.\n\n> > ---\n> > synchronize_slots(wrconn);\n> > +\n> > + /* Cleanup the temporary slots */\n> > + ReplicationSlotCleanup();\n> > +\n> > + /* We are done with sync, so reset sync flag */\n> > + reset_syncing_flag();\n> >\n> > I think it ends up removing other temp slots that are created by the\n> > same backend process using\n> > pg_create_{physical,logical_replication_slots() function, which could\n> > be a large side effect of this function for users.\n\nYes, this is a problem. Thanks for catching it.\n\n>\n> True, I think here we should either remove only temporary and synced\n> marked slots. 
The other possibility is to create slots as RS_EPHEMERAL\n> initially when called from the SQL function but that doesn't sound\n> like a neat approach.\n\nModified the logic to remove only synced temporary slots during\nSQL-function exit.\n\nPlease find v11 with above changes.\n\nthanks\nShveta", "msg_date": "Wed, 24 Apr 2024 10:28:34 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Tue, Apr 23, 2024 at 12:37 PM Amit Kapila <[email protected]> wrote:\n>\n> On Mon, Apr 22, 2024 at 7:04 PM Masahiko Sawada <[email protected]> wrote:\n> >\n> > On Mon, Apr 22, 2024 at 9:02 PM shveta malik <[email protected]> wrote:\n> > >\n> > > Thanks for the patch, the changes look good Amit. Please find the merged patch.\n> > >\n> >\n> > I've reviewed the patch and have some comments:\n> >\n> > ---\n> > /*\n> > - * Early initialization.\n> > + * Register slotsync_worker_onexit() before we register\n> > + * ReplicationSlotShmemExit() in BaseInit(), to ensure that during the\n> > + * exit of the slot sync worker, ReplicationSlotShmemExit() is called\n> > + * first, followed by slotsync_worker_onexit(). The startup process during\n> > + * promotion invokes ShutDownSlotSync() which waits for slot sync to\n> > + * finish and it does that by checking the 'syncing' flag. Thus worker\n> > + * must be done with the slots' release and cleanup before it marks itself\n> > + * as finished syncing.\n> > */\n> >\n> > I'm slightly worried that we register the slotsync_worker_onexit()\n> > callback before BaseInit(), because it could be a blocker when we want\n> > to add more work in the callback, for example sending the stats.\n> >\n>\n> The other possibility is that we do slot release/clean up in the\n> slotsync_worker_onexit() call itself and then we can do it after\n> BaseInit().\n\nThis approach sounds clearer and safer to me. The current approach\nrelies on the callback registration order of\nReplicationSlotShmemExit(). If it changes in the future, we will\nsilently have the same problem. Every slot sync related work should be\ndone before allowing someone to touch synced slots by clearing the\n'syncing' flag.\n\n>\n> > ---\n> > synchronize_slots(wrconn);\n> > +\n> > + /* Cleanup the temporary slots */\n> > + ReplicationSlotCleanup();\n> > +\n> > + /* We are done with sync, so reset sync flag */\n> > + reset_syncing_flag();\n> >\n> > I think it ends up removing other temp slots that are created by the\n> > same backend process using\n> > pg_create_{physical,logical_replication_slots() function, which could\n> > be a large side effect of this function for users.\n> >\n>\n> True, I think here we should either remove only temporary and synced\n> marked slots. 
The other possibility is to create slots as RS_EPHEMERAL\n> initially when called from the SQL function but that doesn't sound\n> like a neat approach.\n>\n> >\n> Also, if users want\n> > to have a process periodically calling pg_sync_replication_slots()\n> > instead of the slotsync worker, it doesn't support a case where we\n> > create a temp not-ready slot and turn it into a persistent slot if\n> > it's ready for sync.\n> >\n>\n> True, but eventually the API should be able to directly create the\n> persistent slots and anyway this can happen only for the first time\n> (till the slots are created and marked persistent) and one who wants\n> to use this function periodically should be able to see regular syncs.\n\nI agree that we remove temp-and-synced slots created via the API at\nthe end of the API . We end up creating and dropping slots in every\nAPI call but since the pg_sync_replication_slots() function is a kind\nof debug-purpose function and it will not be common to call this\nfunction regularly instead of using the slot sync worker, we can live\nwith such overhead.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 24 Apr 2024 14:43:59 +0900", "msg_from": "Masahiko Sawada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" }, { "msg_contents": "On Wed, Apr 24, 2024 at 10:28 AM shveta malik <[email protected]> wrote:\n>\n> Modified the logic to remove only synced temporary slots during\n> SQL-function exit.\n>\n> Please find v11 with above changes.\n>\n\nLGTM, so pushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 25 Apr 2024 16:21:52 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: promotion related handling in pg_sync_replication_slots()" } ]
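The exit-ordering reasoning in the slot-sync thread above hinges on before_shmem_exit() invoking its callbacks in LIFO order: a callback registered later runs earlier at process exit, so the worker's own onexit callback can be made to run only after the slot-release callback has finished. The toy program below is only a standalone illustration of that LIFO property, using made-up "toy_" names; it is not PostgreSQL code and does not claim to match the committed design.

#include <stdio.h>

typedef void (*toy_exit_cb) (void);

static toy_exit_cb toy_callbacks[8];
static int	toy_n_callbacks = 0;

static void
toy_before_shmem_exit(toy_exit_cb cb)
{
	toy_callbacks[toy_n_callbacks++] = cb;
}

static void
toy_run_exit_callbacks(void)
{
	/* Later registrations run first, mirroring a LIFO exit-callback loop. */
	while (toy_n_callbacks > 0)
		toy_callbacks[--toy_n_callbacks] ();
}

static void
toy_slotsync_worker_onexit(void)
{
	printf("runs last: worker marks itself as done syncing\n");
}

static void
toy_replication_slot_shmem_exit(void)
{
	printf("runs first: slots are released and cleaned up\n");
}

int
main(void)
{
	toy_before_shmem_exit(toy_slotsync_worker_onexit);	/* registered first */
	toy_before_shmem_exit(toy_replication_slot_shmem_exit); /* registered later */
	toy_run_exit_callbacks();
	return 0;
}

Registering the worker's callback before the slot-release one therefore guarantees the "release and clean up, then clear the syncing flag" order the thread relies on.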
[ { "msg_contents": "You cannot run queries on a Hot Standby server until the standby has \nseen a running-xacts record. Furthermore if the subxids cache had \noverflowed, you also need to wait for those transactions to finish. That \nis usually not a problem, because we write a running-xacts record after \neach checkpoint, and most systems don't use so many subtransactions that \nthe cache would overflow. Still, you can run into it if you're unlucky, \nand it's annoying when you do.\n\nIt occurred to me that we could replace the known-assigned-xids \nmachinery with CSN snapshots. We've talked about CSN snapshots many \ntimes in the past, and I think it would make sense on the primary too, \nbut for starters, we could use it just during Hot Standby.\n\nWith CSN-based snapshots, you don't have the limitation with the \nfixed-size known-assigned-xids array, and overflowed sub-XIDs are not a \nproblem either. You can always enter Hot Standby and start accepting \nqueries as soon as the standby is in a physically consistent state.\n\nI dusted up and rebased the last CSN patch that I found on the mailing \nlist [1], and modified it so that it's only used during recovery. That \nmakes some things simpler and less scary. There are no changes to how \ntransaction commit happens in the primary, the CSN log is only kept \nup-to-date in the standby, when commit/abort records are replayed. The \nCSN of each transaction is the LSN of its commit record.\n\nThe CSN approach is much simpler than the existing known-assigned-XIDs \nmachinery, as you can see from \"git diff --stat\" with this patch:\n\n 32 files changed, 773 insertions(+), 1711 deletions(-)\n\nWith CSN snapshots, we don't need the known-assigned-XIDs machinery, and \nwe can get rid of the xact-assignment records altogether. We no longer \nneed the running-xacts records for Hot Standby either, but I wasn't able \nto remove that because it's still used by logical replication, in \nsnapbuild.c. I have a feeling that that could somehow be simplified too, \nbut didn't look into it.\n\nThis is obviously v18 material, so I'll park this at the July commitfest \nfor now. There are a bunch of little FIXMEs in the code, and needs \nperformance testing, but overall I was surprised how easy this was.\n\n(We ran into this issue particularly hard with Neon, because with Neon \nyou don't need to perform WAL replay at standby startup. However, when \nyou don't perform WAL replay, you don't get to see the running-xact \nrecord after the checkpoint either. If the primary is idle, it doesn't \ngenerate new running-xact records, and the standby cannot start Hot \nStandby until the next time something happens in the primary. It's \nalways a potential problem with overflowed sub-XIDs cache, but the lack \nof WAL replay made it happen even when there are no subtransactions \ninvolved.)\n\n[1] https://www.postgresql.org/message-id/2020081009525213277261%40highgo.ca\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Thu, 4 Apr 2024 20:21:18 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "CSN snapshots in hot standby" }, { "msg_contents": "Hi,\n\nOn Thu, 4 Apr 2024 at 22:21, Heikki Linnakangas <[email protected]> wrote:\n\n> You cannot run queries on a Hot Standby server until the standby has\n> seen a running-xacts record. Furthermore if the subxids cache had\n> overflowed, you also need to wait for those transactions to finish. 
That\n> is usually not a problem, because we write a running-xacts record after\n> each checkpoint, and most systems don't use so many subtransactions that\n> the cache would overflow. Still, you can run into it if you're unlucky,\n> and it's annoying when you do.\n>\n> It occurred to me that we could replace the known-assigned-xids\n> machinery with CSN snapshots. We've talked about CSN snapshots many\n> times in the past, and I think it would make sense on the primary too,\n> but for starters, we could use it just during Hot Standby.\n>\n> With CSN-based snapshots, you don't have the limitation with the\n> fixed-size known-assigned-xids array, and overflowed sub-XIDs are not a\n> problem either. You can always enter Hot Standby and start accepting\n> queries as soon as the standby is in a physically consistent state.\n>\n> I dusted up and rebased the last CSN patch that I found on the mailing\n> list [1], and modified it so that it's only used during recovery. That\n> makes some things simpler and less scary. There are no changes to how\n> transaction commit happens in the primary, the CSN log is only kept\n> up-to-date in the standby, when commit/abort records are replayed. The\n> CSN of each transaction is the LSN of its commit record.\n>\n> The CSN approach is much simpler than the existing known-assigned-XIDs\n> machinery, as you can see from \"git diff --stat\" with this patch:\n>\n> 32 files changed, 773 insertions(+), 1711 deletions(-)\n>\n> With CSN snapshots, we don't need the known-assigned-XIDs machinery, and\n> we can get rid of the xact-assignment records altogether. We no longer\n> need the running-xacts records for Hot Standby either, but I wasn't able\n> to remove that because it's still used by logical replication, in\n> snapbuild.c. I have a feeling that that could somehow be simplified too,\n> but didn't look into it.\n>\n> This is obviously v18 material, so I'll park this at the July commitfest\n> for now. There are a bunch of little FIXMEs in the code, and needs\n> performance testing, but overall I was surprised how easy this was.\n>\n> (We ran into this issue particularly hard with Neon, because with Neon\n> you don't need to perform WAL replay at standby startup. However, when\n> you don't perform WAL replay, you don't get to see the running-xact\n> record after the checkpoint either. If the primary is idle, it doesn't\n> generate new running-xact records, and the standby cannot start Hot\n> Standby until the next time something happens in the primary. It's\n> always a potential problem with overflowed sub-XIDs cache, but the lack\n> of WAL replay made it happen even when there are no subtransactions\n> involved.)\n>\n> [1]\n> https://www.postgresql.org/message-id/2020081009525213277261%40highgo.ca\n>\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n\n\nGreat. I really like the idea of vanishing KnownAssignedXids instead of\noptimizing it (if optimizations are even possible).\n\n> + /*\n> + * TODO: We must mark CSNLOG first\n> + */\n> + CSNLogSetCSN(xid, parsed->nsubxacts, parsed->subxacts, lsn);\n> +\n\nAs far as I understand we simply use the current Wal Record LSN as its XID\nCSN number. Ok.\nThis seems to work for standbys snapshots, but this patch may be really\nuseful for distributed postgresql solutions, that use CSN for working\nwith distributed database snapshot (across multiple shards). These\nsolutions need to set CSN to some other value (time from True time/ClockSI\nor whatever).\nSo, maybe we need some hooks here? 
Or maybe, we can take CSN here from\nextension somehow. For example, we can define\nsome interface and extend it. Does this sound reasonable for you?\n\nAlso, I attached a patch which adds some more todos.", "msg_date": "Fri, 5 Apr 2024 02:08:37 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CSN snapshots in hot standby" }, { "msg_contents": "\n\n> On 5 Apr 2024, at 02:08, Kirill Reshke <[email protected]> wrote:\n> \n> maybe we need some hooks here? Or maybe, we can take CSN here from extension somehow.\n\nI really like the idea of CSN-provider-as-extension.\nBut it's very important to move on with CSN, at least on standby, to make CSN actually happen some day.\nSo, from my perspective, having LSN-as-CSN is already huge step forward.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 5 Apr 2024 15:49:24 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CSN snapshots in hot standby" }, { "msg_contents": "On 05/04/2024 13:49, Andrey M. Borodin wrote:\n>> On 5 Apr 2024, at 02:08, Kirill Reshke <[email protected]> wrote:\n\nThanks for taking a look, Kirill!\n\n>> maybe we need some hooks here? Or maybe, we can take CSN here from extension somehow.\n> \n> I really like the idea of CSN-provider-as-extension.\n> But it's very important to move on with CSN, at least on standby, to make CSN actually happen some day.\n> So, from my perspective, having LSN-as-CSN is already huge step forward.\n\nYeah, I really don't want to expand the scope of this.\n\nHere's a new version. Rebased, and lots of comments updated.\n\nI added a tiny cache of the CSN lookups into SnapshotData, which can \nhold the values of 4 XIDs that are known to be visible to the snapshot, \nand 4 invisible XIDs. This is pretty arbitrary, but the idea is to have \nsomething very small to speed up the common cases that 1-2 XIDs are \nrepeatedly looked up, without adding too much overhead.\n\n\nI did some performance testing of the visibility checks using these CSN \nsnapshots. The tests run SELECTs with a SeqScan in a standby, over a \ntable where all the rows have xmin/xmax values that are still \nin-progress in the primary.\n\nThree test scenarios:\n\n1. large-xact: one large transaction inserted all the rows. All rows \nhave the same XMIN, which is still in progress\n\n2. many-subxacts: one large transaction inserted each row in a separate \nsubtransaction. All rows have a different XMIN, but they're all \nsubtransactions of the same top-level transaction. (This causes the \nsubxids cache in the proc array to overflow)\n\n3. few-subxacts: All rows are inserted, committed, and vacuum frozen. \nThen, using 10 in separate subtransactions, DELETE the rows, in an \ninterleaved fashion. The XMAX values cycle like this \"1, 2, 3, 4, 5, 6, \n7, 8, 9, 10, 1, 2, 3, 4, 5, ...\". The point of this is that these \nsub-XIDs fit in the subxids cache in the procarray, but the pattern \ndefeats the simple 4-element cache that I added.\n\nThe test script I used is attached. I repeated it a few times with \nmaster and the patches here, and picked the fastest runs for each. Just \neyeballing the results, there's about ~10% variance in these numbers. 
\nSmaller is better.\n\nMaster:\n\nlarge-xact: 4.57732510566711\nmany-subxacts: 18.6958119869232\nfew-subxacts: 16.467698097229\n\nPatched:\n\nlarge-xact: 10.2999930381775\nmany-subxacts: 11.6501438617706\nfew-subxacts: 19.8457028865814\n\nWith cache:\n\nlarge-xact: 3.68792295455933\nmany-subxacts: 13.3662350177765\nfew-subxacts: 21.4426419734955\n\nThe 'large-xacts' results show that the CSN lookups are slower than the \nbinary search on the 'xids' array. Not a surprise. The 4-element cache \nfixes the regression, which is also not a surprise.\n\nThe 'many-subxacts' results show that the CSN lookups are faster than \nthe current method in master, when the subxids cache has overflowed. \nThat makes sense: on master, we always perform a lookup in pg_subtrans, \nif the suxids cache has overflowed, which is more or less the same \noverhead as the CSN lookup. But we avoid the binary search on the xids \narray after that.\n\nThe 'few-subxacts' shows a regression, when the 4-element cache is not \neffective. I think that's acceptable, the CSN approach has many \nbenefits, and I don't think this is a very common scenario. But if \nnecessary, it could perhaps be alleviated with more caching, or by \ntrying to compensate by optimizing elsewhere.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Tue, 13 Aug 2024 23:13:39 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CSN snapshots in hot standby" }, { "msg_contents": "On Wed, 14 Aug 2024 at 01:13, Heikki Linnakangas <[email protected]> wrote:\n>\n> On 05/04/2024 13:49, Andrey M. Borodin wrote:\n> >> On 5 Apr 2024, at 02:08, Kirill Reshke <[email protected]> wrote:\n>\n> Thanks for taking a look, Kirill!\n>\n> >> maybe we need some hooks here? Or maybe, we can take CSN here from extension somehow.\n> >\n> > I really like the idea of CSN-provider-as-extension.\n> > But it's very important to move on with CSN, at least on standby, to make CSN actually happen some day.\n> > So, from my perspective, having LSN-as-CSN is already huge step forward.\n>\n> Yeah, I really don't want to expand the scope of this.\n>\n> Here's a new version. Rebased, and lots of comments updated.\n>\n> I added a tiny cache of the CSN lookups into SnapshotData, which can\n> hold the values of 4 XIDs that are known to be visible to the snapshot,\n> and 4 invisible XIDs. This is pretty arbitrary, but the idea is to have\n> something very small to speed up the common cases that 1-2 XIDs are\n> repeatedly looked up, without adding too much overhead.\n>\n>\n> I did some performance testing of the visibility checks using these CSN\n> snapshots. The tests run SELECTs with a SeqScan in a standby, over a\n> table where all the rows have xmin/xmax values that are still\n> in-progress in the primary.\n>\n> Three test scenarios:\n>\n> 1. large-xact: one large transaction inserted all the rows. All rows\n> have the same XMIN, which is still in progress\n>\n> 2. many-subxacts: one large transaction inserted each row in a separate\n> subtransaction. All rows have a different XMIN, but they're all\n> subtransactions of the same top-level transaction. (This causes the\n> subxids cache in the proc array to overflow)\n>\n> 3. few-subxacts: All rows are inserted, committed, and vacuum frozen.\n> Then, using 10 in separate subtransactions, DELETE the rows, in an\n> interleaved fashion. The XMAX values cycle like this \"1, 2, 3, 4, 5, 6,\n> 7, 8, 9, 10, 1, 2, 3, 4, 5, ...\". 
The point of this is that these\n> sub-XIDs fit in the subxids cache in the procarray, but the pattern\n> defeats the simple 4-element cache that I added.\n>\n> The test script I used is attached. I repeated it a few times with\n> master and the patches here, and picked the fastest runs for each. Just\n> eyeballing the results, there's about ~10% variance in these numbers.\n> Smaller is better.\n>\n> Master:\n>\n> large-xact: 4.57732510566711\n> many-subxacts: 18.6958119869232\n> few-subxacts: 16.467698097229\n>\n> Patched:\n>\n> large-xact: 10.2999930381775\n> many-subxacts: 11.6501438617706\n> few-subxacts: 19.8457028865814\n>\n> With cache:\n>\n> large-xact: 3.68792295455933\n> many-subxacts: 13.3662350177765\n> few-subxacts: 21.4426419734955\n>\n> The 'large-xacts' results show that the CSN lookups are slower than the\n> binary search on the 'xids' array. Not a surprise. The 4-element cache\n> fixes the regression, which is also not a surprise.\n>\n> The 'many-subxacts' results show that the CSN lookups are faster than\n> the current method in master, when the subxids cache has overflowed.\n> That makes sense: on master, we always perform a lookup in pg_subtrans,\n> if the suxids cache has overflowed, which is more or less the same\n> overhead as the CSN lookup. But we avoid the binary search on the xids\n> array after that.\n>\n> The 'few-subxacts' shows a regression, when the 4-element cache is not\n> effective. I think that's acceptable, the CSN approach has many\n> benefits, and I don't think this is a very common scenario. But if\n> necessary, it could perhaps be alleviated with more caching, or by\n> trying to compensate by optimizing elsewhere.\n>\n> --\n> Heikki Linnakangas\n> Neon (https://neon.tech)\n\nThanks for the update. I will try to find time for perf-testing this.\nFirstly, random suggestions. Sorry for being too nit-picky\n\n1) in 0002\n> +/*\n> + * Number of shared CSNLog buffers.\n> + */\n> +static Size\n> +CSNLogShmemBuffers(void)\n> +{\n> + return Min(32, Max(16, NBuffers / 512));\n> +}\n\nShould we GUC this?\n\n2) In 0002 CSNLogShmemInit:\n\n> + //SlruPagePrecedesUnitTests(CsnlogCtl, SUBTRANS_XACTS_PER_PAGE);\n\nremove this?\n\n3) In 0002 InitCSNLogPage:\n\n> + SimpleLruZeroPage(CsnlogCtl, pageno);\nwe can use ZeroCSNLogPage here. This will justify existance of this\nfunction a little bit more.\n\n4) In 0002:\n> +++ b/src/backend/replication/logical/snapbuild.c\n> @@ -27,7 +27,7 @@\n> * removed. This is achieved by using the replication slot mechanism.\n> *\n> * As the percentage of transactions modifying the catalog normally is fairly\n> - * small in comparisons to ones only manipulating user data, we keep track of\n> + * small in comparison to ones only manipulating user data, we keep track of\n> * the committed catalog modifying ones inside [xmin, xmax) instead of keeping\n> * track of all running transactions like it's done in a normal snapshot. Note\n> * that we're generally only looking at transactions that have acquired an\n\nThis change is unrelated to 0002 patch, let's just push it as a separate change.\n\n\nOverall, 0002 looks straightforward, though big. I however wonder how\nwe can test that this change does not lead to any unpleasant problem,\nlike observing uncommitted changes on replicas, corruption, and other\nstuff? 
Maybe some basic injection-point-based TAP test here is\ndesirable?\n\n\n-- \nBest regards,\nKirill Reshke\n\n\n", "msg_date": "Wed, 14 Aug 2024 02:15:01 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CSN snapshots in hot standby" }, { "msg_contents": "Hi,\n\nOn 2024-08-13 23:13:39 +0300, Heikki Linnakangas wrote:\n> I added a tiny cache of the CSN lookups into SnapshotData, which can hold\n> the values of 4 XIDs that are known to be visible to the snapshot, and 4\n> invisible XIDs. This is pretty arbitrary, but the idea is to have something\n> very small to speed up the common cases that 1-2 XIDs are repeatedly looked\n> up, without adding too much overhead.\n> \n> \n> I did some performance testing of the visibility checks using these CSN\n> snapshots. The tests run SELECTs with a SeqScan in a standby, over a table\n> where all the rows have xmin/xmax values that are still in-progress in the\n> primary.\n> \n> Three test scenarios:\n> \n> 1. large-xact: one large transaction inserted all the rows. All rows have\n> the same XMIN, which is still in progress\n> \n> 2. many-subxacts: one large transaction inserted each row in a separate\n> subtransaction. All rows have a different XMIN, but they're all\n> subtransactions of the same top-level transaction. (This causes the subxids\n> cache in the proc array to overflow)\n> \n> 3. few-subxacts: All rows are inserted, committed, and vacuum frozen. Then,\n> using 10 in separate subtransactions, DELETE the rows, in an interleaved\n> fashion. The XMAX values cycle like this \"1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1,\n> 2, 3, 4, 5, ...\". The point of this is that these sub-XIDs fit in the\n> subxids cache in the procarray, but the pattern defeats the simple 4-element\n> cache that I added.\n\nI'd like to see some numbers for a workload with many overlapping top-level\ntransactions. I contrast to 2) HEAD wouldn't need to do subtrans lookups,\nwhereas this patch would need to do csn lookups. And a four entry cache\nprobably wouldn't help very much.\n\n\n> +/*\n> + * Record commit LSN of a transaction and its subtransaction tree.\n> + *\n> + * xid is a single xid to set status for. This will typically be the top level\n> + * transaction ID for a top level commit.\n> + *\n> + * subxids is an array of xids of length nsubxids, representing subtransactions\n> + * in the tree of xid. In various cases nsubxids may be zero.\n> + *\n> + * commitLsn is the LSN of the commit record. This is currently never called\n> + * for aborted transactions.\n> + */\n> +void\n> +CSNLogSetCSN(TransactionId xid, int nsubxids, TransactionId *subxids,\n> +\t\t\t XLogRecPtr commitLsn)\n> +{\n> +\tint\t\t\tpageno;\n> +\tint\t\t\ti = 0;\n> +\tint\t\t\toffset = 0;\n> +\n> +\tAssert(TransactionIdIsValid(xid));\n> +\n> +\tpageno = TransactionIdToPage(xid);\t/* get page of parent */\n> +\tfor (;;)\n> +\t{\n> +\t\tint\t\t\tnum_on_page = 0;\n> +\n> +\t\twhile (i < nsubxids && TransactionIdToPage(subxids[i]) == pageno)\n> +\t\t{\n> +\t\t\tnum_on_page++;\n> +\t\t\ti++;\n> +\t\t}\n\nHm - is there any guarantee / documented requirement that subxids is sorted?\n\n\n> +\t\tCSNLogSetPageStatus(xid,\n> +\t\t\t\t\t\t\tnum_on_page, subxids + offset,\n> +\t\t\t\t\t\t\tcommitLsn, pageno);\n> +\t\tif (i >= nsubxids)\n> +\t\t\tbreak;\n> +\n> +\t\toffset = i;\n> +\t\tpageno = TransactionIdToPage(subxids[offset]);\n> +\t\txid = InvalidTransactionId;\n> +\t}\n> +}\n\nHm. 
Maybe I'm missing something, but what prevents a concurrent transaction to\ncheck the visibility of a subtransaction between marking the subtransaction\ncommitted and marking the main transaction committed? If subtransaction and\nmain transaction are on the same page that won't be possible, but if they are\non different ones it does seem possible?\n\nToday XidInMVCCSnapshot() will use pg_subtrans to find the top transaction in\ncase of a suboverflowed snapshot, but with this patch that's not the case\nanymore. Which afaict will mean that repeated snapshot computations could\ngive different results for the same query?\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 24 Sep 2024 14:08:27 -0400", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CSN snapshots in hot standby" } ]
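The "tiny cache of the CSN lookups" described earlier in the CSN thread can be pictured as a pair of small per-snapshot arrays that are consulted before any CSN log access: up to four XIDs known visible and four known invisible, per the description above. The sketch below is a standalone illustration under invented "Toy" type and field names (the patch itself keeps this state in SnapshotData); on a cache miss the real code falls back to the CSN lookup.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t ToyTransactionId;

/* Up to four XIDs known visible and four known invisible to one snapshot. */
typedef struct ToyXidVisibilityCache
{
	int			n_visible;
	int			n_invisible;
	ToyTransactionId visible[4];
	ToyTransactionId invisible[4];
} ToyXidVisibilityCache;

/*
 * Returns true on a cache hit and sets *visible; on a miss the caller would
 * perform the CSN log lookup and could then remember the result here.
 */
static bool
toy_cache_lookup(const ToyXidVisibilityCache *cache, ToyTransactionId xid,
				 bool *visible)
{
	for (int i = 0; i < cache->n_visible; i++)
		if (cache->visible[i] == xid)
		{
			*visible = true;
			return true;
		}
	for (int i = 0; i < cache->n_invisible; i++)
		if (cache->invisible[i] == xid)
		{
			*visible = false;
			return true;
		}
	return false;
}

int
main(void)
{
	ToyXidVisibilityCache cache = {.n_visible = 1, .n_invisible = 1,
		.visible = {100}, .invisible = {200}};
	bool		visible;

	if (toy_cache_lookup(&cache, 100, &visible))
		printf("xid 100 cached as %s\n", visible ? "visible" : "invisible");
	if (!toy_cache_lookup(&cache, 300, &visible))
		printf("xid 300 not cached, would consult the CSN log\n");
	return 0;
}

This also makes the benchmark results above easier to read: workloads that keep hitting the same one or two XIDs benefit, while the interleaved ten-subxact pattern defeats a four-entry cache by construction.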
[ { "msg_contents": "Hi,\n\nIs there any way to add check constraints to a materialized view, or plans\nto add this feature in the future?\n\nAttempting to use \"ALTER TABLE tablename ADD CHECK(...)\" or \"ALTER\nMATERIALIZED VIEW\" result in error:\n\nERROR: \"tablename\" is not a table or foreign table\n\nCheck constraints are used by the optimizer, eg for partition removal, so\nthis would be a useful feature to me.\n\n--\nDaniel\n\nHi,Is there any way to add check constraints to a materialized view, or plans to add this feature in the future?Attempting to use \"ALTER TABLE tablename ADD CHECK(...)\" or \"ALTER MATERIALIZED VIEW\" result in error:ERROR:  \"tablename\" is not a table or foreign tableCheck constraints are used by the optimizer, eg for partition removal, so this would be a useful feature to me.--Daniel", "msg_date": "Thu, 4 Apr 2024 13:25:11 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Materialized views and check constraints" } ]
[ { "msg_contents": "Hi,\n\nYesterday, Tomas Vondra reported to me off-list that he was seeing\nwhat appeared to be data corruption after taking and restoring an\nincremental backup. Overnight, Jakub Wartak further experimented with\nTomas's test case, did some initial analysis, and made it very easy to\nreproduce. I spent this morning tracking down the problem, for which I\nattach a patch.\n\nIt doesn't take a whole lot to hit this bug. I think all you need is a\nrelation that is more than 1GB in size, plus a modification to the\nsecond half of some 1GB file between the full backup and the\nincremental backup, plus some way of realizing after you do a restore\nthat you've gotten the wrong answer. It's obviously quite embarrassing\nthat this wasn't caught sooner, and to be honest I'm not sure why we\ndidn't. I think the test cases pass because we avoid committing test\ncases that create large relations, but the failure to catch it during\nmanual testing must be because I never worked hard enough to verify\nthat the results were fully correct. Ouch.\n\nThe actual bug is here:\n\nif (chunkno == stop_chunkno - 1)\n stop_offset = stop_blkno % BLOCKS_PER_CHUNK;\n\nEach chunk covers 64k block numbers. The caller specifies a range of\nblock numbers of interest via start_blkno (inclusive) and stop_blkno\n(non-inclusive). We need to translate those start and stop values for\nthe overall function call into start and stop values for each\nparticular chunk. When we're on any chunk but the last one of\ninterest, the stop offset is BLOCKS_PER_CHUNK i.e. we care about\nblocks all the way through the end of the chunk. The existing code\nhandles that fine. If stop_blkno is somewhere in the middle of the\nlast chunk, the existing code also handles that fine. But the code is\nwrong when stop_blkno is a multiple of BLOCKS_PER_CHUNK, because it\nthen calculates a stop_offset of 0 (which is never right, because that\nwould mean that we thought the chunk was relevant when it isn't)\nrather than BLOCKS_PER_CHUNK. So this forgets about 64k * BLCKSZ =\n512MB of potentially modified blocks whenever the size of a single\nrelation file is exactly 512MB or exactly 1GB, which our users would\nprobably find more than slightly distressing.\n\nI'm not posting Jakub's reproducer here because it was sent to me\nprivately and, although I presume he would have no issue with me\nposting it, I can't actually confirm that right at the moment.\nHopefully he'll reply with it, and double-check that this does indeed\nfix the issue. My thanks to both Tomas and Jakub.\n\nRegards,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 4 Apr 2024 13:38:09 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "incremental backup breakage in BlockRefTableEntryGetBlocks" }, { "msg_contents": "On 4/4/24 19:38, Robert Haas wrote:\n> Hi,\n> \n> Yesterday, Tomas Vondra reported to me off-list that he was seeing\n> what appeared to be data corruption after taking and restoring an\n> incremental backup. Overnight, Jakub Wartak further experimented with\n> Tomas's test case, did some initial analysis, and made it very easy to\n> reproduce. I spent this morning tracking down the problem, for which I\n> attach a patch.\n> \n\nThanks, I can confirm this fixes the issue I've observed/reported. 
On\nmaster 10 out of 10 runs failed, with the patch no failures.\n\nThe test is very simple:\n\n1) init pgbench\n2) full backup\n3) run short pgbench\n4) incremental backup\n5) compare pg_dumpall on the instance vs. restored backup\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 4 Apr 2024 21:11:27 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: incremental backup breakage in BlockRefTableEntryGetBlocks" }, { "msg_contents": "On Thu, Apr 4, 2024 at 9:11 PM Tomas Vondra\n<[email protected]> wrote:\n>\n> On 4/4/24 19:38, Robert Haas wrote:\n> > Hi,\n> >\n> > Yesterday, Tomas Vondra reported to me off-list that he was seeing\n> > what appeared to be data corruption after taking and restoring an\n> > incremental backup. Overnight, Jakub Wartak further experimented with\n> > Tomas's test case, did some initial analysis, and made it very easy to\n> > reproduce. I spent this morning tracking down the problem, for which I\n> > attach a patch.\n> >\n>\n> Thanks, I can confirm this fixes the issue I've observed/reported. On\n> master 10 out of 10 runs failed, with the patch no failures.\n\nSame here, patch fixes it on recent master. I've also run pgbench for\n~30mins and compared master and incremental and got 0 differences,\nshould be good.\n\n> The test is very simple:\n>\n> 1) init pgbench\n\nTomas had magic fingers here - he used pgbench -i -s 100 which causes\nbigger relations (it wouldn't trigger for smaller -s values as Robert\nexplained - now it makes full sense; in earlier tests I was using much\nsmaller -s , then transitioned to other workloads (mostly append\nonly), and final 100GB+/24h+ tests used mostly INSERTs rather than\nUPDATEs AFAIR). The other interesting thing is that one of the animals\nruns with configure --with-relsegsize=<somesmallvalue> (so new\nrelations are full much earlier) and it was not catched there either -\nWouldn't it be good idea to to test in src/test/recover/ like that?\n\nAnd of course i'm attaching reproducer with some braindump notes in\ncase in future one hits similiar issue and wonders where to even start\nlooking (it's very primitive though but might help).\n\n-J.", "msg_date": "Fri, 5 Apr 2024 08:59:02 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: incremental backup breakage in BlockRefTableEntryGetBlocks" }, { "msg_contents": "On Fri, Apr 5, 2024 at 2:59 AM Jakub Wartak\n<[email protected]> wrote:\n> And of course i'm attaching reproducer with some braindump notes in\n> case in future one hits similiar issue and wonders where to even start\n> looking (it's very primitive though but might help).\n\nThanks. I've committed the patch now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Apr 2024 13:51:35 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: incremental backup breakage in BlockRefTableEntryGetBlocks" } ]
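The boundary case in the incremental-backup report above is pure modular arithmetic and can be reproduced standalone. The program below assumes BLOCKS_PER_CHUNK = 0x10000 (the "64k block numbers" per chunk from the report) and shows why the naive modulo yields a stop offset of 0 for the last chunk when stop_blkno is an exact multiple, while a guarded version keeps the whole chunk in range. It is an illustration of the described fix, not the committed patch text.

#include <stdio.h>

#define BLOCKS_PER_CHUNK 0x10000u	/* 64k block numbers per chunk */

static unsigned int
last_chunk_stop_offset(unsigned int stop_blkno)
{
	unsigned int stop_offset = stop_blkno % BLOCKS_PER_CHUNK;

	/* An exact multiple means the whole last chunk is in range, not none of it. */
	if (stop_offset == 0)
		stop_offset = BLOCKS_PER_CHUNK;
	return stop_offset;
}

int
main(void)
{
	/* A 1GB segment of 8kB blocks is 0x20000 blocks: an exact multiple of 64k. */
	unsigned int stop_blkno = 0x20000u;

	printf("naive modulo: %u  guarded: %u\n",
		   stop_blkno % BLOCKS_PER_CHUNK, last_chunk_stop_offset(stop_blkno));
	return 0;
}

Running it prints "naive modulo: 0  guarded: 65536", matching the report's observation that relations sized at exactly 512MB or 1GB per segment lose track of up to 64k * BLCKSZ of modified blocks without the guard.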
[ { "msg_contents": "TIL that IPC::Run::timer is not the same as IPC::Run::timeout.\nWith a timer object you have to check $timer->is_expired to see\nif the timeout has elapsed, but with a timeout object you don't\nbecause it will throw a Perl exception upon timing out, probably\nkilling your test program.\n\nIt appears that a good chunk of our TAP codebase has not read this\nmemo, because I see plenty of places that are checking is_expired\nin the naive belief that they'll still have control after a timeout\nhas fired.\n\nThe particular thing that started me down this road was wondering\nwhy we are getting no useful failure details from buildfarm member\ntanager's struggles with the tab-completion test case added by commit\n927332b95 [1]. Apparently it's not seeing a match to what it expects\nso it eventually times out, but all we get in the log is\n\n[03:03:42.595](0.002s) ok 82 - complete an interpolated psql variable name\n[03:03:42.597](0.002s) ok 83 - \\\\r works\nIPC::Run: timeout on timer #1 at /usr/share/perl5/IPC/Run.pm line 2944.\n# Postmaster PID for node \"main\" is 17308\n### Stopping node \"main\" using mode immediate\n\nWe would have learned something useful if control had returned to\npump_until, or even better 010_tab_completion.pl's check_completion,\nbut it doesn't.\n\nA minimum fix that seems to make this work better is as attached,\nbut I feel like somebody ought to examine all the IPC::Run::timer\nand IPC::Run::timeout calls and see which ones are mistaken.\nIt's a little scary to convert a timeout to a timer because of\nthe hazard that someplace that would need to be checking for\nis_expired isn't.\n\nAlso, the debug printout code at the bottom of check_completion\nis quite useless, because control can never reach it since\nBackgroundPsql::query_until will \"die\" on failure. I think that\nthat code worked when written, and I'm suspicious that 664d75753\nbroke it, but I've not dug deeply into the history.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tanager&dt=2024-04-04%2016%3A56%3A14", "msg_date": "Thu, 04 Apr 2024 17:24:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "> On 4 Apr 2024, at 23:24, Tom Lane <[email protected]> wrote:\n\n> A minimum fix that seems to make this work better is as attached,\n> but I feel like somebody ought to examine all the IPC::Run::timer\n> and IPC::Run::timeout calls and see which ones are mistaken.\n> It's a little scary to convert a timeout to a timer because of\n> the hazard that someplace that would need to be checking for\n> is_expired isn't.\n\nSkimming this and a few callsites it seems reasonable to use a timer instead of\na timeout, but careful study is needed to make sure we're not introducing\nanything subtly wrong in the other direction.\n\n> Also, the debug printout code at the bottom of check_completion\n> is quite useless, because control can never reach it since\n> BackgroundPsql::query_until will \"die\" on failure. 
I think that\n> that code worked when written, and I'm suspicious that 664d75753\n> broke it, but I've not dug deeply into the history.\n\nAFAICT, in the previous coding the interactive_psql object would use a timer or\ntimeout based on what the caller provided, and check_completion used a timer so\nthe debug logging probably worked as written.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 4 Apr 2024 23:46:22 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "On Thu, Apr 04, 2024 at 05:24:05PM -0400, Tom Lane wrote:\n> The particular thing that started me down this road was wondering\n> why we are getting no useful failure details from buildfarm member\n> tanager's struggles with the tab-completion test case added by commit\n> 927332b95 [1].\n\nAlso please note that tanager has been offline from around the 19th of\nMarch to the 3rd of April, explaining the delay in reporting the\nfailure in this new psql test. I've switched it back online two days\nago.\n\nTom, would you like me to test your patch directly on the host? That\nshould be pretty quick, even if I've not yet checked if the failure is\nreproducible with a manual build, outside the buildfarm scripts.\n--\nMichael", "msg_date": "Fri, 5 Apr 2024 08:05:41 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Thu, Apr 04, 2024 at 05:24:05PM -0400, Tom Lane wrote:\n>> The particular thing that started me down this road was wondering\n>> why we are getting no useful failure details from buildfarm member\n>> tanager's struggles with the tab-completion test case added by commit\n>> 927332b95 [1].\n\n> Tom, would you like me to test your patch directly on the host? That\n> should be pretty quick, even if I've not yet checked if the failure is\n> reproducible with a manual build, outside the buildfarm scripts.\n\nIf you have time, that'd be great. What I suspect is that that\nmachine's readline isn't regurgitating the string verbatim but is\ndoing something fancy with backspaces or other control characters.\nBut we need to see what it's actually emitting before there's\nmuch hope of adjusting the expected-output regex.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Apr 2024 19:09:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "On Thu, Apr 04, 2024 at 07:09:53PM -0400, Tom Lane wrote:\n> If you have time, that'd be great. 
What I suspect is that that\n> machine's readline isn't regurgitating the string verbatim but is\n> doing something fancy with backspaces or other control characters.\n> But we need to see what it's actually emitting before there's\n> much hope of adjusting the expected-output regex.\n\nI have been able to reproduce the failure manually and your patch is\nproviding more information, indeed, as of:\n[10:21:44.017](0.002s) ok 83 - \\r works\n[10:24:45.462](181.445s) # pump_until: timeout expired when searching\nfor \"(?^::\\{\\?VERBOSITY} )\" with stream: \"\\echo :{?VERB^G^Mpostgres=#\n\\echo :\\{\\?VERBOSITY\\} \"\npsql query timed out at\n/home/pgbuildfarm/git/postgres/src/bin/psql/../../../src/test/perl/PostgreSQL/Test/BackgroundPsql.pm\nline 281.\n\nThis stuff is actually kind of funny on this host, \"\\echo :{?VERB\\t\"\ncompletes to something incorrect, as of:\npostgres=# \\echo :\\{\\?VERBOSITY\\}\n\nAttaching the log file, for reference. Now I can see that this uses\nlibedit at 3.1-20181209, which is far from recent. I'd be OK to just\nremove libedit from the build to remove this noise, still I am\nwondering if 927332b95e77 got what it was trying to achieve actually\nright. Thoughts?\n--\nMichael", "msg_date": "Fri, 5 Apr 2024 11:12:18 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Thu, Apr 04, 2024 at 07:09:53PM -0400, Tom Lane wrote:\n>> If you have time, that'd be great. What I suspect is that that\n>> machine's readline isn't regurgitating the string verbatim but is\n>> doing something fancy with backspaces or other control characters.\n\n> I have been able to reproduce the failure manually and your patch is\n> providing more information, indeed, as of:\n> [10:21:44.017](0.002s) ok 83 - \\r works\n> [10:24:45.462](181.445s) # pump_until: timeout expired when searching\n> for \"(?^::\\{\\?VERBOSITY} )\" with stream: \"\\echo :{?VERB^G^Mpostgres=#\n> \\echo :\\{\\?VERBOSITY\\} \"\n> psql query timed out at\n> /home/pgbuildfarm/git/postgres/src/bin/psql/../../../src/test/perl/PostgreSQL/Test/BackgroundPsql.pm\n> line 281.\n\n> This stuff is actually kind of funny on this host, \"\\echo :{?VERB\\t\"\n> completes to something incorrect, as of:\n> postgres=# \\echo :\\{\\?VERBOSITY\\}\n\nJust to be clear: you see the extra backslashes if you try this\ntab-completion manually?\n\n> Attaching the log file, for reference. Now I can see that this uses\n> libedit at 3.1-20181209, which is far from recent. I'd be OK to just\n> remove libedit from the build to remove this noise, still I am\n> wondering if 927332b95e77 got what it was trying to achieve actually\n> right. Thoughts?\n\nIt kind of looks like a libedit bug, but maybe we should dig more\ndeeply. I felt itchy about 927332b95e77 removing '{' from the\nWORD_BREAKS set, and wondered exactly how that would change readline's\nbehavior. But even if that somehow accounts for the extra backslash\nbefore '{', it's not clear how it could lead to '?' 
and '}' also\ngetting backslashed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Apr 2024 22:31:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "On Thu, Apr 04, 2024 at 10:31:24PM -0400, Tom Lane wrote:\n> Michael Paquier <[email protected]> writes:\n>> This stuff is actually kind of funny on this host, \"\\echo :{?VERB\\t\"\n>> completes to something incorrect, as of:\n>> postgres=# \\echo :\\{\\?VERBOSITY\\}\n> \n> Just to be clear: you see the extra backslashes if you try this\n> tab-completion manually?\n\nYeah, I do, after completing \"\\echo :{?VERB\" with this version of\nlibedit. I see that this completes with backslashes added before '{',\n'}' and '?'. The test is telling the same.\n\n>> Attaching the log file, for reference. Now I can see that this uses\n>> libedit at 3.1-20181209, which is far from recent. I'd be OK to just\n>> remove libedit from the build to remove this noise, still I am\n>> wondering if 927332b95e77 got what it was trying to achieve actually\n>> right. Thoughts?\n> \n> It kind of looks like a libedit bug, but maybe we should dig more\n> deeply. I felt itchy about 927332b95e77 removing '{' from the\n> WORD_BREAKS set, and wondered exactly how that would change readline's\n> behavior. But even if that somehow accounts for the extra backslash\n> before '{', it's not clear how it could lead to '?' and '}' also\n> getting backslashed.\n\nI don't have a clear idea, either. I also feel uneasy about\n927332b95e77 and its change of WORD_BREAKS, but this has the smell\nof a bug from an outdated libedit version.\n--\nMichael", "msg_date": "Fri, 5 Apr 2024 11:37:56 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "On 2024-04-05 04:37 +0200, Michael Paquier wrote:\n> On Thu, Apr 04, 2024 at 10:31:24PM -0400, Tom Lane wrote:\n> > Michael Paquier <[email protected]> writes:\n> >> This stuff is actually kind of funny on this host, \"\\echo :{?VERB\\t\"\n> >> completes to something incorrect, as of:\n> >> postgres=# \\echo :\\{\\?VERBOSITY\\}\n> > \n> > Just to be clear: you see the extra backslashes if you try this\n> > tab-completion manually?\n> \n> Yeah, I do, after completing \"\\echo :{?VERB\" with this version of\n> libedit. I see that this completes with backslashes added before '{',\n> '}' and '?'. The test is telling the same.\n> \n> >> Attaching the log file, for reference. Now I can see that this uses\n> >> libedit at 3.1-20181209, which is far from recent. I'd be OK to just\n> >> remove libedit from the build to remove this noise, still I am\n> >> wondering if 927332b95e77 got what it was trying to achieve actually\n> >> right. Thoughts?\n> > \n> > It kind of looks like a libedit bug, but maybe we should dig more\n> > deeply. I felt itchy about 927332b95e77 removing '{' from the\n> > WORD_BREAKS set, and wondered exactly how that would change readline's\n> > behavior. But even if that somehow accounts for the extra backslash\n> > before '{', it's not clear how it could lead to '?' and '}' also\n> > getting backslashed.\n> \n> I don't have a clear idea, either. I also feel uneasy about\n> 927332b95e77 and its change of WORD_BREAKS, but this has the smell\n> of a bug from an outdated libedit version.\n\nIt works with the latest libedit 20230828-3.1. 
Have to check the NetBSD\nsource to find out what changed since 20181209-3.1.\n\nhttps://github.com/NetBSD/src/tree/trunk/lib/libedit\n\n-- \nErik\n\n\n", "msg_date": "Fri, 5 Apr 2024 04:56:40 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> It works with the latest libedit 20230828-3.1. Have to check the NetBSD\n> source to find out what changed since 20181209-3.1.\n\nYeah, the test is passing on mamba which is running the (just\nofficially released) NetBSD 10.0. I'm not sure whether 10.0\nhas the \"latest\" libedit or something a little further back.\nsidewinder, with NetBSD 9.3, is happy as well. But 20181209\npresumably belongs to NetBSD 8.x, which is theoretically still\nin support, so maybe it's worth poking into.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Apr 2024 23:10:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "On 2024-04-05 05:10 +0200, Tom Lane wrote:\n> Erik Wienhold <[email protected]> writes:\n> > It works with the latest libedit 20230828-3.1. Have to check the NetBSD\n> > source to find out what changed since 20181209-3.1.\n> \n> Yeah, the test is passing on mamba which is running the (just\n> officially released) NetBSD 10.0. I'm not sure whether 10.0\n> has the \"latest\" libedit or something a little further back.\n> sidewinder, with NetBSD 9.3, is happy as well. But 20181209\n> presumably belongs to NetBSD 8.x, which is theoretically still\n> in support, so maybe it's worth poking into.\n\nHaving a look right now. Change [1] looks like a good candidate which\nis likely in 20221030-3.1.\n\nI'm trying to build Postgres with that older libedit version but can't\nfigure out what options to pass to ./configure so that it picks\n/usr/local/lib/libedit.so instead of /usr/lib/libedit.so. This didn't\nwork:\n\n LDFLAGS='-L/usr/local/lib' ./configure --with-libedit-preferred\n\n(My ld fu is not so great.)\n\n[1] https://github.com/NetBSD/src/commit/12863d4d7917df8a7ef5ad9dab6bb719018a22d1\n\n-- \nErik\n\n\n", "msg_date": "Fri, 5 Apr 2024 05:34:15 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> I'm trying to build Postgres with that older libedit version but can't\n> figure out what options to pass to ./configure so that it picks\n> /usr/local/lib/libedit.so instead of /usr/lib/libedit.so. This didn't\n> work:\n\nYou probably want configure --with-libs=/usr/local/lib,\nand likely also --with-includes=/usr/local/include.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 Apr 2024 23:37:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "On 2024-04-05 05:37 +0200, Tom Lane wrote:\n> Erik Wienhold <[email protected]> writes:\n> > I'm trying to build Postgres with that older libedit version but can't\n> > figure out what options to pass to ./configure so that it picks\n> > /usr/local/lib/libedit.so instead of /usr/lib/libedit.so. This didn't\n> > work:\n> \n> You probably want configure --with-libs=/usr/local/lib,\n> and likely also --with-includes=/usr/local/include.\n\nThanks Tom. 
But I also have to run psql with:\n\n LD_LIBRARY_PATH=/usr/local/lib:/usr/lib:/lib src/bin/psql/psql\n\nLibedit 20191025-3.1 is the first version where \":{?VERB<tab>\" works as\nexpected. The previous release 20190324-3.1 still produces the escaped\noutput that Michael found. That narrows down the changes to everything\nbetween [1] (changed on 2019-03-24 but not included in 20190324-3.1) and\n[2] (both inclusive).\n\n[1] https://github.com/NetBSD/src/commit/e09538bda2f805200d0f7ae09fb9b7f2f5ed75f2\n[2] https://github.com/NetBSD/src/commit/de11d876419df3570c2418468613aebcebafe6ae\n\n-- \nErik\n\n\n", "msg_date": "Fri, 5 Apr 2024 06:10:14 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> Libedit 20191025-3.1 is the first version where \":{?VERB<tab>\" works as\n> expected. The previous release 20190324-3.1 still produces the escaped\n> output that Michael found. That narrows down the changes to everything\n> between [1] (changed on 2019-03-24 but not included in 20190324-3.1) and\n> [2] (both inclusive).\n\nHm. I just installed NetBSD 8.2 in a VM, and it passes this test:\n\n# +++ tap install-check in src/bin/psql +++\nt/001_basic.pl ........... ok \nt/010_tab_completion.pl .. ok \nt/020_cancel.pl .......... ok \nAll tests successful.\n\nSo it seems like the bug does not exist in any currently-supported\nNetBSD release. Debian has been known to ship obsolete libedit\nversions, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Apr 2024 00:15:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Erik Wienhold <[email protected]> writes:\n>> Libedit 20191025-3.1 is the first version where \":{?VERB<tab>\" works as\n>> expected. The previous release 20190324-3.1 still produces the escaped\n>> output that Michael found. That narrows down the changes to everything\n>> between [1] (changed on 2019-03-24 but not included in 20190324-3.1) and\n>> [2] (both inclusive).\n>\n> Hm. I just installed NetBSD 8.2 in a VM, and it passes this test:\n>\n> # +++ tap install-check in src/bin/psql +++\n> t/001_basic.pl ........... ok \n> t/010_tab_completion.pl .. ok \n> t/020_cancel.pl .......... ok \n> All tests successful.\n>\n> So it seems like the bug does not exist in any currently-supported\n> NetBSD release. 
Debian has been known to ship obsolete libedit\n> versions, though.\n\nBoth the current (bokworm/12) and previous (bullseye/11) versions of\nDebian have new enough libedits to not be affected by this bug:\n\nlibedit | 3.1-20181209-1 | oldoldstable | source\nlibedit | 3.1-20191231-2 | oldstable | source\nlibedit | 3.1-20221030-2 | stable | source\nlibedit | 3.1-20230828-1 | testing | source\nlibedit | 3.1-20230828-1 | unstable | source\nlibedit | 3.1-20230828-1 | unstable-debug | source\n\nBut in bullseye they decided that OpenSSL is a system library as far as\nthe GPL is concerned, so are linking directly to readline.\n\nAnd even before then their psql wrapper would LD_PRELOAD readline\ninstead if installed, so approximately nobody actually ever used psql\nwith libedit on Debian.\n\n> \t\t\tregards, tom lane\n\n- ilmari\n\n\n", "msg_date": "Fri, 05 Apr 2024 13:57:04 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "On 2024-04-04 Th 17:24, Tom Lane wrote:\n> TIL that IPC::Run::timer is not the same as IPC::Run::timeout.\n> With a timer object you have to check $timer->is_expired to see\n> if the timeout has elapsed, but with a timeout object you don't\n> because it will throw a Perl exception upon timing out, probably\n> killing your test program.\n>\n> It appears that a good chunk of our TAP codebase has not read this\n> memo, because I see plenty of places that are checking is_expired\n> in the naive belief that they'll still have control after a timeout\n> has fired.\n>\n\nI started having a look at these.\n\nHere are the cases I found:\n\n./src/bin/psql/t/010_tab_completion.pl:    my $okay = ($out =~ $pattern \n&& !$h->{timeout}->is_expired);\n./src/test/perl/PostgreSQL/Test/BackgroundPsql.pm:      until \n$self->{stdout} =~ /$banner/ || $self->{timeout}->is_expired;\n./src/test/perl/PostgreSQL/Test/BackgroundPsql.pm:    die \"psql startup \ntimed out\" if $self->{timeout}->is_expired;\n./src/test/perl/PostgreSQL/Test/BackgroundPsql.pm:    die \"psql query \ntimed out\" if $self->{timeout}->is_expired;\n./src/test/perl/PostgreSQL/Test/BackgroundPsql.pm:    die \"psql query \ntimed out\" if $self->{timeout}->is_expired;\n./src/test/perl/PostgreSQL/Test/Cluster.pm:            # timeout, which \nwe'll handle by testing is_expired\n./src/test/perl/PostgreSQL/Test/Cluster.pm:              unless \n$timeout->is_expired;\n./src/test/perl/PostgreSQL/Test/Cluster.pm:timeout is the \nIPC::Run::Timeout object whose is_expired method can be tested\n./src/test/perl/PostgreSQL/Test/Cluster.pm:            # timeout, which \nwe'll handle by testing is_expired\n./src/test/perl/PostgreSQL/Test/Cluster.pm:              unless \n$timeout->is_expired;\n./src/test/perl/PostgreSQL/Test/Utils.pm:        if ($timeout->is_expired)\n./src/test/recovery/t/021_row_visibility.pl:        if \n($psql_timeout->is_expired)\n./src/test/recovery/t/032_relfilenode_reuse.pl:        if \n($psql_timeout->is_expired)\n\nThose in Cluster.pm look correct - they are doing the run() in an eval \nblock and testing for the is_expired setting in an exception block. The \nother cases look more suspect. 
I'll take a closer look.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-04-04 Th 17:24, Tom Lane wrote:\n\n\nTIL that IPC::Run::timer is not the same as IPC::Run::timeout.\nWith a timer object you have to check $timer->is_expired to see\nif the timeout has elapsed, but with a timeout object you don't\nbecause it will throw a Perl exception upon timing out, probably\nkilling your test program.\n\nIt appears that a good chunk of our TAP codebase has not read this\nmemo, because I see plenty of places that are checking is_expired\nin the naive belief that they'll still have control after a timeout\nhas fired.\n\n\n\n\n\nI started having a look at these. \n\nHere are the cases I found:\n./src/bin/psql/t/010_tab_completion.pl:    my $okay = ($out =~\n $pattern && !$h->{timeout}->is_expired);\n ./src/test/perl/PostgreSQL/Test/BackgroundPsql.pm:      until\n $self->{stdout} =~ /$banner/ ||\n $self->{timeout}->is_expired;\n ./src/test/perl/PostgreSQL/Test/BackgroundPsql.pm:    die \"psql\n startup timed out\" if $self->{timeout}->is_expired;\n ./src/test/perl/PostgreSQL/Test/BackgroundPsql.pm:    die \"psql\n query timed out\" if $self->{timeout}->is_expired;\n ./src/test/perl/PostgreSQL/Test/BackgroundPsql.pm:    die \"psql\n query timed out\" if $self->{timeout}->is_expired;\n ./src/test/perl/PostgreSQL/Test/Cluster.pm:            # timeout,\n which we'll handle by testing is_expired\n ./src/test/perl/PostgreSQL/Test/Cluster.pm:              unless\n $timeout->is_expired;\n ./src/test/perl/PostgreSQL/Test/Cluster.pm:timeout is the\n IPC::Run::Timeout object whose is_expired method can be tested\n ./src/test/perl/PostgreSQL/Test/Cluster.pm:            # timeout,\n which we'll handle by testing is_expired\n ./src/test/perl/PostgreSQL/Test/Cluster.pm:              unless\n $timeout->is_expired;\n ./src/test/perl/PostgreSQL/Test/Utils.pm:        if\n ($timeout->is_expired)\n ./src/test/recovery/t/021_row_visibility.pl:        if\n ($psql_timeout->is_expired)\n ./src/test/recovery/t/032_relfilenode_reuse.pl:        if\n ($psql_timeout->is_expired)\n\n\n\n\nThose in Cluster.pm look correct - they are doing the run() in an eval block and testing for the is_expired setting in an exception block. The other cases look more suspect. I'll take a closer look.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 5 Apr 2024 10:14:06 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> So it seems like the bug does not exist in any currently-supported\n>> NetBSD release. 
Debian has been known to ship obsolete libedit\n>> versions, though.\n\n> Both the current (bokworm/12) and previous (bullseye/11) versions of\n> Debian have new enough libedits to not be affected by this bug:\n> ...\n> But in bullseye they decided that OpenSSL is a system library as far as\n> the GPL is concerned, so are linking directly to readline.\n\n> And even before then their psql wrapper would LD_PRELOAD readline\n> instead if installed, so approximately nobody actually ever used psql\n> with libedit on Debian.\n\nBased on this info, I'm disinclined to put work into trying to\nmake the case behave correctly with that old libedit version, or\neven to lobotomize the test case enough so it would pass.\n\nWhat I suggest Michael do with tanager is install the\nOS-version-appropriate version of GNU readline, so that the animal\nwill test what ilmari describes as the actually common use-case.\n\n(I see that what he did for the moment is add --without-readline.\nPerhaps that's a decent long-term choice too, because I think we\nhave rather little coverage of that option except on Windows.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Apr 2024 17:18:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "On Fri, Apr 05, 2024 at 05:18:51PM -0400, Tom Lane wrote:\n> What I suggest Michael do with tanager is install the\n> OS-version-appropriate version of GNU readline, so that the animal\n> will test what ilmari describes as the actually common use-case.\n\nThanks for the investigations! It's clear that this version of\nlibedit is borked.\n\n> (I see that what he did for the moment is add --without-readline.\n> Perhaps that's a decent long-term choice too, because I think we\n> have rather little coverage of that option except on Windows.)\n\nYeah, my motivation did not go beyond that so you can blame me for\nbeing lazy. ;)\n\nNote also the --without-icu on this host that has little coverage, as\nwell.\n--\nMichael", "msg_date": "Sat, 6 Apr 2024 09:16:25 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "On Fri, Apr 05, 2024 at 05:18:51PM -0400, Tom Lane wrote:\n> Based on this info, I'm disinclined to put work into trying to\n> make the case behave correctly with that old libedit version, or\n> even to lobotomize the test case enough so it would pass.\n\nBy the way, are you planning to do something like [1]? I've not\nlooked in details at the callers of IPC::Run::timeout, still the extra\ndebug output would be nice.\n\n[1]: https://www.postgresql.org/message-id/[email protected]\n--\nMichael", "msg_date": "Tue, 9 Apr 2024 15:13:21 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> By the way, are you planning to do something like [1]? I've not\n> looked in details at the callers of IPC::Run::timeout, still the extra\n> debug output would be nice.\n\nIt needs more review I think - I didn't check every call site to see\nif anything would be broken. 
I believe Andrew has undertaken a\nsurvey of all the timeout/timer calls, but if he doesn't produce\nanything I might have a go at it after awhile.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Apr 2024 09:46:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "\nOn 2024-04-09 Tu 09:46, Tom Lane wrote:\n> Michael Paquier <[email protected]> writes:\n>> By the way, are you planning to do something like [1]? I've not\n>> looked in details at the callers of IPC::Run::timeout, still the extra\n>> debug output would be nice.\n> It needs more review I think - I didn't check every call site to see\n> if anything would be broken. I believe Andrew has undertaken a\n> survey of all the timeout/timer calls, but if he doesn't produce\n> anything I might have a go at it after awhile.\n>\n> \t\t\t\n\n\nWhat I looked at so far was the use of is_expired, but when you look \ninto that you see that you need to delve further, to where timeout/timer \nobjects are created and passed around. I'll take a closer look when I \nhave done some incremental json housekeeping.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 9 Apr 2024 10:27:10 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "On Thu, Apr 4, 2024 at 10:38 PM Michael Paquier <[email protected]> wrote:\n> > It kind of looks like a libedit bug, but maybe we should dig more\n> > deeply. I felt itchy about 927332b95e77 removing '{' from the\n> > WORD_BREAKS set, and wondered exactly how that would change readline's\n> > behavior. But even if that somehow accounts for the extra backslash\n> > before '{', it's not clear how it could lead to '?' and '}' also\n> > getting backslashed.\n>\n> I don't have a clear idea, either. I also feel uneasy about\n> 927332b95e77 and its change of WORD_BREAKS, but this has the smell\n> of a bug from an outdated libedit version.\n\nI too felt uneasy about that commit, for the same reason. However,\nthere is a justification for the change in the commit message which is\nnot obviously wrong, namely that \":{?name} is the only psql syntax\nusing the '{' sign\". And in fact, SQL basically doesn't use '{' for\nanything, either. We do see { showing up inside of quoted strings, for\narrays or JSON, but I would guess that the word-break characters\naren't going to affect behavior within a quoted string. So it still\nseems like it should be OK? 
Another thing that makes me think that my\nunease may be unfounded is that the matching character '}' isn't in\nWORD_BREAKS either, and I would have thought that if we needed one\nwe'd need both.\n\nBut does anyone else have a more specific reason for thinking that\nthis might be a problem?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Apr 2024 11:17:55 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "removal of '{' from WORD_BREAKS" }, { "msg_contents": "> On 4 Apr 2024, at 23:46, Daniel Gustafsson <[email protected]> wrote:\n> \n>> On 4 Apr 2024, at 23:24, Tom Lane <[email protected]> wrote:\n> \n>> A minimum fix that seems to make this work better is as attached,\n>> but I feel like somebody ought to examine all the IPC::Run::timer\n>> and IPC::Run::timeout calls and see which ones are mistaken.\n>> It's a little scary to convert a timeout to a timer because of\n>> the hazard that someplace that would need to be checking for\n>> is_expired isn't.\n> \n> Skimming this and a few callsites it seems reasonable to use a timer instead of\n> a timeout, but careful study is needed to make sure we're not introducing\n> anything subtly wrong in the other direction.\n\nSharing a few preliminary results from looking at this, the attached passes\ncheck-world but need more study/testing.\n\nIt seems wrong to me that we die() in query and query_until rather than giving\nthe caller the power to decide how to proceed. We have even documented that we\ndo just this:\n\n\t\"Dies on failure to invoke psql, or if psql fails to connect. Errors\n\toccurring later are the caller's problem\"\n\nTurning the timeout into a timer and returning undef along with logging a test\nfailure in case of expiration seems a bit saner (maybe Andrew can suggest an\nAPI which has a better Perl feel to it). Most callsites don't need any changes\nto accommodate for this, the attached 0002 implements this timer change and\nmodify the few sites that need it, converting one to plain query() where the\nadded complexity of query_until isn't required.\n\nA few other comments on related things that stood out while reviewing:\n\nThe tab completion test can use the API call for automatically restart the\ntimer to reduce the complexity of check_completion a hair. Done in 0001 (but\nreally not necessary).\n\nCommit Af279ddd1c2 added this sequence to 040_standby_failover_slots_sync.pl in\nthe recovery tests:\n\n\t$back_q->query_until(\n\t qr/logical_slot_get_changes/, q(\n\t \\echo logical_slot_get_changes\n\t SELECT pg_logical_slot_get_changes('test_slot', NULL, NULL);\n\t));\n\n\t... <other tests> ...\n\n\t# Since there are no slots in standby_slot_names, the function\n\t# pg_logical_slot_get_changes should now return, and the session can be\n\t# stopped.\n\t$back_q->quit;\n\nThere is no guarantee that pg_logical_slot_get_changes has returned when\nreaching this point. This might still work as intended, but the comment is\nslightly misleading IMO.\n\n\nrecovery/t/043_wal_replay_wait.pl calls pg_wal_replay_wait() since 06c418e163e\nin a background session which it then skips terminating. Calling ->quit is\nmandated by the API, in turn required by IPC::Run. Calling ->quit on the\nprocess makes the test fail from the process having already exited, but we can\ncall ->finish directly like we do in test_misc/t/005_timeouts.pl. 
0003 fixes\nthis.\n\n--\nDaniel Gustafsson", "msg_date": "Tue, 9 Apr 2024 19:10:07 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IPC::Run::time[r|out] vs our TAP tests" }, { "msg_contents": "On Tue, Apr 9, 2024 at 6:18 PM Robert Haas <[email protected]> wrote:\n> On Thu, Apr 4, 2024 at 10:38 PM Michael Paquier <[email protected]> wrote:\n> > > It kind of looks like a libedit bug, but maybe we should dig more\n> > > deeply. I felt itchy about 927332b95e77 removing '{' from the\n> > > WORD_BREAKS set, and wondered exactly how that would change readline's\n> > > behavior. But even if that somehow accounts for the extra backslash\n> > > before '{', it's not clear how it could lead to '?' and '}' also\n> > > getting backslashed.\n> >\n> > I don't have a clear idea, either. I also feel uneasy about\n> > 927332b95e77 and its change of WORD_BREAKS, but this has the smell\n> > of a bug from an outdated libedit version.\n>\n> I too felt uneasy about that commit, for the same reason. However,\n> there is a justification for the change in the commit message which is\n> not obviously wrong, namely that \":{?name} is the only psql syntax\n> using the '{' sign\". And in fact, SQL basically doesn't use '{' for\n> anything, either. We do see { showing up inside of quoted strings, for\n> arrays or JSON, but I would guess that the word-break characters\n> aren't going to affect behavior within a quoted string. So it still\n> seems like it should be OK? Another thing that makes me think that my\n> unease may be unfounded is that the matching character '}' isn't in\n> WORD_BREAKS either, and I would have thought that if we needed one\n> we'd need both.\n\nFWIW, the default value of rl_basic_word_break_characters [1] has '{'\nbut doesn't have '}'. The documentation says that this should \"break\nwords for completion in Bash\". But I failed to find an explanation\nwhy this should be so for Bash. As you correctly get, my idea was\nthat our SQL isn't not heavily using '{' unlike Bash.\n\n> But does anyone else have a more specific reason for thinking that\n> this might be a problem?\n\nI don't particularly object against reverting this commit, but I think\nwe should get to the bottom of this first. Otherwise there is no\nwarranty to not run into the same problem again.\n\nLinks.\n1. https://tiswww.case.edu/php/chet/readline/readline.html#index-rl_005fbasic_005fword_005fbreak_005fcharacters\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 9 Apr 2024 21:32:16 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: removal of '{' from WORD_BREAKS" }, { "msg_contents": "Alexander Korotkov <[email protected]> writes:\n> On Tue, Apr 9, 2024 at 6:18 PM Robert Haas <[email protected]> wrote:\n>> I too felt uneasy about that commit, for the same reason. However,\n>> there is a justification for the change in the commit message which is\n>> not obviously wrong, namely that \":{?name} is the only psql syntax\n>> using the '{' sign\". And in fact, SQL basically doesn't use '{' for\n>> anything, either.\n\nTrue.\n\n> FWIW, the default value of rl_basic_word_break_characters [1] has '{'\n> but doesn't have '}'. The documentation says that this should \"break\n> words for completion in Bash\". But I failed to find an explanation\n> why this should be so for Bash. 
As you correctly get, my idea was\n> that our SQL isn't not heavily using '{' unlike Bash.\n\nYeah, there's no doubt that what we are currently using for\nWORD_BREAKS has mostly been cargo-culted from Bash rather than having\nany solid foundation in SQL syntax. It works all right for us today\nbecause we don't really try to complete anything in general SQL\nexpression contexts, so as long as we have whitespace and parens in\nthere we're more or less fine.\n\nI wonder a bit why comma isn't in there, though. As an example,\n\tvacuum (full,fre<TAB>\nfails to complete \"freeze\", though it works fine with a space\nafter the comma. I've not experimented, but it seems certain\nthat it'd behave better with comma in WORD_BREAKS. Whether\nthat'd pessimize other behaviors, I dunno.\n\nThe longer-range concern that I have is that if we ever want to\ncomplete stuff within expressions, I think we're going to need\nall the valid operator characters to be in WORD_BREAKS. And that\nwill be a problem for this patch, not because of '{' but because\nof '?'. So I'd be happier if the parsing were done in a way that\ndid not rely on '{' and '?' being treated as word characters.\nBut I've not looked into how hard that'd be. In any case, it's\nlikely that expanding WORD_BREAKS like that would cause some other\nproblems that'd have to be fixed, so it's not very reasonable to\nexpect this patch to avoid a hypothetical future problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Apr 2024 17:03:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: removal of '{' from WORD_BREAKS" } ]
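For anyone auditing the call sites listed earlier in the thread, the behavioural difference between the two IPC::Run objects can be boiled down to a few lines. The snippet below is a standalone illustration rather than code from the tree; the psql invocation, patterns and durations are made up (and assume a reachable server), but timer, timeout, start, pump, is_expired and kill_kill are the real IPC::Run API.

    use strict;
    use warnings;
    use IPC::Run qw(start pump timer timeout);

    # Case 1: a timer object.  pump() keeps returning control after expiry,
    # and it is entirely up to the caller to notice $timer->is_expired.
    my ($in, $out) = ('', '');
    my $timer = timer(5);
    my $h = start([ 'psql', '-XAt' ], '<', \$in, '>', \$out, $timer);
    $in .= "SELECT 1;\n";
    pump $h until $out =~ /^1$/m || $timer->is_expired;
    warn "query timed out; output so far: $out" if $timer->is_expired;
    $timer->start(5);    # a timer can be re-armed for the next wait
    $h->finish;

    # Case 2: a timeout object.  Expiry makes pump() throw
    # "IPC::Run: timeout on timer #N", so an is_expired check placed after
    # the pump loop is unreachable unless the loop is wrapped in eval;
    # otherwise the test script simply dies without logging what it was
    # still waiting for.
    my ($in2, $out2) = ('', '');
    my $to = timeout(5);
    my $h2 = start([ 'psql', '-XAt' ], '<', \$in2, '>', \$out2, $to);
    $in2 .= "SELECT 1;\n";
    eval { pump $h2 until $out2 =~ /pattern that never appears/; };
    if ($@)
    {
        warn 'pump died: ' . $@
          . ' (timeout expired: ' . ($to->is_expired ? 'yes' : 'no') . ')';
        $h2->kill_kill;
    }

This is also why converting a timeout into a timer is not purely mechanical: every place that previously relied on the exception to abort the test now has to check is_expired (or die explicitly), which is the janitorial survey being discussed above.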
[ { "msg_contents": "Building the generated ecpg preproc file can take a long time. You can\ncheck how long using:\n\nninja -C build src/interfaces/ecpg/preproc/ecpg.p/meson-generated_.._preproc.c.o\n\nThis moves that file much closer to the top of our build order, so\nbuilding it can be pipelined much better with other files.\n\nIt improved clean build times on my machine (10 cores/20 threads) from ~40\nseconds to ~30 seconds.\n\nYou can check improvements for yourself with:\n\nninja -C build clean && ninja -C build all", "msg_date": "Fri, 5 Apr 2024 00:45:20 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Speed up clean meson builds by ~25%" }, { "msg_contents": "On Fri, 5 Apr 2024 at 00:45, Jelte Fennema-Nio <[email protected]> wrote:\n> It improved clean build times on my machine (10 cores/20 threads) from ~40\n> seconds to ~30 seconds.\n\nAfter discussing this off-list with Bilal, I realized that this gain\nis only happening for clang builds on my system. Because those take a\nlong time as was also recently discussed in[1]. My builds don't take\nnearly as long though. I tried with clang 15 through 18 and they all\ntook 10-22 seconds to run and clang comes from apt.llvm.org on Ubuntu\n22.04\n\n[1]: https://www.postgresql.org/message-id/flat/CA%2BhUKGLvJ7-%3DfS-J9kN%3DaZWrpyqykwqCBbxXLEhUa9831dPFcg%40mail.gmail.com\n\n\n", "msg_date": "Fri, 5 Apr 2024 15:36:34 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Hi,\n\nOn 2024-04-05 15:36:34 +0200, Jelte Fennema-Nio wrote:\n> On Fri, 5 Apr 2024 at 00:45, Jelte Fennema-Nio <[email protected]> wrote:\n> > It improved clean build times on my machine (10 cores/20 threads) from ~40\n> > seconds to ~30 seconds.\n> \n> After discussing this off-list with Bilal, I realized that this gain\n> is only happening for clang builds on my system. Because those take a\n> long time as was also recently discussed in[1]. My builds don't take\n> nearly as long though. I tried with clang 15 through 18 and they all\n> took 10-22 seconds to run and clang comes from apt.llvm.org on Ubuntu\n> 22.04\n> \n> [1]: https://www.postgresql.org/message-id/flat/CA%2BhUKGLvJ7-%3DfS-J9kN%3DaZWrpyqykwqCBbxXLEhUa9831dPFcg%40mail.gmail.com\n\nI recommend opening a bug report for clang, best with an already preprocessed\ninput file.\n\nWe're going to need to do something about this from our side as well, I\nsuspect. The times aren't great with gcc either, even if not as bad as with\nclang.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 5 Apr 2024 08:24:43 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On Fri, 5 Apr 2024 at 17:24, Andres Freund <[email protected]> wrote:\n> I recommend opening a bug report for clang, best with an already preprocessed\n> input file.\n\n> We're going to need to do something about this from our side as well, I\n> suspect. The times aren't great with gcc either, even if not as bad as with\n> clang.\n\nAgreed. 
While not a full solution, I think this patch is still good to\naddress some of the pain: Waiting 10 seconds at the end of my build\nwith only 1 of my 10 cores doing anything.\n\nSo while this doesn't decrease CPU time spent it does decrease\nwall-clock time significantly in some cases, with afaict no downsides.\n\n\n", "msg_date": "Fri, 5 Apr 2024 18:19:20 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On Fri, Apr 05, 2024 at 06:19:20PM +0200, Jelte Fennema-Nio wrote:\n> Agreed. While not a full solution, I think this patch is still good to\n> address some of the pain: Waiting 10 seconds at the end of my build\n> with only 1 of my 10 cores doing anything.\n> \n> So while this doesn't decrease CPU time spent it does decrease\n> wall-clock time significantly in some cases, with afaict no downsides.\n\nWell, this is also painful with ./configure. So, even if we are going\nto move away from it at this point, we still need to support it for a\ncouple of years. It looks to me that letting the clang folks know\nabout the situation is the best way forward.\n--\nMichael", "msg_date": "Mon, 8 Apr 2024 14:23:07 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Hello Michael,\n\n08.04.2024 08:23, Michael Paquier wrote:\n> On Fri, Apr 05, 2024 at 06:19:20PM +0200, Jelte Fennema-Nio wrote:\n>> Agreed. While not a full solution, I think this patch is still good to\n>> address some of the pain: Waiting 10 seconds at the end of my build\n>> with only 1 of my 10 cores doing anything.\n>>\n>> So while this doesn't decrease CPU time spent it does decrease\n>> wall-clock time significantly in some cases, with afaict no downsides.\n> Well, this is also painful with ./configure. So, even if we are going\n> to move away from it at this point, we still need to support it for a\n> couple of years. It looks to me that letting the clang folks know\n> about the situation is the best way forward.\n>\n\nAs I wrote in [1], I didn't observe the issue with clang-18, so maybe it\nis fixed already.\nPerhaps it's worth rechecking...\n\n[1] https://www.postgresql.org/message-id/d2bf3727-bae4-3aee-65f6-caec2c4ebaa8%40gmail.com\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 8 Apr 2024 11:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On 05.04.24 18:19, Jelte Fennema-Nio wrote:\n> On Fri, 5 Apr 2024 at 17:24, Andres Freund <[email protected]> wrote:\n>> I recommend opening a bug report for clang, best with an already preprocessed\n>> input file.\n> \n>> We're going to need to do something about this from our side as well, I\n>> suspect. The times aren't great with gcc either, even if not as bad as with\n>> clang.\n> \n> Agreed. While not a full solution, I think this patch is still good to\n> address some of the pain: Waiting 10 seconds at the end of my build\n> with only 1 of my 10 cores doing anything.\n> \n> So while this doesn't decrease CPU time spent it does decrease\n> wall-clock time significantly in some cases, with afaict no downsides.\n\nI have tested this with various compilers, and I can confirm that this \nshaves off about 5 seconds from the build wall-clock time, which \nrepresents about 10%-20% of the total time. 
I think this is a good patch.\n\nInterestingly, if I apply the analogous changes to the makefiles, I \ndon't get any significant improvements. (Depends on local \ncircumstances, of course.) But I would still suggest to keep the \nmakefiles aligned.\n\n\n\n", "msg_date": "Mon, 8 Apr 2024 10:02:18 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On Mon, 8 Apr 2024 at 10:00, Alexander Lakhin <[email protected]> wrote:\n> As I wrote in [1], I didn't observe the issue with clang-18, so maybe it\n> is fixed already.\n> Perhaps it's worth rechecking...\n\nUsing the attached script I got these timings. Clang is significantly\nslower in all of them. But especially with -Og the difference between\nis huge.\n\ngcc 11.4.0: 7.276s\nclang 18.1.3: 17.216s\ngcc 11.4.0 --debug: 7.441s\nclang 18.1.3 --debug: 18.164s\ngcc 11.4.0 --debug -Og: 2.418s\nclang 18.1.3 --debug -Og: 14.864s\n\nI reported this same issue to the LLVM project here:\nhttps://github.com/llvm/llvm-project/issues/87973", "msg_date": "Mon, 8 Apr 2024 10:36:59 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On Mon, 8 Apr 2024 at 10:02, Peter Eisentraut <[email protected]> wrote:\n> I have tested this with various compilers, and I can confirm that this\n> shaves off about 5 seconds from the build wall-clock time, which\n> represents about 10%-20% of the total time. I think this is a good patch.\n\nGreat to hear.\n\n> Interestingly, if I apply the analogous changes to the makefiles, I\n> don't get any significant improvements. (Depends on local\n> circumstances, of course.) But I would still suggest to keep the\n> makefiles aligned.\n\nAttached is a patch that also updates the Makefiles, but indeed I\ndon't get any perf gains there either.\n\nOn Mon, 8 Apr 2024 at 07:23, Michael Paquier <[email protected]> wrote:\n> Well, this is also painful with ./configure. So, even if we are going\n> to move away from it at this point, we still need to support it for a\n> couple of years. It looks to me that letting the clang folks know\n> about the situation is the best way forward.\n\nI reported the issue to the clang folks:\nhttps://github.com/llvm/llvm-project/issues/87973\n\nBut even if my patch doesn't help for ./configure, it still seems like\na good idea to me to still reduce compile times when using meson while\nwe wait for clang folks to address the issue.", "msg_date": "Mon, 8 Apr 2024 10:59:17 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Hello Jelte,\n\n08.04.2024 11:36, Jelte Fennema-Nio wrote:\n> On Mon, 8 Apr 2024 at 10:00, Alexander Lakhin <[email protected]> wrote:\n>> As I wrote in [1], I didn't observe the issue with clang-18, so maybe it\n>> is fixed already.\n>> Perhaps it's worth rechecking...\n> Using the attached script I got these timings. Clang is significantly\n> slower in all of them. 
But especially with -Og the difference between\n> is huge.\n>\n> gcc 11.4.0: 7.276s\n> clang 18.1.3: 17.216s\n> gcc 11.4.0 --debug: 7.441s\n> clang 18.1.3 --debug: 18.164s\n> gcc 11.4.0 --debug -Og: 2.418s\n> clang 18.1.3 --debug -Og: 14.864s\n>\n> I reported this same issue to the LLVM project here:\n> https://github.com/llvm/llvm-project/issues/87973\n\nMaybe we're talking about different problems.\nAt [1] Thomas (and then I) was unhappy with more than 200 seconds\nduration...\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGLvJ7-%3DfS-J9kN%3DaZWrpyqykwqCBbxXLEhUa9831dPFcg%40mail.gmail.com\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 8 Apr 2024 12:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Hi,\n\nOn Mon, 8 Apr 2024 at 11:00, Alexander Lakhin <[email protected]> wrote:\n>\n> Hello Michael,\n>\n> 08.04.2024 08:23, Michael Paquier wrote:\n> > On Fri, Apr 05, 2024 at 06:19:20PM +0200, Jelte Fennema-Nio wrote:\n> >> Agreed. While not a full solution, I think this patch is still good to\n> >> address some of the pain: Waiting 10 seconds at the end of my build\n> >> with only 1 of my 10 cores doing anything.\n> >>\n> >> So while this doesn't decrease CPU time spent it does decrease\n> >> wall-clock time significantly in some cases, with afaict no downsides.\n> > Well, this is also painful with ./configure. So, even if we are going\n> > to move away from it at this point, we still need to support it for a\n> > couple of years. It looks to me that letting the clang folks know\n> > about the situation is the best way forward.\n> >\n>\n> As I wrote in [1], I didn't observe the issue with clang-18, so maybe it\n> is fixed already.\n> Perhaps it's worth rechecking...\n>\n> [1] https://www.postgresql.org/message-id/d2bf3727-bae4-3aee-65f6-caec2c4ebaa8%40gmail.com\n\nI had this problem on my local computer. My build times are:\n\ngcc: 20s\nclang-15: 24s\nclang-16: 105s\nclang-17: 111s\nclang-18: 25s\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Mon, 8 Apr 2024 12:23:56 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On Mon, Apr 08, 2024 at 12:23:56PM +0300, Nazir Bilal Yavuz wrote:\n> On Mon, 8 Apr 2024 at 11:00, Alexander Lakhin <[email protected]> wrote:\n>> As I wrote in [1], I didn't observe the issue with clang-18, so maybe it\n>> is fixed already.\n>> Perhaps it's worth rechecking...\n>>\n>> [1] https://www.postgresql.org/message-id/d2bf3727-bae4-3aee-65f6-caec2c4ebaa8%40gmail.com\n> \n> I had this problem on my local computer. My build times are:\n> \n> gcc: 20s\n> clang-15: 24s\n> clang-16: 105s\n> clang-17: 111s\n> clang-18: 25s\n\nInteresting. 
A parallel build of ecpg shows similar numbers here:\nclang-16: 101s\nclang-17: 112s\nclang-18: 14s\ngcc: 10s\n\nMost of the time is still spent on preproc.c with clang-18, but that's\nmuch, much faster (default version of clang is 16 on Debian GID where\nI've run these numbers).\n--\nMichael", "msg_date": "Tue, 9 Apr 2024 14:01:00 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On Tue, Apr 9, 2024 at 5:01 PM Michael Paquier <[email protected]> wrote:\n> On Mon, Apr 08, 2024 at 12:23:56PM +0300, Nazir Bilal Yavuz wrote:\n> > On Mon, 8 Apr 2024 at 11:00, Alexander Lakhin <[email protected]> wrote:\n> >> As I wrote in [1], I didn't observe the issue with clang-18, so maybe it\n> >> is fixed already.\n> >> Perhaps it's worth rechecking...\n> >>\n> >> [1] https://www.postgresql.org/message-id/d2bf3727-bae4-3aee-65f6-caec2c4ebaa8%40gmail.com\n> >\n> > I had this problem on my local computer. My build times are:\n> >\n> > gcc: 20s\n> > clang-15: 24s\n> > clang-16: 105s\n> > clang-17: 111s\n> > clang-18: 25s\n>\n> Interesting. A parallel build of ecpg shows similar numbers here:\n> clang-16: 101s\n> clang-17: 112s\n> clang-18: 14s\n> gcc: 10s\n\nI don't expect it to get fixed BTW, because it's present in 16.0.6,\nand .6 is the terminal release, if I understand their system\ncorrectly. They're currently only doing bug fixes for 18, and even\nthere not for much longer. Interesting that not everyone saw this at\nfirst, perhaps the bug arrived in a minor release that some people\ndidn't have yet? Or perhaps there is something special required to\ntrigger it?\n\n\n", "msg_date": "Tue, 9 Apr 2024 17:13:52 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Hi,\n\nOn 2024-04-09 17:13:52 +1200, Thomas Munro wrote:\n> On Tue, Apr 9, 2024 at 5:01 PM Michael Paquier <[email protected]> wrote:\n> > On Mon, Apr 08, 2024 at 12:23:56PM +0300, Nazir Bilal Yavuz wrote:\n> > > On Mon, 8 Apr 2024 at 11:00, Alexander Lakhin <[email protected]> wrote:\n> > >> As I wrote in [1], I didn't observe the issue with clang-18, so maybe it\n> > >> is fixed already.\n> > >> Perhaps it's worth rechecking...\n> > >>\n> > >> [1] https://www.postgresql.org/message-id/d2bf3727-bae4-3aee-65f6-caec2c4ebaa8%40gmail.com\n> > >\n> > > I had this problem on my local computer. My build times are:\n> > >\n> > > gcc: 20s\n> > > clang-15: 24s\n> > > clang-16: 105s\n> > > clang-17: 111s\n> > > clang-18: 25s\n> >\n> > Interesting. A parallel build of ecpg shows similar numbers here:\n> > clang-16: 101s\n> > clang-17: 112s\n> > clang-18: 14s\n> > gcc: 10s\n>\n> I don't expect it to get fixed BTW, because it's present in 16.0.6,\n> and .6 is the terminal release, if I understand their system\n> correctly. They're currently only doing bug fixes for 18, and even\n> there not for much longer. Interesting that not everyone saw this at\n> first, perhaps the bug arrived in a minor release that some people\n> didn't have yet? Or perhaps there is something special required to\n> trigger it?\n\nI think we need to do something about the compile time of this file, even with\ngcc. Our main grammar already is an issue and stacking all the ecpg stuff on\ntop makes it considerably worse.\n\nISTM there's a bunch of pretty pointless stuff in the generated preproc.y,\nwhich do seem to have some impact on compile time. E.g. 
a good bit of the file\nis just stuff like\n\n reserved_keyword:\n ALL\n {\n $$ = mm_strdup(\"all\");\n}\n...\n\n\nWhy are strduping all of these? We could instead just use the value of the\ntoken, instead of forcing the compiler to generate branches for all individual\nkeywords etc.\n\nI don't know off-hand if the keyword lookup machinery ends up with an\nuppercase keyword, but if so, that'd be easy enough to change.\n\n\nIt actually looks to me like the many calls to mm_strdup() might actually be\nwhat's driving clang nuts. I hacked up preproc.y to not need those calls for\n unreserved_keyword\n col_name_keyword\n type_func_name_keyword\n reserved_keyword\n bare_label_keyword\nby removing the actions and defining those tokens to be of type str. There are\nmany more such calls that could be dealt with similarly.\n\nThat alone reduced compile times with\n clang-16 -O1 from 18.268s to 12.516s\n clang-16 -O2 from 345.188 to 158.084s\n clang-19 -O2 from 26.018s to 15.200s\n\n\nI suspect what is happening is that clang tries to optimize the number of\ncalls to mm_strdup(), by separating the argument setup from the function\ncall. Which leads to a control flow graph with *many* incoming edges to the\nbasic block containing the function call to mm_strdup(), triggering a normally\nharmless O(N^2) or such.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 Apr 2024 15:33:10 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> I think we need to do something about the compile time of this file, even with\n> gcc. Our main grammar already is an issue and stacking all the ecpg stuff on\n> top makes it considerably worse.\n\nSeems reasonable, if we can.\n\n> Why are strduping all of these?\n\nIIRC, the issue is that the mechanism for concatenating the tokens\nback together frees the input strings\n\nstatic char *\ncat2_str(char *str1, char *str2)\n{\n char * res_str = (char *)mm_alloc(strlen(str1) + strlen(str2) + 2);\n\n strcpy(res_str, str1);\n if (strlen(str1) != 0 && strlen(str2) != 0)\n strcat(res_str, \" \");\n strcat(res_str, str2);\n free(str1); <------------------\n free(str2); <------------------\n return res_str;\n}\n\nSo that ought to dump core if you don't make all the productions\nreturn malloc'd strings. How did you work around that?\n\n(Maybe it'd be okay to just leak all the strings?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Apr 2024 19:00:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Hi,\n\nOn 2024-04-09 15:33:10 -0700, Andres Freund wrote:\n> Which leads to a control flow graph with *many* incoming edges to the basic\n> block containing the function call to mm_strdup(), triggering a normally\n> harmless O(N^2) or such.\n\nWith clang-16 -O2 there is a basic block with 3904 incoming basic blocks. With\nthe hacked up preproc.y it's 2968. A 30% increase leading to a doubling of\nruntime imo seems consistent with my theory of there being some ~quadratic\nbehaviour.\n\n\nI suspect that this is also what's causing gram.c compilation to be fairly\nslow, with both clang and gcc. There aren't as many pstrdup()s in gram.y as\nthe are mm_strdup() in preproc.y, but there still are many.\n\n\nISTM that there are many pstrdup()s that really make very little sense. 
I\nthink it's largely because there are many rules declared %type <str>, which\nprevents us from returning a string constant without a warning.\n\nThere may be (?) some rules that modify strings returned by subsidiary rules,\nbut if so, it can't be many.\n\nGreetings,\n\nAndres\n\n\n", "msg_date": "Tue, 9 Apr 2024 16:11:37 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Hi,\n\nOn 2024-04-09 19:00:41 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > I think we need to do something about the compile time of this file, even with\n> > gcc. Our main grammar already is an issue and stacking all the ecpg stuff on\n> > top makes it considerably worse.\n>\n> Seems reasonable, if we can.\n>\n> > Why are strduping all of these?\n>\n> IIRC, the issue is that the mechanism for concatenating the tokens\n> back together frees the input strings\n\nAh, that explains it - but also seems somewhat unnecessary.\n\n\n> So that ought to dump core if you don't make all the productions\n> return malloc'd strings. How did you work around that?\n\nI just tried to get to the point of understanding the reasons for slow\ncompilation, not to actually keep it working :). I.e. I didn't.\n\n\n> (Maybe it'd be okay to just leak all the strings?)\n\nHm. The input to ecpg can be fairly large, I guess. And we have fun code like\ncat_str(), which afaict is O(arguments^2) in its memory usage if we wouldn't\nfree?\n\nNot immediately sure what the right path is.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 Apr 2024 16:22:38 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-04-09 19:00:41 -0400, Tom Lane wrote:\n>> Andres Freund <[email protected]> writes:\n>>> Why are strduping all of these?\n\n>> IIRC, the issue is that the mechanism for concatenating the tokens\n>> back together frees the input strings\n\n> Ah, that explains it - but also seems somewhat unnecessary.\n\nI experimented with replacing mm_strdup() with\n\n#define mm_strdup(x) (x)\n\nAs you did, I wasn't trying to get to a working result, so I didn't do\nanything about removing all the free's or fixing the cast-away-const\nwarnings. The result was disappointing though. On my Mac laptop\n(Apple clang version 15.0.0), the compile time for preproc.o went from\n6.7sec to 5.5sec. Which is better, but not enough better to persuade\nme to do all the janitorial work of restructuring ecpg's\nstring-slinging. I think we haven't really identified the problem.\n\nAs a comparison point, compiling gram.o on the same machine\ntakes 1.3sec. So I am seeing a problem here, sure enough,\nalthough not as bad as it is in some other clang versions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Apr 2024 19:44:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Hi,\n\nOn 2024-04-09 19:44:03 -0400, Tom Lane wrote:\n> I experimented with replacing mm_strdup() with\n>\n> #define mm_strdup(x) (x)\n>\n> As you did, I wasn't trying to get to a working result, so I didn't do\n> anything about removing all the free's or fixing the cast-away-const\n> warnings. The result was disappointing though. 
On my Mac laptop\n> (Apple clang version 15.0.0), the compile time for preproc.o went from\n> 6.7sec to 5.5sec. Which is better, but not enough better to persuade\n> me to do all the janitorial work of restructuring ecpg's\n> string-slinging. I think we haven't really identified the problem.\n\nWith what level of optimization was that? It kinda looks like their version\nmight be from before the worst of the issue...\n\n\nFWIW, just redefining mm_strdup() that way doesn't help much here either,\neven with an affected compiler. The gain increases substantially after\nsimplifying unreserved_keyword etc to just use the default action.\n\nI think having the non-default actions for those branches leaves you with a\nsimilar issue, each of the actions just set a register, storing that and going\nto the loop iteration is the same.\n\nFWIW:\n clang-19 -O2\n\"plain\" 0m24.354s\nmm_strdup redefined 0m23.741s\n+use default action 0m14.218s\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 Apr 2024 17:03:51 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-04-09 19:44:03 -0400, Tom Lane wrote:\n>> As you did, I wasn't trying to get to a working result, so I didn't do\n>> anything about removing all the free's or fixing the cast-away-const\n>> warnings. The result was disappointing though. On my Mac laptop\n>> (Apple clang version 15.0.0), the compile time for preproc.o went from\n>> 6.7sec to 5.5sec. Which is better, but not enough better to persuade\n>> me to do all the janitorial work of restructuring ecpg's\n>> string-slinging. I think we haven't really identified the problem.\n\n> With what level of optimization was that? It kinda looks like their version\n> might be from before the worst of the issue...\n\nJust the autoconf-default -O2.\n\n> FWIW, just redefining mm_strdup() that way doesn't help much here either,\n> even with an affected compiler. The gain increases substantially after\n> simplifying unreserved_keyword etc to just use the default action.\n\nHm.\n\nIn any case, this is all moot unless we can come to a new design for\nhow ecpg does its string-mashing. Thoughts?\n\nI thought for a bit about not allocating strings as such, but just\npassing around pointers into the source text plus lengths, and\nreassembling the string data only at the end when we need to output it.\nNot sure how well that would work, but it could be a starting point.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Apr 2024 20:12:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Hi,\n\nOn 2024-04-09 20:12:48 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > FWIW, just redefining mm_strdup() that way doesn't help much here either,\n> > even with an affected compiler. The gain increases substantially after\n> > simplifying unreserved_keyword etc to just use the default action.\n> \n> Hm.\n> \n> In any case, this is all moot unless we can come to a new design for\n> how ecpg does its string-mashing. Thoughts?\n\nTthere might be a quick-n-dirty way: We could make pgc.l return dynamically\nallocated keywords.\n\n\nAm I missing something, or is ecpg string handling almost comically\ninefficient? 
Building up strings in tiny increments, which then get mashed\ntogether to get slightly larger pieces, just to then be mashed together again?\nIt's like an intentional allocator stress test.\n\nIt's particularly absurd because in the end we just print those strings, after\ncarefully assembling them...\n\n\n> I thought for a bit about not allocating strings as such, but just\n> passing around pointers into the source text plus lengths, and\n> reassembling the string data only at the end when we need to output it.\n> Not sure how well that would work, but it could be a starting point.\n\nI was wondering about something vaguely similar: Instead of concatenating\nindividual strings, append them to a stringinfo. The stringinfo can be reset\nindividually...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 Apr 2024 17:23:39 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-04-09 20:12:48 -0400, Tom Lane wrote:\n>> In any case, this is all moot unless we can come to a new design for\n>> how ecpg does its string-mashing. Thoughts?\n\n> Am I missing something, or is ecpg string handling almost comically\n> inefficient? Building up strings in tiny increments, which then get mashed\n> together to get slightly larger pieces, just to then be mashed together again?\n> It's like an intentional allocator stress test.\n> It's particularly absurd because in the end we just print those strings, after\n> carefully assembling them...\n\nIt is that. Here's what I'm thinking: probably 90% of what ecpg\ndoes is to verify that a chunk of its input represents a valid bit\nof SQL (or C) syntax and then emit a representation of that chunk.\nCurrently, that representation tends to case-normalize tokens and\nsmash inter-token whitespace and comments to a single space.\nI propose though that neither of those behaviors is mission-critical,\nor even all that desirable. I think few users would complain if\necpg preserved the input's casing and spacing and comments.\n\nGiven that definition, most of ecpg's productions (certainly just\nabout all the auto-generated ones) would simply need to return a\npointer and length describing a part of the input string. There are\nplaces where ecpg wants to insert some text it generates, and I think\nit might need to re-order text in a few places, so we need a\nproduction result representation that can cope with those cases ---\nbut if we can make \"regurgitate the input\" cases efficient, I think\nwe'll have licked the performance problem.\n\nWith that in mind, I wonder whether we couldn't make the simple\ncases depend on bison's existing support for location tracking.\nIn which case, the actual actions for all those cases could be\ndefault, achieving one of the goals you mention.\n\nObviously, this is not going to be a small lift, but it kind\nof seems do-able.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Apr 2024 20:55:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On Wed, Apr 10, 2024 at 11:44 AM Tom Lane <[email protected]> wrote:\n> ... 
On my Mac laptop\n> (Apple clang version 15.0.0), the compile time for preproc.o went from\n> 6.7sec to 5.5sec.\n\nHaving seen multi-minute compile times on FreeBSD (where clang is the\nsystem compiler) and Debian (where I get packages from apt.llvm.org),\nI have been quietly waiting for this issue to hit Mac users too (where\na clang with unknown proprietary changes is the system compiler), but\nit never did. Huh.\n\nI tried to understand a bit more about Apple's version soup. This\nseems to be an up-to-date table (though I don't understand their\nsource of information):\n\nhttps://en.wikipedia.org/wiki/Xcode#Xcode_15.0_-_(since_visionOS_support)_2\n\nAccording to cc -v on my up-to-date MacBook Air, it has \"Apple clang\nversion 15.0.0 (clang-1500.3.9.4)\", which, if the table is correct,\nmeans that it's using LLVM 16.0.0 (note, not 16.0.6, the final version\nof that branch of [open] LLVM, and the version I saw the issue with on\nFreeBSD and Debian). They relabel everything to match the Xcode\nversion that shipped it, and they're currently off by one.\n\nI wondered if perhaps the table just wasn't accurate in the final\ndigits, so I looked for clues in strings in the binary, and sure\nenough it contains \"LLVM 15.0.0\". My guess would be that they've\nclobbered the major version, but not the rest: the Xcode version is\n15.3, and I don't see a 3, so I guess this is really derived from LLVM\n16.0.0.\n\nOne explanation would be that they rebase their proprietary bits and\npieces over the .0 version of each major release, and then cherry-pick\nurgent fixes and stuff later, not pulling in the whole minor release;\nthey also presumably have to maintain it for much longer than the LLVM\nproject's narrow support window. Who knows. So now I wonder if it\ncould be that LLVM 16.0.6 does this, but LLVM 16.0.0 doesn't.\n\nI installed clang-16 (16.0.6) with MacPorts, and it does show the problem.\n\n\n", "msg_date": "Wed, 10 Apr 2024 16:57:43 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Wed, Apr 10, 2024 at 11:44 AM Tom Lane <[email protected]> wrote:\n>> ... On my Mac laptop\n>> (Apple clang version 15.0.0), the compile time for preproc.o went from\n>> 6.7sec to 5.5sec.\n\n> Having seen multi-minute compile times on FreeBSD (where clang is the\n> system compiler) and Debian (where I get packages from apt.llvm.org),\n> I have been quietly waiting for this issue to hit Mac users too (where\n> a clang with unknown proprietary changes is the system compiler), but\n> it never did. Huh.\n\nI don't doubt that there are other clang versions where the problem\nbites a lot harder. What result do you get from the test I tried\n(turning mm_strdup into a no-op macro)?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 01:03:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On Wed, Apr 10, 2024 at 5:03 PM Tom Lane <[email protected]> wrote:\n> I don't doubt that there are other clang versions where the problem\n> bites a lot harder. 
What result do you get from the test I tried\n> (turning mm_strdup into a no-op macro)?\n\n#define mm_strdup(x) (x) does this:\n\nApple clang 15: master: 14s -> 9s\nMacPorts clang 16, master: 170s -> 10s\n\n\n", "msg_date": "Wed, 10 Apr 2024 21:15:37 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Wed, Apr 10, 2024 at 5:03 PM Tom Lane <[email protected]> wrote:\n>> I don't doubt that there are other clang versions where the problem\n>> bites a lot harder. What result do you get from the test I tried\n>> (turning mm_strdup into a no-op macro)?\n\n> #define mm_strdup(x) (x) does this:\n> Apple clang 15: master: 14s -> 9s\n> MacPorts clang 16, master: 170s -> 10s\n\nWow. So (a) it's definitely worth doing something about this;\nbut (b) anything that would move the needle would probably require\nsignificant refactoring of ecpg's string handling, and I doubt we\nwant to consider that post-feature-freeze. The earliest we could\nland such a fix would be ~ July, if we follow past schedules.\n\nThe immediate question then is do we want to take Jelte's patch\nas a way to ameliorate the pain meanwhile. I'm kind of down on\nit, because AFAICS what would happen if you break the core\ngrammar is that (most likely) the failure would be reported\nagainst ecpg first. That seems pretty confusing. However, maybe\nit's tolerable as a short-term band-aid that we plan to revert later.\nI would not commit the makefile side of the patch though, as\nthat creates the same problem for makefile users while providing\nlittle benefit.\n\nAs for the longer-term fix, I'd be willing to have a go at\nimplementing the sketch I wrote last night; but I'm also happy\nto defer to anyone else who's hot to work on this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 11:33:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On 10.04.24 17:33, Tom Lane wrote:\n> The immediate question then is do we want to take Jelte's patch\n> as a way to ameliorate the pain meanwhile. I'm kind of down on\n> it, because AFAICS what would happen if you break the core\n> grammar is that (most likely) the failure would be reported\n> against ecpg first. That seems pretty confusing.\n\nYeah that would be confusing.\n\nI suppose we could just take the part of the patch that moves up preproc \namong the subdirectories of ecpg, but I don't know if that by itself \nwould really buy anything.\n\n\n", "msg_date": "Wed, 17 Apr 2024 13:40:58 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On Wed, 17 Apr 2024 at 13:41, Peter Eisentraut <[email protected]> wrote:\n>\n> On 10.04.24 17:33, Tom Lane wrote:\n> > The immediate question then is do we want to take Jelte's patch\n> > as a way to ameliorate the pain meanwhile. I'm kind of down on\n> > it, because AFAICS what would happen if you break the core\n> > grammar is that (most likely) the failure would be reported\n> > against ecpg first. That seems pretty confusing.\n>\n> Yeah that would be confusing.\n\nHow can I test if this actually happens? 
Because it sounds like if\nthat indeed happens it should be fixable fairly easily.\n\n\n", "msg_date": "Wed, 17 Apr 2024 14:03:37 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> How can I test if this actually happens? Because it sounds like if\n> that indeed happens it should be fixable fairly easily.\n\nBreak gram.y (say, misspell some token in a production) and\nsee what happens. The behavior's likely to be timing sensitive\nthough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Apr 2024 10:00:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On Wed, 17 Apr 2024 at 16:00, Tom Lane <[email protected]> wrote:\n> Break gram.y (say, misspell some token in a production) and\n> see what happens. The behavior's likely to be timing sensitive\n> though.\n\nThanks for clarifying. It took me a little while to break gram.y in\nsuch a way that I was able to consistently reproduce, but I managed in\nthe end using the attached small diff.\nAnd then running ninja in non-parallel mode:\n\nninja -C build all -j1\n\nAs I expected this problem was indeed fairly easy to address by still\nbuilding \"backend/parser\" before \"interfaces\". See attached patch.", "msg_date": "Wed, 17 Apr 2024 16:51:49 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> As I expected this problem was indeed fairly easy to address by still\n> building \"backend/parser\" before \"interfaces\". See attached patch.\n\nI think we should hold off on this. I found a simpler way to address\necpg's problem than what I sketched upthread. I have a not-ready-to-\nshow-yet patch that allows the vast majority of ecpg's grammar\nproductions to use the default semantic action. Testing on my M1\nMacbook with clang 16.0.6 from MacPorts, I see the compile time for\npreproc.o in HEAD as about 1m44 sec; but with this patch, 0.6 sec.\n\nThe core idea of the patch is to get rid of <str> results from\ngrammar nonterminals and instead pass the strings back as yylloc\nresults, which we can do by redefining YYLTYPE as \"char *\"; since\necpg isn't using Bison's location logic for anything, this is free.\nThen we can implement a one-size-fits-most token concatenation\nrule in YYLLOC_DEFAULT, and only the various handmade rules that\ndon't want to just concatenate their inputs need to do something\ndifferent.\n\nThe patch presently passes regression tests, but its memory management\nis shoddy as can be (basically \"leak like there's no tomorrow\"), and\nI want to fix that before presenting it. One could almost argue that\nwe don't care about memory consumption of the ecpg preprocessor;\nbut I think it's possible to do better.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Apr 2024 23:10:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On 2024-04-17 23:10:53 -0400, Tom Lane wrote:\n> Jelte Fennema-Nio <[email protected]> writes:\n> > As I expected this problem was indeed fairly easy to address by still\n> > building \"backend/parser\" before \"interfaces\". 
See attached patch.\n> \n> I think we should hold off on this. I found a simpler way to address\n> ecpg's problem than what I sketched upthread. I have a not-ready-to-\n> show-yet patch that allows the vast majority of ecpg's grammar\n> productions to use the default semantic action. Testing on my M1\n> Macbook with clang 16.0.6 from MacPorts, I see the compile time for\n> preproc.o in HEAD as about 1m44 sec; but with this patch, 0.6 sec.\n\nThat's pretty amazing.\n\n\n", "msg_date": "Thu, 18 Apr 2024 09:37:47 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-04-17 23:10:53 -0400, Tom Lane wrote:\n>> I think we should hold off on this. I found a simpler way to address\n>> ecpg's problem than what I sketched upthread. I have a not-ready-to-\n>> show-yet patch that allows the vast majority of ecpg's grammar\n>> productions to use the default semantic action. Testing on my M1\n>> Macbook with clang 16.0.6 from MacPorts, I see the compile time for\n>> preproc.o in HEAD as about 1m44 sec; but with this patch, 0.6 sec.\n\n> That's pretty amazing.\n\nPatch posted at [1].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/2011420.1713493114%40sss.pgh.pa.us\n\n\n", "msg_date": "Thu, 18 Apr 2024 22:24:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On Wed, Apr 17, 2024 at 11:11 PM Tom Lane <[email protected]> wrote:\n> I think we should hold off on this. I found a simpler way to address\n> ecpg's problem than what I sketched upthread. I have a not-ready-to-\n> show-yet patch that allows the vast majority of ecpg's grammar\n> productions to use the default semantic action. Testing on my M1\n> Macbook with clang 16.0.6 from MacPorts, I see the compile time for\n> preproc.o in HEAD as about 1m44 sec; but with this patch, 0.6 sec.\n\nIf this is the consensus opinion, then\nhttps://commitfest.postgresql.org/48/4914/ should be marked Rejected.\nHowever, while I think the improvements that Tom was able to make here\nsound fantastic, I don't understand why that's an argument against\nJelte's patches. After all, Tom's work will only go into v18, but this\npatch could be adopted in v17 and back-patched to all releases that\nsupport meson builds, saving oodles of compile time for as long as\nthose releases are supported. The most obvious beneficiary of that\ncourse of action would seem to be Tom himself, since he back-patches\nmore fixes than anybody, last I checked, but it'd be also be useful to\nget slightly quicker results from the buildfarm and slightly quicker\nresults for anyone using CI on back-branches and for other hackers who\nare looking to back-patch bug fixes. 
I don't quite understand why we\nwant to throw those potential benefits out the window just because we\nhave a better fix for the future.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 16:17:49 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> If this is the consensus opinion, then\n> https://commitfest.postgresql.org/48/4914/ should be marked Rejected.\n> However, while I think the improvements that Tom was able to make here\n> sound fantastic, I don't understand why that's an argument against\n> Jelte's patches. After all, Tom's work will only go into v18, but this\n> patch could be adopted in v17 and back-patched to all releases that\n> support meson builds, saving oodles of compile time for as long as\n> those releases are supported. The most obvious beneficiary of that\n> course of action would seem to be Tom himself, since he back-patches\n> more fixes than anybody, last I checked, but it'd be also be useful to\n> get slightly quicker results from the buildfarm and slightly quicker\n> results for anyone using CI on back-branches and for other hackers who\n> are looking to back-patch bug fixes. I don't quite understand why we\n> want to throw those potential benefits out the window just because we\n> have a better fix for the future.\n\nAs I mentioned upthread, I'm more worried about confusing error\nreports than the machine time. It would save me personally exactly\nnada, since (a) I usually develop with gcc not clang, (b) when\nI do use clang it's not a heavily-affected version, and (c) since\nwe *very* seldom change the grammar in stable branches, ccache will\nhide the problem pretty effectively anyway in the back branches.\n\n(If you're not using ccache, please don't complain about build time.)\n\nI grant that there are people who are more affected, but still, I'd\njust as soon not contort the build rules for a temporary problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 May 2024 16:59:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On Fri, May 17, 2024 at 4:59 PM Tom Lane <[email protected]> wrote:\n> As I mentioned upthread, I'm more worried about confusing error\n> reports than the machine time.\n\nWell, Jelte fixed that.\n\n> I grant that there are people who are more affected, but still, I'd\n> just as soon not contort the build rules for a temporary problem.\n\nArguably by doing this, but I don't think it's enough of a contortion\nto get excited about.\n\nAnyone else want to vote?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 17:01:31 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On 17.05.24 23:01, Robert Haas wrote:\n> On Fri, May 17, 2024 at 4:59 PM Tom Lane <[email protected]> wrote:\n>> As I mentioned upthread, I'm more worried about confusing error\n>> reports than the machine time.\n> \n> Well, Jelte fixed that.\n> \n>> I grant that there are people who are more affected, but still, I'd\n>> just as soon not contort the build rules for a temporary problem.\n> \n> Arguably by doing this, but I don't think it's enough of a contortion\n> to get excited about.\n> \n> Anyone else want to vote?\n\nI retested the 
patch from 2024-04-07 (I think that's the one that \"fixed \nthat\"? There are multiple \"v1\" patches in this thread.) using gcc-14 \nand clang-18, with ccache disabled of course. The measured effects of \nthe patch are:\n\ngcc-14: 1% slower\nclang-18: 3% faster\n\nSo with that, it doesn't seem very interesting.\n\n\n\n", "msg_date": "Sat, 18 May 2024 00:35:08 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On 2024-May-17, Robert Haas wrote:\n\n> Anyone else want to vote?\n\nI had pretty much the same thought as you. It seems a waste to leave\nthe code in existing branches be slow only because we have a much better\napproach for a branch that doesn't even exist yet.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"We're here to devour each other alive\" (Hobbes)\n\n\n", "msg_date": "Sat, 18 May 2024 13:00:46 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> On 2024-May-17, Robert Haas wrote:\n>> Anyone else want to vote?\n\n> I had pretty much the same thought as you. It seems a waste to leave\n> the code in existing branches be slow only because we have a much better\n> approach for a branch that doesn't even exist yet.\n\nI won't complain too loudly as long as we remember to revert the\npatch from HEAD once the ecpg fix goes in.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 May 2024 12:58:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On Sat, May 18, 2024 at 6:09 PM Andres Freund <[email protected]> wrote:\n> A few tests with ccache disabled:\n\nThese tests seem to show no difference between the two releases, so I\nwonder what you're intending to demonstrate here.\n\nLocally, I have ninja 1.11.1. I'm sure Andres will be absolutely\nshocked, shocked I say, to hear that I haven't upgraded to the very\nlatest.\n\nAnyway, I tried a clean build with CC=clang without the patch and then\nwith the patch and got:\n\nunpatched - real 1m9.872s\npatched - real 1m6.130s\n\nThat's the time to run my script that first calls meson setup and then\nafterward runs ninja. I tried ninja -v and put the output to a file.\nWith the patch:\n\n[292/2402] /opt/local/bin/perl\n../pgsql/src/interfaces/ecpg/preproc/parse.pl --srcdir\n../pgsql/src/interfaces/ecpg/preproc --parser\n../pgsql/src/interfaces/ecpg/preproc/../../../backend/parser/gram.y\n--output src/interfaces/ecpg/preproc/preproc.y\n\nAnd without:\n\n[1854/2402] /opt/local/bin/perl\n../pgsql/src/interfaces/ecpg/preproc/parse.pl --srcdir\n../pgsql/src/interfaces/ecpg/preproc --parser\n../pgsql/src/interfaces/ecpg/preproc/../../../backend/parser/gram.y\n--output src/interfaces/ecpg/preproc/preproc.y\n\nWith my usual CC='ccache clang', I get real 0m37.500s unpatched and\nreal 0m37.786s patched. I also tried this with the original v1 patch\n(not to be confused with the revised, still-v1 patch) and got real\n37.950s.\n\nSo I guess I'm now feeling pretty unexcited about this. If the patch\nreally did what the subject line says, that would be nice to have.\nHowever, since Tom will patch the underlying problem in v18, this will\nonly help people building the back-branches. 
Of those, it will only\nhelp the people using a slow compiler; I read the thread as suggesting\nthat you need to be using clang rather than gcc and also not using\nccache. Plus, it sounds like you also need an old ninja version. Even\nthen, the highest reported savings is 10s and I only save 3s. I think\nthat's a small enough savings with enough conditionals that we should\nnot bother. It seems fine to say that people who need the fastest\npossible builds of back-branches should use at least one of gcc,\nccache, and ninja >= 1.12.\n\nI'll go mark this patch Rejected in the CommitFest. Incidentally, this\nthread is an excellent object lesson in why it's so darn hard to cut\ndown the size of the CF. I mean, this is a 7-line patch and we've now\ngot a 30+ email thread about it. In my role as temporary\nself-appointed wielder of the CommitFest mace, I have now spent\nprobably a good 2 hours trying to figure out what to do about it.\nThere are more than 230 active patches in the CommitFest. That means\nCommitFest management would be more than a full-time job for an\nexperienced committer even if every patch were as simple as this one,\nwhich is definitely not the case. If we want to restore some sanity\nhere, we're going to have to find some way of distributing the burden\nacross more people, including the patch authors. Note that Jelte's\nlast update to the thread is over a month ago. I'm not saying that\nhe's done something wrong, but I do strongly suspect that the time\nthat the community as whole has spent on this patch is a pretty\nsignificant multiple of the time that he personally spent on it, and\nsuch dynamics are bound to create scaling issues. I don't know how we\ndo better: I just want to highlight the problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 May 2024 09:37:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On 2024-05-20 09:37:46 -0400, Robert Haas wrote:\n> On Sat, May 18, 2024 at 6:09 PM Andres Freund <[email protected]> wrote:\n> > A few tests with ccache disabled:\n> \n> These tests seem to show no difference between the two releases, so I\n> wonder what you're intending to demonstrate here.\n\nThey show a few seconds of win for 'real' time.\n-O0: 0m21.577s->0m19.529s\n-O3: 0m59.730s->0m54.853s\n\n\n", "msg_date": "Mon, 20 May 2024 08:37:48 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On Mon, May 20, 2024 at 11:37 AM Andres Freund <[email protected]> wrote:\n> On 2024-05-20 09:37:46 -0400, Robert Haas wrote:\n> > On Sat, May 18, 2024 at 6:09 PM Andres Freund <[email protected]> wrote:\n> > > A few tests with ccache disabled:\n> >\n> > These tests seem to show no difference between the two releases, so I\n> > wonder what you're intending to demonstrate here.\n>\n> They show a few seconds of win for 'real' time.\n> -O0: 0m21.577s->0m19.529s\n> -O3: 0m59.730s->0m54.853s\n\nAh, OK, I think I misinterpreted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 May 2024 11:52:21 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" }, { "msg_contents": "On 19.05.24 00:09, Andres Freund wrote:\n> On 2024-05-18 00:35:08 +0200, Peter Eisentraut wrote:\n>> I retested the patch from 2024-04-07 (I think that's the 
one that \"fixed\n>> that\"? There are multiple \"v1\" patches in this thread.) using gcc-14 and\n>> clang-18, with ccache disabled of course. The measured effects of the patch\n>> are:\n>>\n>> gcc-14: 1% slower\n>> clang-18: 3% faster\n>>\n>> So with that, it doesn't seem very interesting.\n> I wonder whether the reason you're seing less benefit than Jelte is that - I'm\n> guessing - you now used ninja 1.12 and Jelte something older. Ninja 1.12\n> prioritizes building edges using a \"critical path\" heuristic, leading to\n> scheduling nodes with more incoming dependencies, and deeper in the dependency\n> graph earlier.\n\nYes! Very interesting!\n\nWith ninja 1.11 and gcc-14, I see the patch gives about a 17% speedup.\n\n\n", "msg_date": "Tue, 21 May 2024 07:47:23 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed up clean meson builds by ~25%" } ]
[ { "msg_contents": "Looking for an index to support top-n searches, were n has a fixed maximum\n\nRecently, I've been looking at strategies to handle top-n queries in\nPostgres. In my current cases, we've got definition tables, and very large\nrelated tables. Here's a stripped-down example, the real tables are much\nwider.\n\nCREATE TABLE IF NOT EXISTS data.inventory (\n id uuid NOT NULL DEFAULT NULL PRIMARY KEY,\n inv_id uuid NOT NULL DEFAULT NULL\n);\n\n\nCREATE TABLE IF NOT EXISTS data.scan (\n id uuid NOT NULL DEFAULT NULL PRIMARY KEY,\n inv_id uuid NOT NULL DEFAULT NULL\n scan_dts_utc timestamp NOT NULL DEFAULT NOW(); -- We run out\nservers on UTC\n);\n\nEvery item in inventory is scanned when it passes through various steps in\na clean-dispatch-arrive-use-clean sort of a work cycle. The ratio between\ninventory and scan is 1:0-n, where n can be virtually any number. In\nanother table pair like this, the average is 1:1,000. In the inventory\nexample, it's roughly 0:200,000. The distribution of related row counts is\nall over the map. The reasons behind these ratios sometimes map to valid\nprocesses, and sometimes are a consequence of some low-quality data leaking\ninto the system. In the case of inventory, the results make sense. In our\ncase:\n\n* The goal value for n is often 1, and not more than up to 25.\n\n* Depending on the tables, the % of rows that are discarded because they're\npast the 25th most recent record is 97% or more.\n\n* Partial indexes do not work as well on huge tables as I hoped. The same\ndata copied via a STATEMENT trigger into a thin, subsetted table is much\nfaster. I think this has to do with the increase in random disk access\nrequired for a table and/or index with more pages spread around on the disk.\n\n* We can't filter in advance by *any* reasonable date range. Could get 25\nscans on one item in an hour, or a year, or five years, or never.\n\nWe're finding that we need the top-n records more and more often, returned\nquickly. This gets harder to do as the table(s) grow.\n\n SELECT id, scan_dts_utc\n FROM data.scan\n WHERE inv_id = 'b7db5d06-8275-224d-a38a-ac263dc1c767' curve.\n ORDER BY scan_dts_utc DESC\n LIMIT 25; -- Full search product might be 0, 200K, or anything\nin-between. Not on a bell curve.\n\nA compound index works really well to optimize these kinds of searches:\n\nCREATE INDEX scan_inv_id_scan_time_utc_dts_idx\n ON ascendco.analytic_scan (inv_id, scan_time_utc_dts DESC);\n\nWhat I'm wondering is if there is some index option, likely not with a\nB-tree, that can *automatically* enforce a maximum-length list of top\nvalues, based on a defined sort\n\nCREATE INDEX scan_inv_id_scan_time_utc_dts_idx\n ON ascendco.analytic_scan (inv_id, scan_time_utc_dts DESC) --\nThis defines the ordering\n LIMIT 25; --\nThis sets the hard max for n\n\nThe goal is to have an automatically maintained list of the top values *in*\nthe index itself. In the right situations (like ours), this reduces the\nindex size by 20x or more. Smaller index, faster results. And, since the\nindex is on the source table, the row references are already there.\n(Something I lose when maintaining this by hand in a side/shadow/top table.)\n\nI've looked at a ton of plans, and Postgres *clearly* goes to a lot of\neffort to recognize and optimize top-n searches already. 
That's\nencouraging, as it suggests that the planner takes LIMIT into account.\n(I've picked up already that maintaining the purity of the planner and\nexecutor abstractions is a core value to the project.)\n\nAnd, for sure, I can build and maintain my own custom, ordered list in\nvarious ways. None of them seem like they can possibly rival the trimming\nbehavior being handled by an index.\n\nI'm out over my skis here, but I'm intuiting that this might be a job for\none of the multi-value/inverted index types/frameworks. I tried some\nexperiments, but only got worse results.\n\nHope that reads as understandable...grateful for any suggestions.\n\nLooking for an index to support top-n searches, were n has a fixed maximumRecently, I've been looking at strategies to handle top-n queries in Postgres. In my current cases, we've got definition tables, and very large related tables. Here's a stripped-down example, the real tables are much wider. CREATE TABLE IF NOT EXISTS data.inventory (    id              uuid          NOT NULL DEFAULT NULL PRIMARY KEY,    inv_id          uuid          NOT NULL DEFAULT NULL);CREATE TABLE IF NOT EXISTS data.scan (    id              uuid          NOT NULL DEFAULT NULL PRIMARY KEY,    inv_id          uuid          NOT NULL DEFAULT NULL    scan_dts_utc    timestamp     NOT NULL DEFAULT NOW(); -- We run out servers on UTC);Every item in inventory is scanned when it passes through various steps in a clean-dispatch-arrive-use-clean sort of a work cycle. The ratio between inventory and scan is 1:0-n, where n can be virtually any number. In another table pair like this, the average is 1:1,000. In the inventory example, it's roughly 0:200,000. The distribution of related row counts is all over the map. The reasons behind these ratios sometimes map to valid processes, and sometimes are a consequence of some low-quality data leaking into the system. In the case of inventory, the results make sense. In our case:* The goal value for n is often 1, and not more than up to 25.* Depending on the tables, the % of rows that are discarded because they're past the 25th most recent record is 97% or more.* Partial indexes do not work as well on huge tables as I hoped. The same data copied via a STATEMENT trigger into a thin, subsetted table is much faster. I think this has to do with the increase in random disk access required for a table and/or index with more pages spread around on the disk.* We can't filter in advance by any reasonable date range. Could get 25 scans on one item in an hour, or a year, or five years, or never.We're finding that we need the top-n records more and more often, returned quickly. This gets harder to do as the table(s) grow.   SELECT id, scan_dts_utc     FROM data.scan     WHERE inv_id = 'b7db5d06-8275-224d-a38a-ac263dc1c767'  curve. ORDER BY scan_dts_utc DESC    LIMIT 25; -- Full search product might be 0, 200K, or anything in-between. 
Not on a bell curve.A compound index works really well to optimize these kinds of searches:CREATE INDEX scan_inv_id_scan_time_utc_dts_idx          ON ascendco.analytic_scan (inv_id, scan_time_utc_dts DESC);What I'm wondering is if there is some index option, likely not with a B-tree, that can *automatically* enforce a maximum-length list of top values, based on a defined sortCREATE INDEX scan_inv_id_scan_time_utc_dts_idx          ON ascendco.analytic_scan (inv_id, scan_time_utc_dts DESC) -- This defines the ordering       LIMIT 25;                                                     -- This sets the hard max for nThe goal is to have an automatically maintained list of the top values in the index itself. In the right situations (like ours), this reduces the index size by 20x or more. Smaller index, faster results. And, since the index is on the source table, the row references are already there. (Something I lose when maintaining this by hand in a side/shadow/top table.)I've looked at a ton of plans, and Postgres *clearly* goes to a lot of effort to recognize and optimize top-n searches already. That's encouraging, as it suggests that the planner takes LIMIT into account. (I've picked up already that maintaining the purity of the planner and executor abstractions is a core value to the project.)And, for sure, I can build and maintain my own custom, ordered list in various ways. None of them seem like they can possibly rival the trimming behavior being handled by an index. I'm out over my skis here, but I'm intuiting that this might be a job for one of the multi-value/inverted index types/frameworks. I tried some experiments, but only got worse results.Hope that reads as understandable...grateful for any suggestions.", "msg_date": "Fri, 5 Apr 2024 11:27:28 +1100", "msg_from": "Morris de Oryx <[email protected]>", "msg_from_op": true, "msg_subject": "Looking for an index that supports top-n searches by enforcing a\n max-n automatically" }, { "msg_contents": "Just about as soon as I sent the above, I realized that it's unlikely to\nmake sense in the real world in a row-store. If the goal is to keep the\ntop-25 results and trim the rest, what happens when values are\nadded/modified/deleted? You now *have to go look at all of the data you\naren't caching in the index. *Unless you can *guarantee* that data is\nentered in perfect order, and/or never changes, I don't think what I'm\nlooking for is likely to make sense.\n\n\nOn Fri, Apr 5, 2024 at 11:27 AM Morris de Oryx <[email protected]>\nwrote:\n\n> Looking for an index to support top-n searches, were n has a fixed maximum\n>\n> Recently, I've been looking at strategies to handle top-n queries in\n> Postgres. In my current cases, we've got definition tables, and very large\n> related tables. Here's a stripped-down example, the real tables are much\n> wider.\n>\n> CREATE TABLE IF NOT EXISTS data.inventory (\n> id uuid NOT NULL DEFAULT NULL PRIMARY KEY,\n> inv_id uuid NOT NULL DEFAULT NULL\n> );\n>\n>\n> CREATE TABLE IF NOT EXISTS data.scan (\n> id uuid NOT NULL DEFAULT NULL PRIMARY KEY,\n> inv_id uuid NOT NULL DEFAULT NULL\n> scan_dts_utc timestamp NOT NULL DEFAULT NOW(); -- We run out\n> servers on UTC\n> );\n>\n> Every item in inventory is scanned when it passes through various steps in\n> a clean-dispatch-arrive-use-clean sort of a work cycle. The ratio between\n> inventory and scan is 1:0-n, where n can be virtually any number. In\n> another table pair like this, the average is 1:1,000. In the inventory\n> example, it's roughly 0:200,000. 
The distribution of related row counts is\n> all over the map. The reasons behind these ratios sometimes map to valid\n> processes, and sometimes are a consequence of some low-quality data leaking\n> into the system. In the case of inventory, the results make sense. In our\n> case:\n>\n> * The goal value for n is often 1, and not more than up to 25.\n>\n> * Depending on the tables, the % of rows that are discarded because\n> they're past the 25th most recent record is 97% or more.\n>\n> * Partial indexes do not work as well on huge tables as I hoped. The same\n> data copied via a STATEMENT trigger into a thin, subsetted table is much\n> faster. I think this has to do with the increase in random disk access\n> required for a table and/or index with more pages spread around on the disk.\n>\n> * We can't filter in advance by *any* reasonable date range. Could get 25\n> scans on one item in an hour, or a year, or five years, or never.\n>\n> We're finding that we need the top-n records more and more often, returned\n> quickly. This gets harder to do as the table(s) grow.\n>\n> SELECT id, scan_dts_utc\n> FROM data.scan\n> WHERE inv_id = 'b7db5d06-8275-224d-a38a-ac263dc1c767' curve.\n> ORDER BY scan_dts_utc DESC\n> LIMIT 25; -- Full search product might be 0, 200K, or anything\n> in-between. Not on a bell curve.\n>\n> A compound index works really well to optimize these kinds of searches:\n>\n> CREATE INDEX scan_inv_id_scan_time_utc_dts_idx\n> ON ascendco.analytic_scan (inv_id, scan_time_utc_dts DESC);\n>\n> What I'm wondering is if there is some index option, likely not with a\n> B-tree, that can *automatically* enforce a maximum-length list of top\n> values, based on a defined sort\n>\n> CREATE INDEX scan_inv_id_scan_time_utc_dts_idx\n> ON ascendco.analytic_scan (inv_id, scan_time_utc_dts DESC) --\n> This defines the ordering\n> LIMIT 25; --\n> This sets the hard max for n\n>\n> The goal is to have an automatically maintained list of the top values\n> *in* the index itself. In the right situations (like ours), this reduces\n> the index size by 20x or more. Smaller index, faster results. And, since\n> the index is on the source table, the row references are already there.\n> (Something I lose when maintaining this by hand in a side/shadow/top table.)\n>\n> I've looked at a ton of plans, and Postgres *clearly* goes to a lot of\n> effort to recognize and optimize top-n searches already. That's\n> encouraging, as it suggests that the planner takes LIMIT into account.\n> (I've picked up already that maintaining the purity of the planner and\n> executor abstractions is a core value to the project.)\n>\n> And, for sure, I can build and maintain my own custom, ordered list in\n> various ways. None of them seem like they can possibly rival the trimming\n> behavior being handled by an index.\n>\n> I'm out over my skis here, but I'm intuiting that this might be a job for\n> one of the multi-value/inverted index types/frameworks. I tried some\n> experiments, but only got worse results.\n>\n> Hope that reads as understandable...grateful for any suggestions.\n>\n\nJust about as soon as I sent the above, I realized that it's unlikely to make sense in the real world in a row-store. If the goal is to keep the top-25 results and trim the rest, what happens when values are added/modified/deleted? You now have to go look at all of the data you aren't caching in the index. 
Unless you can guarantee that data is entered in perfect order, and/or never changes, I don't think what I'm looking for is likely to make sense. On Fri, Apr 5, 2024 at 11:27 AM Morris de Oryx <[email protected]> wrote:Looking for an index to support top-n searches, were n has a fixed maximumRecently, I've been looking at strategies to handle top-n queries in Postgres. In my current cases, we've got definition tables, and very large related tables. Here's a stripped-down example, the real tables are much wider. CREATE TABLE IF NOT EXISTS data.inventory (    id              uuid          NOT NULL DEFAULT NULL PRIMARY KEY,    inv_id          uuid          NOT NULL DEFAULT NULL);CREATE TABLE IF NOT EXISTS data.scan (    id              uuid          NOT NULL DEFAULT NULL PRIMARY KEY,    inv_id          uuid          NOT NULL DEFAULT NULL    scan_dts_utc    timestamp     NOT NULL DEFAULT NOW(); -- We run out servers on UTC);Every item in inventory is scanned when it passes through various steps in a clean-dispatch-arrive-use-clean sort of a work cycle. The ratio between inventory and scan is 1:0-n, where n can be virtually any number. In another table pair like this, the average is 1:1,000. In the inventory example, it's roughly 0:200,000. The distribution of related row counts is all over the map. The reasons behind these ratios sometimes map to valid processes, and sometimes are a consequence of some low-quality data leaking into the system. In the case of inventory, the results make sense. In our case:* The goal value for n is often 1, and not more than up to 25.* Depending on the tables, the % of rows that are discarded because they're past the 25th most recent record is 97% or more.* Partial indexes do not work as well on huge tables as I hoped. The same data copied via a STATEMENT trigger into a thin, subsetted table is much faster. I think this has to do with the increase in random disk access required for a table and/or index with more pages spread around on the disk.* We can't filter in advance by any reasonable date range. Could get 25 scans on one item in an hour, or a year, or five years, or never.We're finding that we need the top-n records more and more often, returned quickly. This gets harder to do as the table(s) grow.   SELECT id, scan_dts_utc     FROM data.scan     WHERE inv_id = 'b7db5d06-8275-224d-a38a-ac263dc1c767'  curve. ORDER BY scan_dts_utc DESC    LIMIT 25; -- Full search product might be 0, 200K, or anything in-between. Not on a bell curve.A compound index works really well to optimize these kinds of searches:CREATE INDEX scan_inv_id_scan_time_utc_dts_idx          ON ascendco.analytic_scan (inv_id, scan_time_utc_dts DESC);What I'm wondering is if there is some index option, likely not with a B-tree, that can *automatically* enforce a maximum-length list of top values, based on a defined sortCREATE INDEX scan_inv_id_scan_time_utc_dts_idx          ON ascendco.analytic_scan (inv_id, scan_time_utc_dts DESC) -- This defines the ordering       LIMIT 25;                                                     -- This sets the hard max for nThe goal is to have an automatically maintained list of the top values in the index itself. In the right situations (like ours), this reduces the index size by 20x or more. Smaller index, faster results. And, since the index is on the source table, the row references are already there. 
(Something I lose when maintaining this by hand in a side/shadow/top table.)I've looked at a ton of plans, and Postgres *clearly* goes to a lot of effort to recognize and optimize top-n searches already. That's encouraging, as it suggests that the planner takes LIMIT into account. (I've picked up already that maintaining the purity of the planner and executor abstractions is a core value to the project.)And, for sure, I can build and maintain my own custom, ordered list in various ways. None of them seem like they can possibly rival the trimming behavior being handled by an index. I'm out over my skis here, but I'm intuiting that this might be a job for one of the multi-value/inverted index types/frameworks. I tried some experiments, but only got worse results.Hope that reads as understandable...grateful for any suggestions.", "msg_date": "Fri, 5 Apr 2024 19:46:40 +1100", "msg_from": "Morris de Oryx <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for an index that supports top-n searches by enforcing a\n max-n automatically" } ]
[ { "msg_contents": "Hi hackers\n\nDiscussion[1] and the relevant commit[2] improved the selectivity\ncalculation for IN/NOT IN.\n\nThis is the current logic for NOT IN selectivity calculation and it loops\nover the array elements.\n\nelse\n{\n s1 = s1 * s2;\n if (isInequality)\n s1disjoint += s2 - 1.0;\n}\n\nBy calculating s2 for each array element, it calls neqsel and returns 1 -\neqsel - nullfrac.\nIf I expand the s1disjoint calculation for a NOT IN (2,5,8) clause,\nIt eventually becomes 1 - eqsel(2) - eqsel(5) - eqsel(8) - 3*nullfrac.\nIf nullfrac is big, s1disjoint will be less than 0 quickly when the array\nhas more elements,\nand the selectivity algorithm falls back to the one prior to commit[2]\nwhich had bad estimation for NOT IN as well.\n\nIt seems to me that nullfrac should be subtracted only once. Is it feasible\nthat we have a new variable s1disjoint2\nthat add back nullfrac when we get back the result for each s2 and subtract\nit once at the end of the loop as a 2nd heuristic?\nWe then maybe prefer s1disjoint2 over s1disjoint and then s1?\n\nDonghang Lin\n(ServiceNow)\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CA%2Bmi_8aPEAzBgWZpNTABGM%3DcSq7mRMyPWbMsU8eGmUfH75OTLA%40mail.gmail.com\n[2]\nhttps://github.com/postgres/postgres/commit/66a7e6bae98592d1d98d9ef589753f0e953c5828\n\nHi hackersDiscussion[1] and the relevant commit[2] improved the selectivity calculation for IN/NOT IN.This is the current logic for NOT IN selectivity calculation and it loops over the array elements.else\t\t\t{    s1 = s1 * s2;    if (isInequality)         s1disjoint += s2 - 1.0;\t\t\t}By calculating s2 for each array element, it calls neqsel and returns 1 - eqsel - nullfrac.If I expand the s1disjoint calculation for a NOT IN (2,5,8) clause, It eventually becomes 1 - eqsel(2) - eqsel(5) - eqsel(8) - 3*nullfrac. If nullfrac is big, s1disjoint will be less than 0 quickly when the array has more elements, and the selectivity algorithm falls back to the one prior to commit[2] which had bad estimation for NOT IN as well. It seems to me that nullfrac should be subtracted only once. Is it feasible that we have a new variable s1disjoint2 that add back nullfrac when we get back the result for each s2 and subtract it once at the end of the loop as a 2nd heuristic?We then maybe prefer s1disjoint2 over s1disjoint and then s1?Donghang Lin(ServiceNow)[1] https://www.postgresql.org/message-id/flat/CA%2Bmi_8aPEAzBgWZpNTABGM%3DcSq7mRMyPWbMsU8eGmUfH75OTLA%40mail.gmail.com[2] https://github.com/postgres/postgres/commit/66a7e6bae98592d1d98d9ef589753f0e953c5828", "msg_date": "Fri, 5 Apr 2024 01:20:52 -0700", "msg_from": "Donghang Lin <[email protected]>", "msg_from_op": true, "msg_subject": "Bad estimation for NOT IN clause with big null fraction" } ]
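To see the arithmetic in the message above in isolation, here is a small standalone toy model, not PostgreSQL code; the statistics are made-up example numbers and the per-element "<>" selectivity is modelled as 1 - eqsel - nullfrac exactly as described.

    #include <stdio.h>

    int
    main(void)
    {
        /* assumed, made-up statistics for a column with many NULLs */
        double  eqsel[] = {0.01, 0.01, 0.01};   /* frequency of each NOT IN value */
        double  nullfrac = 0.4;                 /* fraction of NULL rows */
        int     nitems = 3;

        double  s1 = 1.0;
        double  s1disjoint = 1.0;       /* heuristic used today */
        double  s1disjoint2 = 1.0;      /* variant proposed above */

        for (int i = 0; i < nitems; i++)
        {
            /* per-element "<>" selectivity */
            double  s2 = 1.0 - eqsel[i] - nullfrac;

            s1 *= s2;
            s1disjoint += s2 - 1.0;                 /* subtracts nullfrac per element */
            s1disjoint2 += (s2 + nullfrac) - 1.0;   /* adds it back per element... */
        }
        s1disjoint2 -= nullfrac;                    /* ...and subtracts it just once */

        printf("s1 = %g  s1disjoint = %g  s1disjoint2 = %g\n",
               s1, s1disjoint, s1disjoint2);
        return 0;
    }

With these numbers the existing heuristic already comes out negative (1 + 3 * (0.59 - 1) = -0.23), so it gets discarded and the code falls back to the older estimate, while the proposed variant yields 0.57, i.e. 1 - 0.03 - 0.4, charging the null fraction only once for the whole list.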
[ { "msg_contents": "Dear hackers,\n\nGenerating a \".partial\" WAL segment is pretty common nowadays (using pg_receivewal or during standby promotion).\nHowever, we currently don't do anything with it unless the user manually removes that \".partial\" extension.\n\nThe 028_pitr_timelines tests are highlighting that fact: with test data being being in 000000020000000000000003 and 000000010000000000000003.partial, a recovery following the latest timeline (2) will succeed but fail if we follow the current timeline (1).\n\nBy simply trying to fetch the \".partial\" file in XLogFileRead, we can easily recover more data and also cover that (current timeline) recovery case.\n\nSo, this proposed patch makes XLogFileRead try to restore \".partial\" WAL archives and adds a test to 028_pitr_timelines using current recovery_target_timeline.\n\nAs far as I've seen, the current pg_receivewal tests only seem to cover the archives generation but not actually trying to recover using it. I wasn't sure it was interesting to add such tests right now, so I didn't considered it for this patch.\n\nMany thanks in advance for your feedback and thoughts about this,\nKind Regards,\n--\nStefan FERCOT\nData Egret (https://dataegret.com)", "msg_date": "Fri, 05 Apr 2024 09:45:18 +0000", "msg_from": "Stefan Fercot <[email protected]>", "msg_from_op": true, "msg_subject": "Recovery of .partial WAL segments" }, { "msg_contents": "Hi,\n\nI've added a CF entry for this patch:\nhttps://commitfest.postgresql.org/49/5148/\n\nNot sure why CFbot CI fails on macOS/Windows while it works with the Github\nCI on my fork (\nhttps://cirrus-ci.com/github/pgstef/postgres/partial-walseg-recovery).\n\nMany thanks in advance for your feedback and thoughts about this patch,\nKind Regards,\n--\nStefan FERCOT\nData Egret (https://dataegret.com)\n\nOn Thu, Aug 1, 2024 at 10:23 PM Stefan Fercot <[email protected]>\nwrote:\n\n> Dear hackers,\n>\n> Generating a \".partial\" WAL segment is pretty common nowadays (using\n> pg_receivewal or during standby promotion).\n> However, we currently don't do anything with it unless the user manually\n> removes that \".partial\" extension.\n>\n> The 028_pitr_timelines tests are highlighting that fact: with test data\n> being being in 000000020000000000000003 and\n> 000000010000000000000003.partial, a recovery following the latest timeline\n> (2) will succeed but fail if we follow the current timeline (1).\n>\n> By simply trying to fetch the \".partial\" file in XLogFileRead, we can\n> easily recover more data and also cover that (current timeline) recovery\n> case.\n>\n> So, this proposed patch makes XLogFileRead try to restore \".partial\" WAL\n> archives and adds a test to 028_pitr_timelines using current\n> recovery_target_timeline.\n>\n> As far as I've seen, the current pg_receivewal tests only seem to cover\n> the archives generation but not actually trying to recover using it. 
I\n> wasn't sure it was interesting to add such tests right now, so I didn't\n> considered it for this patch.\n>\n> Many thanks in advance for your feedback and thoughts about this,\n> Kind Regards,\n> --\n> Stefan FERCOT\n> Data Egret (https://dataegret.com)\n\nHi,I've added a CF entry for this patch: https://commitfest.postgresql.org/49/5148/Not sure why CFbot CI fails on macOS/Windows while it works with the Github CI on my fork (https://cirrus-ci.com/github/pgstef/postgres/partial-walseg-recovery).Many thanks in advance for your feedback and thoughts about this patch,\nKind Regards,\n--\nStefan FERCOT\nData Egret (https://dataegret.com)On Thu, Aug 1, 2024 at 10:23 PM Stefan Fercot <[email protected]> wrote:Dear hackers,\n\nGenerating a \".partial\" WAL segment is pretty common nowadays (using pg_receivewal or during standby promotion).\nHowever, we currently don't do anything with it unless the user manually removes that \".partial\" extension.\n\nThe 028_pitr_timelines tests are highlighting that fact: with test data being being in 000000020000000000000003 and 000000010000000000000003.partial, a recovery following the latest timeline (2) will succeed but fail if we follow the current timeline (1).\n\nBy simply trying to fetch the \".partial\" file in XLogFileRead, we can easily recover more data and also cover that (current timeline) recovery case.\n\nSo, this proposed patch makes XLogFileRead try to restore \".partial\" WAL archives and adds a test to 028_pitr_timelines using current recovery_target_timeline.\n\nAs far as I've seen, the current pg_receivewal tests only seem to cover the archives generation but not actually trying to recover using it. I wasn't sure it was interesting to add such tests right now, so I didn't considered it for this patch.\n\nMany thanks in advance for your feedback and thoughts about this,\nKind Regards,\n--\nStefan FERCOT\nData Egret (https://dataegret.com)", "msg_date": "Fri, 2 Aug 2024 08:47:02 +0200", "msg_from": "Stefan Fercot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recovery of .partial WAL segments" }, { "msg_contents": "> On Fri, Aug 02, 2024 at 08:47:02AM GMT, Stefan Fercot wrote:\n>\n> Not sure why CFbot CI fails on macOS/Windows while it works with the Github\n> CI on my fork (\n> https://cirrus-ci.com/github/pgstef/postgres/partial-walseg-recovery).\n\nI guess it's because the test has to wait a bit after the node has been\nstarted until the log lines will appear. One can see it in the\nnode_pitr3 logs, first it was hit by\n\n SELECT pg_is_in_recovery() = 'f'\n\nand only some moments later produced\n\n restored log file \"000000010000000000000003.partial\" from archive\n\nwhere the test has those operations in reversed order. Seems like the\nretry loop from 019_replslot_limit might help.\n\n\n", "msg_date": "Fri, 9 Aug 2024 16:29:48 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recovery of .partial WAL segments" }, { "msg_contents": "Hi,\n\nOn Fri, Aug 9, 2024 at 4:29 PM Dmitry Dolgov <[email protected]> wrote:\n\n> Seems like the retry loop from 019_replslot_limit might help.\n>\n\nThanks for the tip. Attached v2 adds the retry loop in the test which would\nhopefully fix the cfbot.\n\nKind Regards,\nStefan", "msg_date": "Fri, 9 Aug 2024 20:25:34 +0200", "msg_from": "Stefan Fercot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recovery of .partial WAL segments" } ]
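As a generic illustration of the fallback the patch above describes, the following standalone sketch shows the shape of the logic: if the requested segment cannot be restored from the archive, retry the same name with a ".partial" suffix. It is deliberately not the real XLogFileRead code; restore_from_archive() is a stand-in for the actual restore machinery, which also deals with timelines, expected sizes and cleanup.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /*
     * Stand-in for the archive restore call; here we simply pretend that
     * only the ".partial" copy of the segment exists in the archive.
     */
    static bool
    restore_from_archive(const char *fname)
    {
        return strstr(fname, ".partial") != NULL;
    }

    /* Try the plain segment name first, then fall back to name.partial. */
    static bool
    restore_wal_segment(const char *segname)
    {
        char    partial[128];

        if (restore_from_archive(segname))
            return true;

        snprintf(partial, sizeof(partial), "%s.partial", segname);
        return restore_from_archive(partial);
    }

    int
    main(void)
    {
        const char *seg = "000000010000000000000003";

        printf("%s restored: %s\n", seg,
               restore_wal_segment(seg) ? "yes" : "no");
        return 0;
    }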
[ { "msg_contents": "CopyReadLineText quoth:\n\n * The objective of this loop is to transfer the entire next input line\n * into line_buf. Hence, we only care for detecting newlines (\\r and/or\n * \\n) and the end-of-copy marker (\\.).\n *\n * In CSV mode, \\r and \\n inside a quoted field are just part of the data\n * value and are put in line_buf. We keep just enough state to know if we\n * are currently in a quoted field or not.\n *\n * These four characters, and the CSV escape and quote characters, are\n * assumed the same in frontend and backend encodings.\n\nWhen that last bit was written, it was because we were detecting\nnewlines and end-of-copy markers before performing encoding\nconversion. That's not true any more: by the time CopyReadLineText\nsees the data, it was already converted by CopyConvertBuf. So\nI don't believe there actually is any such dependency anymore,\nand we should simply remove that last sentence. Any objections?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Apr 2024 16:07:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Obsolete comment in CopyReadLineText()" } ]
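With the last sentence dropped as proposed above, the header comment in CopyReadLineText would read roughly as follows; the wording is taken from the quote in this thread, so only the final sentence about frontend/backend encodings is removed.

    /*
     * The objective of this loop is to transfer the entire next input line
     * into line_buf. Hence, we only care for detecting newlines (\r and/or
     * \n) and the end-of-copy marker (\.).
     *
     * In CSV mode, \r and \n inside a quoted field are just part of the data
     * value and are put in line_buf. We keep just enough state to know if we
     * are currently in a quoted field or not.
     */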
[ { "msg_contents": "\nIn hashfn_unstable.h, fasthash32() is declared as:\n\n /* like fasthash64, but returns a 32-bit hashcode */\n static inline uint64\n fasthash32(const char *k, size_t len, uint64 seed)\n\nIs the return type of uint64 a typo?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 05 Apr 2024 13:47:29 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "fasthash32() returning uint64?" }, { "msg_contents": "On Sat, Apr 6, 2024 at 3:47 AM Jeff Davis <[email protected]> wrote:\n> In hashfn_unstable.h, fasthash32() is declared as:\n>\n> /* like fasthash64, but returns a 32-bit hashcode */\n> static inline uint64\n> fasthash32(const char *k, size_t len, uint64 seed)\n>\n> Is the return type of uint64 a typo?\n\nYes it is, will fix, thanks!\n\n\n", "msg_date": "Sat, 6 Apr 2024 08:08:19 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fasthash32() returning uint64?" } ]
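For clarity, the fix discussed here is simply the return type of the declaration quoted above. A sketch of the corrected prototype follows; the body shown, folding the 64-bit result down with fasthash_reduce32, is an assumption about how the header implements it rather than a quote of the committed change.

    /* like fasthash64, but returns a 32-bit hashcode */
    static inline uint32
    fasthash32(const char *k, size_t len, uint64 seed)
    {
        /* assumed implementation: reduce the 64-bit hash to 32 bits */
        return fasthash_reduce32(fasthash64(k, len, seed));
    }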
[ { "msg_contents": "I wondered why buildfarm member copperhead has started to fail\nxversion-upgrade-HEAD-HEAD tests. I soon reproduced the problem here:\n\npg_restore: creating ACL \"regress_pg_dump_schema.TYPE \"test_type\"\"\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 4355; 0 0 ACL TYPE \"test_type\" buildfarm\npg_restore: error: could not execute query: ERROR: role \"74603\" does not exist\nCommand was: SELECT pg_catalog.binary_upgrade_set_record_init_privs(true);\nGRANT ALL ON TYPE \"regress_pg_dump_schema\".\"test_type\" TO \"74603\";\nSELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\nREVOKE ALL ON TYPE \"regress_pg_dump_schema\".\"test_type\" FROM \"74603\";\n\n(So now I'm wondering why *only* copperhead has shown this so far.\nAre our other cross-version-upgrade testing animals AWOL?)\n\nI believe this is a longstanding problem that was exposed by accident\nby commit 936e3fa37. If you run \"make installcheck\" in HEAD's\nsrc/test/modules/test_pg_dump, and then poke around in the leftover\ncontrib_regression database, you can find dangling grants in\npg_init_privs:\n\ncontrib_regression=# table pg_init_privs;\n objoid | classoid | objsubid | privtype | initprivs \n \n--------+----------+----------+----------+--------------------------------------\n---------------------------\n ...\nes}\n 43134 | 1259 | 0 | e | {postgres=rwU/postgres,43125=U/postgr\nes}\n 43128 | 1259 | 0 | e | {postgres=arwdDxtm/postgres,43125=r/p\nostgres}\n ...\n\nThe fact that the DROP ROLE added by 936e3fa37 succeeded indicates\nthat these role references weren't captured in pg_shdepend.\nI imagine that we also lack code that would allow DROP OWNED BY to\nfollow up on such entries if they existed, but I've not checked that\nfor sure. In any case, there's probably a nontrivial amount of code\nto be written to make this work.\n\nGiven the lack of field complaints, I suspect that extension scripts\nsimply don't grant privileges to random roles that aren't the\nextension's owner. So I wonder a little bit if this is even worth\nfixing, as opposed to blocking off somehow. But probably we should\nfirst try to fix it.\n\nI doubt this is something we'll have fixed by Monday, so I will\ngo add an open item for it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Apr 2024 19:10:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "On Fri, Apr 05, 2024 at 07:10:59PM -0400, Tom Lane wrote:\n> I wondered why buildfarm member copperhead has started to fail\n> xversion-upgrade-HEAD-HEAD tests. I soon reproduced the problem here:\n> \n> pg_restore: creating ACL \"regress_pg_dump_schema.TYPE \"test_type\"\"\n> pg_restore: while PROCESSING TOC:\n> pg_restore: from TOC entry 4355; 0 0 ACL TYPE \"test_type\" buildfarm\n> pg_restore: error: could not execute query: ERROR: role \"74603\" does not exist\n> Command was: SELECT pg_catalog.binary_upgrade_set_record_init_privs(true);\n> GRANT ALL ON TYPE \"regress_pg_dump_schema\".\"test_type\" TO \"74603\";\n> SELECT pg_catalog.binary_upgrade_set_record_init_privs(false);\n> REVOKE ALL ON TYPE \"regress_pg_dump_schema\".\"test_type\" FROM \"74603\";\n> \n> (So now I'm wondering why *only* copperhead has shown this so far.\n> Are our other cross-version-upgrade testing animals AWOL?)\n> \n> I believe this is a longstanding problem that was exposed by accident\n> by commit 936e3fa37. 
If you run \"make installcheck\" in HEAD's\n> src/test/modules/test_pg_dump, and then poke around in the leftover\n> contrib_regression database, you can find dangling grants in\n> pg_init_privs:\n> \n> contrib_regression=# table pg_init_privs;\n> objoid | classoid | objsubid | privtype | initprivs \n> \n> --------+----------+----------+----------+--------------------------------------\n> ---------------------------\n> ...\n> es}\n> 43134 | 1259 | 0 | e | {postgres=rwU/postgres,43125=U/postgr\n> es}\n> 43128 | 1259 | 0 | e | {postgres=arwdDxtm/postgres,43125=r/p\n> ostgres}\n> ...\n> \n> The fact that the DROP ROLE added by 936e3fa37 succeeded indicates\n> that these role references weren't captured in pg_shdepend.\n> I imagine that we also lack code that would allow DROP OWNED BY to\n> follow up on such entries if they existed, but I've not checked that\n> for sure. In any case, there's probably a nontrivial amount of code\n> to be written to make this work.\n> \n> Given the lack of field complaints, I suspect that extension scripts\n> simply don't grant privileges to random roles that aren't the\n> extension's owner. So I wonder a little bit if this is even worth\n> fixing, as opposed to blocking off somehow. But probably we should\n> first try to fix it.\n\nThis sounds closely-related to the following thread:\nhttps://www.postgresql.org/message-id/flat/1573808483712.96817%40Optiver.com\n\n\n", "msg_date": "Fri, 5 Apr 2024 18:46:25 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Noah Misch <[email protected]> writes:\n> On Fri, Apr 05, 2024 at 07:10:59PM -0400, Tom Lane wrote:\n>> The fact that the DROP ROLE added by 936e3fa37 succeeded indicates\n>> that these role references weren't captured in pg_shdepend.\n>> I imagine that we also lack code that would allow DROP OWNED BY to\n>> follow up on such entries if they existed, but I've not checked that\n>> for sure. In any case, there's probably a nontrivial amount of code\n>> to be written to make this work.\n>> \n>> Given the lack of field complaints, I suspect that extension scripts\n>> simply don't grant privileges to random roles that aren't the\n>> extension's owner. So I wonder a little bit if this is even worth\n>> fixing, as opposed to blocking off somehow. But probably we should\n>> first try to fix it.\n\n> This sounds closely-related to the following thread:\n> https://www.postgresql.org/message-id/flat/1573808483712.96817%40Optiver.com\n\nOh, interesting, I'd forgotten that thread completely.\n\nSo Stephen was pushing back against dealing with the case because\nhe thought that the SQL commands issued in that example should not\nhave produced pg_init_privs entries in the first place. Which nobody\nelse wanted to opine on, so the thread stalled. However, in the case\nof the test_pg_dump extension, the test_pg_dump--1.0.sql script\nabsolutely did grant those privileges so it's very hard for me to\nthink that they shouldn't be listed in pg_init_privs. 
Hence, I think\nwe've accidentally stumbled across a case where we do need all that\nmechanism --- unless somebody wants to argue that what\ntest_pg_dump--1.0.sql is doing should be disallowed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Apr 2024 22:10:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "> On 6 Apr 2024, at 01:10, Tom Lane <[email protected]> wrote:\n\n> (So now I'm wondering why *only* copperhead has shown this so far.\n> Are our other cross-version-upgrade testing animals AWOL?)\n\nClicking around searching for Xversion animals I didn't spot any, but without\naccess to the database it's nontrivial to know which animal does what.\n\n> I doubt this is something we'll have fixed by Monday, so I will\n> go add an open item for it.\n\n+1. Having opened the can of worms I'll have a look at it next week.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sat, 6 Apr 2024 09:22:02 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n>> On 6 Apr 2024, at 01:10, Tom Lane <[email protected]> wrote:\n>> (So now I'm wondering why *only* copperhead has shown this so far.\n>> Are our other cross-version-upgrade testing animals AWOL?)\n\n> Clicking around searching for Xversion animals I didn't spot any, but without\n> access to the database it's nontrivial to know which animal does what.\n\nI believe I see why this is (or isn't) happening. The animals\ncurrently running xversion tests are copperhead, crake, drongo,\nand fairywren. copperhead is using the makefiles while the others\nare using meson. And I find this in\nsrc/test/modules/test_pg_dump/meson.build (from 3f0e786cc):\n\n # doesn't delete its user\n 'runningcheck': false,\n\nSo the meson animals are not running the test that sets up the\nproblematic data.\n\nI think we should remove the above, since (a) the reason to have\nit is gone, and (b) it seems really bad that the set of tests\nrun by meson is different from that run by the makefiles.\n\nHowever, once we do that, those other three animals will presumably go\nred, greatly complicating detection of any Windows-specific problems.\nSo I'm inclined to not do it till just before we intend to commit\na fix for the underlying problem. (Enough before that we can confirm\nthat they do go red.)\n\nSpeaking of which ...\n\n>> I doubt this is something we'll have fixed by Monday, so I will\n>> go add an open item for it.\n\n> +1. Having opened the can of worms I'll have a look at it next week.\n\n... were you going to look at it? I can take a whack if it's\ntoo far down your priority list.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Apr 2024 17:08:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "> On 21 Apr 2024, at 23:08, Tom Lane <[email protected]> wrote:\n\n> ... were you going to look at it? 
I can take a whack if it's too far down your priority list.\n\nYeah, I’m working on a patchset right now.\n\n ./daniel\n\n", "msg_date": "Mon, 22 Apr 2024 23:09:42 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "> On 21 Apr 2024, at 23:08, Tom Lane <[email protected]> wrote:\n> \n> Daniel Gustafsson <[email protected]> writes:\n>>> On 6 Apr 2024, at 01:10, Tom Lane <[email protected]> wrote:\n>>> (So now I'm wondering why *only* copperhead has shown this so far.\n>>> Are our other cross-version-upgrade testing animals AWOL?)\n> \n>> Clicking around searching for Xversion animals I didn't spot any, but without\n>> access to the database it's nontrivial to know which animal does what.\n> \n> I believe I see why this is (or isn't) happening. The animals\n> currently running xversion tests are copperhead, crake, drongo,\n> and fairywren. copperhead is using the makefiles while the others\n> are using meson. And I find this in\n> src/test/modules/test_pg_dump/meson.build (from 3f0e786cc):\n> \n> # doesn't delete its user\n> 'runningcheck': false,\n> \n> So the meson animals are not running the test that sets up the\n> problematic data.\n\nugh =/\n\n> I think we should remove the above, since (a) the reason to have\n> it is gone, and (b) it seems really bad that the set of tests\n> run by meson is different from that run by the makefiles.\n\nAgreed.\n\n> However, once we do that, those other three animals will presumably go\n> red, greatly complicating detection of any Windows-specific problems.\n> So I'm inclined to not do it till just before we intend to commit\n> a fix for the underlying problem. (Enough before that we can confirm\n> that they do go red.)\n\nAgreed, we definitely want that but compromising the ability to find Windows\nissues at this point in the cycle seems bad.\n\n> ... were you going to look at it? I can take a whack if it's\n> too far down your priority list.\n\nI took a look at this, reading code and the linked thread. My gut feeling is\nthat Stephen is right in that the underlying bug is these privileges ending up\nin pg_init_privs to begin with. That being said, I wasn't able to fix that in\na way that doesn't seem like a terrible hack. The attached POC hack fixes it\nfor me but I'm not sure how to fix it properly. Your wisdom would be much appreciated.\n\nClusters which already has such entries aren't helped by a fix for this though,\nfixing that would either require pg_dump to skip them, or pg_upgrade to have a\ncheck along with instructions for fixing the issue. Not sure what's the best\nstrategy here, the lack of complaints could indicate this isn't terribly common\nso spending cycles on it for every pg_dump might be excessive compared to a\npg_upgrade check?\n\n--\nDaniel Gustafsson", "msg_date": "Fri, 26 Apr 2024 15:41:41 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 21 Apr 2024, at 23:08, Tom Lane <[email protected]> wrote:\n>> So the meson animals are not running the test that sets up the\n>> problematic data.\n\n> I took a look at this, reading code and the linked thread. My gut feeling is\n> that Stephen is right in that the underlying bug is these privileges ending up\n> in pg_init_privs to begin with. 
That being said, I wasn't able to fix that in\n> a way that doesn't seem like a terrible hack.\n\nHmm, can't we put the duplicate logic inside recordExtensionInitPriv?\nEven if these calls need a different result from others, adding a flag\nparameter seems superior to having N copies of the logic.\n\nA bigger problem though is that I think you are addressing the\noriginal complaint from the older thread, which while it's a fine\nthing to fix seems orthogonal to the failure we're seeing in the\nbuildfarm. The buildfarm's problem is not that we're recording\nincorrect pg_init_privs entries, it's that when we do create such\nentries we're failing to show their dependency on the grantee role\nin pg_shdepend. We've missed spotting that so far because it's\nso seldom that pg_init_privs entries reference any but built-in\nroles (or at least roles that'd likely outlive the extension).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 26 Apr 2024 17:03:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "I wrote:\n> A bigger problem though is that I think you are addressing the\n> original complaint from the older thread, which while it's a fine\n> thing to fix seems orthogonal to the failure we're seeing in the\n> buildfarm. The buildfarm's problem is not that we're recording\n> incorrect pg_init_privs entries, it's that when we do create such\n> entries we're failing to show their dependency on the grantee role\n> in pg_shdepend. We've missed spotting that so far because it's\n> so seldom that pg_init_privs entries reference any but built-in\n> roles (or at least roles that'd likely outlive the extension).\n\nHere's a draft patch that attacks that. It seems to fix the\nproblem with test_pg_dump: no dangling pg_init_privs grants\nare left behind.\n\nA lot of the changes here are just involved with needing to pass the\nobject's owner OID to recordExtensionInitPriv so that it can be passed\nto updateAclDependencies. One thing I'm a bit worried about is that\nsome of the new code assumes that all object types that are of\ninterest here will have catcaches on OID, so that it's possible to\nfetch the owner OID for a generic object-with-privileges using the\ncatcache and objectaddress.c's tables of object properties. That\nassumption seems to exist already, eg ExecGrant_common also assumes\nit, but it's not obvious that it must be so.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 27 Apr 2024 18:45:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "I wrote:\n> Here's a draft patch that attacks that. It seems to fix the\n> problem with test_pg_dump: no dangling pg_init_privs grants\n> are left behind.\n\nHere's a v2 that attempts to add some queries to test_pg_dump.sql\nto provide visual verification that pg_shdepend and pg_init_privs\nare updated correctly during DROP OWNED BY. It's a little bit\nnasty to look at the ACL column of pg_init_privs, because that text\ninvolves the bootstrap superuser's name which is site-dependent.\nWhat I did to try to make the test stable is\n\n replace(initprivs::text, current_user, 'postgres') AS initprivs\n\nThis is of course not bulletproof: with a sufficiently weird\nbootstrap superuser name, we could get false matches to parts\nof \"regress_dump_test_role\" or to privilege strings. 
That\nseems unlikely enough to live with, but I wonder if anybody has\na better idea.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 28 Apr 2024 14:52:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "> On 28 Apr 2024, at 20:52, Tom Lane <[email protected]> wrote:\n> \n> I wrote:\n>> Here's a draft patch that attacks that. It seems to fix the\n>> problem with test_pg_dump: no dangling pg_init_privs grants\n>> are left behind.\n\nReading this I can't find any sharp edges, and I prefer your changes to\nrecordExtensionInitPriv over the version I had half-baked by the time you had\nthis finished. Trying to break it with the testcases I had devised also\nfailed, so +1.\n\n> Here's a v2 that attempts to add some queries to test_pg_dump.sql\n> to provide visual verification that pg_shdepend and pg_init_privs\n> are updated correctly during DROP OWNED BY. It's a little bit\n> nasty to look at the ACL column of pg_init_privs, because that text\n> involves the bootstrap superuser's name which is site-dependent.\n> What I did to try to make the test stable is\n> \n> replace(initprivs::text, current_user, 'postgres') AS initprivs\n\nMaybe that part warrants a small comment in the testfile to keep it from\nsending future readers into rabbitholes?\n\n> This is of course not bulletproof: with a sufficiently weird\n> bootstrap superuser name, we could get false matches to parts\n> of \"regress_dump_test_role\" or to privilege strings. That\n> seems unlikely enough to live with, but I wonder if anybody has\n> a better idea.\n\nI think that will be bulletproof enough to keep it working in the buildfarm and\namong 99% of hackers.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 29 Apr 2024 20:29:39 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n>> On 28 Apr 2024, at 20:52, Tom Lane <[email protected]> wrote:\n>> ... It's a little bit\n>> nasty to look at the ACL column of pg_init_privs, because that text\n>> involves the bootstrap superuser's name which is site-dependent.\n>> What I did to try to make the test stable is\n>> replace(initprivs::text, current_user, 'postgres') AS initprivs\n\n> Maybe that part warrants a small comment in the testfile to keep it from\n> sending future readers into rabbitholes?\n\nAgreed.\n\n>> This is of course not bulletproof: with a sufficiently weird\n>> bootstrap superuser name, we could get false matches to parts\n>> of \"regress_dump_test_role\" or to privilege strings. That\n>> seems unlikely enough to live with, but I wonder if anybody has\n>> a better idea.\n\n> I think that will be bulletproof enough to keep it working in the buildfarm and\n> among 99% of hackers.\n\nIt occurred to me to use \"aclexplode\" to expand the initprivs, and\nthen we can substitute names with simple equality tests. 
The test\nquery is a bit more complicated, but I feel better about it.\n\nv3 attached also has a bit more work on code comments.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 29 Apr 2024 15:15:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "> On 29 Apr 2024, at 21:15, Tom Lane <[email protected]> wrote:\n\n> It occurred to me to use \"aclexplode\" to expand the initprivs, and\n> then we can substitute names with simple equality tests. The test\n> query is a bit more complicated, but I feel better about it.\n\nNice, I didn't even remember that function existed. I agree that it's an\nimprovement even at the increased query complexity.\n\n> v3 attached also has a bit more work on code comments.\n\nLGTM.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 29 Apr 2024 22:52:44 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 29 Apr 2024, at 21:15, Tom Lane <[email protected]> wrote:\n>> v3 attached also has a bit more work on code comments.\n\n> LGTM.\n\nPushed, thanks for reviewing!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Apr 2024 19:26:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "I wrote:\n> Pushed, thanks for reviewing!\n\nArgh, I forgot I'd meant to push b0c5b215d first not second.\nOh well, it was only neatnik-ism that made me want to see\nthose other animals fail --- and a lot of the buildfarm is\nred right now for $other_reasons anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 29 Apr 2024 19:50:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "On Monday, April 29, 2024, Tom Lane <[email protected]> wrote:\n\n> Daniel Gustafsson <[email protected]> writes:\n> >> On 28 Apr 2024, at 20:52, Tom Lane <[email protected]> wrote:\n>\n>\n> >> This is of course not bulletproof: with a sufficiently weird\n> >> bootstrap superuser name, we could get false matches to parts\n> >> of \"regress_dump_test_role\" or to privilege strings. That\n> >> seems unlikely enough to live with, but I wonder if anybody has\n> >> a better idea.\n>\n> > I think that will be bulletproof enough to keep it working in the\n> buildfarm and\n> > among 99% of hackers.\n>\n> It occurred to me to use \"aclexplode\" to expand the initprivs, and\n> then we can substitute names with simple equality tests. The test\n> query is a bit more complicated, but I feel better about it.\n>\n\nMy solution to this was to rely on the fact that the bootstrap superuser is\nassigned OID 10 regardless of its name.\n\nDavid J.\n\nOn Monday, April 29, 2024, Tom Lane <[email protected]> wrote:Daniel Gustafsson <[email protected]> writes:\n>> On 28 Apr 2024, at 20:52, Tom Lane <[email protected]> wrote:\n\n>> This is of course not bulletproof: with a sufficiently weird\n>> bootstrap superuser name, we could get false matches to parts\n>> of \"regress_dump_test_role\" or to privilege strings.  
That\n>> seems unlikely enough to live with, but I wonder if anybody has\n>> a better idea.\n\n> I think that will be bulletproof enough to keep it working in the buildfarm and\n> among 99% of hackers.\n\nIt occurred to me to use \"aclexplode\" to expand the initprivs, and\nthen we can substitute names with simple equality tests.  The test\nquery is a bit more complicated, but I feel better about it.\nMy solution to this was to rely on the fact that the bootstrap superuser is assigned OID 10 regardless of its name.David J.", "msg_date": "Mon, 29 Apr 2024 21:00:20 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> My solution to this was to rely on the fact that the bootstrap superuser is\n> assigned OID 10 regardless of its name.\n\nYeah, I wrote it that way to start with too, but reconsidered\nbecause\n\n(1) I don't like hard-coding numeric OIDs. We can avoid that in C\ncode but it's harder to do in SQL.\n\n(2) It's not clear to me that this test couldn't be run by a\nnon-bootstrap superuser. I think \"current_user\" is actually\nthe correct thing for the role executing the test.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Apr 2024 00:10:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "On Monday, April 29, 2024, Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > My solution to this was to rely on the fact that the bootstrap superuser\n> is\n> > assigned OID 10 regardless of its name.\n>\n> Yeah, I wrote it that way to start with too, but reconsidered\n> because\n>\n> (1) I don't like hard-coding numeric OIDs. We can avoid that in C\n> code but it's harder to do in SQL.\n\n\nIf the tests don’t involve, e.g., the predefined role pg_monitor and its\ngrantor of the memberships in the other predefined roles, this indeed can\nbe avoided. So I think my test still needs to check for 10 even if some\nother superuser is allowed to produce the test output since a key output in\nmy case was the bootstrap superuser and the initdb roles.\n\n\n> (2) It's not clear to me that this test couldn't be run by a\n> non-bootstrap superuser. I think \"current_user\" is actually\n> the correct thing for the role executing the test.\n>\n\nAgreed, testing against current_role is correct if the things being queried\nwere created while executing the test. I would need to do this as well to\nremove the current requirement that my tests be run by the bootstrap\nsuperuser.\n\nDavid J.\n\nOn Monday, April 29, 2024, Tom Lane <[email protected]> wrote:\"David G. Johnston\" <[email protected]> writes:\n> My solution to this was to rely on the fact that the bootstrap superuser is\n> assigned OID 10 regardless of its name.\n\nYeah, I wrote it that way to start with too, but reconsidered\nbecause\n\n(1) I don't like hard-coding numeric OIDs.  We can avoid that in C\ncode but it's harder to do in SQL.If the tests don’t involve, e.g., the predefined role pg_monitor and its grantor of the memberships in the other predefined roles, this indeed can be avoided.  
So I think my test still needs to check for 10 even if some other superuser is allowed to produce the test output since a key output in my case was the bootstrap superuser and the initdb roles.\n\n(2) It's not clear to me that this test couldn't be run by a\nnon-bootstrap superuser.  I think \"current_user\" is actually\nthe correct thing for the role executing the test.\nAgreed, testing against current_role is correct if the things being queried were created while executing the test.  I would need to do this as well to remove the current requirement that my tests be run by the bootstrap superuser.David J.", "msg_date": "Mon, 29 Apr 2024 21:40:48 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "While the 'DROP OWNED BY fails to clean out pg_init_privs grants'\nissue is now fixed,we have a similar issue with REASSIGN OWNED BY that\nis still there:\n\nTested on fresh git checkout om May 20th\n\ntest=# create user privtestuser superuser;\nCREATE ROLE\ntest=# set role privtestuser;\nSET\ntest=# create extension pg_stat_statements ;\nCREATE EXTENSION\ntest=# select * from pg_init_privs where privtype ='e';\n objoid | classoid | objsubid | privtype |\ninitprivs\n--------+----------+----------+----------+------------------------------------------------------\n 16405 | 1259 | 0 | e |\n{privtestuser=arwdDxtm/privtestuser,=r/privtestuser}\n 16422 | 1259 | 0 | e |\n{privtestuser=arwdDxtm/privtestuser,=r/privtestuser}\n 16427 | 1255 | 0 | e | {privtestuser=X/privtestuser}\n(3 rows)\n\ntest=# reset role;\nRESET\ntest=# reassign owned by privtestuser to hannuk;\nREASSIGN OWNED\ntest=# select * from pg_init_privs where privtype ='e';\n objoid | classoid | objsubid | privtype |\ninitprivs\n--------+----------+----------+----------+------------------------------------------------------\n 16405 | 1259 | 0 | e |\n{privtestuser=arwdDxtm/privtestuser,=r/privtestuser}\n 16422 | 1259 | 0 | e |\n{privtestuser=arwdDxtm/privtestuser,=r/privtestuser}\n 16427 | 1255 | 0 | e | {privtestuser=X/privtestuser}\n(3 rows)\n\ntest=# drop user privtestuser ;\nDROP ROLE\ntest=# select * from pg_init_privs where privtype ='e';\n objoid | classoid | objsubid | privtype | initprivs\n--------+----------+----------+----------+---------------------------------\n 16405 | 1259 | 0 | e | {16390=arwdDxtm/16390,=r/16390}\n 16422 | 1259 | 0 | e | {16390=arwdDxtm/16390,=r/16390}\n 16427 | 1255 | 0 | e | {16390=X/16390}\n(3 rows)\n\n\nThis will cause pg_dump to produce something that cant be loaded back\ninto the database:\n\nCREATE EXTENSION IF NOT EXISTS pg_stat_statements WITH SCHEMA public;\n...\nREVOKE ALL ON TABLE public.pg_stat_statements FROM \"16390\";\n...\n\nAnd this will, among other things, break pg_upgrade.\n\n\n-----\nHannu\n\n\n\nOn Tue, Apr 30, 2024 at 6:40 AM David G. Johnston\n<[email protected]> wrote:\n>\n> On Monday, April 29, 2024, Tom Lane <[email protected]> wrote:\n>>\n>> \"David G. Johnston\" <[email protected]> writes:\n>> > My solution to this was to rely on the fact that the bootstrap superuser is\n>> > assigned OID 10 regardless of its name.\n>>\n>> Yeah, I wrote it that way to start with too, but reconsidered\n>> because\n>>\n>> (1) I don't like hard-coding numeric OIDs. 
We can avoid that in C\n>> code but it's harder to do in SQL.\n>\n>\n> If the tests don’t involve, e.g., the predefined role pg_monitor and its grantor of the memberships in the other predefined roles, this indeed can be avoided. So I think my test still needs to check for 10 even if some other superuser is allowed to produce the test output since a key output in my case was the bootstrap superuser and the initdb roles.\n>\n>>\n>> (2) It's not clear to me that this test couldn't be run by a\n>> non-bootstrap superuser. I think \"current_user\" is actually\n>> the correct thing for the role executing the test.\n>\n>\n> Agreed, testing against current_role is correct if the things being queried were created while executing the test. I would need to do this as well to remove the current requirement that my tests be run by the bootstrap superuser.\n>\n> David J.\n>\n\n\n", "msg_date": "Fri, 24 May 2024 00:08:11 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> While the 'DROP OWNED BY fails to clean out pg_init_privs grants'\n> issue is now fixed,we have a similar issue with REASSIGN OWNED BY that\n> is still there:\n\nUgh, how embarrassing. I'll take a look tomorrow, if no one\nbeats me to it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 May 2024 19:01:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "> On 24 May 2024, at 01:01, Tom Lane <[email protected]> wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n>> While the 'DROP OWNED BY fails to clean out pg_init_privs grants'\n>> issue is now fixed,we have a similar issue with REASSIGN OWNED BY that\n>> is still there:\n> \n> Ugh, how embarrassing. I'll take a look tomorrow, if no one\n> beats me to it.\n\nI had a look, but I didn't beat you to a fix since it's not immediately clear\nto me how this should work for REASSING OWNED (DROP OWNED seems a simpler\ncase). Should REASSIGN OWNED alter the rows in pg_shdepend matching init privs\nfrom SHARED_DEPENDENCY_OWNER to SHARED_DEPENDENCY_INITACL, so that these can be\nmopped up with a later DROP OWNED? Trying this in a POC patch it fails with\nRemoveRoleFromInitPriv not removing the rows, shortcircuiting that for a test\nseems to make it work but is it the right approach?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 24 May 2024 13:27:18 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> I had a look, but I didn't beat you to a fix since it's not immediately clear\n> to me how this should work for REASSING OWNED (DROP OWNED seems a simpler\n> case). Should REASSIGN OWNED alter the rows in pg_shdepend matching init privs\n> from SHARED_DEPENDENCY_OWNER to SHARED_DEPENDENCY_INITACL, so that these can be\n> mopped up with a later DROP OWNED? 
Trying this in a POC patch it fails with\n> RemoveRoleFromInitPriv not removing the rows, shortcircuiting that for a test\n> seems to make it work but is it the right approach?\n\nI've tentatively concluded that I shouldn't have modeled\nSHARED_DEPENDENCY_INITACL so closely on SHARED_DEPENDENCY_ACL,\nin particular the decision that we don't need such an entry if\nthere's also SHARED_DEPENDENCY_OWNER. I think one reason we\ncan get away with omitting a SHARED_DEPENDENCY_ACL entry for the\nowner is that the object's normal ACL is part of its primary\ncatalog row, so it goes away automatically if the object is\ndropped. But obviously that's not true for a pg_init_privs\nentry. I can see two routes to a solution:\n\n1. Create SHARED_DEPENDENCY_INITACL, if applicable, whether the\nrole is the object's owner or not. Then, clearing out the\npg_shdepend entry cues us to go delete the pg_init_privs entry.\n\n2. Just always search pg_init_privs for relevant entries\nwhen dropping an object.\n\nI don't especially like #2 on performance grounds, but it has\na lot fewer moving parts than #1. In particular, there's some\nhandwaving in changeDependencyOnOwner() about why we should\ndrop SHARED_DEPENDENCY_ACL when changing owner, and I've not\nwrapped my head around how those concerns map to INITACL\nif we treat it in this different way.\n\nAnother point: shdepReassignOwned explicitly does not touch grants\nor default ACLs. It feels like the same should be true of\npg_init_privs entries, or at least if not, why not? In that case\nthere's nothing to be done in shdepReassignOwned (although maybe its\ncomments should be adjusted to mention this explicitly). The bug is\njust that DROP OWNED isn't getting rid of the entries because there's\nno INITACL entry to cue it to do so.\n\nAnother thing I'm wondering about right now is privileges on global\nobjects (roles, databases, tablespaces). The fine manual says\n\"Although an extension script is not prohibited from creating such\nobjects, if it does so they will not be tracked as part of the\nextension\". Presumably, that also means that no pg_init_privs\nentries are made; but do we do that correctly?\n\nAnyway, -ENOCAFFEINE for the moment. I'll look more later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 May 2024 10:20:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "> On 24 May 2024, at 16:20, Tom Lane <[email protected]> wrote:\n\n> I've tentatively concluded that I shouldn't have modeled\n> SHARED_DEPENDENCY_INITACL so closely on SHARED_DEPENDENCY_ACL,\n> in particular the decision that we don't need such an entry if\n> there's also SHARED_DEPENDENCY_OWNER. \n\n+1, in light of this report I think we need to go back on that.\n\n> I can see two routes to a solution:\n> \n> 1. Create SHARED_DEPENDENCY_INITACL, if applicable, whether the\n> role is the object's owner or not. Then, clearing out the\n> pg_shdepend entry cues us to go delete the pg_init_privs entry.\n> \n> 2. Just always search pg_init_privs for relevant entries\n> when dropping an object.\n> \n> I don't especially like #2 on performance grounds, but it has\n> a lot fewer moving parts than #1.\n\n#1 is more elegant, but admittedly also more complicated. An unscientific\nguess is that a majority of objects dropped won't have init privs, making the\nextra scan in #2 quite possibly more than academic. 
#2 could however be\nbackported and solve the issue in existing clusters.\n\n> Another point: shdepReassignOwned explicitly does not touch grants\n> or default ACLs. It feels like the same should be true of\n> pg_init_privs entries,\n\nAgreed, I can't see why pg_init_privs should be treated differently.\n\n> Another thing I'm wondering about right now is privileges on global\n> objects (roles, databases, tablespaces). The fine manual says\n> \"Although an extension script is not prohibited from creating such\n> objects, if it does so they will not be tracked as part of the\n> extension\". Presumably, that also means that no pg_init_privs\n> entries are made; but do we do that correctly?\n\nI'm away from a tree to check, but that does warrant investigation. If we\ndon't have a test for it already then it might be worth constructing something\nto catch that.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 24 May 2024 16:46:33 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 24 May 2024, at 16:20, Tom Lane <[email protected]> wrote:\n>> Another point: shdepReassignOwned explicitly does not touch grants\n>> or default ACLs. It feels like the same should be true of\n>> pg_init_privs entries,\n\n> Agreed, I can't see why pg_init_privs should be treated differently.\n\nThinking about this some more: the point of pg_init_privs is to record\nan object's privileges as they stood at the end of CREATE EXTENSION\n(or extension update), with the goal that pg_dump should be able to\ncompute the delta between that and the object's current privileges\nand emit GRANT/REVOKE commands to restore those current privileges\nafter a fresh extension install. (We slide gently past the question\nof whether the fresh extension install is certain to create privileges\nmatching the previous pg_init_privs entry.) So this goal seems to\nmean that neither ALTER OWNER nor REASSIGN OWNED should touch\npg_init_privs at all, as that would break its function of recording\na historical state. Only DROP OWNED should get rid of pg_init_privs\ngrants, and that only because there's no choice -- if the role is\nabout to go away, we can't hang on to a reference to its OID.\n\nHowever ... then what are the implications of doing ALTER OWNER on\nan extension-owned object? Is pg_dump supposed to recognize that\nthat's happened and replay it too? If not, is it sane at all to\ntry to restore the current privileges, which are surely dependent\non the current owner? I kind of doubt that that's possible at all,\nand even if it is it might result in security issues. It seems\nlike pg_init_privs has missed a critical thing, which is to record\nthe original owner not only the original privileges.\n\n(Alternatively, maybe we should forbid ALTER OWNER on extension-owned\nobjects? Or at least on those having pg_init_privs entries?)\n\n\nI'm wondering too about this scenario:\n\n1. CREATE EXTENSION installs an object and sets some initial privileges.\n\n2. DBA manually modifies the object's privileges.\n\n3. ALTER EXTENSION UPDATE further modifies the object's privileges.\n\nI think what will happen is that at the end of ALTER EXTENSION,\nwe'll store the object's current ACL verbatim in pg_init_privs,\ntherefore including the effects of step 2. 
This seems undesirable,\nbut I'm not sure how to get around it.\n\n\nAnyway, this is starting to look like the sort of can of worms\nbest not opened post-beta1. v17 has made some things better in this\narea, and I don't think it's made anything worse; so maybe we should\ndeclare victory for the moment and hope to address these additional\nconcerns later. I've added an open item though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 May 2024 11:59:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "On Fri, May 24, 2024 at 11:59 AM Tom Lane <[email protected]> wrote:\n> Thinking about this some more: the point of pg_init_privs is to record\n> an object's privileges as they stood at the end of CREATE EXTENSION\n> (or extension update), with the goal that pg_dump should be able to\n> compute the delta between that and the object's current privileges\n> and emit GRANT/REVOKE commands to restore those current privileges\n> after a fresh extension install. (We slide gently past the question\n> of whether the fresh extension install is certain to create privileges\n> matching the previous pg_init_privs entry.)\n\n+1 to all of this.\n\n> So this goal seems to\n> mean that neither ALTER OWNER nor REASSIGN OWNED should touch\n> pg_init_privs at all, as that would break its function of recording\n> a historical state. Only DROP OWNED should get rid of pg_init_privs\n> grants, and that only because there's no choice -- if the role is\n> about to go away, we can't hang on to a reference to its OID.\n\nBut I would have thought that the right thing to do to pg_init_privs\nhere would be essentially s/$OLDOWNER/$NEWOWNER/g.\n\nI know I'm late to the party here, but why is that idea wrong?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 May 2024 14:16:12 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, May 24, 2024 at 11:59 AM Tom Lane <[email protected]> wrote:\n>> So this goal seems to\n>> mean that neither ALTER OWNER nor REASSIGN OWNED should touch\n>> pg_init_privs at all, as that would break its function of recording\n>> a historical state. Only DROP OWNED should get rid of pg_init_privs\n>> grants, and that only because there's no choice -- if the role is\n>> about to go away, we can't hang on to a reference to its OID.\n\n> But I would have thought that the right thing to do to pg_init_privs\n> here would be essentially s/$OLDOWNER/$NEWOWNER/g.\n\nDoesn't seem right to me. That will give pg_dump the wrong idea\nof what the initial privileges actually were, and I don't see how\nit can construct correct delta GRANT/REVOKE on the basis of false\ninformation. During the dump reload, the extension will be\nrecreated with the original owner (I think), causing its objects'\nprivileges to go back to the original pg_init_privs values.\nApplying a delta that starts from some other state seems pretty\nquestionable in that case.\n\nIt could be that if we expect pg_dump to issue an ALTER OWNER\nto move ownership of the altered extension object to its new\nowner, and only then apply its computed delta GRANT/REVOKEs,\nthen indeed the right thing is for the original ALTER OWNER\nto apply s/$OLDOWNER/$NEWOWNER/g to pg_init_privs. 
I've not\nthought this through in complete detail, but it feels like\nthat might work, because the reload-time ALTER OWNER would\napply exactly that change to both the object's ACL and its\npg_init_privs, and then the delta is starting from the right state.\nOf course, pg_dump can't do that right now because it lacks the\ninformation that such an ALTER is needed.\n\nAlthough ... this is tickling a recollection that pg_dump doesn't\ntry very hard to run CREATE EXTENSION with the same owner that\nthe extension had originally. That's a leftover from the times\nwhen basically all extensions required superuser to install,\nand of course one superuser is as good as the next. There might\nbe some work we have to do on that side too if we want to up\nour game in this area.\n\nAnother case that's likely not handled well is what if the extension\nreally shouldn't have its original owner (e.g. you're using\n--no-owner). If it's restored under a new owner then the\npg_init_privs data certainly doesn't apply, and it feels like it'd\nbe mostly luck if the precomputed delta GRANT/REVOKEs lead to a\nstate you like.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 May 2024 14:57:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "On Fri, May 24, 2024 at 2:57 PM Tom Lane <[email protected]> wrote:\n> Doesn't seem right to me. That will give pg_dump the wrong idea\n> of what the initial privileges actually were, and I don't see how\n> it can construct correct delta GRANT/REVOKE on the basis of false\n> information. During the dump reload, the extension will be\n> recreated with the original owner (I think), causing its objects'\n> privileges to go back to the original pg_init_privs values.\n\nOh! That does seem like it would make what I said wrong, but how would\nit even know who the original owner was? Shouldn't we be recreating\nthe object with the owner it had at dump time?\n\n> Although ... this is tickling a recollection that pg_dump doesn't\n> try very hard to run CREATE EXTENSION with the same owner that\n> the extension had originally. That's a leftover from the times\n> when basically all extensions required superuser to install,\n> and of course one superuser is as good as the next. There might\n> be some work we have to do on that side too if we want to up\n> our game in this area.\n\nHmm, yeah.\n\n> Another case that's likely not handled well is what if the extension\n> really shouldn't have its original owner (e.g. you're using\n> --no-owner). If it's restored under a new owner then the\n> pg_init_privs data certainly doesn't apply, and it feels like it'd\n> be mostly luck if the precomputed delta GRANT/REVOKEs lead to a\n> state you like.\n\nI'm not sure exactly how this computation works, but if tgl granted\nnmisch privileges on an object and the extension is now owned by\nrhaas, it would seem like the right thing to do would be for rhaas to\ngrant nmisch those same privileges. Conversely if tgl started with\nprivileges to do X and Y and later was granted privileges to do Z and\nwe dump and restore such that the extension is owned by rhaas, I'd\npresume rhaas would end up with those same privileges. I'm probably\ntoo far from the code to give terribly useful advice here, but I think\nthe expected behavior is that the new owner replaces the old one for\nall purposes relating to the owned object(s). 
At least, I can't\ncurrently see what else makes any sense.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 May 2024 15:47:51 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, May 24, 2024 at 2:57 PM Tom Lane <[email protected]> wrote:\n>> Doesn't seem right to me. That will give pg_dump the wrong idea\n>> of what the initial privileges actually were, and I don't see how\n>> it can construct correct delta GRANT/REVOKE on the basis of false\n>> information. During the dump reload, the extension will be\n>> recreated with the original owner (I think), causing its objects'\n>> privileges to go back to the original pg_init_privs values.\n\n> Oh! That does seem like it would make what I said wrong, but how would\n> it even know who the original owner was? Shouldn't we be recreating\n> the object with the owner it had at dump time?\n\nKeep in mind that the whole point here is for the pg_dump script to\njust say \"CREATE EXTENSION foo\", not to mess with the individual\nobjects therein. So the objects are (probably) going to be owned by\nthe user that issued CREATE EXTENSION.\n\nIn the original conception, that was the end of it: what you got for\nthe member objects was whatever state CREATE EXTENSION left behind.\nThe idea of pg_init_privs is to support dump/reload of subsequent\nmanual alterations of privileges for extension-created objects.\nI'm not, at this point, 100% certain that that's a fully realizable\ngoal. But I definitely think it's insane to expect that to work\nwithout also tracking changes in the ownership of said objects.\n\nMaybe forbidding ALTER OWNER on extension-owned objects isn't\nsuch a bad idea?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 May 2024 16:00:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "On Fri, May 24, 2024 at 10:00 PM Tom Lane <[email protected]> wrote:\n>\n> Robert Haas <[email protected]> writes:\n> > On Fri, May 24, 2024 at 2:57 PM Tom Lane <[email protected]> wrote:\n> >> Doesn't seem right to me. That will give pg_dump the wrong idea\n> >> of what the initial privileges actually were, and I don't see how\n> >> it can construct correct delta GRANT/REVOKE on the basis of false\n> >> information. During the dump reload, the extension will be\n> >> recreated with the original owner (I think), causing its objects'\n> >> privileges to go back to the original pg_init_privs values.\n>\n> > Oh! That does seem like it would make what I said wrong, but how would\n> > it even know who the original owner was? Shouldn't we be recreating\n> > the object with the owner it had at dump time?\n>\n> Keep in mind that the whole point here is for the pg_dump script to\n> just say \"CREATE EXTENSION foo\", not to mess with the individual\n> objects therein. 
So the objects are (probably) going to be owned by\n> the user that issued CREATE EXTENSION.\n>\n> In the original conception, that was the end of it: what you got for\n> the member objects was whatever state CREATE EXTENSION left behind.\n> The idea of pg_init_privs is to support dump/reload of subsequent\n> manual alterations of privileges for extension-created objects.\n> I'm not, at this point, 100% certain that that's a fully realizable\n> goal.\n\nThe issue became visible because pg_dump issued a bogus\n\nREVOKE ALL ON TABLE public.pg_stat_statements FROM \"16390\";\n\nMaybe the right place for a fix is in pg_dump and the fix would be to *not*\nissue REVOKE ALL ON <any object> FROM <non-existing users> ?\n\nOr alternatively change REVOKE to treat non-existing users as a no-op ?\n\nAlso, the pg_init_privs entry should either go away or at least be\nchanged at the point when the user referenced in init-privs is\ndropped.\n\nHaving an pg_init_privs entry referencing a non-existing user is\ncertainly of no practical use.\n\nOr maybe we should change the user at that point to NULL or some\nspecial non-existing-user-id ?\n\n> But I definitely think it's insane to expect that to work\n> without also tracking changes in the ownership of said objects.\n>\n> Maybe forbidding ALTER OWNER on extension-owned objects isn't\n> such a bad idea?\n\n\n", "msg_date": "Sat, 25 May 2024 16:09:44 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Having an pg_init_privs entry referencing a non-existing user is\n> certainly of no practical use.\n\nSure, that's not up for debate. What I think we're discussing\nright now is\n\n1. What other cases are badly handled by the pg_init_privs\nmechanisms.\n\n2. How much of that is practical to fix in v17, seeing that\nit's all long-standing bugs and we're already past beta1.\n\nI kind of doubt that the answer to #2 is \"all of it\".\nBut perhaps we can do better than \"none of it\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 25 May 2024 10:47:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "On Sat, May 25, 2024 at 4:48 PM Tom Lane <[email protected]> wrote:\n>\n> Hannu Krosing <[email protected]> writes:\n> > Having an pg_init_privs entry referencing a non-existing user is\n> > certainly of no practical use.\n>\n> Sure, that's not up for debate. What I think we're discussing\n> right now is\n>\n> 1. What other cases are badly handled by the pg_init_privs\n> mechanisms.\n>\n> 2. 
How much of that is practical to fix in v17, seeing that\n> it's all long-standing bugs and we're already past beta1.\n>\n> I kind of doubt that the answer to #2 is \"all of it\".\n> But perhaps we can do better than \"none of it\".\n\nPutting the fix either in pg_dump or making REVOKE tolerate\nnon-existing users would definitely be most practical / useful fixes,\nas these would actually allow pg_upgrade to v17 to work without\nchanging anything in older versions.\n\nCurrently one already can revoke a privilege that is not there in the\nfirst place, with the end state being that the privilege (still) does\nnot exist.\nThis does not even generate a warning.\n\nExtending this to revoking from users that do not exist does not seem\nany different on conceptual level, though I understand that\nimplementation would be very different as it needs catching the user\nlookup error from a very different part of the code.\n\nThat said, it would be better if we can have something that would be\neasy to backport something that would make pg_upgrade work for all\nsupported versions.\nMaking REVOKE silently ignore revoking from non-existing users would\nimprove general robustness but could conceivably change behaviour if\nsomebody relies on it in their workflows.\n\nRegards,\nHannu\n\n\n", "msg_date": "Sun, 26 May 2024 00:05:43 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Attached is a minimal patch to allow missing roles in REVOKE command\n\nThis should fix the pg_upgrade issue and also a case where somebody\nhas dropped a role you are trying to revoke privileges from :\n\nsmalltest=# create table revoketest();\nCREATE TABLE\nsmalltest=# revoke select on revoketest from bob;\nWARNING: ignoring REVOKE FROM a missing role \"bob\"\nREVOKE\nsmalltest=# create user bob;\nCREATE ROLE\nsmalltest=# grant select on revoketest to bob;\nGRANT\nsmalltest=# \\du\n List of roles\n Role name | Attributes\n-----------+------------------------------------------------------------\n bob |\n hannuk | Superuser, Create role, Create DB, Replication, Bypass RLS\n\nsmalltest=# \\dp\n Access privileges\n Schema | Name | Type | Access privileges | Column\nprivileges | Policies\n--------+------------+-------+------------------------+-------------------+----------\n public | revoketest | table | hannuk=arwdDxtm/hannuk+| |\n | | | bob=r/hannuk | |\n public | vacwatch | table | | |\n(2 rows)\n\nsmalltest=# revoke select on revoketest from bob, joe;\nWARNING: ignoring REVOKE FROM a missing role \"joe\"\nREVOKE\nsmalltest=# \\dp\n Access privileges\n Schema | Name | Type | Access privileges | Column\nprivileges | Policies\n--------+------------+-------+------------------------+-------------------+----------\n public | revoketest | table | hannuk=arwdDxtm/hannuk | |\n public | vacwatch | table | | |\n(2 rows)\n\n\nOn Sun, May 26, 2024 at 12:05 AM Hannu Krosing <[email protected]> wrote:\n>\n> On Sat, May 25, 2024 at 4:48 PM Tom Lane <[email protected]> wrote:\n> >\n> > Hannu Krosing <[email protected]> writes:\n> > > Having an pg_init_privs entry referencing a non-existing user is\n> > > certainly of no practical use.\n> >\n> > Sure, that's not up for debate. What I think we're discussing\n> > right now is\n> >\n> > 1. What other cases are badly handled by the pg_init_privs\n> > mechanisms.\n> >\n> > 2. 
How much of that is practical to fix in v17, seeing that\n> > it's all long-standing bugs and we're already past beta1.\n> >\n> > I kind of doubt that the answer to #2 is \"all of it\".\n> > But perhaps we can do better than \"none of it\".\n>\n> Putting the fix either in pg_dump or making REVOKE tolerate\n> non-existing users would definitely be most practical / useful fixes,\n> as these would actually allow pg_upgrade to v17 to work without\n> changing anything in older versions.\n>\n> Currently one already can revoke a privilege that is not there in the\n> first place, with the end state being that the privilege (still) does\n> not exist.\n> This does not even generate a warning.\n>\n> Extending this to revoking from users that do not exist does not seem\n> any different on conceptual level, though I understand that\n> implementation would be very different as it needs catching the user\n> lookup error from a very different part of the code.\n>\n> That said, it would be better if we can have something that would be\n> easy to backport something that would make pg_upgrade work for all\n> supported versions.\n> Making REVOKE silently ignore revoking from non-existing users would\n> improve general robustness but could conceivably change behaviour if\n> somebody relies on it in their workflows.\n>\n> Regards,\n> Hannu", "msg_date": "Sun, 26 May 2024 23:19:57 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Attached is a minimal patch to allow missing roles in REVOKE command\n\nFTR, I think this is a very bad idea.\n\nIt might be OK if we added some kind of IF EXISTS option,\nbut I'm not eager about that concept either.\n\nThe right thing here is to fix the backend so that pg_dump doesn't\nsee these bogus ACLs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 26 May 2024 17:25:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "> On 26 May 2024, at 23:25, Tom Lane <[email protected]> wrote:\n> \n> Hannu Krosing <[email protected]> writes:\n>> Attached is a minimal patch to allow missing roles in REVOKE command\n> \n> FTR, I think this is a very bad idea.\n\nAgreed, this is papering over a bug. 
If we are worried about pg_upgrade it\nwould be better to add a check to pg_upgrade which detects this case and\nadvices the user how to deal with it.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sun, 26 May 2024 23:27:25 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Hi Daniel,\n\npg_upgrade is just one important user of pg_dump which is the one that\ngenerates REVOKE for a non-existent role.\n\nWe should definitely also fix pg_dump, likely just checking that the\nrole exists when generating REVOKE commands (may be a good practice\nfor other cases too so instead of casting to ::regrole do the actual\njoin)\n\n\n## here is the fix for pg_dump\n\nWhile flying to Vancouver I looked around in pg_dump code, and it\nlooks like the easiest way to mitigate the dangling pg_init_priv\nentries is to replace the query in pg_dump with one that filters out\ninvalid entries\n\nThe current query is at line 9336:\n\n/* Fetch initial-privileges data */\nif (fout->remoteVersion >= 90600)\n{\nprintfPQExpBuffer(query,\n \"SELECT objoid, classoid, objsubid, privtype, initprivs \"\n \"FROM pg_init_privs\");\n\nres = ExecuteSqlQuery(fout, query->data, PGRES_TUPLES_OK);\n\n\nAnd we need the same but filtering out invalid aclitems from initprivs\nsomething like this\n\nWITH q AS (\n SELECT objoid, classoid, objsubid, privtype, unnest(initprivs) AS\ninitpriv FROM saved_init_privs\n)\nSELECT objoid, classoid, objsubid, privtype, array_agg(initpriv) as initprivs\n FROM q\n WHERE is_valid_value_for_type(initpriv::text, 'aclitem')\n GROUP BY 1,2,3,4;\n\n### The proposed re[placement query:\n\nUnfortunately we do not have an existing\nis_this_a_valid_value_for_type(value text, type text, OUT res boolean)\nfunction, so for a read-only workaround the following seems to work:\n\nHere I first collect the initprivs array elements which fail the\nconversion to text and back into an array and store it in GUC\npg_dump.bad_aclitems\n\nThen I use this stored list to filter out the bad ones in the actual query.\n\nDO $$\nDECLARE\n aclitem_text text;\n bad_aclitems text[] = '{}';\nBEGIN\n FOR aclitem_text IN\n SELECT DISTINCT unnest(initprivs)::text FROM pg_init_privs\n LOOP\n BEGIN /* try to convert back to aclitem */\n PERFORM aclitem_text::aclitem;\n EXCEPTION WHEN OTHERS THEN /* collect bad aclitems */\n bad_aclitems := bad_aclitems || ARRAY[aclitem_text];\n END;\n END LOOP;\n IF bad_aclitems != '{}' THEN\n RAISE WARNING 'Ignoring bad aclitems \"%\" in pg_init_privs', bad_aclitems;\n END IF;\n PERFORM set_config('pg_dump.bad_aclitems', bad_aclitems::text,\nfalse); -- true for trx-local\nEND;\n$$;\nWITH q AS (\n SELECT objoid, classoid, objsubid, privtype, unnest(initprivs) AS\ninitpriv FROM pg_init_privs\n)\nSELECT objoid, classoid, objsubid, privtype, array_agg(initpriv) AS initprivs\n FROM q\n WHERE NOT initpriv::text = ANY\n(current_setting('pg_dump.bad_aclitems')::text[])\n GROUP BY 1,2,3,4;\n\n--\nHannu\n\nOn Sun, May 26, 2024 at 11:27 PM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 26 May 2024, at 23:25, Tom Lane <[email protected]> wrote:\n> >\n> > Hannu Krosing <[email protected]> writes:\n> >> Attached is a minimal patch to allow missing roles in REVOKE command\n> >\n> > FTR, I think this is a very bad idea.\n>\n> Agreed, this is papering over a bug. 
If we are worried about pg_upgrade it\n> would be better to add a check to pg_upgrade which detects this case and\n> advices the user how to deal with it.\n>\n> --\n> Daniel Gustafsson\n>\n\n\n", "msg_date": "Wed, 29 May 2024 03:06:21 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "On Tue, May 28, 2024 at 9:06 PM Hannu Krosing <[email protected]> wrote:\n> We should definitely also fix pg_dump, likely just checking that the\n> role exists when generating REVOKE commands (may be a good practice\n> for other cases too so instead of casting to ::regrole do the actual\n> join)\n>\n> ## here is the fix for pg_dump\n>\n> While flying to Vancouver I looked around in pg_dump code, and it\n> looks like the easiest way to mitigate the dangling pg_init_priv\n> entries is to replace the query in pg_dump with one that filters out\n> invalid entries\n\n+1 for this approach. I agree with Tom that fixing this in REVOKE is a\nbad plan; REVOKE is used by way too many things other than pg_dump,\nand the behavior change is not in general desirable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 Jun 2024 09:30:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "On Fri, May 24, 2024 at 4:00 PM Tom Lane <[email protected]> wrote:\n> > Oh! That does seem like it would make what I said wrong, but how would\n> > it even know who the original owner was? Shouldn't we be recreating\n> > the object with the owner it had at dump time?\n>\n> Keep in mind that the whole point here is for the pg_dump script to\n> just say \"CREATE EXTENSION foo\", not to mess with the individual\n> objects therein. So the objects are (probably) going to be owned by\n> the user that issued CREATE EXTENSION.\n>\n> In the original conception, that was the end of it: what you got for\n> the member objects was whatever state CREATE EXTENSION left behind.\n> The idea of pg_init_privs is to support dump/reload of subsequent\n> manual alterations of privileges for extension-created objects.\n> I'm not, at this point, 100% certain that that's a fully realizable\n> goal. But I definitely think it's insane to expect that to work\n> without also tracking changes in the ownership of said objects.\n>\n> Maybe forbidding ALTER OWNER on extension-owned objects isn't\n> such a bad idea?\n\nI think the root of my confusion, or at least one of the roots of my\nconfusion, was whether we were talking about altering the extension or\nthe objects within the extension. If somebody creates an extension as\nuser nitin and afterwards changes the ownership to user lucia, then I\nwould hope for the same pg_init_privs contents as if user lucia had\ncreated the extension in the first place. I think that might be hard\nto achieve in complex cases, if the extension changes the ownership of\nsome objects during the creation process, or if some ownership has\nbeen changed afterward. But in the simple case where nothing like that\nhappens, it sounds possible. 
If, on the other hand, somebody creates\nthe extension and modifies the ownership of an object in the\nextension, then I agree that pg_init_privs shouldn't be updated.\nHowever, it also seems pretty clear that pg_init_privs is not\nrecording enough state for pg_dump to mimic the ownership change,\nbecause it only records the original privileges, not the original\nownership.\n\nSo, can we recreate the original privileges without recreating the\noriginal ownership? It doesn't really seem like this is going to work\nin general. If nitin creates the extension and grants privileges to\nlucia, and then the ownership of the extension is changed to swara,\nthen nitin is no longer a valid grantor. Even if we could fix that by\nmodifying the grants to substitute swara for nitin, that could create\nimpermissible circularities in the permissions graph. Maybe there are\nsome scenarios where we can fix things up, but it doesn't seem\npossible in general.\n\nIn a perfect world, the fix here is probably to have pg_init_privs or\nsomething similar record the ownership as well as the permissions, but\nthat is not back-patchable and nobody's on the hook to fix up somebody\nelse's feature. So what do we do? Imposing a constraint that you can't\nchange the ownership of an extension-owned object, as you propose,\nseems fairly likely to break a few existing extension scripts, and\nalso won't fix existing instances that are already broken, but maybe\nit's still worth considering if we don't have a better idea. Another\nline of thinking might be to somehow nerf pg_init_privs, or the use of\nit, so that we don't even try to cover this case e.g. if the ownership\nis changed, we nuke the pg_init_privs entry or resnap it to the\ncurrent state, and the dump fails to recreate the original state but\nwe get out from under that by declaring the case unsupported (with\nappropriate documentation changes, hopefully). pg_init_privs seems\nadequate for normal system catalog entries where the ownership\nshouldn't ever change from the bootstrap superuser, but extensions\nseem like they require more infrastructure.\n\nI think the only thing we absolutely have to fix here is the dangling\nACL references. Those are a hazard, because the OID could be reused by\nan unrelated role, which seems like it could even give rise to\nsecurity concerns. 
Making the whole pg_init_privs system actually work\nfor this case case would be even better, but that seems to require\nfilling in at least one major hole the original design, which is a lot\nto ask for (1) a back-patched fix or (2) a patch that someone who is\nnot the original author has to try to write and be responsible for\ndespite not being the one who created the problem.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 Jun 2024 10:16:36 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> I think the only thing we absolutely have to fix here is the dangling\n> ACL references.\n\nHere's a draft patch that takes care of Hannu's example, and I think\nit fixes other potential dangling-reference scenarios too.\n\nI'm not sure whether I like the direction this is going, but I do\nthink we may not be able to do a lot more than this for v17.\n\nThe core change is to install SHARED_DEPENDENCY_INITACL entries in\npg_shdepend for all references to non-pinned roles in pg_init_privs,\nwhether they are the object's owner or not. Doing that ensures that\nwe can't drop a role that is still referenced there, and it provides\na basis for shdepDropOwned and shdepReassignOwned to take some kind\nof action for such references.\n\nThe semantics I've implemented on top of that are:\n\n* ALTER OWNER does not touch pg_init_privs entries.\n\n* REASSIGN OWNED replaces pg_init_privs references with the new\nrole, whether the references are as grantor or grantee.\n\n* DROP OWNED removes pg_init_privs mentions of the doomed role\n(as grantor or grantee), removing the pg_init_privs entry\naltogether if it goes to zero clauses. (This is what happened\nalready, but only if if a SHARED_DEPENDENCY_INITACL entry existed.)\n\nI'm not terribly thrilled with this, because it's still possible\nto get into a state where a pg_init_privs entry is based on\nan owning role that's no longer the owner: you just have to use\nALTER OWNER directly rather than via REASSIGN OWNED. While\nI don't think the backend has much problem with that, it probably\nwill confuse pg_dump to some extent. However, such cases are\ngoing to confuse pg_dump anyway for reasons discussed upthread,\nnamely that we don't really support dump/restore of extensions\nwhere not all the objects are owned by the extension owner.\nI'm content to leave that in the pile of unfinished work for now.\n\nAn alternative definition could be that ALTER OWNER also replaces\nold role with new in pg_init_privs entries. That would reduce\nthe surface for confusing pg_dump a little bit, but I don't think\nthat I like it better, for two reasons:\n\n* ALTER OWNER would have to probe pg_init_acl for potential\nentries every time, which would be wasted work for most ALTERs.\n\n* This gets away from the notion that pg_init_privs should be\na historical record of the state that existed at the end of CREATE\nEXTENSION. Now, maybe that notion is unworkable anyway, but\nI don't want to let go of it before we're sure about that.\n\nA couple of more-minor points for review:\n\n* As this stands, updateInitAclDependencies() no longer pays any\nattention to its ownerId argument, and in one place I depend on\nthat to skip doing a rather expensive lookup of the current object\nowner. Perhaps we should remove that argument altogether, and\nin consequence simplify some other callers too? 
However, that\nwould only help much if we were willing to revert 534287403's\nchanges to pass the object's owner ID to recordExtensionInitPriv,\nwhich I'm hesitant to do because I suspect we'll end up wanting\nto record the owner ID explicitly in pg_init_privs entries.\nOn the third hand, maybe we'll never do that, so perhaps we should\nrevert those changes for now; some of them add nontrivial lookup\ncosts.\n\n* In shdepReassignOwned, I refactored to move the switch on\nsdepForm->classid into a new subroutine. We could have left\nit where it is, but it would need a couple more tab stops of\nindentation which felt like too much. It's in the eye of\nthe beholder though.\n\nComments?\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 14 Jun 2024 19:46:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "I wrote:\n> The semantics I've implemented on top of that are:\n> * ALTER OWNER does not touch pg_init_privs entries.\n> * REASSIGN OWNED replaces pg_init_privs references with the new\n> role, whether the references are as grantor or grantee.\n> * DROP OWNED removes pg_init_privs mentions of the doomed role\n> (as grantor or grantee), removing the pg_init_privs entry\n> altogether if it goes to zero clauses. (This is what happened\n> already, but only if if a SHARED_DEPENDENCY_INITACL entry existed.)\n\n> I'm not terribly thrilled with this, because it's still possible\n> to get into a state where a pg_init_privs entry is based on\n> an owning role that's no longer the owner: you just have to use\n> ALTER OWNER directly rather than via REASSIGN OWNED. While\n> I don't think the backend has much problem with that, it probably\n> will confuse pg_dump to some extent.\n\nI poked at this some more, and I'm now moderately convinced that\nthis is a good place to stop for v17, primarily because ALTER OWNER\ndoesn't touch pg_init_privs in the older branches either. Thus,\nwhile we've not made things better for pg_dump, at least we've not\nintroduced a new behavior it will have to account for in the future.\n\nI experimented with this test scenario (to set up, do\n\"make install\" in src/test/modules/test_pg_dump):\n\n-----\ncreate role regress_dump_test_role;\ncreate user test_super superuser;\ncreate user joe;\ncreate database tpd;\n\\c tpd test_super \ncreate extension test_pg_dump;\nalter function regress_pg_dump_schema.test_func() owner to joe;\n\\df+ regress_pg_dump_schema.test_func()\n-----\n\nThe \\df+ will correctly show that test_func() is owned by joe\nand has ACL\n\n| Access privileges |\n+------------------------------+\n| =X/joe +|\n| joe=X/joe +|\n| regress_dump_test_role=X/joe |\n\nNow, if you pg_dump this database, what you get is\n\nCREATE EXTENSION IF NOT EXISTS test_pg_dump WITH SCHEMA public;\n...\n--\n-- Name: FUNCTION test_func(); Type: ACL; Schema: regress_pg_dump_schema; Owner: joe\n--\n\nREVOKE ALL ON FUNCTION regress_pg_dump_schema.test_func() FROM PUBLIC;\nREVOKE ALL ON FUNCTION regress_pg_dump_schema.test_func() FROM test_super;\nREVOKE ALL ON FUNCTION regress_pg_dump_schema.test_func() FROM regress_dump_test_role;\nGRANT ALL ON FUNCTION regress_pg_dump_schema.test_func() TO joe;\nGRANT ALL ON FUNCTION regress_pg_dump_schema.test_func() TO PUBLIC;\nGRANT ALL ON FUNCTION regress_pg_dump_schema.test_func() TO regress_dump_test_role;\n\nSo pg_dump realizes that the privileges are not what they were,\nbut it's fairly confused about what to do about it. 
And it really\ncan't get that right without better modeling (er, more than none at\nall) of the ownership of extension-member objects. If you restore\nthis dump as postgres, you'll find that test_func is now owned by\npostgres and has ACL\n\n| Access privileges |\n+-----------------------------------+\n| postgres=X/postgres +|\n| joe=X/postgres +|\n| =X/postgres +|\n| regress_dump_test_role=X/postgres |\n\nThe grantees are okay, more or less, but we've totally failed to\nreplicate the owner/grantor. But this is exactly the same as what\nyou get if you do the experiment in v16 or before. Given the lack\nof complaints about that, I think it's okay to stop here for now.\nWe've at least made REASSIGN OWNED work better.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 15 Jun 2024 14:40:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "> On 15 Jun 2024, at 01:46, Tom Lane <[email protected]> wrote:\n> \n> Robert Haas <[email protected]> writes:\n>> I think the only thing we absolutely have to fix here is the dangling\n>> ACL references.\n> \n> Here's a draft patch that takes care of Hannu's example, and I think\n> it fixes other potential dangling-reference scenarios too.\n> \n> I'm not sure whether I like the direction this is going, but I do\n> think we may not be able to do a lot more than this for v17.\n\nAgreed.\n\n> The core change is to install SHARED_DEPENDENCY_INITACL entries in\n> pg_shdepend for all references to non-pinned roles in pg_init_privs,\n> whether they are the object's owner or not. Doing that ensures that\n> we can't drop a role that is still referenced there, and it provides\n> a basis for shdepDropOwned and shdepReassignOwned to take some kind\n> of action for such references.\n\nI wonder if this will break any tools/scripts in prod which relies on the\nprevious (faulty) behaviour. It will be interesting to see if anything shows\nup on -bugs. Off the cuff it seems like a good idea judging by where we are\nand what we can fix with it.\n\n> I'm not terribly thrilled with this, because it's still possible\n> to get into a state where a pg_init_privs entry is based on\n> an owning role that's no longer the owner: you just have to use\n> ALTER OWNER directly rather than via REASSIGN OWNED. While\n> I don't think the backend has much problem with that, it probably\n> will confuse pg_dump to some extent. However, such cases are\n> going to confuse pg_dump anyway for reasons discussed upthread,\n> namely that we don't really support dump/restore of extensions\n> where not all the objects are owned by the extension owner.\n> I'm content to leave that in the pile of unfinished work for now.\n\n+1\n\n> An alternative definition could be that ALTER OWNER also replaces\n> old role with new in pg_init_privs entries. That would reduce\n> the surface for confusing pg_dump a little bit, but I don't think\n> that I like it better, for two reasons:\n> \n> * ALTER OWNER would have to probe pg_init_acl for potential\n> entries every time, which would be wasted work for most ALTERs.\n\nUnless it would magically fix all the pg_dump problems I'd prefer to avoid\nthis.\n\n> * As this stands, updateInitAclDependencies() no longer pays any\n> attention to its ownerId argument, and in one place I depend on\n> that to skip doing a rather expensive lookup of the current object\n> owner. 
Perhaps we should remove that argument altogether, and\n> in consequence simplify some other callers too? However, that\n> would only help much if we were willing to revert 534287403's\n> changes to pass the object's owner ID to recordExtensionInitPriv,\n> which I'm hesitant to do because I suspect we'll end up wanting\n> to record the owner ID explicitly in pg_init_privs entries.\n> On the third hand, maybe we'll never do that, so perhaps we should\n> revert those changes for now; some of them add nontrivial lookup\n> costs.\n\nI wonder if it's worth reverting passing the owner ID for v17 and revisiting\nthat in 18 if we work on recording the ID. Shaving a few catalog lookups is\ngenerally worthwhile, doing them without needing the result for the next five\nyears might bite us.\n\nRe-reading 534287403 I wonder about this hunk in RemoveRoleFromInitPriv:\n\n+ if (!isNull)\n+ old_acl = DatumGetAclPCopy(oldAclDatum);\n+ else\n+ old_acl = NULL; /* this case shouldn't happen, probably */\n\nI wonder if we should Assert() on old_acl being NULL? I can't imagine a case\nwhere it should legitimately be that and catching such in development might be\nuseful for catching stray bugs?\n\n> * In shdepReassignOwned, I refactored to move the switch on\n> sdepForm->classid into a new subroutine. We could have left\n> it where it is, but it would need a couple more tab stops of\n> indentation which felt like too much. It's in the eye of\n> the beholder though.\n\nI prefer the new way.\n\n+1 on going ahead with this patch. There is more work to do but I agree that\nthis about all that makes sense in v17 at this point.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 15:22:37 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> On 15 Jun 2024, at 01:46, Tom Lane <[email protected]> wrote:\n>> The core change is to install SHARED_DEPENDENCY_INITACL entries in\n>> pg_shdepend for all references to non-pinned roles in pg_init_privs,\n>> whether they are the object's owner or not. Doing that ensures that\n>> we can't drop a role that is still referenced there, and it provides\n>> a basis for shdepDropOwned and shdepReassignOwned to take some kind\n>> of action for such references.\n\n> I wonder if this will break any tools/scripts in prod which relies on the\n> previous (faulty) behaviour. It will be interesting to see if anything shows\n> up on -bugs. Off the cuff it seems like a good idea judging by where we are\n> and what we can fix with it.\n\nConsidering that SHARED_DEPENDENCY_INITACL has existed for less than\ntwo months, it's hard to believe that any outside code has grown any\ndependencies on it, much less that it couldn't be adjusted readily.\n\n> I wonder if it's worth reverting passing the owner ID for v17 and revisiting\n> that in 18 if we work on recording the ID. Shaving a few catalog lookups is\n> generally worthwhile, doing them without needing the result for the next five\n> years might bite us.\n\nYeah, that was the direction I was leaning in, too. 
I'll commit the\nrevert of that separately, so that un-reverting it shouldn't be too\npainful if we eventually decide to do so.\n\n> Re-reading 534287403 I wonder about this hunk in RemoveRoleFromInitPriv:\n\n> + if (!isNull)\n> + old_acl = DatumGetAclPCopy(oldAclDatum);\n> + else\n> + old_acl = NULL; /* this case shouldn't happen, probably */\n\n> I wonder if we should Assert() on old_acl being NULL? I can't imagine a case\n> where it should legitimately be that and catching such in development might be\n> useful for catching stray bugs?\n\nHmm, yeah. I was trying to be agnostic about whether it's okay for a\npg_init_privs ACL to be NULL ... but I can't imagine a real use for\nthat either, and the new patch does add some code that's effectively\nassuming it isn't. Agreed, let's be uniform about insisting !isNull.\n\n> +1 on going ahead with this patch. There is more work to do but I agree that\n> this about all that makes sense in v17 at this point.\n\nThanks for reviewing!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 17 Jun 2024 10:56:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "> On 17 Jun 2024, at 16:56, Tom Lane <[email protected]> wrote:\n> Daniel Gustafsson <[email protected]> writes:\n\n>> I wonder if this will break any tools/scripts in prod which relies on the\n>> previous (faulty) behaviour. It will be interesting to see if anything shows\n>> up on -bugs. Off the cuff it seems like a good idea judging by where we are\n>> and what we can fix with it.\n> \n> Considering that SHARED_DEPENDENCY_INITACL has existed for less than\n> two months, it's hard to believe that any outside code has grown any\n> dependencies on it, much less that it couldn't be adjusted readily.\n\nDoh, I was thinking about it backwards, clearly not a worry =)\n\n>> I wonder if it's worth reverting passing the owner ID for v17 and revisiting\n>> that in 18 if we work on recording the ID. Shaving a few catalog lookups is\n>> generally worthwhile, doing them without needing the result for the next five\n>> years might bite us.\n> \n> Yeah, that was the direction I was leaning in, too. I'll commit the\n> revert of that separately, so that un-reverting it shouldn't be too\n> painful if we eventually decide to do so.\n\nSounds good.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 17 Jun 2024 18:37:47 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Hi Tom,\n\nIs there anything that could be back-patched with reasonable effort ?\n\n--\nHannu\n\nOn Mon, Jun 17, 2024 at 6:37 PM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 17 Jun 2024, at 16:56, Tom Lane <[email protected]> wrote:\n> > Daniel Gustafsson <[email protected]> writes:\n>\n> >> I wonder if this will break any tools/scripts in prod which relies on the\n> >> previous (faulty) behaviour. It will be interesting to see if anything shows\n> >> up on -bugs. 
Off the cuff it seems like a good idea judging by where we are\n> >> and what we can fix with it.\n> >\n> > Considering that SHARED_DEPENDENCY_INITACL has existed for less than\n> > two months, it's hard to believe that any outside code has grown any\n> > dependencies on it, much less that it couldn't be adjusted readily.\n>\n> Doh, I was thinking about it backwards, clearly not a worry =)\n>\n> >> I wonder if it's worth reverting passing the owner ID for v17 and revisiting\n> >> that in 18 if we work on recording the ID. Shaving a few catalog lookups is\n> >> generally worthwhile, doing them without needing the result for the next five\n> >> years might bite us.\n> >\n> > Yeah, that was the direction I was leaning in, too. I'll commit the\n> > revert of that separately, so that un-reverting it shouldn't be too\n> > painful if we eventually decide to do so.\n>\n> Sounds good.\n>\n> --\n> Daniel Gustafsson\n>\n\n\n", "msg_date": "Thu, 20 Jun 2024 12:14:19 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Is there anything that could be back-patched with reasonable effort ?\n\nAfraid not. The whole thing is dependent on pg_shdepend entries\nthat won't exist in older branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Jun 2024 11:35:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Then maybe we should put a query / function in the release notes to\nclean up the existing mess.\n\nThinking of it we should do it anyway, as the patch only prevents new\nmessiness from happening and does not fix existing issues.\n\nI could share a query to update the pg_init_privs with non-existent\nrole to replace it with the owner of the object if we figure out a\ncorrect place to publish it.\n\n---\nHannu\n\n\n\n\nOn Thu, Jun 20, 2024 at 5:35 PM Tom Lane <[email protected]> wrote:\n>\n> Hannu Krosing <[email protected]> writes:\n> > Is there anything that could be back-patched with reasonable effort ?\n>\n> Afraid not. The whole thing is dependent on pg_shdepend entries\n> that won't exist in older branches.\n>\n> regards, tom lane\n\n\n", "msg_date": "Thu, 20 Jun 2024 19:41:01 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Or perhaps we should still also patch pg_dump to ignore the aclentries\nwhich refer to roles that do not exist in the database ?\n\nOn Thu, Jun 20, 2024 at 7:41 PM Hannu Krosing <[email protected]> wrote:\n>\n> Then maybe we should put a query / function in the release notes to\n> clean up the existing mess.\n>\n> Thinking of it we should do it anyway, as the patch only prevents new\n> messiness from happening and does not fix existing issues.\n>\n> I could share a query to update the pg_init_privs with non-existent\n> role to replace it with the owner of the object if we figure out a\n> correct place to publish it.\n>\n> ---\n> Hannu\n>\n>\n>\n>\n> On Thu, Jun 20, 2024 at 5:35 PM Tom Lane <[email protected]> wrote:\n> >\n> > Hannu Krosing <[email protected]> writes:\n> > > Is there anything that could be back-patched with reasonable effort ?\n> >\n> > Afraid not. 
The whole thing is dependent on pg_shdepend entries\n> > that won't exist in older branches.\n> >\n> > regards, tom lane\n\n\n", "msg_date": "Thu, 20 Jun 2024 19:43:06 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Or perhaps we should still also patch pg_dump to ignore the aclentries\n> which refer to roles that do not exist in the database ?\n\nI didn't want to do that before, and I still don't. Given that this\nissue has existed since pg_init_privs was invented (9.6) without\nprior reports, I don't think it's a big enough problem in practice\nto be worth taking extraordinary actions for.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 Jun 2024 14:25:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "On Thu, Jun 20, 2024 at 2:25 PM Tom Lane <[email protected]> wrote:\n> Hannu Krosing <[email protected]> writes:\n> > Or perhaps we should still also patch pg_dump to ignore the aclentries\n> > which refer to roles that do not exist in the database ?\n>\n> I didn't want to do that before, and I still don't. Given that this\n> issue has existed since pg_init_privs was invented (9.6) without\n> prior reports, I don't think it's a big enough problem in practice\n> to be worth taking extraordinary actions for.\n\nIf we don't fix it in the code and we don't document it anywhere, the\nnext person who hits it is going to have to try to discover the fact\nthat there's a problem from the pgsql-hackers archives. That doesn't\nseem good. I don't have an educated opinion about what we should do\nhere specifically, and I realize that we don't have any official place\nto document known issues, but it can be pretty inconvenient for users\nnot to know about them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Jun 2024 15:18:49 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "It does happen with some regularity.\n\nAt least one large cloud database provider I know of saw this more\nthan once a month until the mitigations were integrated in the major\nversion upgrade process.\n\nIt is possible that making database upgrades easier via better\nautomation is what made this turn up more, as now less experienced /\nnon-DBA types are more comfortable doing the version upgrades, whereas\nbefore it would be something done by a person who can also diagnose it\nand manually fix pg_init_privs.\n\nStill it would be nice to have some public support for users of\nnon-managed PostgreSQL databases as well\n\nOn Thu, Jun 20, 2024 at 8:25 PM Tom Lane <[email protected]> wrote:\n>\n> Hannu Krosing <[email protected]> writes:\n> > Or perhaps we should still also patch pg_dump to ignore the aclentries\n> > which refer to roles that do not exist in the database ?\n>\n> I didn't want to do that before, and I still don't. 
Given that this\n> issue has existed since pg_init_privs was invented (9.6) without\n> prior reports, I don't think it's a big enough problem in practice\n> to be worth taking extraordinary actions for.\n>\n> regards, tom lane\n\n\n", "msg_date": "Thu, 20 Jun 2024 21:42:56 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "On Thu, Jun 20, 2024 at 3:43 PM Hannu Krosing <[email protected]> wrote:\n> Still it would be nice to have some public support for users of\n> non-managed PostgreSQL databases as well\n\n+1.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Jun 2024 16:09:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": ">On Thu, Jun 20, 2024 at 3:43PM Hannu Krosing < [email protected] > wrote:\n>> Still it would be nice to have some public support for users of\n>> non-managed PostgreSQL databases as well\n>+1.\n>\n>--\n>Robert Haas\n>EDB: http://www.enterprisedb.com\nHello! I have recently been researching postgres build using meson. And I came across a failure of the src/test/modules/test_pg_dump-running test.\nYou can see regression.diffs in attached file.\nIt turned out that the test fails if the cluster is initialized by custom user. In my case by user postgres.\nScript to reproduce test_pg_fump failure:\n---\nmeson setup -Dprefix=$PGPREFIX build\n \nninja -j64 -C build install >/dev/null\nninja -j64 -C build install-test-files >/dev/null\n \ninitdb -U postgres -k -D $PGPREFIX/data\npg_ctl -D $PGPREFIX/data -l logfile start\n \npsql -U postgres -c \"CREATE USER test SUPERUSER\"\npsql -U postgres -c \"CREATE DATABASE test\"\n \nmeson test --setup running --suite postgresql:test_pg_dump-running -C build/\n---\nYou can catch the same failure if build using make and run make installcheck -C src/test/modules/test_pg_dump.\n \nI have looked at the test and found queries like below:\n---\nSELECT pg_describe_object(classid,objid,objsubid) COLLATE \"C\" AS obj,\n  pg_describe_object(refclassid,refobjid,0) AS refobj,\n  deptype\n  FROM pg_shdepend JOIN pg_database d ON dbid = d.oid\n  WHERE d.datname = current_database()\n  ORDER BY 1, 3;\n---\nThis query does not expect that test database may already contain some information about custom user that ran test_pg_dump-running.\n--\nEgor Chindyaskin\nPostgres Professional: https://postgrespro.com", "msg_date": "Wed, 18 Sep 2024 07:03:21 +0300", "msg_from": "=?UTF-8?B?0JXQs9C+0YAg0KfQuNC90LTRj9GB0LrQuNC9?= <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?UmU6IERST1AgT1dORUQgQlkgZmFpbHMgdG8gY2xlYW4gb3V0IHBnX2luaXRf?=\n =?UTF-8?B?cHJpdnMgZ3JhbnRz?=" }, { "msg_contents": "=?UTF-8?B?0JXQs9C+0YAg0KfQuNC90LTRj9GB0LrQuNC9?= <[email protected]> writes:\n> This query does not expect that test database may already contain some information about custom user that ran test_pg_dump-running.\n\nI'm perfectly content to reject this as being an abuse of the test\ncase. Our TAP tests are built on the assumption that they use\ndatabases created within the test case. 
Apparently, you've found a\nway to use the meson test infrastructure to execute a TAP test in\nthe equivalent of \"make installcheck\" rather than \"make check\" mode.\nI am unwilling to buy into the proposition that our TAP tests should\nbe proof against doing that after making arbitrary changes to the\ndatabase's initial state. If anything, the fact that this is possible\nis a bug in our meson scripts.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 18 Sep 2024 00:31:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" }, { "msg_contents": "Hello!\n\nIIUC the regression test test_pg_dump [1] fails, see the attached \nregression.diffs:\n\ndiff -U3 \n/Users/test/Work/postgrespro/src/test/modules/test_pg_dump/expected/test_pg_dump.out \n/Users/test/Work/postgrespro/build/testrun/test_pg_dump-running/regress/results/test_pg_dump.out\n--- \n/Users/test/Work/postgrespro/src/test/modules/test_pg_dump/expected/test_pg_dump.out\t2024-09-12 \n15:02:26.345434331 +0700\n+++ \n/Users/test/Work/postgrespro/build/testrun/test_pg_dump-running/regress/results/test_pg_dump.out\t2024-09-12 \n15:42:09.341520173 +0700\n\n[1] \nhttps://github.com/postgres/postgres/blob/master/src/test/modules/test_pg_dump/sql/test_pg_dump.sql\n\nOn 2024-09-18 07:31, Tom Lane wrote:\n> =?UTF-8?B?0JXQs9C+0YAg0KfQuNC90LTRj9GB0LrQuNC9?= <[email protected]> \n> writes:\n>> This query does not expect that test database may already contain some \n>> information about custom user that ran test_pg_dump-running.\n> \n> I'm perfectly content to reject this as being an abuse of the test\n> case. Our TAP tests are built on the assumption that they use\n> databases created within the test case. Apparently, you've found a\n> way to use the meson test infrastructure to execute a TAP test in\n> the equivalent of \"make installcheck\" rather than \"make check\" mode.\n> I am unwilling to buy into the proposition that our TAP tests should\n> be proof against doing that after making arbitrary changes to the\n> database's initial state. If anything, the fact that this is possible\n> is a bug in our meson scripts.\n> \n> \t\t\tregards, tom lane\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Thu, 19 Sep 2024 12:10:57 +0300", "msg_from": "Marina Polyakova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY fails to clean out pg_init_privs grants" } ]
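A rough illustrative sketch, not something posted in the thread above: the dangling initial-privileges entries under discussion can be listed with nothing beyond the standard catalogs plus the built-in aclexplode() and pg_describe_object() functions, for example

SELECT pg_describe_object(p.classoid, p.objoid, p.objsubid) AS object,
       a.grantor, a.grantee, a.privilege_type
FROM pg_init_privs p,
     LATERAL aclexplode(p.initprivs) AS a
WHERE NOT EXISTS (SELECT 1 FROM pg_roles r WHERE r.oid = a.grantor)
   OR (a.grantee <> 0
       AND NOT EXISTS (SELECT 1 FROM pg_roles r WHERE r.oid = a.grantee));

Grantee 0 stands for PUBLIC and is therefore skipped. Any row returned is a pg_init_privs entry that still mentions a dropped role, which is roughly the situation behind the bogus REVOKE ... FROM "16390" statement quoted earlier in the thread.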
[ { "msg_contents": "Hello all\n\nThe issue described bellow exists in postgresql ver 16.2 (found in some\nprevious major versions)\n\nThe documentation defines a comment as:\n\n> A comment is a sequence of characters beginning with double dashes and\n> extending to the end of the line\n\n\nWhen using such a comment within CASE control block, it ends up with an\nerror:\n\nDO LANGUAGE plpgsql $$\nDECLARE\n t TEXT = 'a';\nBEGIN\n CASE t\n WHEN 'a' -- my comment\n THEN RAISE NOTICE 'a';\n WHEN 'b'\n THEN RAISE NOTICE 'b';\n ELSE NULL;\n END CASE;\nEND;$$;\n\nERROR: syntax error at end of input\nLINE 1: \"__Case__Variable_2__\" IN ('a' -- my comment)\n ^\nQUERY: \"__Case__Variable_2__\" IN ('a' -- my comment)\nCONTEXT: PL/pgSQL function inline_code_block line 5 at CASE\n\nWith Regards\nMichal Bartak\n\nHello allThe issue described bellow exists in postgresql ver 16.2 (found in some previous major versions)The documentation defines a comment as:A comment is a sequence of characters beginning with double dashes and extending to the end of the line When using such a comment within CASE control block, it ends up with an error:DO LANGUAGE plpgsql $$DECLARE    t TEXT = 'a';BEGIN    CASE t        WHEN 'a'  -- my comment        THEN RAISE NOTICE 'a';        WHEN 'b'        THEN RAISE NOTICE 'b';        ELSE NULL;    END CASE;END;$$;ERROR:  syntax error at end of inputLINE 1: \"__Case__Variable_2__\" IN ('a'  -- my comment)                                                      ^QUERY:  \"__Case__Variable_2__\" IN ('a'  -- my comment)CONTEXT:  PL/pgSQL function inline_code_block line 5 at CASEWith RegardsMichal Bartak", "msg_date": "Sat, 6 Apr 2024 20:14:35 +0200", "msg_from": "Michal Bartak <[email protected]>", "msg_from_op": true, "msg_subject": "CASE control block broken by a single line comment" }, { "msg_contents": "On 2024-04-06 20:14 +0200, Michal Bartak wrote:\n> The issue described bellow exists in postgresql ver 16.2 (found in some\n> previous major versions)\n\nCan confirm also on master.\n\n> The documentation defines a comment as:\n> \n> > A comment is a sequence of characters beginning with double dashes and\n> > extending to the end of the line\n> \n> \n> When using such a comment within CASE control block, it ends up with an\n> error:\n> \n> DO LANGUAGE plpgsql $$\n> DECLARE\n> t TEXT = 'a';\n> BEGIN\n> CASE t\n> WHEN 'a' -- my comment\n> THEN RAISE NOTICE 'a';\n> WHEN 'b'\n> THEN RAISE NOTICE 'b';\n> ELSE NULL;\n> END CASE;\n> END;$$;\n> \n> ERROR: syntax error at end of input\n> LINE 1: \"__Case__Variable_2__\" IN ('a' -- my comment)\n> ^\n> QUERY: \"__Case__Variable_2__\" IN ('a' -- my comment)\n> CONTEXT: PL/pgSQL function inline_code_block line 5 at CASE\n\nI'm surprised that the comment is not skipped by the scanner at this\npoint. Maybe because the parser just reads the raw expression between\nWHEN and THEN with plpgsql_append_source_text via read_sql_construct.\n\nHow about the attached patch? 
It's a workaround by simply adding a line\nfeed character between the raw expression and the closing parenthesis.\n\n-- \nErik", "msg_date": "Sat, 6 Apr 2024 23:14:23 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CASE control block broken by a single line comment" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> On 2024-04-06 20:14 +0200, Michal Bartak wrote:\n>> The issue described bellow exists in postgresql ver 16.2 (found in some\n>> previous major versions)\n\n> Can confirm also on master.\n\nI'm sure it's been there a while :-(\n\n> I'm surprised that the comment is not skipped by the scanner at this\n> point. Maybe because the parser just reads the raw expression between\n> WHEN and THEN with plpgsql_append_source_text via read_sql_construct.\n\n> How about the attached patch? It's a workaround by simply adding a line\n> feed character between the raw expression and the closing parenthesis.\n\nI don't have time to look into this on this deadline weekend, but\nwhat's bothering me about this report is the worry that we've made\nthe same mistake elsewhere, or will do so in future. I suspect\nit'd be much more robust if we could remove the comment from the\nexpr->query string. No idea how hard that is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Apr 2024 00:33:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CASE control block broken by a single line comment" }, { "msg_contents": "On 2024-04-07 06:33 +0200, Tom Lane wrote:\n> Erik Wienhold <[email protected]> writes:\n> > I'm surprised that the comment is not skipped by the scanner at this\n> > point. Maybe because the parser just reads the raw expression between\n> > WHEN and THEN with plpgsql_append_source_text via read_sql_construct.\n> \n> > How about the attached patch? It's a workaround by simply adding a line\n> > feed character between the raw expression and the closing parenthesis.\n> \n> I don't have time to look into this on this deadline weekend,\n\nSure, no rush.\n\n> but what's bothering me about this report is the worry that we've made\n> the same mistake elsewhere, or will do so in future.\n\nRight. At the moment only make_case is affected by this because it uses\nthe raw expression for rewriting. I checked other uses of\nread_psql_construct (e.g. IF ... THEN, FOR ... LOOP) and they don't show\nthis bug.\n\n> I suspect it'd be much more robust if we could remove the comment from\n> the expr->query string. No idea how hard that is.\n\nI slept on it and I think this can be fixed by tracking the end of the\nlast token before THEN and use that instead of yylloc in the call to\nplpgsql_append_source_text. We already already track the token length\nin plpgsql_yyleng but don't make it available outside pl_scanner.c yet.\n\nAttached v2 tries to do that. But it breaks other test cases, probably\nbecause the calculation of endlocation is off. I'm missing something\nhere.\n\n-- \nErik\n\n\n", "msg_date": "Sun, 7 Apr 2024 16:55:27 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CASE control block broken by a single line comment" }, { "msg_contents": "I wrote:\n> Attached v2 tries to do that.\n\nHit send too soon. 
Attached now.\n\n-- \nErik", "msg_date": "Sun, 7 Apr 2024 16:56:53 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CASE control block broken by a single line comment" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> On 2024-04-07 06:33 +0200, Tom Lane wrote:\n>> I suspect it'd be much more robust if we could remove the comment from\n>> the expr->query string. No idea how hard that is.\n\n> I slept on it and I think this can be fixed by tracking the end of the\n> last token before THEN and use that instead of yylloc in the call to\n> plpgsql_append_source_text. We already already track the token length\n> in plpgsql_yyleng but don't make it available outside pl_scanner.c yet.\n> Attached v2 tries to do that. But it breaks other test cases, probably\n> because the calculation of endlocation is off. I'm missing something\n> here.\n\nI poked at this and found that the failures occur when the patched\ncode decides to trim an expression like \"_r.v\" to just \"_r\", naturally\nbreaking the semantics completely. That happens because when\nplpgsql_yylex recognizes a compound token, it doesn't bother to\nadjust the token length to include the additional word(s). I vaguely\nremember having thought about that when writing the lookahead logic,\nand deciding that it wasn't worth the trouble -- but now it is.\nUp to now, the only thing we did with plpgsql_yyleng was to set the\ncutoff point for text reported by plpgsql_yyerror. Extending the\ntoken length changes reports like this:\n\nregression=# do $$ declare r record; r.x$$;\nERROR: syntax error at or near \"r\"\nLINE 1: do $$ declare r record; r.x$$;\n ^\n\nto this:\n\nregression=# do $$ declare r record; r.x$$;\nERROR: syntax error at or near \"r.x\"\nLINE 1: do $$ declare r record; r.x$$;\n ^\n\nwhich seems like strictly an improvement to me (the syntax error is\npremature EOF, which is after the \"x\"); but in any case it's minor\nenough to not be worth worrying about.\n\nLooking around, I noticed that we *have* had a similar case in the\npast, which 4adead1d2 noticed and worked around by suppressing the\nwhitespace-trimming action in read_sql_construct. We could probably\nreach a near-one-line fix for the current problem by passing\ntrim=false in the CASE calls, but TBH that discovery just reinforces\nmy feeling that we need a cleaner fix. The attached v3 reverts\nthe make-trim-optional hack that 4adead1d2 added, since we don't\nneed or want the manual trimming anymore.\n\nWith this in mind, I find the other manual whitespace trimming logic,\nin make_execsql_stmt(), quite scary; but it looks somewhat nontrivial\nto get rid of it. (The problem is that parsing of an INTO clause\nwill leave us with a pushed-back token as next, and then we don't\nknow where the end of the token before that is.) Since we don't\ncurrently do anything as crazy as combining execsql statements,\nI think the problem is only latent, but still...\n\nAnyway, the attached works for me.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 08 Apr 2024 18:54:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CASE control block broken by a single line comment" }, { "msg_contents": "út 9. 4. 2024 v 0:55 odesílatel Tom Lane <[email protected]> napsal:\n\n> Erik Wienhold <[email protected]> writes:\n> > On 2024-04-07 06:33 +0200, Tom Lane wrote:\n> >> I suspect it'd be much more robust if we could remove the comment from\n> >> the expr->query string. 
No idea how hard that is.\n>\n> > I slept on it and I think this can be fixed by tracking the end of the\n> > last token before THEN and use that instead of yylloc in the call to\n> > plpgsql_append_source_text. We already already track the token length\n> > in plpgsql_yyleng but don't make it available outside pl_scanner.c yet.\n> > Attached v2 tries to do that. But it breaks other test cases, probably\n> > because the calculation of endlocation is off. I'm missing something\n> > here.\n>\n> I poked at this and found that the failures occur when the patched\n> code decides to trim an expression like \"_r.v\" to just \"_r\", naturally\n> breaking the semantics completely. That happens because when\n> plpgsql_yylex recognizes a compound token, it doesn't bother to\n> adjust the token length to include the additional word(s). I vaguely\n> remember having thought about that when writing the lookahead logic,\n> and deciding that it wasn't worth the trouble -- but now it is.\n> Up to now, the only thing we did with plpgsql_yyleng was to set the\n> cutoff point for text reported by plpgsql_yyerror. Extending the\n> token length changes reports like this:\n>\n> regression=# do $$ declare r record; r.x$$;\n> ERROR: syntax error at or near \"r\"\n> LINE 1: do $$ declare r record; r.x$$;\n> ^\n>\n> to this:\n>\n> regression=# do $$ declare r record; r.x$$;\n> ERROR: syntax error at or near \"r.x\"\n> LINE 1: do $$ declare r record; r.x$$;\n> ^\n>\n> which seems like strictly an improvement to me (the syntax error is\n> premature EOF, which is after the \"x\"); but in any case it's minor\n> enough to not be worth worrying about.\n>\n> Looking around, I noticed that we *have* had a similar case in the\n> past, which 4adead1d2 noticed and worked around by suppressing the\n> whitespace-trimming action in read_sql_construct. We could probably\n> reach a near-one-line fix for the current problem by passing\n> trim=false in the CASE calls, but TBH that discovery just reinforces\n> my feeling that we need a cleaner fix. The attached v3 reverts\n> the make-trim-optional hack that 4adead1d2 added, since we don't\n> need or want the manual trimming anymore.\n>\n> With this in mind, I find the other manual whitespace trimming logic,\n> in make_execsql_stmt(), quite scary; but it looks somewhat nontrivial\n> to get rid of it. (The problem is that parsing of an INTO clause\n> will leave us with a pushed-back token as next, and then we don't\n> know where the end of the token before that is.) Since we don't\n> currently do anything as crazy as combining execsql statements,\n> I think the problem is only latent, but still...\n>\n> Anyway, the attached works for me.\n>\n\n+1\n\nPavel\n\n\n> regards, tom lane\n>\n>\n\nút 9. 4. 2024 v 0:55 odesílatel Tom Lane <[email protected]> napsal:Erik Wienhold <[email protected]> writes:\n> On 2024-04-07 06:33 +0200, Tom Lane wrote:\n>> I suspect it'd be much more robust if we could remove the comment from\n>> the expr->query string.  No idea how hard that is.\n\n> I slept on it and I think this can be fixed by tracking the end of the\n> last token before THEN and use that instead of yylloc in the call to\n> plpgsql_append_source_text.  We already already track the token length\n> in plpgsql_yyleng but don't make it available outside pl_scanner.c yet.\n> Attached v2 tries to do that.  But it breaks other test cases, probably\n> because the calculation of endlocation is off.  
I'm missing something\n> here.\n\nI poked at this and found that the failures occur when the patched\ncode decides to trim an expression like \"_r.v\" to just \"_r\", naturally\nbreaking the semantics completely.  That happens because when\nplpgsql_yylex recognizes a compound token, it doesn't bother to\nadjust the token length to include the additional word(s).  I vaguely\nremember having thought about that when writing the lookahead logic,\nand deciding that it wasn't worth the trouble -- but now it is.\nUp to now, the only thing we did with plpgsql_yyleng was to set the\ncutoff point for text reported by plpgsql_yyerror.  Extending the\ntoken length changes reports like this:\n\nregression=# do $$ declare r record; r.x$$;\nERROR:  syntax error at or near \"r\"\nLINE 1: do $$ declare r record; r.x$$;\n                                ^\n\nto this:\n\nregression=# do $$ declare r record; r.x$$;\nERROR:  syntax error at or near \"r.x\"\nLINE 1: do $$ declare r record; r.x$$;\n                                ^\n\nwhich seems like strictly an improvement to me (the syntax error is\npremature EOF, which is after the \"x\"); but in any case it's minor\nenough to not be worth worrying about.\n\nLooking around, I noticed that we *have* had a similar case in the\npast, which 4adead1d2 noticed and worked around by suppressing the\nwhitespace-trimming action in read_sql_construct.  We could probably\nreach a near-one-line fix for the current problem by passing\ntrim=false in the CASE calls, but TBH that discovery just reinforces\nmy feeling that we need a cleaner fix.  The attached v3 reverts\nthe make-trim-optional hack that 4adead1d2 added, since we don't\nneed or want the manual trimming anymore.\n\nWith this in mind, I find the other manual whitespace trimming logic,\nin make_execsql_stmt(), quite scary; but it looks somewhat nontrivial\nto get rid of it.  (The problem is that parsing of an INTO clause\nwill leave us with a pushed-back token as next, and then we don't\nknow where the end of the token before that is.)  Since we don't\ncurrently do anything as crazy as combining execsql statements,\nI think the problem is only latent, but still...\n\nAnyway, the attached works for me.+1Pavel\n\n                        regards, tom lane", "msg_date": "Tue, 9 Apr 2024 06:09:40 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CASE control block broken by a single line comment" }, { "msg_contents": "On 2024-04-09 00:54 +0200, Tom Lane wrote:\n> I poked at this and found that the failures occur when the patched\n> code decides to trim an expression like \"_r.v\" to just \"_r\", naturally\n> breaking the semantics completely. That happens because when\n> plpgsql_yylex recognizes a compound token, it doesn't bother to\n> adjust the token length to include the additional word(s).\n\nThanks Tom! I haven't had the time to look at your patch.\n\nI'm surprised that the lexer handles compound tokens. 
I'd expect to\nfind that in the parser, especially because of using the context-aware\nplpgsql_ns_lookup to determine if we have a T_DATUM or T_{WORD,CWORD}.\n\nIs this done by the lexer to allow push-back of those compound tokens\nand maybe even to also simplify some parser rules?\n\n-- \nErik\n\n\n", "msg_date": "Fri, 12 Apr 2024 22:15:08 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CASE control block broken by a single line comment" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> I'm surprised that the lexer handles compound tokens. I'd expect to\n> find that in the parser, especially because of using the context-aware\n> plpgsql_ns_lookup to determine if we have a T_DATUM or T_{WORD,CWORD}.\n\nI'm not here to defend plpgsql's factorization ;-). However, it\ndoesn't really have a parser of its own, at least not for expressions,\nso I'm not sure how your suggestion could be made to work.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Apr 2024 18:20:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CASE control block broken by a single line comment" }, { "msg_contents": "On 2024-04-13 00:20 +0200, Tom Lane wrote:\n> Erik Wienhold <[email protected]> writes:\n> > I'm surprised that the lexer handles compound tokens. I'd expect to\n> > find that in the parser, especially because of using the context-aware\n> > plpgsql_ns_lookup to determine if we have a T_DATUM or T_{WORD,CWORD}.\n> \n> I'm not here to defend plpgsql's factorization ;-). However, it\n> doesn't really have a parser of its own, at least not for expressions,\n> so I'm not sure how your suggestion could be made to work.\n\nNot a suggestion. Just a question about the general design, unrelated\nto this fix, in case you know the answer off the cuff. I see that\n863a62064c already had the lexer handle those compound tokens, but\nunfortunately without an explanation on why. Never mind if that's too\nmuch to ask about a design descision made over 25 years ago.\n\n-- \nErik\n\n\n", "msg_date": "Sat, 13 Apr 2024 05:07:37 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CASE control block broken by a single line comment" }, { "msg_contents": "Erik Wienhold <[email protected]> writes:\n> On 2024-04-13 00:20 +0200, Tom Lane wrote:\n>> I'm not here to defend plpgsql's factorization ;-). However, it\n>> doesn't really have a parser of its own, at least not for expressions,\n>> so I'm not sure how your suggestion could be made to work.\n\n> Not a suggestion. Just a question about the general design, unrelated\n> to this fix, in case you know the answer off the cuff. I see that\n> 863a62064c already had the lexer handle those compound tokens, but\n> unfortunately without an explanation on why. Never mind if that's too\n> much to ask about a design descision made over 25 years ago.\n\nWell, it was a design decision I wasn't involved in, so I dunno.\nYou could reach out to Jan on the slim chance he remembers.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Apr 2024 01:45:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CASE control block broken by a single line comment" } ]
[ { "msg_contents": "Hi,\n\nOn Fri, 5 Apr 2024 at 03:28, Melih Mutlu\n<m(dot)melihmutlu(at)gmail(dot)com> wrote:\n>Right. It was a mistake, forgot to remove that. Fixed it in v5.\n\nIf you don't mind, I have some suggestions for patch v5.\n\n1. Shouldn't PqSendBufferSize be of type size_t?\nThere are several comparisons with other size_t variables.\nstatic size_t PqSendBufferSize; /* Size send buffer */\n\nI think this would prevent possible overflows.\n\n2. If PqSendBufferSize is changed to size_t, in the function\nsocket_putmessage_noblock, the variable which name is *required*, should\nbe changed to size_t as well.\n\nstatic void\nsocket_putmessage_noblock(char msgtype, const char *s, size_t len)\n{\nint res PG_USED_FOR_ASSERTS_ONLY;\nsize_t required;\n\n3. In the internal_putbytes function, the *amout* variable could\nhave the scope safely reduced.\n\nelse\n{\nsize_t amount;\n\namount = PqSendBufferSize - PqSendPointer;\n\n4. In the function internal_flush_buffer, the variables named\n*bufptr* and *bufend* could be const char * type, like:\n\nstatic int\ninternal_flush_buffer(const char *s, size_t *start, size_t *end)\n{\nstatic int last_reported_send_errno = 0;\n\nconst char *bufptr = s + *start;\nconst char *bufend = s + *end;\n\nbest regards,\nRanier Vilela\n\nHi,\nOn Fri, 5 Apr 2024 at 03:28, Melih Mutlu <m(dot)melihmutlu(at)gmail(dot)com> wrote: >Right. It was a mistake, forgot to remove that. Fixed it in v5. If you don't mind, I have some suggestions for patch v5.1. \nShouldn't PqSendBufferSize be of type size_t?\nThere are several comparisons with other size_t variables.static size_t PqSendBufferSize; /* Size send buffer */I think this would prevent possible overflows.2. If PqSendBufferSize is changed to size_t, in the functionsocket_putmessage_noblock, the variable which name is *required*, shouldbe changed to size_t as well.static voidsocket_putmessage_noblock(char msgtype, const char *s, size_t len){int res PG_USED_FOR_ASSERTS_ONLY;size_t required;\n3. In the internal_putbytes function, the *amout* variable couldhave the scope safely reduced.else{size_t amount;amount = PqSendBufferSize - PqSendPointer;4. In the function internal_flush_buffer, the variables named*bufptr* and *bufend* could be const char * type, like:static intinternal_flush_buffer(const char *s, size_t *start, size_t *end){\tstatic int\tlast_reported_send_errno = 0;\tconst char\t\t*bufptr = s + *start;\tconst char\t\t*bufend = s + *end;best regards,Ranier Vilela", "msg_date": "Sat, 6 Apr 2024 18:33:16 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flushing large data immediately in pqcomm" } ]
[ { "msg_contents": "Hi,\n\nI recently started to be bothered by regress_* logs after some kinds of test\nfailures containing the whole log of a test failure. E.g. in\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2024-04-06%2016%3A28%3A38\n\n...\n### Restarting node \"standby\"\n# Running: pg_ctl -w -D /home/bf/bf-build/serinus/HEAD/pgsql.build/testrun/recovery/035_standby_logical_decoding/data/t_035_standby_logical_decoding_standby_data/pgdata -l /home/bf/bf-build/serinus/HEAD/pgsql.build/testrun/recovery/035_standby_logical_decoding/log/035_standby_logical_decoding_standby.log restart\nwaiting for server to shut down........................................................................................................................................................................................................................................................................................................................................................................... failed\npg_ctl: server does not shut down\n# pg_ctl restart failed; logfile:\n2024-04-06 16:33:37.496 UTC [2628363][postmaster][:0] LOG: starting PostgreSQL 17devel on x86_64-linux, compiled by gcc-14.0.1, 64-bit\n2024-04-06 16:33:37.503 UTC [2628363][postmaster][:0] LOG: listening on Unix socket \"/tmp/55kikMaTyW/.s.PGSQL.63274\"\n<many lines>\n\n\nLooks like the printing of the entire log was added in:\n\ncommit 33774978c78175095da9e6c276e8bcdb177725f8\nAuthor: Daniel Gustafsson <[email protected]>\nDate: 2023-09-22 13:35:37 +0200\n\n Avoid using internal test methods in SSL tests\n\n\nIt might be useful to print a few lines, but the whole log files can be\nseveral megabytes worth of output. In the buildfarm that leads to the same\ninformation being collected multiple times, and locally it makes it hard to\nsee where the \"normal\" contents of regress_log* continue.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 6 Apr 2024 14:44:39 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Cluster::restart dumping logs when stop fails" }, { "msg_contents": "> On 6 Apr 2024, at 23:44, Andres Freund <[email protected]> wrote:\n\n> It might be useful to print a few lines, but the whole log files can be\n> several megabytes worth of output.\n\nThe non-context aware fix would be to just print the last 1024 (or something)\nbytes from the logfile:\n\ndiff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm\nindex 54e1008ae5..53d4751ffc 100644\n--- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n@@ -951,8 +951,8 @@ sub start\n \n \tif ($ret != 0)\n \t{\n-\t\tprint \"# pg_ctl start failed; logfile:\\n\";\n-\t\tprint PostgreSQL::Test::Utils::slurp_file($self->logfile);\n+\t\tprint \"# pg_ctl start failed; logfile excerpt:\\n\";\n+\t\tprint substr PostgreSQL::Test::Utils::slurp_file($self->logfile), -1024;\n \n \t\t# pg_ctl could have timed out, so check to see if there's a pid file;\n \t\t# otherwise our END block will fail to shut down the new postmaster.\n\n\nWould that be a reasonable fix?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sun, 7 Apr 2024 00:19:35 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cluster::restart dumping logs when stop fails" }, { "msg_contents": "Hi,\n\nOn 2024-04-07 00:19:35 +0200, Daniel Gustafsson wrote:\n> > On 6 Apr 2024, at 23:44, Andres Freund <[email protected]> wrote:\n> \n> > It 
might be useful to print a few lines, but the whole log files can be\n> > several megabytes worth of output.\n> \n> The non-context aware fix would be to just print the last 1024 (or something)\n> bytes from the logfile:\n\nThat'd be better, yes. I'd mainly log the path to the logfile though, that's\nprobably at least as helpful for actually investigating the issue?\n\n> diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> index 54e1008ae5..53d4751ffc 100644\n> --- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n> +++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> @@ -951,8 +951,8 @@ sub start\n> \n> \tif ($ret != 0)\n> \t{\n> -\t\tprint \"# pg_ctl start failed; logfile:\\n\";\n> -\t\tprint PostgreSQL::Test::Utils::slurp_file($self->logfile);\n> +\t\tprint \"# pg_ctl start failed; logfile excerpt:\\n\";\n> +\t\tprint substr PostgreSQL::Test::Utils::slurp_file($self->logfile), -1024;\n> \n> \t\t# pg_ctl could have timed out, so check to see if there's a pid file;\n> \t\t# otherwise our END block will fail to shut down the new postmaster.\n\nThat's probably unnecessary optimization, but it seems a tad silly to read an\nentire, potentially sizable, file to just use the last 1k. Not sure if the way\nslurp_file() uses seek supports negative ofsets, the docs read to me like that\nmay only be supported with SEEK_END.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 6 Apr 2024 17:49:13 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cluster::restart dumping logs when stop fails" }, { "msg_contents": "> On 7 Apr 2024, at 02:49, Andres Freund <[email protected]> wrote:\n> On 2024-04-07 00:19:35 +0200, Daniel Gustafsson wrote:\n>>> On 6 Apr 2024, at 23:44, Andres Freund <[email protected]> wrote:\n\n>> The non-context aware fix would be to just print the last 1024 (or something)\n>> bytes from the logfile:\n> \n> That'd be better, yes. I'd mainly log the path to the logfile though, that's\n> probably at least as helpful for actually investigating the issue?\n\nIIRC this was modelled around how it used to be earlier, in v14 in the\npre-refactored version we print the full logfile.\n\nHow about doing the below backpatched down to 15?\n\ndiff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm\nindex 54e1008ae5..a2f9409842 100644\n--- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n@@ -951,8 +951,7 @@ sub start\n \n \tif ($ret != 0)\n \t{\n-\t\tprint \"# pg_ctl start failed; logfile:\\n\";\n-\t\tprint PostgreSQL::Test::Utils::slurp_file($self->logfile);\n+\t\tprint \"# pg_ctl start failed; see logfile for details: \" . $self->logfile . \"\\n\";\n \n \t\t# pg_ctl could have timed out, so check to see if there's a pid file;\n \t\t# otherwise our END block will fail to shut down the new postmaster.\n@@ -1090,8 +1089,7 @@ sub restart\n \n \tif ($ret != 0)\n \t{\n-\t\tprint \"# pg_ctl restart failed; logfile:\\n\";\n-\t\tprint PostgreSQL::Test::Utils::slurp_file($self->logfile);\n+\t\tprint \"# pg_ctl restart failed; see logfile for details: \" . $self->logfile . 
\"\\n\";\n \n \t\t# pg_ctl could have timed out, so check to see if there's a pid file;\n \t\t# otherwise our END block will fail to shut down the new postmaster.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sun, 7 Apr 2024 11:39:39 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cluster::restart dumping logs when stop fails" }, { "msg_contents": "On 2024-04-06 Sa 20:49, Andres Freund wrote:\n>\n> That's probably unnecessary optimization, but it seems a tad silly to read an\n> entire, potentially sizable, file to just use the last 1k. Not sure if the way\n> slurp_file() uses seek supports negative ofsets, the docs read to me like that\n> may only be supported with SEEK_END.\n>\n\nWe should enhance slurp_file() so it uses SEEK_END if the offset is \nnegative. It would be a trivial patch:\n\n\ndiff --git a/src/test/perl/PostgreSQL/Test/Utils.pm b/src/test/perl/PostgreSQL/Test/Utils.pm\nindex 42d5a50dc8..8256573957 100644\n--- a/src/test/perl/PostgreSQL/Test/Utils.pm\n+++ b/src/test/perl/PostgreSQL/Test/Utils.pm\n@@ -524,7 +524,7 @@ sub slurp_file\n \n     if (defined($offset))\n     {\n-       seek($fh, $offset, SEEK_SET)\n+       seek($fh, $offset, ($offset < 0 ? SEEK_END : SEEK_SET))\n           or croak \"could not seek \\\"$filename\\\": $!\";\n     }\n\ncheers\n\n\nandrew\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-04-06 Sa 20:49, Andres Freund\n wrote:\n\n\n\nThat's probably unnecessary optimization, but it seems a tad silly to read an\nentire, potentially sizable, file to just use the last 1k. Not sure if the way\nslurp_file() uses seek supports negative ofsets, the docs read to me like that\nmay only be supported with SEEK_END.\n\n\n\n\n\nWe should enhance slurp_file() so it uses SEEK_END if the offset\n is negative. It would be a trivial patch:\n\n\ndiff --git a/src/test/perl/PostgreSQL/Test/Utils.pm b/src/test/perl/PostgreSQL/Test/Utils.pm\nindex 42d5a50dc8..8256573957 100644\n--- a/src/test/perl/PostgreSQL/Test/Utils.pm\n+++ b/src/test/perl/PostgreSQL/Test/Utils.pm\n@@ -524,7 +524,7 @@ sub slurp_file\n \n    if (defined($offset))\n    {\n-       seek($fh, $offset, SEEK_SET)\n+       seek($fh, $offset, ($offset < 0 ? SEEK_END : SEEK_SET))\n          or croak \"could not seek \\\"$filename\\\": $!\";\n    }\n\n\n\ncheers\n\n\nandrew\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 7 Apr 2024 08:51:06 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cluster::restart dumping logs when stop fails" }, { "msg_contents": "> On 7 Apr 2024, at 14:51, Andrew Dunstan <[email protected]> wrote:\n> On 2024-04-06 Sa 20:49, Andres Freund wrote:\n\n>> That's probably unnecessary optimization, but it seems a tad silly to read an\n>> entire, potentially sizable, file to just use the last 1k. Not sure if the way\n>> slurp_file() uses seek supports negative ofsets, the docs read to me like that\n>> may only be supported with SEEK_END.\n> \n> We should enhance slurp_file() so it uses SEEK_END if the offset is negative.\n\nAbsolutely agree. 
Reading the thread I think Andres argues for not printing\nanything at all in this case but we should support negative offsets anyways, it\nwill fort sure come in handy.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sun, 7 Apr 2024 16:52:05 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cluster::restart dumping logs when stop fails" }, { "msg_contents": "On 2024-04-07 16:52:05 +0200, Daniel Gustafsson wrote:\n> > On 7 Apr 2024, at 14:51, Andrew Dunstan <[email protected]> wrote:\n> > On 2024-04-06 Sa 20:49, Andres Freund wrote:\n> \n> >> That's probably unnecessary optimization, but it seems a tad silly to read an\n> >> entire, potentially sizable, file to just use the last 1k. Not sure if the way\n> >> slurp_file() uses seek supports negative ofsets, the docs read to me like that\n> >> may only be supported with SEEK_END.\n> > \n> > We should enhance slurp_file() so it uses SEEK_END if the offset is negative.\n> \n> Absolutely agree. Reading the thread I think Andres argues for not printing\n> anything at all in this case but we should support negative offsets anyways, it\n> will fort sure come in handy.\n\nI'm ok with printing path + some content or just the path.\n\n- Andres\n\n\n", "msg_date": "Sun, 7 Apr 2024 09:28:56 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cluster::restart dumping logs when stop fails" }, { "msg_contents": "> On 7 Apr 2024, at 18:28, Andres Freund <[email protected]> wrote:\n> \n> On 2024-04-07 16:52:05 +0200, Daniel Gustafsson wrote:\n>>> On 7 Apr 2024, at 14:51, Andrew Dunstan <[email protected]> wrote:\n>>> On 2024-04-06 Sa 20:49, Andres Freund wrote:\n>> \n>>>> That's probably unnecessary optimization, but it seems a tad silly to read an\n>>>> entire, potentially sizable, file to just use the last 1k. Not sure if the way\n>>>> slurp_file() uses seek supports negative ofsets, the docs read to me like that\n>>>> may only be supported with SEEK_END.\n>>> \n>>> We should enhance slurp_file() so it uses SEEK_END if the offset is negative.\n>> \n>> Absolutely agree. Reading the thread I think Andres argues for not printing\n>> anything at all in this case but we should support negative offsets anyways, it\n>> will fort sure come in handy.\n> \n> I'm ok with printing path + some content or just the path.\n\nI think printing the last 512 bytes or so would be a good approach, I'll take\ncare of it later tonight. That would be a backpatchable change IMHO.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Sun, 7 Apr 2024 18:51:40 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cluster::restart dumping logs when stop fails" }, { "msg_contents": "On 2024-04-07 18:51:40 +0200, Daniel Gustafsson wrote:\n> > On 7 Apr 2024, at 18:28, Andres Freund <[email protected]> wrote:\n> > \n> > On 2024-04-07 16:52:05 +0200, Daniel Gustafsson wrote:\n> >>> On 7 Apr 2024, at 14:51, Andrew Dunstan <[email protected]> wrote:\n> >>> On 2024-04-06 Sa 20:49, Andres Freund wrote:\n> >> \n> >>>> That's probably unnecessary optimization, but it seems a tad silly to read an\n> >>>> entire, potentially sizable, file to just use the last 1k. Not sure if the way\n> >>>> slurp_file() uses seek supports negative ofsets, the docs read to me like that\n> >>>> may only be supported with SEEK_END.\n> >>> \n> >>> We should enhance slurp_file() so it uses SEEK_END if the offset is negative.\n> >> \n> >> Absolutely agree. 
Reading the thread I think Andres argues for not printing\n> >> anything at all in this case but we should support negative offsets anyways, it\n> >> will fort sure come in handy.\n> > \n> > I'm ok with printing path + some content or just the path.\n> \n> I think printing the last 512 bytes or so would be a good approach, I'll take\n> care of it later tonight. That would be a backpatchable change IMHO.\n\n+1 - thanks for quickly improving this.\n\n\n", "msg_date": "Sun, 7 Apr 2024 10:01:14 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cluster::restart dumping logs when stop fails" }, { "msg_contents": "> On 7 Apr 2024, at 18:51, Daniel Gustafsson <[email protected]> wrote:\n>> On 7 Apr 2024, at 18:28, Andres Freund <[email protected]> wrote:\n\n>> I'm ok with printing path + some content or just the path.\n> \n> I think printing the last 512 bytes or so would be a good approach, I'll take\n> care of it later tonight. That would be a backpatchable change IMHO.\n\nIn the end I opted for just printing the path to avoid introducing logic (at\nthis point in the cycle) for ensuring that a negative offset doesn't exceed the\nfilesize. Since it changes behavior I haven't backpatched it, but can do that\nif we think it's more of a fix than a change (I think it can be argued either\nway, I have no strong feelings).\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 8 Apr 2024 00:28:42 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cluster::restart dumping logs when stop fails" } ]
[ { "msg_contents": "While refactoring the Kerberos test module in preparation for adding \nlibpq encryption negotiation tests [1], I noticed that if the test \nscript die()s during setup, the whole test is marked as SKIPped rather \nthan failed. The cleanup END section is missing this trick:\n\n--- a/src/test/kerberos/t/001_auth.pl\n+++ b/src/test/kerberos/t/001_auth.pl\n@@ -203,7 +203,12 @@ system_or_bail $krb5kdc, '-P', $kdc_pidfile;\n\n END\n {\n+ # take care not to change the script's exit value\n+ my $exit_code = $?;\n+\n kill 'INT', `cat $kdc_pidfile` if defined($kdc_pidfile) && -f \n$kdc_pidfile;\n+\n+ $? = $exit_code;\n }\n\nThe PostgreSQL::Cluster module got that right, but this test and the \nLdapServer module didn't get the memo.\n\nAfter fixing that, the ldap tests started failing on my laptop:\n\n[12:45:28.997](0.054s) # setting up LDAP server\n# Checking port 59839\n# Found port 59839\n# Checking port 59840\n# Found port 59840\n# Running: /usr/sbin/slapd -f \n/home/heikki/git-sandbox/postgresql/build/testrun/ldap/001_auth/data/ldap-001_auth_j_WZ/slapd.conf \n-s0 -h ldap://localhost:59839 ldaps://localhost:59840\nCan't exec \"/usr/sbin/slapd\": No such file or directory at \n/home/heikki/git-sandbox/postgresql/src/test/perl/PostgreSQL/Test/Utils.pm \nline 349.\n[12:45:29.004](0.008s) Bail out! command \"/usr/sbin/slapd -f \n/home/heikki/git-sandbox/postgresql/build/testrun/ldap/001_auth/data/ldap-001_auth_j_WZ/slapd.conf \n-s0 -h ldap://localhost:59839 ldaps://localhost:59840\" exited with value 2\n\nThat's because I don't have 'slapd' installed. The test script it \nsupposed to check for that, and mark the test as SKIPped, but it's not \nreally doing that on Linux. Attached patch fixes that, and also makes \nthe error message a bit more precise, when the OpenLDAP installation is \nnot found.\n\nThere's a lot more we could do with that code that tries to find the \nOpenLDAP installation. It should probably be a configure/meson test. \nThis patch is just the minimum to keep this working after fixing the END \nblock.\n\n1st patch fixes the LDAP setup tests, and 2nd patch fixes the error \nhandling in the END blocks.\n\n[1] https://commitfest.postgresql.org/47/4742/\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Sun, 7 Apr 2024 13:19:01 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "LDAP & Kerberos test woes" }, { "msg_contents": "On 07/04/2024 13:19, Heikki Linnakangas wrote:\n> 1st patch fixes the LDAP setup tests, and 2nd patch fixes the error\n> handling in the END blocks.\n\nCommitted and backpatched these test fixes.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Sun, 7 Apr 2024 20:32:27 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LDAP & Kerberos test woes" } ]
[ { "msg_contents": "Hello Hackers,\n\nA question about the behavior of the JSON Path parser. The docs[1] have this to say about numbers:\n\n> Numeric literals in SQL/JSON path expressions follow JavaScript rules, which are different from both SQL and JSON in some minor details. For example, SQL/JSON path allows .1 and 1., which are invalid in JSON.\n\nIn other words, this is valid:\n\ndavid=# select '2.'::jsonpath;\n jsonpath \n----------\n 2\n\nBut this feature creates a bit of a conflict with the use of a dot for path expressions. Consider `0x2.p10`. How should that be parsed? As an invalid decimal expression (\"trailing junk after numeric literal”), or as a valid integer 2 followed by the path segment “p10”? Here’s the parser’s answer:\n\ndavid=# select '0x2.p10'::jsonpath;\n jsonpath \n-----------\n (2).\"p10\"\n\nSo it would seem that, other things being equal, a path key expression (`.foo`) is slightly higher precedence than a decimal expression. Is that intentional/correct?\n\nDiscovered while writing my Go lexer and throwing all of Go’s floating point literal examples[2] at it and comparing to the Postgres path parser. Curiously, this is only an issue for 0x/0o/0b numeric expressions; a decimal expression does not behave in the same way:\n\ndavid=# select '2.p10'::jsonpath;\nERROR: trailing junk after numeric literal at or near \"2.p\" of jsonpath input\nLINE 1: select '2.p10'::jsonpath;\n\nWhich maybe seems a bit inconsistent.\n\nThoughts on what the “correct” behavior should be?\n\nBest,\n\nDavid\n\n [1]: https://www.postgresql.org/docs/devel/datatype-json.html#DATATYPE-JSONPATH\n [2]: https://tip.golang.org/ref/spec#Floating-point_literals\n\n", "msg_date": "Sun, 7 Apr 2024 12:13:45 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "=?utf-8?Q?=E2=9D=93_JSON_Path_Dot_Precedence?=" }, { "msg_contents": "On 2024-04-07 18:13 +0200, David E. Wheeler wrote:\n> A question about the behavior of the JSON Path parser. The docs[1]\n> have this to say about numbers:\n> \n> > Numeric literals in SQL/JSON path expressions follow JavaScript\n> > rules, which are different from both SQL and JSON in some minor\n> > details. For example, SQL/JSON path allows .1 and 1., which are\n> > invalid in JSON.\n> \n> In other words, this is valid:\n> \n> david=# select '2.'::jsonpath;\n> jsonpath \n> ----------\n> 2\n> \n> But this feature creates a bit of a conflict with the use of a dot for\n> path expressions. Consider `0x2.p10`. How should that be parsed? As an\n> invalid decimal expression (\"trailing junk after numeric literal”), or\n> as a valid integer 2 followed by the path segment “p10”? Here’s the\n> parser’s answer:\n> \n> david=# select '0x2.p10'::jsonpath;\n> jsonpath \n> -----------\n> (2).\"p10\"\n> \n> So it would seem that, other things being equal, a path key expression\n> (`.foo`) is slightly higher precedence than a decimal expression. Is\n> that intentional/correct?\n\nI guess jsonpath assumes that hex, octal, and binary literals are\nintegers. So there's no ambiguity about any fractional part that might\nfollow.\n\n> Discovered while writing my Go lexer and throwing all of Go’s floating\n> point literal examples[2] at it and comparing to the Postgres path\n> parser. 
Curiously, this is only an issue for 0x/0o/0b numeric\n> expressions; a decimal expression does not behave in the same way:\n> \n> david=# select '2.p10'::jsonpath;\n> ERROR: trailing junk after numeric literal at or near \"2.p\" of jsonpath input\n> LINE 1: select '2.p10'::jsonpath;\n\nIt scans the decimal \"2.\" and then finds junks \"p10\".\n\nWorks with a full decimal:\n\n test=# select '3.14.p10'::jsonpath;\n jsonpath\n --------------\n (3.14).\"p10\"\n (1 row)\n\nAnd with extra whitespace to resolve the ambiguity:\n\n test=# select '2 .p10'::jsonpath;\n jsonpath\n -----------\n (2).\"p10\"\n (1 row)\n\n> Which maybe seems a bit inconsistent.\n> \n> Thoughts on what the “correct” behavior should be?\n\nI'd say a member accessor after a number doesn't really make sense\nbecause object keys are strings. One could argue that path \"$.2.p10\"\nshould match JSON '{\"2\":{\"p10\":42}}', i.e. the numeric accessor is\nconverted to a string. For example, in nodejs I can do:\n\n > var x = {2: {p10: 42}}\n > x[2].p10\n 42\n\nBut that's JavaScript, not JSON.\n\nAlso, is there even a use case for path \"0x2.p10\"? The path has to\nstart with \"$\" or (\"@\" in case of a filter expression), doesn't it? And\nit that case it doesn't parse:\n\n test=# select '$.0x2.p10'::jsonpath;\n ERROR: trailing junk after numeric literal at or near \".0x\" of jsonpath input\n LINE 1: select '$.0x2.p10'::jsonpath;\n\nEven with extra whitespace:\n\n test=# select '$ . 0x2 . p10'::jsonpath;\n ERROR: syntax error at or near \"0x2\" of jsonpath input\n LINE 1: select '$ . 0x2 . p10'::jsonpath;\n\nOr should it behave like an array accessor? Similar to:\n\n test=# select jsonb_path_query('[0,1,{\"p10\":42},3]', '$[0x2].p10'::jsonpath);\n jsonb_path_query\n ------------------\n 42\n (1 row)\n\n> [1]: https://www.postgresql.org/docs/devel/datatype-json.html#DATATYPE-JSONPATH\n> [2]: https://tip.golang.org/ref/spec#Floating-point_literals\n\n-- \nErik\n\n\n", "msg_date": "Sun, 7 Apr 2024 21:46:58 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: =?utf-8?B?4p2T?= JSON Path Dot Precedence" }, { "msg_contents": "On Apr 7, 2024, at 15:46, Erik Wienhold <[email protected]> wrote:\n\n> I guess jsonpath assumes that hex, octal, and binary literals are\n> integers. So there's no ambiguity about any fractional part that might\n> follow.\n\nYeah, that’s what the comment in the flex file says:\n\nhttps://github.com/postgres/postgres/blob/b4a71cf/src/backend/utils/adt/jsonpath_scan.l#L102-L105\n\n\n> Also, is there even a use case for path \"0x2.p10\"? The path has to\n> start with \"$\" or (\"@\" in case of a filter expression), doesn't it? And\n> it that case it doesn't parse:\n> \n> test=# select '$.0x2.p10'::jsonpath;\n> ERROR: trailing junk after numeric literal at or near \".0x\" of jsonpath input\n> LINE 1: select '$.0x2.p10'::jsonpath;\n> \n> Even with extra whitespace:\n> \n> test=# select '$ . 0x2 . p10'::jsonpath;\n> ERROR: syntax error at or near \"0x2\" of jsonpath input\n> LINE 1: select '$ . 0x2 . p10'::jsonpath;\n> \n> Or should it behave like an array accessor? Similar to:\n> \n> test=# select jsonb_path_query('[0,1,{\"p10\":42},3]', '$[0x2].p10'::jsonpath);\n> jsonb_path_query\n> ------------------\n> 42\n> (1 row)\n\nI too am curious why these parse successfully, but don’t appear to be useful.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Sun, 7 Apr 2024 18:11:32 -0400", "msg_from": "\"David E. 
Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "=?utf-8?Q?Re=3A_=E2=9D=93_JSON_Path_Dot_Precedence?=" }, { "msg_contents": "On 07.04.24 18:13, David E. Wheeler wrote:\n> Hello Hackers,\n> \n> A question about the behavior of the JSON Path parser. The docs[1] have this to say about numbers:\n> \n>> Numeric literals in SQL/JSON path expressions follow JavaScript rules, which are different from both SQL and JSON in some minor details. For example, SQL/JSON path allows .1 and 1., which are invalid in JSON.\n> \n> In other words, this is valid:\n> \n> david=# select '2.'::jsonpath;\n> jsonpath\n> ----------\n> 2\n> \n> But this feature creates a bit of a conflict with the use of a dot for path expressions. Consider `0x2.p10`. How should that be parsed? As an invalid decimal expression (\"trailing junk after numeric literal”), or as a valid integer 2 followed by the path segment “p10”? Here’s the parser’s answer:\n> \n> david=# select '0x2.p10'::jsonpath;\n> jsonpath\n> -----------\n> (2).\"p10\"\n> \n> So it would seem that, other things being equal, a path key expression (`.foo`) is slightly higher precedence than a decimal expression. Is that intentional/correct?\n\nI think the derivation would be like this:\n\n(I'm not sure what the top-level element would be, so let's start \nsomewhere in the middle ...)\n\n<JSON unary expression> ::= <JSON accessor expression>\n\n<JSON accessor expression> ::= <JSON path primary> <JSON accessor op>\n\n<JSON path primary> ::= <JSON path literal>\n\n<JSON accessor op> ::= <JSON member accessor>\n\n<JSON member accessor> ::= <period> <JSON path key name>\n\nSo the whole thing is\n\n<JSON path literal> <period> <JSON path key name>\n\nThe syntax of <JSON path literal> and <JSON path key name> is then \npunted to ECMAScript 5.1.\n\n0x2 is a HexIntegerLiteral. (There can be no dots in that.)\n\np10 is an Identifier.\n\nSo I think this is all correct.\n\n\n\n", "msg_date": "Wed, 10 Apr 2024 16:29:28 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?Q?Re=3A_=E2=9D=93_JSON_Path_Dot_Precedence?=" }, { "msg_contents": "On Apr 10, 2024, at 10:29, Peter Eisentraut <[email protected]> wrote:\n\n> So the whole thing is\n> \n> <JSON path literal> <period> <JSON path key name>\n> \n> The syntax of <JSON path literal> and <JSON path key name> is then punted to ECMAScript 5.1.\n> \n> 0x2 is a HexIntegerLiteral. (There can be no dots in that.)\n> \n> p10 is an Identifier.\n> \n> So I think this is all correct.\n\nThat makes sense, thanks. It’s just a little odd to me that the resulting path isn’t a query at all. To Erik’s point: what path can `'0x2.p10` even select?\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Wed, 10 Apr 2024 16:44:51 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "=?utf-8?Q?Re=3A_=E2=9D=93_JSON_Path_Dot_Precedence?=" }, { "msg_contents": "Hi, following up on some old threads.\n\n> On Apr 10, 2024, at 16:44, David E. Wheeler <[email protected]> wrote:\n> \n> That makes sense, thanks. It’s just a little odd to me that the resulting path isn’t a query at all. To Erik’s point: what path can `'0x2.p10` even select?\n\nI’m wondering whether the jsonpath parser should be updated to reject cases like this. I think it will always return no results. 
AFAICT, there’s no way to navigate to an object identifier immediately after a number:\n\ndavid=# select '0x2.p10'::jsonpath;\n jsonpath \n-----------\n (2).\"p10\"\n(1 row)\n\ndavid=# select jsonb_path_query(target => '[0, 1, {\"p10\": true}]', path => '0x2.p10');\n jsonb_path_query \n------------------\n(0 rows)\n\ndavid=# select jsonb_path_query(target => '{\"0x2\": {\"p10\": true}}', path => '0x2.p10');\n jsonb_path_query \n------------------\n(0 rows)\n\n\nIt’s just inherently meaningless. BTW, it’s not limited to hex numbers:\n\ndavid=# select '(2).p10'::jsonpath;\n jsonpath \n-----------\n (2).\"p10\"\n\nOTOH, maybe that’s a corner case we can live with.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Mon, 8 Jul 2024 11:27:36 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "=?utf-8?Q?Re=3A_=E2=9D=93_JSON_Path_Dot_Precedence?=" }, { "msg_contents": "On Mon, Jul 8, 2024 at 8:27 AM David E. Wheeler <[email protected]>\nwrote:\n\n> Hi, following up on some old threads.\n>\n> > On Apr 10, 2024, at 16:44, David E. Wheeler <[email protected]>\n> wrote:\n> >\n> > That makes sense, thanks. It’s just a little odd to me that the\n> resulting path isn’t a query at all. To Erik’s point: what path can\n> `'0x2.p10` even select?\n>\n> I’m wondering whether the jsonpath parser should be updated to reject\n> cases like this. I think it will always return no results. AFAICT, there’s\n> no way to navigate to an object identifier immediately after a number:\n>\n>\nIf we go down this path wouldn't the correct framing be: do not allow\naccessors after scalars ? The same argument applies to false/\"john\" and\nother scalar types since by definition none of them have subcomponents to\nbe accessed.\n\nThat said, the parser has a lax mode which somewhat implies it doesn't\nexpect the jsonpath type to perform much in the way of validation of the\nsemantic correctness of the encoded path expression.\n\nI like the idea of a smarter expression-holding type and would even wish to\nhave had this on day one. Retrofitting is less appealing. We document a\nsimilarity with regular expressions here where we, for better and worse,\nhave lived without a regexppath data type forever and leave it to the\nexecutor to tell the user their pattern is invalid. Leaning on that\nprecedence here makes accepting the status quo more reasonable. Though\nstrict/lax modes and, I think, variables, motivates me to put my vote\ntoward the \"do more validation\" group.\n\nDoes the standard even have a separate type here or is that our\nimplementation detail invention?\n\nDavid J.\n\nOn Mon, Jul 8, 2024 at 8:27 AM David E. Wheeler <[email protected]> wrote:Hi, following up on some old threads.\n\n> On Apr 10, 2024, at 16:44, David E. Wheeler <[email protected]> wrote:\n> \n> That makes sense, thanks. It’s just a little odd to me that the resulting path isn’t a query at all. To Erik’s point: what path can `'0x2.p10` even select?\n\nI’m wondering whether the jsonpath parser should be updated to reject cases like this. I think it will always return no results. AFAICT, there’s no way to navigate to an object identifier immediately after a number:If we go down this path wouldn't the correct framing be:  do not allow accessors after scalars ?  
The same argument applies to false/\"john\" and other scalar types since by definition none of them have subcomponents to be accessed.That said, the parser has a lax mode which somewhat implies it doesn't expect the jsonpath type to perform much in the way of validation of the semantic correctness of the encoded path expression.I like the idea of a smarter expression-holding type and would even wish to have had this on day one.  Retrofitting is less appealing.  We document a similarity with regular expressions here where we, for better and worse, have lived without a regexppath data type forever and leave it to the executor to tell the user their pattern is invalid.  Leaning on that precedence here makes accepting the status quo more reasonable.  Though strict/lax modes and, I think, variables, motivates me to put my vote toward the \"do more validation\" group.Does the standard even have a separate type here or is that our implementation detail invention?David J.", "msg_date": "Mon, 8 Jul 2024 09:05:30 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?Q?Re=3A_=E2=9D=93_JSON_Path_Dot_Precedence?=" }, { "msg_contents": "On Jul 8, 2024, at 12:05, David G. Johnston <[email protected]> wrote:\n\n> If we go down this path wouldn't the correct framing be: do not allow accessors after scalars ? The same argument applies to false/\"john\" and other scalar types since by definition none of them have subcomponents to be accessed.\n\nYes, excellent point.\n\n> That said, the parser has a lax mode which somewhat implies it doesn't expect the jsonpath type to perform much in the way of validation of the semantic correctness of the encoded path expression.\n\nMy understanding is that lax mode means it ignores where the JSON doesn’t abide by expectations of the path expression, not that the path parsing is lax.\n\n> I like the idea of a smarter expression-holding type and would even wish to have had this on day one. Retrofitting is less appealing. We document a similarity with regular expressions here where we, for better and worse, have lived without a regexppath data type forever and leave it to the executor to tell the user their pattern is invalid. Leaning on that precedence here makes accepting the status quo more reasonable. Though strict/lax modes and, I think, variables, motivates me to put my vote toward the \"do more validation\" group.\n\nThis feels different from a documented difference in behavior as an implementation choice, like path regex vs. Spencer. In this case, the expression is technically meaningless, but there’s never so much as an error thrown.\n\n> Does the standard even have a separate type here or is that our implementation detail invention?\n\nSorry, separate type for what?\n\nBest,\n\nDavid\n\n\n\n\n", "msg_date": "Mon, 8 Jul 2024 12:11:58 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "=?utf-8?Q?Re=3A_=E2=9D=93_JSON_Path_Dot_Precedence?=" }, { "msg_contents": "On Mon, Jul 8, 2024 at 9:12 AM David E. Wheeler <[email protected]>\nwrote:\n\n> On Jul 8, 2024, at 12:05, David G. Johnston <[email protected]>\n> wrote:\n>\n> > Does the standard even have a separate type here or is that our\n> implementation detail invention?\n>\n> Sorry, separate type for what?\n>\n>\nWe created a data type named: jsonpath. 
Does the standard actually have\nthat data type and defined parsing behavior or does it just have functions\nwhere one of the inputs is text whose contents are a path expression?\n\nDavid J.\n\nOn Mon, Jul 8, 2024 at 9:12 AM David E. Wheeler <[email protected]> wrote:On Jul 8, 2024, at 12:05, David G. Johnston <[email protected]> wrote:\n> Does the standard even have a separate type here or is that our implementation detail invention?\n\nSorry, separate type for what?We created a data type named: jsonpath.  Does the standard actually have that data type and defined parsing behavior or does it just have functions where one of the inputs is text whose contents are a path expression?David J.", "msg_date": "Mon, 8 Jul 2024 09:17:09 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?Q?Re=3A_=E2=9D=93_JSON_Path_Dot_Precedence?=" }, { "msg_contents": "On Jul 8, 2024, at 12:17, David G. Johnston <[email protected]> wrote:\n\n> We created a data type named: jsonpath. Does the standard actually have that data type and defined parsing behavior or does it just have functions where one of the inputs is text whose contents are a path expression?\n\nAh, got it.\n\nD\n\n\n\n", "msg_date": "Mon, 8 Jul 2024 12:54:48 -0400", "msg_from": "\"David E. Wheeler\" <[email protected]>", "msg_from_op": true, "msg_subject": "=?utf-8?Q?Re=3A_=E2=9D=93_JSON_Path_Dot_Precedence?=" } ]
[ { "msg_contents": "Hi,\n\nSorry, without any connection with the technical part of the thread.\nBut I couldn't help but record this, and congratulate Andres Freund, for\nthe excellent work.\nIt's not every day that a big press, in Brazil or around the world,\npublishes something related to technology people.\n\nToday I came across a publication in Brazil's most widely circulated\nnewspaper, calling him a hero for preventing the attack.\n\nprogramador-conserta-falha-no-linux-evita-ciberataque-e-vira-heroi-da-internet.shtml\n<https://www1.folha.uol.com.br/tec/2024/04/programador-conserta-falha-no-linux-evita-ciberataque-e-vira-heroi-da-internet.shtml>\n\nCongratulations Andres Freund, nice work.\n\nbest regards,\nRanier Vilela\n\nHi,Sorry, without any connection with the technical part of the thread.But I couldn't help but record this, and congratulate Andres Freund, for the excellent work.It's not every day that a big press, in Brazil or around the world, publishes something related to technology people.Today I came across a publication in Brazil's most widely circulated newspaper, calling him a hero for preventing the attack.programador-conserta-falha-no-linux-evita-ciberataque-e-vira-heroi-da-internet.shtmlCongratulations Andres Freund, nice work.best regards,Ranier Vilela", "msg_date": "Sun, 7 Apr 2024 17:44:53 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Security lessons from liblzma" } ]
[ { "msg_contents": "Add tests for libpq gssencmode and sslmode options\n\nTest all combinations of gssencmode, sslmode, whether the server\nsupports SSL and/or GSSAPI encryption, and whether they are accepted\nby pg_hba.conf. This is in preparation for refactoring that code in\nlibpq, and for adding a new option for \"direct SSL\" connections, which\nadds another dimension to the logic.\n\nIf we add even more options in the future, testing all combinations\nwill become unwieldy and we'll need to rethink this, but for now an\nexhaustive test is nice.\n\nAuthor: Heikki Linnakangas, Matthias van de Meent\nReviewed-by: Jacob Champion\nDiscussion: https://www.postgresql.org/message-id/[email protected]\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/1169920ff77025550718b90a5cafc6849875f43f\n\nModified Files\n--------------\n.cirrus.tasks.yml | 2 +-\nsrc/test/libpq_encryption/Makefile | 25 +\nsrc/test/libpq_encryption/README | 31 ++\nsrc/test/libpq_encryption/meson.build | 18 +\n.../libpq_encryption/t/001_negotiate_encryption.pl | 548 +++++++++++++++++++++\nsrc/test/meson.build | 1 +\n6 files changed, 624 insertions(+), 1 deletion(-)", "msg_date": "Sun, 07 Apr 2024 23:50:08 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql: Add tests for libpq gssencmode and sslmode options" }, { "msg_contents": "On 08.04.24 01:50, Heikki Linnakangas wrote:\n> Add tests for libpq gssencmode and sslmode options\n\nWhy aren't these tests at \nsrc/interfaces/libpq/t/nnn_negotiate_encryption.pl ?\n\n> Test all combinations of gssencmode, sslmode, whether the server\n> supports SSL and/or GSSAPI encryption, and whether they are accepted\n> by pg_hba.conf. This is in preparation for refactoring that code in\n> libpq, and for adding a new option for \"direct SSL\" connections, which\n> adds another dimension to the logic.\n> \n> If we add even more options in the future, testing all combinations\n> will become unwieldy and we'll need to rethink this, but for now an\n> exhaustive test is nice.\n> \n> Author: Heikki Linnakangas, Matthias van de Meent\n> Reviewed-by: Jacob Champion\n> Discussion: https://www.postgresql.org/message-id/[email protected]\n> \n> Branch\n> ------\n> master\n> \n> Details\n> -------\n> https://git.postgresql.org/pg/commitdiff/1169920ff77025550718b90a5cafc6849875f43f\n> \n> Modified Files\n> --------------\n> .cirrus.tasks.yml | 2 +-\n> src/test/libpq_encryption/Makefile | 25 +\n> src/test/libpq_encryption/README | 31 ++\n> src/test/libpq_encryption/meson.build | 18 +\n> .../libpq_encryption/t/001_negotiate_encryption.pl | 548 +++++++++++++++++++++\n> src/test/meson.build | 1 +\n> 6 files changed, 624 insertions(+), 1 deletion(-)\n> \n\n\n\n", "msg_date": "Wed, 10 Apr 2024 16:48:11 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add tests for libpq gssencmode and sslmode options" }, { "msg_contents": "On 10/04/2024 17:48, Peter Eisentraut wrote:\n> On 08.04.24 01:50, Heikki Linnakangas wrote:\n>> Add tests for libpq gssencmode and sslmode options\n> \n> Why aren't these tests at\n> src/interfaces/libpq/t/nnn_negotiate_encryption.pl ?\n\nTo be honest, it never occurred to me. 
It started out as extra tests \nunder src/test/ssl/, and when I decided to move them out to its own \nmodule, I didn't think of moving them to src/interfaces/libpq/t/.\n\nI will move it, barring any objections or better ideas.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 10 Apr 2024 18:54:57 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add tests for libpq gssencmode and sslmode options" }, { "msg_contents": "(moved to pgsql-hackers, change subject)\n\nOn 10/04/2024 18:54, Heikki Linnakangas wrote:\n> On 10/04/2024 17:48, Peter Eisentraut wrote:\n>> On 08.04.24 01:50, Heikki Linnakangas wrote:\n>>> Add tests for libpq gssencmode and sslmode options\n>>\n>> Why aren't these tests at\n>> src/interfaces/libpq/t/nnn_negotiate_encryption.pl ?\n> \n> To be honest, it never occurred to me. It started out as extra tests\n> under src/test/ssl/, and when I decided to move them out to its own\n> module, I didn't think of moving them to src/interfaces/libpq/t/.\n> \n> I will move it, barring any objections or better ideas.\n\nMoved.\n\nI also added an extra check for PG_TEST_EXTRA=kerberos, so that the \ntests that require a MIT Kerberos installation are only run if \nPG_TEST_EXTRA=kerberos is specified. That seems prudent; it seems \nunlikely that you would want to run libpq_encryption tests with Kerberos \ntests included, but not the main kerberos tests. If you specify \nPG_TEST_EXTRA=libpq_encryption, but not 'kerberos', it's probably \nbecause you don't have an MIT Kerberos installation on your system.\n\nI added documentation for the new PG_TEST_EXTRA=libpq_encryption option, \nI missed that earlier, with a note on the above interaction with 'kerberos'.\n\n\nAs we accumulate more PG_TEST_EXTRA options, I think we should \ncategorize the tests by the capabilities they need or the risk \nassociated, rather than by test names. Currently we have:\n\n- kerberos: Requires MIT Kerberos installation and opens TCP/IP listen \nsockets\n- ldap: Requires OpenLDAP installation and opens TCP/IP listen sockets\n- ssl: Opens TCP/IP listen sockets.\n- load_balance: Requires editing the system 'hosts' file and opens \nTCP/IP listen sockets.\n- libpq_encryption: Opens TCP/IP listen sockets. For the GSSAPI tests, \nrequires MIT Kerberos installation\n- wal_consistency_checking: is resource intensive\n- xid_wraparound: is resource intensive\n\nThere are a few clear themes here:\n\n- tests that open TCP/IP listen sockets\n- tests that require OpenLDAP installation\n- tests that require MIT Kerberos installation\n- tests that require editing 'hosts' file\n- tests that are resource intensive\n\nWe could have PG_TEST_EXTRA options that match those themes, and \nenable/disable the individual tests based on those requirements. For \nexample, if you're on a single-user system and have no issue with \nopening TCP/IP listen sockets, you would specify \n\"PG_TEST_EXTRA=tcp-listen\", and all the tests that need to open TCP/IP \nlisten sockets would run. 
Also it would be nice to have autoconf/meson \ntests for the presence of OpenLDAP / MIT Kerberos installations, instead \nof having to enable/disable them with PG_TEST_EXTRA.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n", "msg_date": "Fri, 12 Apr 2024 20:03:03 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "PG_TEST_EXTRAs by theme rather than test name (Re: pgsql: Add tests\n for libpq gssencmode and sslmode options)" }, { "msg_contents": "On 12.04.24 19:03, Heikki Linnakangas wrote:\n> As we accumulate more PG_TEST_EXTRA options, I think we should \n> categorize the tests by the capabilities they need or the risk \n> associated, rather than by test names.\n\nThis was recently discussed at [0], without success.\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/CAN55FZ1zPuyoj0KtTOZ_oTsqdVd-SCRAb2RP7c-z0jWPneu76g%40mail.gmail.com\n\n\n\n", "msg_date": "Sun, 14 Apr 2024 10:16:24 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_TEST_EXTRAs by theme rather than test name (Re: pgsql: Add\n tests for libpq gssencmode and sslmode options)" } ]
[ { "msg_contents": "Today's Coverity run produced this:\n\n/srv/coverity/git/pgsql-git/postgresql/src/include/lib/simplehash.h: 1138 in bh_nodeidx_stat()\n1132 avg_collisions = 0;\n1133 }\n1134 \n1135 sh_log(\"size: \" UINT64_FORMAT \", members: %u, filled: %f, total chain: %u, max chain: %u, avg chain: %f, total_collisions: %u, max_collisions: %u, avg_collisions: %f\",\n1136 tb->size, tb->members, fillfactor, total_chain_length, max_chain_length, avg_chain_length,\n1137 total_collisions, max_collisions, avg_collisions);\n>>> CID 1596268: Resource leaks (RESOURCE_LEAK)\n>>> Variable \"collisions\" going out of scope leaks the storage it points to.\n1138 }\n1139 \n1140 #endif /* SH_DEFINE */\n\nI have no idea why we didn't see this warning before --- but AFAICS\nit's quite right, and it looks like a nontrivial amount of memory\ncould be at stake:\n\n uint32 *collisions = (uint32 *) palloc0(tb->size * sizeof(uint32));\n\nI realize this function is only debug support, but wouldn't it\nbe appropriate to pfree(collisions) before exiting?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Apr 2024 21:03:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Coverity complains about simplehash.h's SH_STAT()" }, { "msg_contents": "Hi,\n\nOn 2024-04-07 21:03:53 -0400, Tom Lane wrote:\n> Today's Coverity run produced this:\n> \n> /srv/coverity/git/pgsql-git/postgresql/src/include/lib/simplehash.h: 1138 in bh_nodeidx_stat()\n> 1132 avg_collisions = 0;\n> 1133 }\n> 1134 \n> 1135 sh_log(\"size: \" UINT64_FORMAT \", members: %u, filled: %f, total chain: %u, max chain: %u, avg chain: %f, total_collisions: %u, max_collisions: %u, avg_collisions: %f\",\n> 1136 tb->size, tb->members, fillfactor, total_chain_length, max_chain_length, avg_chain_length,\n> 1137 total_collisions, max_collisions, avg_collisions);\n> >>> CID 1596268: Resource leaks (RESOURCE_LEAK)\n> >>> Variable \"collisions\" going out of scope leaks the storage it points to.\n> 1138 }\n> 1139 \n> 1140 #endif /* SH_DEFINE */\n> \n> I have no idea why we didn't see this warning before --- but AFAICS\n> it's quite right, and it looks like a nontrivial amount of memory\n> could be at stake:\n> \n> uint32 *collisions = (uint32 *) palloc0(tb->size * sizeof(uint32));\n> \n> I realize this function is only debug support, but wouldn't it\n> be appropriate to pfree(collisions) before exiting?\n\nIt indeed looks like that memory should be freed. Very odd that coverity\nstarted to complain about that just now. If coverity had started to complain\nafter da41d71070d, I'd understand, but that was ~5 years ago.\n\nI can't see a way it could hurt in the back branches, so I'm inclined to\nbackpatch the pfree?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 7 Apr 2024 18:31:43 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Coverity complains about simplehash.h's SH_STAT()" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-04-07 21:03:53 -0400, Tom Lane wrote:\n>> I realize this function is only debug support, but wouldn't it\n>> be appropriate to pfree(collisions) before exiting?\n\n> It indeed looks like that memory should be freed. Very odd that coverity\n> started to complain about that just now. If coverity had started to complain\n> after da41d71070d, I'd understand, but that was ~5 years ago.\n\nIf we recently added a new simplehash caller, Coverity might see that\nas a new bug. 
Still doesn't explain why nothing about the old callers.\nWe might have over-hastily dismissed such warnings as uninteresting,\nperhaps.\n\n> I can't see a way it could hurt in the back branches, so I'm inclined to\n> backpatch the pfree?\n\n+1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 07 Apr 2024 21:41:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Coverity complains about simplehash.h's SH_STAT()" }, { "msg_contents": "On 2024-04-07 21:41:23 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > I can't see a way it could hurt in the back branches, so I'm inclined to\n> > backpatch the pfree?\n> \n> +1\n\nDone\n\n\n", "msg_date": "Sun, 7 Apr 2024 19:10:16 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Coverity complains about simplehash.h's SH_STAT()" } ]
[ { "msg_contents": "Hello,\n\nDuring the review of using extended statistics to improve join estimates\n[1], I found some code level optimization opportunities which apply to\nexisting code as well. To make the discussion easier, open this new\nthread.\n\nI don't know how to name the thread name, just use the patch 1 for the\nnaming.\n\ncommit d228d9734e70b4f43ad824d736fb1279d2aad5fc (HEAD -> misc_stats)\nAuthor: yizhi.fzh <[email protected]>\nDate: Mon Apr 8 11:43:37 2024 +0800\n\n Fast path for empty clauselist in clauselist_selectivity_ext\n \n It should be common in the real life, for example:\n \n SELECT * FROM t1, t2 WHERE t1.a = t2.a AND t1.a > 3;\n \n clauses == NIL at the scan level of t2.\n \n This would also make some debug life happier.\n\ncommit e852ce631f9348d5d29c8a53090ee71f7253767c\nAuthor: yizhi.fzh <[email protected]>\nDate: Mon Apr 8 11:13:57 2024 +0800\n\n Improve FunctionCall2Coll with FunctionCallInvoke\n \n If a FunctionCall2Coll is called multi times with the same FmgrInfo,\n turning it into FunctionCallInvoke will be helpful since the later one\n can use the shared FunctionCallInfo.\n \n There are many other places like GITSTATE which have the similar\n chances, but I'd see if such changes is necessary at the first\n place.\n\n\n[1]\nhttps://www.postgresql.org/message-id/c8c0ff31-3a8a-7562-bbd3-78b2ec65f16c%40enterprisedb.com \n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 08 Apr 2024 12:25:00 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Replace FunctionCall2Coll with FunctionCallInvoke" }, { "msg_contents": "On Mon, Apr 08, 2024 at 12:25:00PM +0800, Andy Fan wrote:\n> During the review of using extended statistics to improve join estimates\n> [1], I found some code level optimization opportunities which apply to\n> existing code as well. To make the discussion easier, open this new\n> thread.\n\nIs that measurable?\n--\nMichael", "msg_date": "Mon, 8 Apr 2024 14:26:30 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replace FunctionCall2Coll with FunctionCallInvoke" }, { "msg_contents": "\n\nHello Michael, \n\n> [[PGP Signed Part:Undecided]]\n> On Mon, Apr 08, 2024 at 12:25:00PM +0800, Andy Fan wrote:\n>> During the review of using extended statistics to improve join estimates\n>> [1], I found some code level optimization opportunities which apply to\n>> existing code as well. To make the discussion easier, open this new\n>> thread.\n>\n> Is that measurable?\n\nI didn't spent time on testing it. Compared with if the improvement is\nmeasureable, I'm more interested with if it is better than before or\nnot. As for measurable respect, I'm with the idea at [1]. Do you think\nthe measurable matter? \n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BTgmoZJ0a_Dcn%2BST4YSeSrLnnmajmcsi7ZvEpgkKNiF0SwBuw%40mail.gmail.com \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Mon, 08 Apr 2024 13:36:20 +0800", "msg_from": "Andy Fan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replace FunctionCall2Coll with FunctionCallInvoke" } ]
[ { "msg_contents": "Hello.\n\nI noticed that NLS doesn't work for pg_combinebackup. The cause is\nthat the tool forgets to call set_pglocale_pgservice().\n\nThis issue is fixed by the following chage.\n\ndiff --git a/src/bin/pg_combinebackup/pg_combinebackup.c b/src/bin/pg_combinebackup/pg_combinebackup.c\nindex 1b07ca3fb6..2788c78fdd 100644\n--- a/src/bin/pg_combinebackup/pg_combinebackup.c\n+++ b/src/bin/pg_combinebackup/pg_combinebackup.c\n@@ -154,6 +154,7 @@ main(int argc, char *argv[])\n \n \tpg_logging_init(argv[0]);\n \tprogname = get_progname(argv[0]);\n+\tset_pglocale_pgservice(argv[0], PG_TEXTDOMAIN(\"pg_combinebackup\"));\n \thandle_help_version_opts(argc, argv, progname, help);\n \n \tmemset(&opt, 0, sizeof(opt));\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 08 Apr 2024 16:27:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "NLS doesn't work for pg_combinebackup" }, { "msg_contents": "At Mon, 08 Apr 2024 16:27:02 +0900 (JST), Kyotaro Horiguchi <[email protected]> wrote in \n> Hello.\n> \n> I noticed that NLS doesn't work for pg_combinebackup. The cause is\n> that the tool forgets to call set_pglocale_pgservice().\n> \n> This issue is fixed by the following chage.\n> \n> diff --git a/src/bin/pg_combinebackup/pg_combinebackup.c b/src/bin/pg_combinebackup/pg_combinebackup.c\n> index 1b07ca3fb6..2788c78fdd 100644\n> --- a/src/bin/pg_combinebackup/pg_combinebackup.c\n> +++ b/src/bin/pg_combinebackup/pg_combinebackup.c\n> @@ -154,6 +154,7 @@ main(int argc, char *argv[])\n> \n> \tpg_logging_init(argv[0]);\n> \tprogname = get_progname(argv[0]);\n> +\tset_pglocale_pgservice(argv[0], PG_TEXTDOMAIN(\"pg_combinebackup\"));\n> \thandle_help_version_opts(argc, argv, progname, help);\n> \n> \tmemset(&opt, 0, sizeof(opt));\n\nForgot to mention, but pg_walsummary has the same issue.\n\ndiff --git a/src/bin/pg_walsummary/pg_walsummary.c b/src/bin/pg_walsummary/pg_walsummary.c\nindex 5e41b376d7..daf6cd14ce 100644\n--- a/src/bin/pg_walsummary/pg_walsummary.c\n+++ b/src/bin/pg_walsummary/pg_walsummary.c\n@@ -67,6 +67,7 @@ main(int argc, char *argv[])\n \n \tpg_logging_init(argv[0]);\n \tprogname = get_progname(argv[0]);\n+\tset_pglocale_pgservice(argv[0], PG_TEXTDOMAIN(\"pg_walsummary\"));\n \thandle_help_version_opts(argc, argv, progname, help);\n \n \t/* process command-line options */\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 08 Apr 2024 16:31:05 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NLS doesn't work for pg_combinebackup" }, { "msg_contents": "On Mon, Apr 08, 2024 at 04:27:02PM +0900, Kyotaro Horiguchi wrote:\n> I noticed that NLS doesn't work for pg_combinebackup. The cause is\n> that the tool forgets to call set_pglocale_pgservice().\n> \n> This issue is fixed by the following chage.\n\nIndeed. 
Good catch.\n--\nMichael", "msg_date": "Mon, 8 Apr 2024 16:35:08 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NLS doesn't work for pg_combinebackup" }, { "msg_contents": "On Mon, Apr 08, 2024 at 04:31:05PM +0900, Kyotaro Horiguchi wrote:\n>> diff --git a/src/bin/pg_combinebackup/pg_combinebackup.c b/src/bin/pg_combinebackup/pg_combinebackup.c\n>> index 1b07ca3fb6..2788c78fdd 100644\n>> +++ b/src/bin/pg_combinebackup/pg_combinebackup.c\n> +++ b/src/bin/pg_walsummary/pg_walsummary.c\n> \tprogname = get_progname(argv[0]);\n> +\tset_pglocale_pgservice(argv[0], PG_TEXTDOMAIN(\"pg_walsummary\"));\n\nI've checked the whole tree, and the two you are pointing at are the\nonly incorrect paths. So applied, thanks!\n--\nMichael", "msg_date": "Tue, 9 Apr 2024 15:00:27 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NLS doesn't work for pg_combinebackup" }, { "msg_contents": "At Tue, 9 Apr 2024 15:00:27 +0900, Michael Paquier <[email protected]> wrote in \n> I've checked the whole tree, and the two you are pointing at are the\n> only incorrect paths. So applied, thanks!\n\nThank you for cross-checking and committing!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 09 Apr 2024 16:45:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NLS doesn't work for pg_combinebackup" } ]
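For context, the convention that both fixes above restore is the standard startup sequence for frontend tools under src/bin, where set_pglocale_pgservice() has to run before any translatable message is looked up. A stripped-down sketch (not the real pg_combinebackup.c; the help text and the elided option handling are placeholders) looks roughly like this, and the same ordering appears in the pg_walsummary hunk quoted above:

```c
#include "postgres_fe.h"

#include "common/logging.h"
#include "fe_utils/option_utils.h"

static void
help(const char *progname)
{
	printf(_("%s combines incremental backups (sketch only).\n"), progname);
}

int
main(int argc, char *argv[])
{
	const char *progname;

	pg_logging_init(argv[0]);
	progname = get_progname(argv[0]);

	/* The missing call: point gettext at this program's message catalog. */
	set_pglocale_pgservice(argv[0], PG_TEXTDOMAIN("pg_combinebackup"));

	handle_help_version_opts(argc, argv, progname, help);

	/* ... real option parsing and the tool's actual work follow here ... */

	return 0;
}
```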
[ { "msg_contents": "Fix the intermittent buildfarm failures in 040_standby_failover_slots_sync.\n\nIt is possible that even if the primary waits for the subscriber to catch\nup and then disables the subscription, the XLOG_RUNNING_XACTS record gets\ninserted between the two steps by bgwriter and walsender processes it.\nThis can move the restart_lsn of the corresponding slot in an\nunpredictable way which further leads to slot sync failure.\n\nTo ensure predictable behaviour, we drop the subscription and manually\ncreate the slot before the test. The other idea we discussed to write a\npredictable test is to use injection points to control the bgwriter\nlogging XLOG_RUNNING_XACTS but that needs more analysis. We can add a\nseparate test using injection points.\n\nPer buildfarm\n\nAuthor: Hou Zhijie\nReviewed-by: Amit Kapila, Shveta Malik\nDiscussion: https://postgr.es/m/CAA4eK1JD8h_XLRsK_o_Xh=5MhTzm+6d4Cb4_uPgFJ2wSQDah=g@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/6f3d8d5e7cc2f2e2367cd6da6f8affe98d1f5729\n\nModified Files\n--------------\n.../recovery/t/040_standby_failover_slots_sync.pl | 62 +++++++++-------------\n1 file changed, 24 insertions(+), 38 deletions(-)", "msg_date": "Mon, 08 Apr 2024 08:03:59 +0000", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql: Fix the intermittent buildfarm failures in\n 040_standby_failover_" }, { "msg_contents": "On Mon, Apr 8, 2024 at 4:04 AM Amit Kapila <[email protected]> wrote:\n> Fix the intermittent buildfarm failures in 040_standby_failover_slots_sync.\n>\n> It is possible that even if the primary waits for the subscriber to catch\n> up and then disables the subscription, the XLOG_RUNNING_XACTS record gets\n> inserted between the two steps by bgwriter and walsender processes it.\n> This can move the restart_lsn of the corresponding slot in an\n> unpredictable way which further leads to slot sync failure.\n>\n> To ensure predictable behaviour, we drop the subscription and manually\n> create the slot before the test. The other idea we discussed to write a\n> predictable test is to use injection points to control the bgwriter\n> logging XLOG_RUNNING_XACTS but that needs more analysis. We can add a\n> separate test using injection points.\n\nHi,\n\nI'm concerned that the failover slots feature may not be in\nsufficiently good shape for us to ship it. 
Since this test file was\nintroduced at the end of January, it's been touched by a total of 16\ncommits, most of which seem to be trying to get it to pass reliably:\n\n6f3d8d5e7c Fix the intermittent buildfarm failures in\n040_standby_failover_slots_sync.\n6f132ed693 Allow synced slots to have their inactive_since.\n2ec005b4e2 Ensure that the sync slots reach a consistent state after\npromotion without losing data.\n6ae701b437 Track invalidation_reason in pg_replication_slots.\nbf279ddd1c Introduce a new GUC 'standby_slot_names'.\ndef0ce3370 Fix BF failure introduced by commit b3f6b14cf4.\nb3f6b14cf4 Fixups for commit 93db6cbda0.\nd13ff82319 Fix BF failure in commit 93db6cbda0.\n93db6cbda0 Add a new slot sync worker to synchronize logical slots.\n801792e528 Improve ERROR/LOG messages added by commits ddd5f4f54a and\n7a424ece48.\nb7bdade6a4 Disable autovacuum on primary in\n040_standby_failover_slots_sync test.\nd9e225f275 Change the LOG level in 040_standby_failover_slots_sync.pl to DEBUG2.\n9bc1eee988 Another try to fix BF failure introduced in commit ddd5f4f54a.\nbd8fc1677b Fix BF introduced in commit ddd5f4f54a.\nddd5f4f54a Add a slot synchronization function.\n776621a5e4 Add a failover option to subscriptions.\n\nIt's not really the test failures themselves that concern me here, so\nmuch as the possibility of users who try to make use of this feature\nhaving a similar amount of difficulty getting it to work reliably. The\ntest case seems to be taking more and more elaborate precautions to\nprevent incidental things from breaking the feature. But users won't\nlike this feature very much if they have to take elaborate precautions\nto get it to work in the real world. Is there a reason to believe that\nall of this stabilization work is addressing artificial problems that\nwon't inconvenience real users, or should we be concerned that the\nfeature itself is going to be difficult to use effectively?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Apr 2024 11:53:48 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Fix the intermittent buildfarm failures in\n 040_standby_failover_" }, { "msg_contents": "On Mon, Apr 8, 2024 at 9:24 PM Robert Haas <[email protected]> wrote:\n>\n> Hi,\n>\n> I'm concerned that the failover slots feature may not be in\n> sufficiently good shape for us to ship it. Since this test file was\n> introduced at the end of January, it's been touched by a total of 16\n> commits, most of which seem to be trying to get it to pass reliably:\n>\n\nAmong the 16 commits, there are 6 feature commits (3 of which are\nanother feature that has interaction with this feature), 1 code\nimprovement commit, 2 bug fixes commit, and 7 test stabilization\ncommits. See [1] for the categorization of commits. Now, among these 7\ntest stabilization commits (which seems to be the main source of your\nconcern), 4 are due to the reason that we are expecting the slots to\nbe synced in one function call with pg_sync_replication_slots() which\nsometimes didn't happen when there is an unexpected WAL generation say\nby bgwriter (XLOG_RUNNING_XACTS ) or an extra XID generation by\nauto(analyze). This shouldn't be a problem in practice where users are\nexpected to use slotsync worker which will keep syncing slots at\nregular intervals. All these required stabilizations are in two of the\ntests involving the use of the function pg_sync_replication_slots() to\nsync slots. 
We can think of getting rid of this function and relying\nonly on slotsync worker functionality but I find this function quite\nconvenient for debugging and in some cases writing targeted tests\n(though it caused instability in tests). We can provide more\ninformation in docs for the use of this API.\n\nThe other stabilization fixes are as follows: 1 is a Perl scripting\nissue to check LOGs, 1 is to increase the DEBUG level to catch more\ninformation for failures, and 1 is a test setup miss which is already\ndone in other similar tests.\n\nHaving said that, I have kept an eye on the reports (-hackers, -bugs,\netc.) related to this feature and if we find that this feature is\ninconvenient to use then we should consider either improving it, if\npossible, or reverting it.\n\n[1]:\nNew features:\n6f132ed693 Allow synced slots to have their inactive_since.\n6ae701b437 Track invalidation_reason in pg_replication_slots.\nbf279ddd1c Introduce a new GUC 'standby_slot_names'.\n93db6cbda0 Add a new slot sync worker to synchronize logical slots.\nddd5f4f54a Add a slot synchronization function.\n776621a5e4 Add a failover option to subscriptions.\n\nCode improvement\n801792e528 Improve ERROR/LOG messages added by commits ddd5f4f54a and\n7a424ece48.\n\nBug fixes:\n2ec005b4e2 Ensure that the sync slots reach a consistent state after\npromotion without losing data.\nb3f6b14cf4 Fixups for commit 93db6cbda0.\n\nStabilize test cases:\ndef0ce3370 Fix BF failure introduced by commit b3f6b14cf4.\nb7bdade6a4 Disable autovacuum on primary in\n040_standby_failover_slots_sync test.\nd9e225f275 Change the LOG level in 040_standby_failover_slots_sync.pl to DEBUG2.\n9bc1eee988 Another try to fix BF failure introduced in commit ddd5f4f54a.\nbd8fc1677b Fix BF introduced in commit ddd5f4f54a.\nd13ff82319 Fix BF failure in commit 93db6cbda0.\n6f3d8d5e7c Fix the intermittent buildfarm failures in\n040_standby_failover_slots_sync.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 9 Apr 2024 07:37:45 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql: Fix the intermittent buildfarm failures in\n 040_standby_failover_" } ]
[ { "msg_contents": "Over in [0] I asked whether it would be worthwhile converting all our README\nfiles to Markdown, and since it wasn't met with pitchforks I figured it would\nbe an interesting excercise to see what it would take (my honest gut feeling\nwas that it would be way too intrusive). Markdown does brings a few key\nfeatures however so IMHO it's worth attempting to see:\n\n* New developers are very used to reading/writing it\n* Using a defined format ensures some level of consistency\n* Many users and contributors new *as well as* old like reading documentation\n nicely formatted in a browser\n* The documentation now prints really well\n* pandoc et.al can be used to render nice looking PDF's\n* All the same benefits as discussed in [0]\n\nThe plan was to follow Grubers original motivation for Markdown closely:\n\n\t\"The idea is that a Markdown-formatted document should be publishable\n\tas-is, as plain text, without looking like it’s been marked up with\n\ttags or formatting instructions.\"\n\nThis translates to making the least amount of changes to achieve a) retained\nplain text readability at todays level, b) proper Markdown rendering, not\nlooking like text files in a HTML window, and c) absolutly no reflows and\nminimal impact on git blame.\n\nTurns out we've been writing Markdown for quite some time, so it really didn't\ntake much at all. I renamed all the files .md and with almost just changing\nwhitespace achieved what I think is pretty decent results. The rendered\nversions can be seen by browsing the tree below:\n\n\thttps://github.com/danielgustafsson/postgres/tree/markdown\n\nThe whitespace changes are mostly making sure that code (anything which is to\nbe rendered without styling really) is indented from column 0 with tab or 4\nspaces (depending on what was already used in the file) and has a blank line\nbefore and after. This is the bulk of the changes. The non-whitespace changes\nintroduced are:\n\n* Section/subsection markers: Basically all our files underline the main\n section with ==== and subsections with ----. This renders perfectly well with. \n Markdown so add these to the few that didn't have them.\n\n* The SSL readme starts a sentence with \">\" which renders as quote, removing\n that fixes rendering and makes the plain text version better IMHO.\n\n* In the regex README there are two file references using * as a wildcard, but\n the combination of the two makes Markdown render the text between them in\n italics. Wrapping these in backticks solves it, but I'm not a fan since we\n don't do that elsewhere. A solution which avoids backticks would ne nice.\n\n* Some bulletlists characters are changed to match the syntax, which also makes\n them more consistent with all the other README files in the tree. In one\n case (SSL test readme) there were no bullets at all which is both\n inconsistent and renders poorly.\n\n* Anything inside <> is rendered as a link if it matches, so in cases where <X>\n is used to indicatee \"replace with X\" I added whitespace like \"< X >\" which\n might be a bit ugly, but works. 
When referencing header files with <time.h>\n the <> are removed to just say the header name, which seemed like the least bad\n option there.\n\n* Text quoted with backticks, like `foo' is replaced with 'foo' to keep it from\n rendering like code.\n\n* Rather than indenting the whole original README for bsd_indent I added ``` to\n make it a code block, ie render without formatting.\n\nThe README files in doc/ are left untouched as they contain lots of <foo> XML\ntags which all would need to be wrapped in backticks at the cost of plain text\nreadability. Might not be controversial and in that case they can be done too,\nbut I left them for now since they deviated from the least-changes-possible\nplan for the patchset. It can probably be argued thats lots of other READMEs\ncan be skipped as well, like all the ones in test modules which have 4 lines\nsaying the directory contains a test for the thing which the name of the\ndirectory already gave away. For completeness I left those in though, they for\nthe most part go untouched.\n\nIt's not perfect by any stretch, there are still for example cases where a * in\nthe text turns on italic rendering which wasn't the intention if the author.\nResisting the temptation to go overboard with changes is however a design goal,\nthese are after all work documents and should be functional and practical.\n\nIn order to make review a bit easier I've split the patch into two, one for the\nfile renaming and one for the changes. Inspecting the 0002 diff by skipping\nwhitespace shows the above discussed changes.\n\nThoughts?\n\n--\nDaniel Gustafsson\n\n[0] [email protected]\n[1] CAG6XLEmGE95DdKqjk+Dd9vC8mfN7BnV2WFgYk_9ovW6ikN0YSg@mail.gmail.com\n[2] https://daringfireball.net/projects/markdown/", "msg_date": "Mon, 8 Apr 2024 21:29:40 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Converting README documentation to Markdown" }, { "msg_contents": "On 2024-04-08 21:29 +0200, Daniel Gustafsson wrote:\n> Over in [0] I asked whether it would be worthwhile converting all our README\n> files to Markdown, and since it wasn't met with pitchforks I figured it would\n> be an interesting excercise to see what it would take (my honest gut feeling\n> was that it would be way too intrusive). Markdown does brings a few key\n> features however so IMHO it's worth attempting to see:\n> \n> * New developers are very used to reading/writing it\n> * Using a defined format ensures some level of consistency\n> * Many users and contributors new *as well as* old like reading documentation\n> nicely formatted in a browser\n> * The documentation now prints really well\n> * pandoc et.al can be used to render nice looking PDF's\n> * All the same benefits as discussed in [0]\n> \n> The plan was to follow Grubers original motivation for Markdown closely:\n> \n> \t\"The idea is that a Markdown-formatted document should be publishable\n> \tas-is, as plain text, without looking like it’s been marked up with\n> \ttags or formatting instructions.\"\n\n+1 for keeping the plaintext readable.\n\n> This translates to making the least amount of changes to achieve a) retained\n> plain text readability at todays level, b) proper Markdown rendering, not\n> looking like text files in a HTML window, and c) absolutly no reflows and\n> minimal impact on git blame.\n> \n> Turns out we've been writing Markdown for quite some time, so it really didn't\n> take much at all. 
I renamed all the files .md and with almost just changing\n> whitespace achieved what I think is pretty decent results. The rendered\n> versions can be seen by browsing the tree below:\n> \n> \thttps://github.com/danielgustafsson/postgres/tree/markdown\n> \n> The whitespace changes are mostly making sure that code (anything which is to\n> be rendered without styling really) is indented from column 0 with tab or 4\n> spaces (depending on what was already used in the file) and has a blank line\n> before and after. This is the bulk of the changes.\n\nI've only peeked at a couple of those READMEs, but they look alright so\nfar (at least on GitHub). Should we settle on a specific Markdown\nflavor[1]? Because I'm never sure if some markups only work on\nspecific code-hosting sites. Maybe also a guide on writing Markdown\nthat renders properly, especially with regard to escaping that may be\nnecessary (see below).\n\n> The non-whitespace changes introduced are:\n> \n> [...]\n> \n> * In the regex README there are two file references using * as a wildcard, but\n> the combination of the two makes Markdown render the text between them in\n> italics. Wrapping these in backticks solves it, but I'm not a fan since we\n> don't do that elsewhere. A solution which avoids backticks would ne nice.\n\nEscaping does the trick: regc_\\*.c\n\n> [...]\n> \n> * Anything inside <> is rendered as a link if it matches, so in cases where <X>\n> is used to indicatee \"replace with X\" I added whitespace like \"< X >\" which\n> might be a bit ugly, but works. When referencing header files with <time.h>\n> the <> are removed to just say the header name, which seemed like the least bad\n> option there.\n\nCan be escaped as well: \\<X>\n\n[1] https://markdownguide.offshoot.io/extended-syntax/#lightweight-markup-languages\n\n-- \nErik\n\n\n", "msg_date": "Mon, 8 Apr 2024 22:30:20 +0200", "msg_from": "Erik Wienhold <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Converting README documentation to Markdown" }, { "msg_contents": "> On 8 Apr 2024, at 22:30, Erik Wienhold <[email protected]> wrote:\n> On 2024-04-08 21:29 +0200, Daniel Gustafsson wrote:\n\n> I've only peeked at a couple of those READMEs, but they look alright so\n> far (at least on GitHub). Should we settle on a specific Markdown\n> flavor[1]? Because I'm never sure if some markups only work on\n> specific code-hosting sites.\n\nProbably, but if we strive for maintained textual readability with avoiding\nmost of the creative markup then we're probably close to the original version.\nBut I agree, it should be evaluated.\n\n> Maybe also a guide on writing Markdown\n> that renders properly, especially with regard to escaping that may be\n> necessary (see below).\n\nThat's a good point, if we opt for an actual format there should be some form\nof documentation about that format, especially if we settle for using a\nfraction of the capabilities of the format.\n\n>> * In the regex README there are two file references using * as a wildcard, but\n>> the combination of the two makes Markdown render the text between them in\n>> italics. Wrapping these in backticks solves it, but I'm not a fan since we\n>> don't do that elsewhere. A solution which avoids backticks would ne nice.\n> \n> Escaping does the trick: regc_\\*.c\n\nRight, but that makes the plaintext version less readable than the backticks I\nthink.\n\n> Can be escaped as well: \\<X>\n\n..and same with this one. 
It's all very subjective though.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 8 Apr 2024 22:42:00 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Converting README documentation to Markdown" }, { "msg_contents": "On 08.04.24 21:29, Daniel Gustafsson wrote:\n> Over in [0] I asked whether it would be worthwhile converting all our README\n> files to Markdown, and since it wasn't met with pitchforks I figured it would\n> be an interesting excercise to see what it would take (my honest gut feeling\n> was that it would be way too intrusive). Markdown does brings a few key\n> features however so IMHO it's worth attempting to see:\n> \n> * New developers are very used to reading/writing it\n> * Using a defined format ensures some level of consistency\n> * Many users and contributors new*as well as* old like reading documentation\n> nicely formatted in a browser\n> * The documentation now prints really well\n> * pandoc et.al can be used to render nice looking PDF's\n> * All the same benefits as discussed in [0]\n> \n> The plan was to follow Grubers original motivation for Markdown closely:\n> \n> \t\"The idea is that a Markdown-formatted document should be publishable\n> \tas-is, as plain text, without looking like it’s been marked up with\n> \ttags or formatting instructions.\"\n> \n> This translates to making the least amount of changes to achieve a) retained\n> plain text readability at todays level, b) proper Markdown rendering, not\n> looking like text files in a HTML window, and c) absolutly no reflows and\n> minimal impact on git blame.\n\nI started looking through this and immediately found a bunch of tiny \nproblems. (This is probably in part because the READMEs under \nsrc/backend/access/ are some of the more complicated ones, but then they \nare also the ones that might benefit most from better rendering.)\n\nOne general problem is that original Markdown and GitHub-flavored \nMarkdown (GFM) are incompatible in some interesting aspects. For \nexample, the line\n\n A split initially marks the left page with the F_FOLLOW_RIGHT flag.\n\nis rendered by GFM as you'd expect. But original Markdown converts it to\n\n A split initially marks the left page with the F<em>FOLLOW</em>RIGHT\n flag.\n\nThis kind of problem is pervasive, as you'd expect.\n\nAnother incompatibility is that GFM accepts \"1)\" as a list marker (which \nappears to be used often in the READMEs), but original Markdown does \nnot. This then also affects surrounding formatting.\n\nAlso, the READMEs often do not indent lists in a non-ambiguous way. For \nexample, if you look into src/backend/optimizer/README, section \"Join \nTree Construction\", there are two list items, but it's not immediately \nclear which paragraphs belong to the list and which ones follow the \nlist. This also interacts with the previous point. The resulting \nformatting in GFM is quite misleading.\n\nsrc/port/README.md is a similar case.\n\nThere are also various places where whitespace is used for ad-hoc \nformatting. Consider for example in src/backend/access/gin/README\n\n the \"category\" of the null entry. 
These are the possible categories:\n\n 1 = ordinary null key value extracted from an indexable item\n 2 = placeholder for zero-key indexable item\n 3 = placeholder for null indexable item\n\n Placeholder null entries are inserted into the index because otherwise\n\nBut this does not preserve the list-like formatting, it just flows it \ntogether.\n\nThere is a similar case with the authors list at the end of \nsrc/backend/access/gist/README.md.\n\nsrc/test/README.md wasn't touched by your patch, but it also needs \nadjustments for list formatting.\n\n\nIn summary, I think before we could accept this, we'd need to go through \nthis with a fine-toothed comb line by line and page by page to make sure \nthe formatting is still sound. And we'd need to figure out which \nMarkdown flavor to target.\n\n\n\n", "msg_date": "Mon, 13 May 2024 09:20:24 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Converting README documentation to Markdown" }, { "msg_contents": "> On 13 May 2024, at 09:20, Peter Eisentraut <[email protected]> wrote:\n\n> I started looking through this and immediately found a bunch of tiny problems. (This is probably in part because the READMEs under src/backend/access/ are some of the more complicated ones, but then they are also the ones that might benefit most from better rendering.)\n\nThanks for looking!\n\n> One general problem is that original Markdown and GitHub-flavored Markdown (GFM) are incompatible in some interesting aspects.\n\nThat's true, but virtually every implementation of Markdown in practical use\ntoday is incompatible with Original Markdown.\n\nReading my email I realize I failed to mention the markdown platforms I was\ntargeting (and thus flavours), and citing Gruber made it even more confusing.\nFor online reading I verified with Github and VS Code since they have a huge\nmarket presence. For offline work I targeted rendering with pandoc since we\nalready have a dependency on it in the tree. I don't think targeting the\noriginal Markdown implementation is useful, or even realistic.\n\nAnother aspect of platform/flavour was to make the markdown version easy to\nmaintain for hackers writing content. Requiring the minimum amount of markup\nseems like the developer-friendly way here to keep productivity as well as\ndocument quality high.\n\nMost importantly though, I targeted reading the files as plain text without any\nrendering. We keep these files in text format close to the code for a reason,\nand maintaining readability as text was a north star.\n\n> For example, the line\n> \n> A split initially marks the left page with the F_FOLLOW_RIGHT flag.\n> \n> is rendered by GFM as you'd expect. But original Markdown converts it to\n> \n> A split initially marks the left page with the F<em>FOLLOW</em>RIGHT\n> flag.\n> \n> This kind of problem is pervasive, as you'd expect.\n\nCorrect, but I can't imagine that we'd like to wrap every instance of a name\nwith underscores in backticks like `F_FOLLOW_RIGHT`. There are very few\nMarkdown implementations which don't support underscores like this (testing\njust now on the top online editors and sites providing markdown editing I\nfailed to find a single one).\n\n> Also, the READMEs often do not indent lists in a non-ambiguous way. For example, if you look into src/backend/optimizer/README, section \"Join Tree Construction\", there are two list items, but it's not immediately clear which paragraphs belong to the list and which ones follow the list. 
This also interacts with the previous point. The resulting formatting in GFM is quite misleading.\n\nI agree that the rendered version excacerbates this problem. Writing a bullet\npoint list where each item spans multiple paragraphs indented the same way as\nthe paragraphs following the list is not helpful to the reader. In these cases\nboth the markdown and the text version will be improved by indentation.\n\n> There are also various places where whitespace is used for ad-hoc formatting. Consider for example in src/backend/access/gin/README\n> \n> the \"category\" of the null entry. These are the possible categories:\n> \n> 1 = ordinary null key value extracted from an indexable item\n> 2 = placeholder for zero-key indexable item\n> 3 = placeholder for null indexable item\n> \n> Placeholder null entries are inserted into the index because otherwise\n> \n> But this does not preserve the list-like formatting, it just flows it together.\n\nThat's the kind of sublists which need to be found as part of this work, and\nthe items prefixed with a list identifier. In this case, prefixing each row in\nthe sublist with '-' yields the correct result.\n\n> src/test/README.md wasn't touched by your patch, but it also needs adjustments for list formatting.\n\nI didn't re-indent that one in order to keep the changes to the absolute\nminimum, since I considered the rendered version passable even if not\nparticularly good. Re-indenting files like this will for sure make the end\nresult better, as long as the changes keep the text version readability.\n\n> In summary, I think before we could accept this, we'd need to go through this with a fine-toothed comb line by line and page by page to make sure the formatting is still sound. \n\nAbsolutely. I've been over every file to ensure they aren't blatantly wrong,\nbut I didn't want to spend the time if this was immmediately shot down as\nsomething the community don't want to maintain.\n\n> And we'd need to figure out which Markdown flavor to target.\n\nAbsolutely, and as I mentioned above, we need to pick based both the final\nresult (text and rendered) as well as the developer experience for maintaining\nthis.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 15 May 2024 14:26:45 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Converting README documentation to Markdown" }, { "msg_contents": "On 15.05.24 14:26, Daniel Gustafsson wrote:\n> Another aspect of platform/flavour was to make the markdown version easy to\n> maintain for hackers writing content. Requiring the minimum amount of markup\n> seems like the developer-friendly way here to keep productivity as well as\n> document quality high.\n> \n> Most importantly though, I targeted reading the files as plain text without any\n> rendering. We keep these files in text format close to the code for a reason,\n> and maintaining readability as text was a north star.\n\nI've been thinking about this some more. I think the most value here \nwould be to just improve the plain-text formatting, so that there are \nconsistent list styles, header styles, indentation, some of the \nambiguities cleared up -- much of which your 0001 patch does. You might \nas well be targeting markdown-like conventions with this; they are \nmostly reasonable.\n\nI tend to think that actually converting all the README files to \nREADME.md could be a net negative for maintainability. 
Because now you \nare requiring everyone who potentially wants to edit those to be aware \nof Markdown syntax and manually check the rendering. With things like \nDocBook, if you make a mess, you get error messages from the build step. \n If you make a mess in Markdown, you have to visually find it yourself. \n There are many READMEs that contain nested lists and code snippets and \ndiagrams and such all mixed together. Getting that right in Markdown \ncan be quite tricky. I'm also foreseeing related messes of trailing \nwhitespace, spaces-vs-tab confusion, gitattributes violations, etc. It \ncan be a lot of effort. It's okay to do this for prominent files like \nthe top-level one, but I suggest that for the rest we can keep it simple \nand just use plain text.\n\n\n\n", "msg_date": "Fri, 28 Jun 2024 09:38:21 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Converting README documentation to Markdown" }, { "msg_contents": "On Fri, 28 Jun 2024 at 09:38, Peter Eisentraut <[email protected]> wrote:\n> Getting that right in Markdown can be quite tricky.\n\nI agree that in some cases it's tricky. But what's the worst case that\ncan happen when you get it wrong? It renders weird on github.com.\nLuckily there's a \"code\" button to go to the plain text format[1]. In\nall other cases (which I expect will be most) the doc will be easier\nto read. Forcing plaintext, just because sometimes we might make a\nmistake in the syntax seems like an overcorrection imho. Especially\nbecause these docs are (hopefully) read more often than written.\n\n[1]: https://github.com/postgres/postgres/blob/master/README.md?plain=1\n\n\n", "msg_date": "Fri, 28 Jun 2024 11:56:43 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Converting README documentation to Markdown" }, { "msg_contents": "> I've been thinking about this some more. I think the most value here\n> would be to just improve the plain-text formatting, so that there are\n> consistent list styles, header styles, indentation, some of the\n> ambiguities cleared up -- much of which your 0001 patch does. You\n> might as well be targeting markdown-like conventions with this; they\n> are mostly reasonable.\n> \n> I tend to think that actually converting all the README files to\n> README.md could be a net negative for maintainability. Because now\n> you are requiring everyone who potentially wants to edit those to be\n> aware of Markdown syntax and manually check the rendering. With\n> things like DocBook, if you make a mess, you get error messages from\n> the build step. If you make a mess in Markdown, you have to visually\n> find it yourself. There are many READMEs that contain nested lists\n> and code snippets and diagrams and such all mixed together. Getting\n> that right in Markdown can be quite tricky. I'm also foreseeing\n> related messes of trailing whitespace, spaces-vs-tab confusion,\n> gitattributes violations, etc. It can be a lot of effort. 
It's okay\n> to do this for prominent files like the top-level one, but I suggest\n> that for the rest we can keep it simple and just use plain text.\n\nAgreed.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n\n", "msg_date": "Fri, 28 Jun 2024 21:37:48 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Converting README documentation to Markdown" }, { "msg_contents": "On 28.06.24 11:56, Jelte Fennema-Nio wrote:\n> On Fri, 28 Jun 2024 at 09:38, Peter Eisentraut <[email protected]> wrote:\n>> Getting that right in Markdown can be quite tricky.\n> \n> I agree that in some cases it's tricky. But what's the worst case that\n> can happen when you get it wrong? It renders weird on github.com.\n\nI have my \"less\" set up so that \"less somefile.md\" automatically renders \nthe markdown. That's been pretty useful. But if that now keeps making \na mess out of PostgreSQL's README files, then I'm going to have to keep \nfixing things, and I might get really mad. That's the worst that could \nhappen. ;-)\n\nSo I don't agree with \"aspirational markdown\". If we're going to do it, \nthen I expect that the files are marked up correctly at all times.\n\nConversely, what's the best that could happen?\n\n\n\n", "msg_date": "Fri, 28 Jun 2024 20:40:12 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Converting README documentation to Markdown" }, { "msg_contents": "On Fri, 28 Jun 2024 at 20:40, Peter Eisentraut <[email protected]> wrote:\n> I have my \"less\" set up so that \"less somefile.md\" automatically renders\n> the markdown. That's been pretty useful. But if that now keeps making\n> a mess out of PostgreSQL's README files, then I'm going to have to keep\n> fixing things, and I might get really mad. That's the worst that could\n> happen. ;-)\n\nDo you have reason to think that this is going to be a bigger issue\nfor Postgres READMEs than for any other markdown files you encounter?\nBecause this sounds like a generic problem you'd run into with your\n\"less\" set up, which so far apparently has been small enough that it's\nworth the benefit of automatically rendering markdown files.\n\n> So I don't agree with \"aspirational markdown\". If we're going to do it,\n> then I expect that the files are marked up correctly at all times.\n\nI think for at least ~90% of our README files this shouldn't be a\nproblem. If you have specific ones in mind that contain difficult\nmarkup/diagrams, then maybe we shouldn't convert those.\n\n> Conversely, what's the best that could happen?\n\nThat your \"less\" would automatically render Postgres READMEs nicely.\nWhich you say has been pretty useful ;-) And maybe even show syntax\nhighlighting for codeblocks.\n\nP.S. Now I'm wondering what your \"less\" is.\n\n\n", "msg_date": "Sat, 29 Jun 2024 11:24:32 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Converting README documentation to Markdown" }, { "msg_contents": "> On 28 Jun 2024, at 20:40, Peter Eisentraut <[email protected]> wrote:\n\n> If we're going to do it, then I expect that the files are marked up correctly at all times.\n\nI agree with that. I don't think it will be a terribly high bar though since we\nwere pretty much already writing markdown. 
We already have pandoc in the meson\ntoolchain, adding a target to check syntax should be doable.\n\n> Conversely, what's the best that could happen?\n\nOne of the main goals of this work was to make sure the documentation renders\nnicely on platforms which potential new contributors consider part of the\nfabric of writing code. We might not be on Github (and I'm not advocating that\nwe should) but any new contributor we want to attract is pretty likely to be\nusing it. The best that can happen is that new contributors find the postgres\ncode more approachable and get excited about contributing to postgres.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 1 Jul 2024 11:42:13 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Converting README documentation to Markdown" }, { "msg_contents": "> On 28 Jun 2024, at 09:38, Peter Eisentraut <[email protected]> wrote:\n\n> I've been thinking about this some more. I think the most value here would be to just improve the plain-text formatting, so that there are consistent list styles, header styles, indentation, some of the ambiguities cleared up -- much of which your 0001 patch does. You might as well be targeting markdown-like conventions with this; they are mostly reasonable.\n\n(I assume you mean 0002). I agree that the increased consistency is worthwhile\neven if we don't officially convert to Markdown (ie only do 0002 and not 0001).\n\n> I tend to think that actually converting all the README files to README.md could be a net negative for maintainability. Because now you are requiring everyone who potentially wants to edit those to be aware of Markdown syntax\n\nFair enough, but we currently expect those editing to be aware of our syntax\nwhich isn't defined at all (leading to the variations this patchset fixes).\nI'm not sure whats best for maintainability but I do think the net change is\nall that big.\n\n> and manually check the rendering.\n\nThat however would be a new requirement, and I can see that being a deal-\nbreaker for introducing this.\n\nAttached is a v2 which fixes a conflict, if there is no interest in Markdown\nI'll drop 0001 and the markdown-specifics from 0002 to instead target increased\nconsistency.\n\n--\nDaniel Gustafsson", "msg_date": "Mon, 1 Jul 2024 12:22:10 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Converting README documentation to Markdown" }, { "msg_contents": "> On 1 Jul 2024, at 12:22, Daniel Gustafsson <[email protected]> wrote:\n\n> Attached is a v2 which fixes a conflict, if there is no interest in Markdown\n> I'll drop 0001 and the markdown-specifics from 0002 to instead target increased\n> consistency.\n\nSince there doesn't seem to be much interest in going all the way to Markdown,\nthe attached 0001 is just the formatting changes for achieving (to some degree)\nconsistency among the README's. This mostly boils down to using a consistent\namount of whitespace around code, using the same indentation on bullet lists\nand starting sections the same way. Inspecting the patch with git diff -w\nreveals that it's not much left once whitespace is ignored. 
There might be a\nfew markdown hunks left which I'll hunt down in case anyone is interested in\nthis.\n\nAs an added bonus this still makes most READMEs render nicely as Markdown, just\nnot automatically on Github as it doesn't know the filetype.\n\n--\nDaniel Gustafsson", "msg_date": "Tue, 10 Sep 2024 14:50:49 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Converting README documentation to Markdown" }, { "msg_contents": "Daniel Gustafsson <[email protected]> writes:\n> Since there doesn't seem to be much interest in going all the way to Markdown,\n> the attached 0001 is just the formatting changes for achieving (to some degree)\n> consistency among the README's. This mostly boils down to using a consistent\n> amount of whitespace around code, using the same indentation on bullet lists\n> and starting sections the same way. Inspecting the patch with git diff -w\n> reveals that it's not much left once whitespace is ignored. There might be a\n> few markdown hunks left which I'll hunt down in case anyone is interested in\n> this.\n\n> As an added bonus this still makes most READMEs render nicely as Markdown, just\n> not automatically on Github as it doesn't know the filetype.\n\nI did not inspect the patch in detail, but this approach seems\nlike a reasonable compromise. However, if we're not officially\ngoing to Markdown, how likely is it that these files will\nstay valid in future edits? I suspect most of us don't have\nthose syntax rules wired into our fingers (I sure don't).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Sep 2024 11:37:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Converting README documentation to Markdown" }, { "msg_contents": "> On 10 Sep 2024, at 17:37, Tom Lane <[email protected]> wrote:\n> \n> Daniel Gustafsson <[email protected]> writes:\n>> Since there doesn't seem to be much interest in going all the way to Markdown,\n>> the attached 0001 is just the formatting changes for achieving (to some degree)\n>> consistency among the README's. This mostly boils down to using a consistent\n>> amount of whitespace around code, using the same indentation on bullet lists\n>> and starting sections the same way. Inspecting the patch with git diff -w\n>> reveals that it's not much left once whitespace is ignored. There might be a\n>> few markdown hunks left which I'll hunt down in case anyone is interested in\n>> this.\n> \n>> As an added bonus this still makes most READMEs render nicely as Markdown, just\n>> not automatically on Github as it doesn't know the filetype.\n> \n> I did not inspect the patch in detail, but this approach seems\n> like a reasonable compromise. However, if we're not officially\n> going to Markdown, how likely is it that these files will\n> stay valid in future edits? I suspect most of us don't have\n> those syntax rules wired into our fingers (I sure don't).\n\nI'm not too worried, especially since we're not making any guarantees about\nconforming to a set syntax. 
We had written more or less correct Markdown\nalready, if we continue to create new content in the style of the surrounding\nexisting content then I'm confident they'll stay very close to markdown.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 10 Sep 2024 20:25:02 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Converting README documentation to Markdown" }, { "msg_contents": "On Tue, Sep 10, 2024 at 8:51 AM Daniel Gustafsson <[email protected]> wrote:\n> Since there doesn't seem to be much interest in going all the way to Markdown,\n\nJust for the record, I suspect going to Markdown is actually the right\nthing to do. I am personally unenthusiastic about it because I need\none more thing to worry about when committing like I need a hole in my\nhead, but a chronic complaint about the PostgreSQL project is that we\ninsist on doing everything our own way instead of admitting that there\nis significant value in conforming to, or at least being compatible\nwith, widely-adopted development practices, and using Markdown files\nto document stuff in git repos seems to be one of those. No single\nchange that we make is going to make the difference between us\nattracting the next generation of developers and not, but if we always\nprioritize what feels good to people who learned to code in the 1970s\nor 1980s (like me!) over what feels good to people who learned to code\nin the 2010s or 2020s, we will definitely run out of developers at\nsome point.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 10 Sep 2024 15:14:00 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Converting README documentation to Markdown" }, { "msg_contents": "On 10.09.24 14:50, Daniel Gustafsson wrote:\n>> On 1 Jul 2024, at 12:22, Daniel Gustafsson <[email protected]> wrote:\n> \n>> Attached is a v2 which fixes a conflict, if there is no interest in Markdown\n>> I'll drop 0001 and the markdown-specifics from 0002 to instead target increased\n>> consistency.\n> \n> Since there doesn't seem to be much interest in going all the way to Markdown,\n> the attached 0001 is just the formatting changes for achieving (to some degree)\n> consistency among the README's. This mostly boils down to using a consistent\n> amount of whitespace around code, using the same indentation on bullet lists\n> and starting sections the same way. Inspecting the patch with git diff -w\n> reveals that it's not much left once whitespace is ignored. There might be a\n> few markdown hunks left which I'll hunt down in case anyone is interested in\n> this.\n\nI went through this file by file and checked the results of a \nmarkdown-to-HTML conversion using cmark and looking at the raw output \nsource files.\n\nA lot of the changes are obvious and make sense. But there are a number \nof cases of code within lists or nested lists or both that need further \ncareful investigation. I'm attaching a fixup patch where I tried to \nimprove some of this (and a few other things I found along the way). \nSome of the more complicated ones, such as \nsrc/backend/storage/lmgr/README-SSI, will need to be checked again and \neven more carefully to make sure that the meaning is not altered by \nthese patches.\n\nOne underlying problem that I see is that markdown assumes four-space \ntabs, but a standard editor configuration (and apparently your editor) \nuses 8 tabs. But then, if you have a common situation like\n\n```\n1. 
Run this code\n\n<tab>$ sudo kill\n```\n\nthen that's incorrect (the code line will not be inside the list), \nbecause it should be\n\n```\n1. Run this code\n\n<tab><tab>$ sudo kill\n```\n\nor\n\n```\n1. Run this code\n\n<8 spaces>$ sudo kill\n```\n\nSo we need to think about a way to make this more robust for future \npeople editing. Maybe something in .gitattributes or some editor \nsettings. Otherwise, it will be all over the places after a while. \n(There are also a couple of places where apparently you changed \nwhitespace that wasn't necessary to be changed.)\n\nApart from this, I don't changing the placeholders like <foo> to < foo \n >. In some cases, this really decreases readability. Maybe we should \nlook for different approaches there.\n\nMaybe there are some easy changes that could be extracted from this \npatch, but the whitespace and list issue needs more consideration.", "msg_date": "Mon, 23 Sep 2024 13:58:56 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Converting README documentation to Markdown" } ]
[ { "msg_contents": "On Mon, Apr 8, 2024 at 10:42 AM Heikki Linnakangas <[email protected]> wrote:\n> Can you elaborate, which patches you think were not ready? Let's make\n> sure to capture any concrete concerns in the Open Items list.\n\nHi,\n\nI'm moving this topic to a new thread for better visibility and less\nadmixture of concerns. I'd like to invite everyone here present to\nopine on which patches we ought to be worried about. Here are a few\npicks from me to start things off. My intention here is to convey \"I\nfind these scary\" rather than \"these commits were irresponsible,\" so I\nrespectfully ask that you don't take the choice to list your patch\nhere as an attack, or the choice not to list your patch here as an\nendorsement. I'm very happy if I'm wrong and these patches are not\nactually scary. And likewise let's view any allegations of scariness\nby others as a genuine attempt to figure out what might be broken,\nrather than as an attempt to get anyone into trouble or whatever.\n\n- As I wrote about separately, I'm concerned that the failover slots\nstuff may not be in as good a shape as it needs to be for people to\nget good use out of it.\nhttp://postgr.es/m/CA+TgmoaA4oufUBR5B-4o83rnwGZ3zAA5UvwxDX=NjCm1TVgRsQ@mail.gmail.com\n\n- I also wrote separately about the flurry of recent table AM changes.\nhttp://postgr.es/m/CA+TgmoZpWB50GnJZYF1x8PenTqXDTFBH_Euu6ybGfzEy34o+5Q@mail.gmail.com\n\n- The streaming read API stuff was all committed very last minute. I\nthink this should have been committed much sooner. It's probably not\ngoing to break the world; it's more likely to have performance\nconsequences. But if it had gone in sooner, we'd have had more time to\nfigure that out.\n\n- The heap pruning refactoring work, on the other hand, seems to be at\nvery significant risk of breaking the world. I don't doubt the\nauthors, but this is some very critical stuff and a lot of it happened\nvery quickly right at the end.\n\n- Incremental backup could have all sorts of terrible data-corrupting\nbugs. 55a5ee30cd65886ff0a2e7ffef4ec2816fbec273 was a full brown-paper\nbag level fix, so the chances of there being further problems are\nprobably high.\n\n- I'm slightly worried about the TID store work (ee1b30f12, 30e144287,\n667e65aac35), perhaps for no reason. Actually, the results seem really\nimpressive, and a very quick look at some of the commits seemed kind\nof reassuring. But, like the changes to pruning and freezing, this is\nmaking some really fundamental changes to critical code. In this case,\nit's code that has evolved very little over the years and seems to\nhave now evolved quite a lot.\n\n- I couldn't understand why the \"Operate\nXLogCtl->log{Write,Flush}Result with atomics\" code was correct when I\nread it. That's not to say I know it to be incorrect. But, a spinlock\nprotecting two variables together guarantees more than atomic access\nto each of those variables separately.\n\nThere's probably a lot more to worry about; there have been so\nincredibly many changes recently. But this is as far as my speculation\nextends, as of now. Comments welcome. Additional concerns also\nwelcome, as noted above. 
And again, no offense is intended to anyone.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 8 Apr 2024 15:47:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "post-freeze damage control" }, { "msg_contents": "On Tue, Apr 9, 2024 at 7:47 AM Robert Haas <[email protected]> wrote:\n> - The streaming read API stuff was all committed very last minute. I\n> think this should have been committed much sooner. It's probably not\n> going to break the world; it's more likely to have performance\n> consequences. But if it had gone in sooner, we'd have had more time to\n> figure that out.\n\nOK, let me give an update on this work stream (pun intended).\n\nOne reason for the delay in committing was precisely that we were\nfretting about regressions risks. We tried pretty hard to identify\nand grind down every regression we could find, and cases with\noutstanding not-fully-understood or examined problems in that area\nhave been booted into the next cycle for more work: streaming bitmap\nheapscan, several streaming vacuum patches, and more, basically things\nthat seem to have more complex interactions with other machinery. The\nonly three places using streaming I/O that went in were:\n\n041b9680: Use streaming I/O in ANALYZE.\nb7b0f3f2: Use streaming I/O in sequential scans.\n3a352df0: Use streaming I/O in pg_prewarm.\n\nThe first is a good first exercise in streaming random blocks;\nhopefully no one would be too upset about an unexpected small\nregression in ANALYZE, but as it happens it goes faster hot and cold\naccording to all reports. The second is a good first exercise in\nstreaming sequential blocks, and it ranges from faster to no\nregression, according to testing and reports. The third is less\nimportant, but it also goes faster.\n\nOf those, streaming seq scan is clearly the most relevant to real\nworkloads that someone might be upset about, and I made a couple of\nchoices that you might say had damage control in mind:\n\n* A conservative choice not to get into the business of the issuing\nnew hints to the kernel for random jumps in cold scans, even though we\nthink we probably should for better performance: more research needed\nprecisely to avoid unexpected interactions (cf the booted bitmap\nheapscan where that sort of thing seems to be afoot).\n* A GUC to turn off I/O combining if it somehow upsets your storage in\nways we didn't foresee (io_combine_limit=1).\n\nFor fully cached hot scans, it does seem to be quite sensitive to tiny\nchanges in a hot code path that I and others spent a lot of time\noptimising and testing during the CF. Perhaps it is possible that\nsomeone else's microarchitecture or compiler could show a regression\nthat I don't see, and I will certainly look into it with vim and\nvigour if so. In that case we could consider a tiny\nmicro-optimisation that I've shared already (it seemed a little novel\nso I'd rather propose it in the new cycle if I can), or, if it comes\nto it based on evidence and inability to address a problem quickly,\nreverting just b7b0f3f2 which itself is a very small patch.\n\nAn aspect you didn't mention is correctness. 
I don't actually know\nhow to prove that buffer manager protocols are correct beyond thinking\nand torture testing, ie what kind of new test harness machinery could\nbe used to cross-check more things about buffer pool state explicitly,\nand that is a weakness I'm planning to look into.\n\nI realise that \"these are the good ones, you should see all the stuff\nwe decided not to commit!\" is not an argument, I'm just laying out how\nI see the patches that went in and why I thought they were good. It's\nalmost an architectural change, but done in tiny footsteps. I\nappreciate that people would have liked to see those particular tiny\nfootsteps in some of the other fine months available for patching the\ntree, and some of the earlier underpinning patches that were part of\nthe same patch series did go in around New Year, but clearly my\n\"commit spreading\" didn't go as well as planned after that (not helped\nby Jan/Feb summer vacation season down here).\n\nMr Paquier this year announced his personal code freeze a few weeks\nback on social media, which seemed like an interesting idea I might\nadopt. Perhaps that is what some other people are doing without\nsaying so, and perhaps the time they are using for that is the end of\nthe calendar year. I might still be naturally inclined to crunch-like\nbehaviour, but it wouldn't be at the same time as everyone else,\nexcept all the people who follow the same advice.\n\n\n", "msg_date": "Tue, 9 Apr 2024 09:35:00 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Tue, Apr 09, 2024 at 09:35:00AM +1200, Thomas Munro wrote:\n> Mr Paquier this year announced his personal code freeze a few weeks\n> back on social media, which seemed like an interesting idea I might\n> adopt. Perhaps that is what some other people are doing without\n> saying so, and perhaps the time they are using for that is the end of\n> the calendar year. I might still be naturally inclined to crunch-like\n> behaviour, but it wouldn't be at the same time as everyone else,\n> except all the people who follow the same advice.\n\nThat's more linked to the fact that I was going silent without a\nlaptop for a few weeks before the end of the release cycle, and a way\nto say to not count on me, while I was trying to keep my room clean to\navoid noise for others who would rush patches. It is a vacation\nperiod for schools in Japan as the fiscal year finishes at the end of\nMarch, while the rest of the world still studies/works, so that makes \ntrips much easier with areas being less busy when going abroad. If\nyou want to limit commit activity during this period, the answer is\nsimple then: require that all the committers live in Japan.\n\nJokes apart, I really try to split commit effort across the year and\nnot rush things at the last minute. If something's not agreed upon\nand commit-ready by the 15th of March, the chances that I would apply\nit within the release cycle are really slim. That's a kind of\npersonal policy I have in place for a few years now.\n--\nMichael", "msg_date": "Tue, 9 Apr 2024 07:58:07 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On 4/8/24 21:47, Robert Haas wrote:\n> On Mon, Apr 8, 2024 at 10:42 AM Heikki Linnakangas <[email protected]> wrote:\n>> Can you elaborate, which patches you think were not ready? 
Let's make\n>> sure to capture any concrete concerns in the Open Items list.\n> \n> ...\n> \n> - Incremental backup could have all sorts of terrible data-corrupting\n> bugs. 55a5ee30cd65886ff0a2e7ffef4ec2816fbec273 was a full brown-paper\n> bag level fix, so the chances of there being further problems are\n> probably high.\n> \n\nI don't feel too particularly worried about this. Yes, backups are super\nimportant because it's often the only thing you have left when things go\nwrong, and the incremental aspect is all new. The code I've seen while\ndoing the CoW-related patches seemed very precise and careful, and the\none bug we found & fixed does not make it bad.\n\nSure, I can't rule there being more bugs, but I've been doing some\npretty extensive stress testing of this (doing incremental backups +\ncombinebackup, and comparing the results against the source, and that\nsort of stuff). And so far only that single bug this way. I'm still\ndoing this randomized stress testing, with more and more complex\nworkloads etc. and I'll let keep doing that for a while.\n\nMaybe I'm a bit too happy-go-lucky, but IMO the risk here is limited.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 9 Apr 2024 01:16:02 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Tue, Apr 09, 2024 at 01:16:02AM +0200, Tomas Vondra wrote:\n> I don't feel too particularly worried about this. Yes, backups are super\n> important because it's often the only thing you have left when things go\n> wrong, and the incremental aspect is all new. The code I've seen while\n> doing the CoW-related patches seemed very precise and careful, and the\n> one bug we found & fixed does not make it bad.\n> \n> Sure, I can't rule there being more bugs, but I've been doing some\n> pretty extensive stress testing of this (doing incremental backups +\n> combinebackup, and comparing the results against the source, and that\n> sort of stuff). And so far only that single bug this way. I'm still\n> doing this randomized stress testing, with more and more complex\n> workloads etc. and I'll let keep doing that for a while.\n> \n> Maybe I'm a bit too happy-go-lucky, but IMO the risk here is limited.\n\nEven if there's a critical bug, there are still other ways to take\nbackups, so there is an exit route even if a problem is found and even\nif this problem requires a complex solution to be able to work\ncorrectly.\n\nThis worries me less than other patches like the ones around heap\nchanges or the radix tree stuff for TID collection plugged into\nvacuum, which don't have explicit on/off switches AFAIK.\n--\nMichael", "msg_date": "Tue, 9 Apr 2024 08:33:29 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Mon, Apr 8, 2024 at 10:42 AM Heikki Linnakangas <[email protected]> wrote:\n>> Can you elaborate, which patches you think were not ready? Let's make\n>> sure to capture any concrete concerns in the Open Items list.\n\n> Hi,\n\n> I'm moving this topic to a new thread for better visibility and less\n> admixture of concerns. I'd like to invite everyone here present to\n> opine on which patches we ought to be worried about. Here are a few\n> picks from me to start things off. 
My intention here is to convey \"I\n> find these scary\" rather than \"these commits were irresponsible,\" so I\n> respectfully ask that you don't take the choice to list your patch\n> here as an attack, or the choice not to list your patch here as an\n> endorsement.\n\nI have another one that I'm not terribly happy about:\n\n Author: Alexander Korotkov <[email protected]>\n Branch: master [72bd38cc9] 2024-04-08 01:27:52 +0300\n\n Transform OR clauses to ANY expression\n\nI don't know that I'd call it scary exactly, but I do think it\nwas premature. A week ago there was no consensus that it was\nready to commit, but Alexander pushed it (or half of it, anyway)\ndespite that. A few concrete concerns:\n\n* Yet another planner GUC. Do we really need or want that?\n\n* What the medical community would call off-label usage of\nquery jumbling. I'm not sure this is even correct as-used,\nand for sure it's using that code for something never intended.\nNor is the added code adequately (as in, at all) documented.\n\n* Patch refuses to group anything but Consts into the SAOP\ntransformation. I realize that if you want to produce an\narray Const you need Const inputs, but I wonder why it\nwasn't considered to produce an ARRAY[] construct if there\nare available clauses with pseudo-constant (eg Param)\ncomparison values.\n\n* I really, really dislike jamming this logic into prepqual.c,\nwhere it has no business being. I note that it was shoved\ninto process_duplicate_ors without even the courtesy of\nexpanding the header comment:\n\n * process_duplicate_ors\n *\t Given a list of exprs which are ORed together, try to apply\n *\t the inverse OR distributive law.\n\nAnother reason to think this wasn't a very well chosen place is\nthat the file's list of #include's went from 4 entries to 11.\nSomebody should have twigged to the idea that this was off-topic\nfor prepqual.c.\n\n* OrClauseGroupKey is not a Node type, so why does it have\na NodeTag? I wonder what value will appear in that field,\nand what will happen if the struct is passed to any code\nthat expects real Nodes.\n\nI could probably find some other nits if I spent more time\non it, but I think these are sufficient to show that this\nwas not commit-ready.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Apr 2024 22:12:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On 9/4/2024 09:12, Tom Lane wrote:\n> I have another one that I'm not terribly happy about:\n> \n> Author: Alexander Korotkov <[email protected]>\n> Branch: master [72bd38cc9] 2024-04-08 01:27:52 +0300\n> \n> Transform OR clauses to ANY expression\nBecause I'm primary author of the idea, let me answer.\n> \n> I don't know that I'd call it scary exactly, but I do think it\n> was premature. A week ago there was no consensus that it was\n> ready to commit, but Alexander pushed it (or half of it, anyway)\n> despite that. A few concrete concerns:\n> \n> * Yet another planner GUC. Do we really need or want that?\nIt is the most interesting question here. Looking around planner \nfeatures designed but not applied for the same reason because they can \nproduce suboptimal plans in corner cases, I think about inventing \nflag-type parameters and hiding some features that work better for \ndifferent load types under such flagged parameters.\n> \n> * What the medical community would call off-label usage of\n> query jumbling. 
I'm not sure this is even correct as-used,\n> and for sure it's using that code for something never intended.\n> Nor is the added code adequately (as in, at all) documented.\nI agree with documentation and disagree with critics on the expression \njumbling. It was introduced in the core. Why don't we allow it to be \nused to speed up machinery with some hashing?\n> \n> * Patch refuses to group anything but Consts into the SAOP\n> transformation. I realize that if you want to produce an\n> array Const you need Const inputs, but I wonder why it\n> wasn't considered to produce an ARRAY[] construct if there\n> are available clauses with pseudo-constant (eg Param)\n> comparison values.\nGood point. I think, we can consider that in the future.\n> \n> * I really, really dislike jamming this logic into prepqual.c,\n> where it has no business being. I note that it was shoved\n> into process_duplicate_ors without even the courtesy of\n> expanding the header comment:\nYeah, I preferred to do it in parse_expr.c with the assumption of some \n'minimal' or 'canonical' tree form. You can see this code in the \nprevious version. I think we don't have any bugs here, but we have \ndifferent opinions on how it should work.\n> \n> * process_duplicate_ors\n> *\t Given a list of exprs which are ORed together, try to apply\n> *\t the inverse OR distributive law.\n> \n> Another reason to think this wasn't a very well chosen place is\n> that the file's list of #include's went from 4 entries to 11.\n> Somebody should have twigged to the idea that this was off-topic\n> for prepqual.c.\n> \n> * OrClauseGroupKey is not a Node type, so why does it have\n> a NodeTag? I wonder what value will appear in that field,\n> and what will happen if the struct is passed to any code\n> that expects real Nodes.\nIt's a hack authored by Alexander. I guess He can provide additional \nreasons in support of that.\n> \n> I could probably find some other nits if I spent more time\n> on it, but I think these are sufficient to show that this\n> was not commit-ready.\nIt's up to you. On the one hand, I don't see any bugs or strong \nperformance issues, and all the issues can be resolved further; on the \nother hand, I've got your good review and some ideas to work out.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Tue, 9 Apr 2024 12:37:31 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "Andrei Lepikhov <[email protected]> writes:\n> On 9/4/2024 09:12, Tom Lane wrote:\n>> I have another one that I'm not terribly happy about:\n>> Author: Alexander Korotkov <[email protected]>\n>> Branch: master [72bd38cc9] 2024-04-08 01:27:52 +0300\n>> Transform OR clauses to ANY expression\n\n>> * What the medical community would call off-label usage of\n>> query jumbling. I'm not sure this is even correct as-used,\n>> and for sure it's using that code for something never intended.\n>> Nor is the added code adequately (as in, at all) documented.\n\n> I agree with documentation and disagree with critics on the expression \n> jumbling. It was introduced in the core. Why don't we allow it to be \n> used to speed up machinery with some hashing?\n\nI would back up from that a good deal: why do we need to hash here in\nthe first place? There's no evidence I'm aware of that it's needful\nfrom a performance standpoint.\n\n>> * I really, really dislike jamming this logic into prepqual.c,\n>> where it has no business being. 
I note that it was shoved\n>> into process_duplicate_ors without even the courtesy of\n>> expanding the header comment:\n\n> Yeah, I preferred to do it in parse_expr.c with the assumption of some \n> 'minimal' or 'canonical' tree form.\n\nThat seems quite the wrong direction to me. AFAICS, the argument\nfor making this transformation depends on being able to convert\nto an indexscan condition, so I would try to apply it even later,\nwhen we have a set of restriction conditions to apply to a particular\nbaserel. (This would weaken the argument that we need hashing\nrather than naive equal() tests even further, I think.) Applying\nthe transform to join quals seems unlikely to be a win.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Apr 2024 01:55:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On 09.04.24 00:58, Michael Paquier wrote:\n> That's more linked to the fact that I was going silent without a\n> laptop for a few weeks before the end of the release cycle, and a way\n> to say to not count on me, while I was trying to keep my room clean to\n> avoid noise for others who would rush patches. It is a vacation\n> period for schools in Japan as the fiscal year finishes at the end of\n> March, while the rest of the world still studies/works, so that makes\n> trips much easier with areas being less busy when going abroad. If\n> you want to limit commit activity during this period, the answer is\n> simple then: require that all the committers live in Japan.\n\nWell, due to the Easter holiday being earlier this year, I adopted a \nsimilar approach: Go on vacation the last week of March and watch the \nrest from the pool. :) So far I feel this was better for my well-being.\n\n\n", "msg_date": "Tue, 9 Apr 2024 07:59:08 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On 4/9/24 01:33, Michael Paquier wrote:\n> On Tue, Apr 09, 2024 at 01:16:02AM +0200, Tomas Vondra wrote:\n>> I don't feel too particularly worried about this. Yes, backups are super\n>> important because it's often the only thing you have left when things go\n>> wrong, and the incremental aspect is all new. The code I've seen while\n>> doing the CoW-related patches seemed very precise and careful, and the\n>> one bug we found & fixed does not make it bad.\n>>\n>> Sure, I can't rule there being more bugs, but I've been doing some\n>> pretty extensive stress testing of this (doing incremental backups +\n>> combinebackup, and comparing the results against the source, and that\n>> sort of stuff). And so far only that single bug this way. I'm still\n>> doing this randomized stress testing, with more and more complex\n>> workloads etc. and I'll let keep doing that for a while.\n>>\n>> Maybe I'm a bit too happy-go-lucky, but IMO the risk here is limited.\n> \n> Even if there's a critical bug, there are still other ways to take\n> backups, so there is an exit route even if a problem is found and even\n> if this problem requires a complex solution to be able to work\n> correctly.\n> \n\nI think it's a bit more nuanced, because it's about backups/restore. The\nbug might be subtle, and you won't learn about it until the moment when\nyou need to restore (or perhaps even long after that). 
At which point\n\"You might have taken the backup in some other way.\" is not really a\nviable exit route.\n\nAnyway, I'm still not worried about this particular feature, and I'll\nkeep doing the stress testing.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 9 Apr 2024 13:24:06 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Tue, Apr 9, 2024 at 7:24 AM Tomas Vondra\n<[email protected]> wrote:\n> I think it's a bit more nuanced, because it's about backups/restore. The\n> bug might be subtle, and you won't learn about it until the moment when\n> you need to restore (or perhaps even long after that). At which point\n> \"You might have taken the backup in some other way.\" is not really a\n> viable exit route.\n>\n> Anyway, I'm still not worried about this particular feature, and I'll\n> keep doing the stress testing.\n\nIn all sincerity, I appreciate the endorsement. Basically what's been\nscaring me about this feature is the possibility that there's some\nincurable design flaw that I've managed to completely miss. If it has\nsome more garden-variety bugs, that's still pretty bad: people will\npotentially lose data and be unable to get it back. But, as long as\nwe're able to find the bugs and fix them, the situation should improve\nover time until, hopefully, everybody trusts it roughly as much as we\ntrust, say, crash recovery. Perhaps even a bit more: I think this code\nis much better-written than our crash recovery code, which has grown\ninto a giant snarl that nobody seems able to untangle, despite\nmultiple refactoring attempts. However, if there's some reason why the\napproach is fundamentally unsound which I and others have failed to\ndetect, then we're at risk of shipping a feature that is irretrievably\nbroken. That would really suck.\n\nI'm fairly hopeful that there is no such design defect: I certainly\ncan't think of one. But, it's much easier to imagine an incurable\nproblem here than with, say, the recent pruning+freezing changes.\nThose changes might have bugs, and those bugs might be hard to find,\nbut if they do exist and are found, they can be fixed. Here, it's a\nlittle less obvious that that's true. We're relying on our ability, at\nincremental backup time, to sort out from the manifest and the WAL\nsummaries, what needs to be included in the backup in order for a\nsubsequent pg_combinebackup operation to produce correct results. The\nbasic idea is simple enough, but the details are complicated, and it\nfeels like a subtle defect in the algorithm could potentially scuttle\nthe whole thing. I'd certainly appreciate having more smart people try\nto think of things that I might have overlooked.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Apr 2024 08:46:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On 9/4/2024 12:55, Tom Lane wrote:\n> Andrei Lepikhov <[email protected]> writes:\n>>> * I really, really dislike jamming this logic into prepqual.c,\n>>> where it has no business being. 
I note that it was shoved\n>>> into process_duplicate_ors without even the courtesy of\n>>> expanding the header comment:\n> \n>> Yeah, I preferred to do it in parse_expr.c with the assumption of some\n>> 'minimal' or 'canonical' tree form.\n> \n> That seems quite the wrong direction to me. AFAICS, the argument\n> for making this transformation depends on being able to convert\n> to an indexscan condition, so I would try to apply it even later,\n> when we have a set of restriction conditions to apply to a particular\n> baserel. (This would weaken the argument that we need hashing\n> rather than naive equal() tests even further, I think.) Applying\n> the transform to join quals seems unlikely to be a win.\nOur first prototype did this job right at the stage of index path \ncreation. Unfortunately, this approach was too narrow and expensive.\nThe most problematic cases we encountered were from BitmapOr paths: if \nan incoming query has a significant number of OR clauses, the optimizer \nspends a lot of time generating these, in most cases, senseless paths \n(remember also memory allocated for that purpose). Imagine how much \nworse the situation becomes when we scale it with partitions.\nAnother issue we resolved with this transformation: shorter list of \nclauses speeds up planning and, sometimes, makes cardinality estimation \nmore accurate.\nMoreover, it helps even SeqScan: attempting to find a value in the \nhashed array is much faster than cycling a long-expression on each \nincoming tuple.\n\nOne more idea that I have set aside here is that the planner can utilize \nquick clause hashing:\n From time to time, in the mailing list, I see disputes on different \napproaches to expression transformation/simplification/grouping, and \nmost of the time, it ends up with the problem of search complexity. \nClause hash can be a way to solve this, can't it?\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Tue, 9 Apr 2024 20:20:11 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "Hi,\n\nOn Tuesday, April 9th, 2024 at 2:46 PM, Robert Haas <[email protected]> wrote:\n> In all sincerity, I appreciate the endorsement. Basically what's been\n> scaring me about this feature is the possibility that there's some\n> incurable design flaw that I've managed to completely miss. If it has\n> some more garden-variety bugs, that's still pretty bad: people will\n> potentially lose data and be unable to get it back. But, as long as\n> we're able to find the bugs and fix them, the situation should improve\n> over time until, hopefully, everybody trusts it roughly as much as we\n> trust, say, crash recovery. Perhaps even a bit more: I think this code\n> is much better-written than our crash recovery code, which has grown\n> into a giant snarl that nobody seems able to untangle, despite\n> multiple refactoring attempts. However, if there's some reason why the\n> approach is fundamentally unsound which I and others have failed to\n> detect, then we're at risk of shipping a feature that is irretrievably\n> broken. 
That would really suck.\n\nIMHO it totally worth shipping such long-waited feature sooner than later.\nYes, it is a complex one, but you started advertising it since last January already, so people should already be able to play with it in Beta.\n\nAnd as you mentioned in your blog about the evergreen backup:\n\n> But if you're anything like me, you'll already see that this arrangement\n> has two serious weaknesses. First, if there are any data-corrupting bugs\n> in pg_combinebackup or any of the server-side code that supports\n> incremental backup, this approach could get you into big trouble.\n\nAt some point, the only way to really validate a backup is to actually try to restore it.\nAnd if people get encouraged to do that faster thanks to incremental backups, they could detect potential issues sooner.\nUltimately, users will still need their full backups and WAL archives.\nIf pg_combinebackup fails for any reason, the fix will be to perform the recovery from the full backup directly.\nThey still should be able to recover, just slower.\n\n--\nStefan FERCOT\nData Egret (https://dataegret.com)\n\n\n", "msg_date": "Tue, 09 Apr 2024 13:34:12 +0000", "msg_from": "Stefan Fercot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On 2024-Apr-09, Stefan Fercot wrote:\n\n> At some point, the only way to really validate a backup is to actually try to restore it.\n> And if people get encouraged to do that faster thanks to incremental backups, they could detect potential issues sooner.\n> Ultimately, users will still need their full backups and WAL archives.\n> If pg_combinebackup fails for any reason, the fix will be to perform the recovery from the full backup directly.\n> They still should be able to recover, just slower.\n\nI completely agree that people should be testing the feature so that we\ncan fix any bugs as soon as possible. However, if my understanding is\ncorrect, restoring a full backup plus an incremental no longer needs the\nintervening WAL up to the incremental. Users wishing to save some disk\nspace might be tempted to delete that WAL. If they do, and later it\nturns out that the full+incremental cannot be restored for whatever\nreason, they are in danger.\n\nBut you're right that if they don't delete that WAL, then the full is\nrestorable on its own.\n\nMaybe we should explicitly advise users to not delete that WAL from\ntheir archives, until pg_combinebackup is hammered a bit more.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"I must say, I am absolutely impressed with what pgsql's implementation of\nVALUES allows me to do. It's kind of ridiculous how much \"work\" goes away in\nmy code. 
Too bad I can't do this at work (Oracle 8/9).\" (Tom Allison)\n http://archives.postgresql.org/pgsql-general/2007-06/msg00016.php\n\n\n", "msg_date": "Tue, 9 Apr 2024 17:45:45 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "\n\n> On 9 Apr 2024, at 18:45, Alvaro Herrera <[email protected]> wrote:\n> \n> Maybe we should explicitly advise users to not delete that WAL from\n> their archives, until pg_combinebackup is hammered a bit more.\n\nAs a backup tool maintainer, I always reference to out-of-the box Postgres tools as some bulletproof alternative.\nI really would like to stick to this reputation and not discredit these tools.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 9 Apr 2024 18:59:54 +0300", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Mon, Apr 8, 2024 at 10:12 PM Tom Lane <[email protected]> wrote:\n> I don't know that I'd call it scary exactly, but I do think it\n> was premature. A week ago there was no consensus that it was\n> ready to commit, but Alexander pushed it (or half of it, anyway)\n> despite that.\n\nSome of the most compelling cases for the transformation will involve\npath keys. If the transformation enables the optimizer to build a\nplain index scan (or index-only scan) with useful path keys, then that\nmight well lead to a far superior plan compared to what's possible\nwith BitmapOrs.\n\nI understand that it'll still be possible to use OR expression\nevaluation in such cases, without applying the transformation (via\nfilter quals), so in principle you don't need the transformation to\nget an index scan that can (say) terminate the scan early due to the\npresence of an \"ORDER BY ... LIMIT n\". But I suspect that that won't\nwork out much of the time, because the planner will believe (rightly\nor wrongly) that the filter quals will incur too many heap page\naccesses.\n\nAnother problem (at least as I see it) with the new\nor_to_any_transform_limit GUC is that its design seems to have nothing\nto say about the importance of these sorts of cases. Most of these\ncases will only have 2 or 3 constants, just because that's what's most\ncommon in general.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 9 Apr 2024 12:12:36 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "Peter Geoghegan <[email protected]> writes:\n> On Mon, Apr 8, 2024 at 10:12 PM Tom Lane <[email protected]> wrote:\n>> I don't know that I'd call it scary exactly, but I do think it\n>> was premature. A week ago there was no consensus that it was\n>> ready to commit, but Alexander pushed it (or half of it, anyway)\n>> despite that.\n\n> Some of the most compelling cases for the transformation will involve\n> path keys. If the transformation enables the optimizer to build a\n> plain index scan (or index-only scan) with useful path keys, then that\n> might well lead to a far superior plan compared to what's possible\n> with BitmapOrs.\n\nI did not say it isn't a useful thing to have. 
I said the patch\ndid not appear ready to go in.\n\n> I understand that it'll still be possible to use OR expression\n> evaluation in such cases, without applying the transformation (via\n> filter quals), so in principle you don't need the transformation to\n> get an index scan that can (say) terminate the scan early due to the\n> presence of an \"ORDER BY ... LIMIT n\". But I suspect that that won't\n> work out much of the time, because the planner will believe (rightly\n> or wrongly) that the filter quals will incur too many heap page\n> accesses.\n\nThat's probably related to the fact that we don't have a mechanism\nfor evaluating non-indexed quals against columns that are retrievable\nfrom the index. We really oughta work on getting that done. But\nI've been keeping a side eye on the work to unify plain and index-only\nscans, because that's going to touch a lot of the same code so it\ndoesn't seem profitable to pursue those goals in parallel.\n\n> Another problem (at least as I see it) with the new\n> or_to_any_transform_limit GUC is that its design seems to have nothing\n> to say about the importance of these sorts of cases. Most of these\n> cases will only have 2 or 3 constants, just because that's what's most\n> common in general.\n\nYeah, that's one of the reasons I'm dubious that the committed\npatch was ready.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Apr 2024 12:27:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Tue, Apr 9, 2024 at 12:27 PM Tom Lane <[email protected]> wrote:\n> > Some of the most compelling cases for the transformation will involve\n> > path keys. If the transformation enables the optimizer to build a\n> > plain index scan (or index-only scan) with useful path keys, then that\n> > might well lead to a far superior plan compared to what's possible\n> > with BitmapOrs.\n>\n> I did not say it isn't a useful thing to have. I said the patch\n> did not appear ready to go in.\n\nDidn't mean to suggest otherwise. I just wanted to hear your thoughts\non this aspect of these sorts of transformations.\n\n> > I understand that it'll still be possible to use OR expression\n> > evaluation in such cases, without applying the transformation (via\n> > filter quals), so in principle you don't need the transformation to\n> > get an index scan that can (say) terminate the scan early due to the\n> > presence of an \"ORDER BY ... LIMIT n\". But I suspect that that won't\n> > work out much of the time, because the planner will believe (rightly\n> > or wrongly) that the filter quals will incur too many heap page\n> > accesses.\n>\n> That's probably related to the fact that we don't have a mechanism\n> for evaluating non-indexed quals against columns that are retrievable\n> from the index. We really oughta work on getting that done.\n\nI agree that that is very important work, but I'm not sure that it\nmakes all that much difference here. Even if we had that improved\nmechanism already, today, using index quals would still be strictly\nbetter than expression evaluation. Index quals can allow nbtree to\nskip over irrelevant parts of the index as the need arises, which is a\nsignificant advantage in its own right.\n\nISTM that the planner should always prefer index quals over expression\nevaluation, on general principle, even when there's no reason to think\nit'll work out. 
At worst the executor has essentially the same\nphysical access patterns as the expression evaluation case. On the\nother hand, providing nbtree with that context might end up being a\ngreat deal faster.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 9 Apr 2024 12:40:28 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Tue, Apr 9, 2024 at 8:55 AM Tom Lane <[email protected]> wrote:\n> Andrei Lepikhov <[email protected]> writes:\n> > On 9/4/2024 09:12, Tom Lane wrote:\n> >> I have another one that I'm not terribly happy about:\n> >> Author: Alexander Korotkov <[email protected]>\n> >> Branch: master [72bd38cc9] 2024-04-08 01:27:52 +0300\n> >> Transform OR clauses to ANY expression\n>\n> >> * What the medical community would call off-label usage of\n> >> query jumbling. I'm not sure this is even correct as-used,\n> >> and for sure it's using that code for something never intended.\n> >> Nor is the added code adequately (as in, at all) documented.\n>\n> > I agree with documentation and disagree with critics on the expression\n> > jumbling. It was introduced in the core. Why don't we allow it to be\n> > used to speed up machinery with some hashing?\n>\n> I would back up from that a good deal: why do we need to hash here in\n> the first place? There's no evidence I'm aware of that it's needful\n> from a performance standpoint.\n\nI think the feature is aimed to deal with large OR lists. I've seen a\nsignificant degradation on 10000 or-clause-groups. That might seem\nlike awfully a lot, but actually it's not unachievable in generated\nqueries.\n\nLinks.\n1. https://www.postgresql.org/message-id/CAPpHfduJtO0s9E%3DSHUTzrCD88BH0eik0UNog1_q3XBF2wLmH6g%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 9 Apr 2024 22:14:35 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Tue, Apr 9, 2024 at 5:12 AM Tom Lane <[email protected]> wrote:\n> * OrClauseGroupKey is not a Node type, so why does it have\n> a NodeTag? I wonder what value will appear in that field,\n> and what will happen if the struct is passed to any code\n> that expects real Nodes.\n\nI used that to put both not-subject-of-transform nodes together with\nhash entries into the same list. This is used to save the order of\nclauses. I think this is an important property, and I have already\nexpressed it in [1]. That could be achieved without adding NodeTag to\nhash entries, but that would require a level of indirection. It's not\npassed to code that expects real Nodes, it doesn't go to anything\nexcept lists.\n\nLinks.\n1. 
https://www.postgresql.org/message-id/CAPpHfdutHt31sdt2rfU%3D4fsDMWxf6tvtnHARgCzLY2Tf21%2Bfgw%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 9 Apr 2024 22:47:37 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Tue, Apr 9, 2024 at 8:37 AM Andrei Lepikhov\n<[email protected]> wrote:\n> On 9/4/2024 09:12, Tom Lane wrote:\n> > I have another one that I'm not terribly happy about:\n> >\n> > Author: Alexander Korotkov <[email protected]>\n> > Branch: master [72bd38cc9] 2024-04-08 01:27:52 +0300\n> >\n> > Transform OR clauses to ANY expression\n> Because I'm primary author of the idea, let me answer.\n> >\n> > I don't know that I'd call it scary exactly, but I do think it\n> > was premature. A week ago there was no consensus that it was\n> > ready to commit, but Alexander pushed it (or half of it, anyway)\n> > despite that. A few concrete concerns:\n> >\n> > * Yet another planner GUC. Do we really need or want that?\n> It is the most interesting question here. Looking around planner\n> features designed but not applied for the same reason because they can\n> produce suboptimal plans in corner cases, I think about inventing\n> flag-type parameters and hiding some features that work better for\n> different load types under such flagged parameters.\n\nYes, I have spotted this transformation could cause a bitmap scan plan\nregressions in [1] and [2]. Fixing that required to treat ANY the\nsame as OR for bitmap scans. Andrei implemented that in [3], but that\nincreases planning complexity and elimitates significant part of the\nadvantages of OR-to-ANY transformation.\n\nLinks.\n1. https://www.postgresql.org/message-id/CAPpHfduJtO0s9E%3DSHUTzrCD88BH0eik0UNog1_q3XBF2wLmH6g%40mail.gmail.com\n2. https://www.postgresql.org/message-id/CAPpHfdtSXxhdv3mLOLjEewGeXJ%2BFtfhjqodn1WWuq5JLsKx48g%40mail.gmail.com\n3. https://www.postgresql.org/message-id/6d27d752-db0b-4cac-9843-6ba3dd7a1e94%40postgrespro.ru\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 9 Apr 2024 23:07:56 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Tue, Apr 9, 2024 at 7:27 PM Tom Lane <[email protected]> wrote:\n> Peter Geoghegan <[email protected]> writes:\n> > On Mon, Apr 8, 2024 at 10:12 PM Tom Lane <[email protected]> wrote:\n> >> I don't know that I'd call it scary exactly, but I do think it\n> >> was premature. A week ago there was no consensus that it was\n> >> ready to commit, but Alexander pushed it (or half of it, anyway)\n> >> despite that.\n>\n> > Some of the most compelling cases for the transformation will involve\n> > path keys. If the transformation enables the optimizer to build a\n> > plain index scan (or index-only scan) with useful path keys, then that\n> > might well lead to a far superior plan compared to what's possible\n> > with BitmapOrs.\n>\n> I did not say it isn't a useful thing to have. I said the patch\n> did not appear ready to go in.\n>\n> > I understand that it'll still be possible to use OR expression\n> > evaluation in such cases, without applying the transformation (via\n> > filter quals), so in principle you don't need the transformation to\n> > get an index scan that can (say) terminate the scan early due to the\n> > presence of an \"ORDER BY ... LIMIT n\". 
But I suspect that that won't\n> > work out much of the time, because the planner will believe (rightly\n> > or wrongly) that the filter quals will incur too many heap page\n> > accesses.\n>\n> That's probably related to the fact that we don't have a mechanism\n> for evaluating non-indexed quals against columns that are retrievable\n> from the index. We really oughta work on getting that done. But\n> I've been keeping a side eye on the work to unify plain and index-only\n> scans, because that's going to touch a lot of the same code so it\n> doesn't seem profitable to pursue those goals in parallel.\n>\n> > Another problem (at least as I see it) with the new\n> > or_to_any_transform_limit GUC is that its design seems to have nothing\n> > to say about the importance of these sorts of cases. Most of these\n> > cases will only have 2 or 3 constants, just because that's what's most\n> > common in general.\n>\n> Yeah, that's one of the reasons I'm dubious that the committed\n> patch was ready.\n\nWhile inventing this GUC, I was thinking more about avoiding\nregressions rather than about unleashing the full power of this\noptimization. But now I see that that wasn't good enough. And it was\ndefinitely hasty to commit to this shape. I apologize for this.\n\nTom, I think you are way more experienced in this codebase than me.\nAnd, probably more importantly, more experienced in making decisions\nfor planner development. If you see some way forward to polish this\npost-commit, Andrei and I are ready to work hard on this with you. If\nyou don't see (or don't think that's good), let's revert this.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 9 Apr 2024 23:23:16 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "Alexander Korotkov <[email protected]> writes:\n> On Tue, Apr 9, 2024 at 5:12 AM Tom Lane <[email protected]> wrote:\n>> * OrClauseGroupKey is not a Node type, so why does it have\n>> a NodeTag? I wonder what value will appear in that field,\n>> and what will happen if the struct is passed to any code\n>> that expects real Nodes.\n\n> I used that to put both not-subject-of-transform nodes together with\n> hash entries into the same list. This is used to save the order of\n> clauses. I think this is an important property, and I have already\n> expressed it in [1].\n\nWhat exactly is the point of having a NodeTag in the struct though?\nIf you don't need it to be a valid Node, that seems pointless and\nconfusing. We certainly have plenty of other lists that contain\nplain structs without tags, so I don't buy that the List\ninfrastructure is making you do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Apr 2024 16:37:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "Alexander Korotkov <[email protected]> writes:\n> On Tue, Apr 9, 2024 at 7:27 PM Tom Lane <[email protected]> wrote:\n>> Yeah, that's one of the reasons I'm dubious that the committed\n>> patch was ready.\n\n> While inventing this GUC, I was thinking more about avoiding\n> regressions rather than about unleashing the full power of this\n> optimization. But now I see that that wasn't good enough. And it was\n> definitely hasty to commit to this shape. 
I apologize for this.\n\n> Tom, I think you are way more experienced in this codebase than me.\n> And, probably more importantly, more experienced in making decisions\n> for planner development. If you see some way forward to polish this\n> post-commit, Andrei and I are ready to work hard on this with you. If\n> you don't see (or don't think that's good), let's revert this.\n\nIt wasn't ready to commit, and I think trying to fix it up post\nfeature freeze isn't appropriate project management. Let's revert\nit and work on it more in the v18 time frame.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Apr 2024 16:42:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Tue, Apr 9, 2024 at 11:37 PM Tom Lane <[email protected]> wrote:\n> Alexander Korotkov <[email protected]> writes:\n> > On Tue, Apr 9, 2024 at 5:12 AM Tom Lane <[email protected]> wrote:\n> >> * OrClauseGroupKey is not a Node type, so why does it have\n> >> a NodeTag? I wonder what value will appear in that field,\n> >> and what will happen if the struct is passed to any code\n> >> that expects real Nodes.\n>\n> > I used that to put both not-subject-of-transform nodes together with\n> > hash entries into the same list. This is used to save the order of\n> > clauses. I think this is an important property, and I have already\n> > expressed it in [1].\n>\n> What exactly is the point of having a NodeTag in the struct though?\n> If you don't need it to be a valid Node, that seems pointless and\n> confusing. We certainly have plenty of other lists that contain\n> plain structs without tags, so I don't buy that the List\n> infrastructure is making you do that.\n\nThis code mixes Expr's and hash entries in the single list. The point\nof having a NodeTag in the struct is the ability to distinguish them\nlater.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 9 Apr 2024 23:48:12 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Tue, Apr 9, 2024 at 11:42 PM Tom Lane <[email protected]> wrote:\n> Alexander Korotkov <[email protected]> writes:\n> > On Tue, Apr 9, 2024 at 7:27 PM Tom Lane <[email protected]> wrote:\n> >> Yeah, that's one of the reasons I'm dubious that the committed\n> >> patch was ready.\n>\n> > While inventing this GUC, I was thinking more about avoiding\n> > regressions rather than about unleashing the full power of this\n> > optimization. But now I see that that wasn't good enough. And it was\n> > definitely hasty to commit to this shape. I apologize for this.\n>\n> > Tom, I think you are way more experienced in this codebase than me.\n> > And, probably more importantly, more experienced in making decisions\n> > for planner development. If you see some way forward to polish this\n> > post-commit, Andrei and I are ready to work hard on this with you. If\n> > you don't see (or don't think that's good), let's revert this.\n>\n> It wasn't ready to commit, and I think trying to fix it up post\n> feature freeze isn't appropriate project management. Let's revert\n> it and work on it more in the v18 time frame.\n\nOk, let's do this. 
I'd like to hear from you some directions for\nfurther development of this patch if possible.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 9 Apr 2024 23:49:43 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "Alexander Korotkov <[email protected]> writes:\n> On Tue, Apr 9, 2024 at 11:37 PM Tom Lane <[email protected]> wrote:\n>> What exactly is the point of having a NodeTag in the struct though?\n>> If you don't need it to be a valid Node, that seems pointless and\n>> confusing. We certainly have plenty of other lists that contain\n>> plain structs without tags, so I don't buy that the List\n>> infrastructure is making you do that.\n\n> This code mixes Expr's and hash entries in the single list. The point\n> of having a NodeTag in the struct is the ability to distinguish them\n> later.\n\nIf you're doing that, it really really ought to be a proper Node.\nIf nothing else, that would aid debugging by allowing the list\nto be pprint'ed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Apr 2024 17:05:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On 4/10/24 01:59, Andrey M. Borodin wrote:\n> \n>> On 9 Apr 2024, at 18:45, Alvaro Herrera <[email protected]> wrote:\n>>\n>> Maybe we should explicitly advise users to not delete that WAL from\n>> their archives, until pg_combinebackup is hammered a bit more.\n> \n> As a backup tool maintainer, I always reference to out-of-the box Postgres tools as some bulletproof alternative.\n> I really would like to stick to this reputation and not discredit these tools.\n\n+1.\n\nEven so, only keeping WAL for the last backup is a dangerous move in any \ncase. Lots of things can happen to a backup (other than bugs in the \nsoftware) so keeping WAL back to the last full (or for all backups) is \nalways an excellent idea.\n\nRegards,\n-David\n\n\n", "msg_date": "Wed, 10 Apr 2024 09:29:38 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Wed, Apr 10, 2024 at 09:29:38AM +1000, David Steele wrote:\n> Even so, only keeping WAL for the last backup is a dangerous move in any\n> case. Lots of things can happen to a backup (other than bugs in the\n> software) so keeping WAL back to the last full (or for all backups) is\n> always an excellent idea.\n\nYeah, that's an excellent practive, but is why I'm less worried for\nthis feature. The docs at [1] caution about \"not to remove earlier\nbackups if they might be needed when restoring later incremental\nbackups\". Like Alvaro said, should we insist a bit more about the WAL\nretention part in this section of the docs, down to the last full\nbackup?\n\n[1]: https://www.postgresql.org/docs/devel/continuous-archiving.html#BACKUP-INCREMENTAL-BACKUP\n--\nMichael", "msg_date": "Wed, 10 Apr 2024 08:50:56 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Tue, Apr 9, 2024 at 2:47 AM Robert Haas <[email protected]> wrote:\n>\n> - I'm slightly worried about the TID store work (ee1b30f12, 30e144287,\n> 667e65aac35), perhaps for no reason. Actually, the results seem really\n> impressive,\n\nFirst, thanks for the complement. 
I actually suspect if we had this\nyears ago, it might never have occurred to anyone to go through the\ntrouble of adding parallel index cleanup.\n\nIn a funny way, it's almost too effective -- the claim is that m_w_m\nover 1GB is perfectly usable, but I haven't been able to get anywere\nnear that via vacuum (we have indirectly, via bespoke dev code,\nbut...). I don't have easy access to hardware that can hold a table\nbig enough to do so -- vacuuming 1 billion records stays under 400MB.\n\n> and a very quick look at some of the commits seemed kind\n> of reassuring. But, like the changes to pruning and freezing, this is\n> making some really fundamental changes to critical code. In this case,\n> it's code that has evolved very little over the years and seems to\n> have now evolved quite a lot.\n\nTrue. I'd say that at a high level, storage and retrieval of TIDs is a\nlot simpler conceptually than other aspects of vacuuming. The\nlow-level guts are now much more complex, but I'm confident it won't\njust output a wrong answer. That aspect has been working for a long\ntime, and when it has broken during development, it fails very quickly\nand obviously.\n\nThe more challenging aspects are less cut-and-dried, like memory\nmanagement, delegation of responsibility, how to expose locking (which\nvacuum doesn't even use), readability/maintainability. Those are more\nsubjective, but it seems to have finally clicked into place in a way\nthat feels like the right trade-offs have been made. That's hand-wavy,\nI realize.\n\nThe more recent follow-up commits are pieces that were discussed and\nplanned for earlier, but have had less review and were left out from\nthe initial commits since they're not essential to the functionality.\nI judged that test coverage was enough to have confidence in them.\n\nOn Tue, Apr 9, 2024 at 6:33 AM Michael Paquier <[email protected]> wrote:\n>\n> This worries me less than other patches like the ones around heap\n> changes or the radix tree stuff for TID collection plugged into\n> vacuum, which don't have explicit on/off switches AFAIK.\n\nYes, there is no switch. Interestingly enough, the previous item array\nended up with its own escape hatch to tamp down its eager allocation,\nautovacuum_work_mem. Echoing what I said to Robert, if we had the new\nstorage years ago, I doubt this GUC would ever have been proposed.\n\nI'll also mention the array is still in the code base, but only in a\ntest module as a standard to test against. Hopefully that offers some\nreassurance.\n\n\n", "msg_date": "Wed, 10 Apr 2024 08:49:38 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Tue, Apr 9, 2024 at 1:59 AM Peter Eisentraut <[email protected]> wrote:\n> On 09.04.24 00:58, Michael Paquier wrote:\n> > That's more linked to the fact that I was going silent without a\n> > laptop for a few weeks before the end of the release cycle, and a way\n> > to say to not count on me, while I was trying to keep my room clean to\n> > avoid noise for others who would rush patches. It is a vacation\n> > period for schools in Japan as the fiscal year finishes at the end of\n> > March, while the rest of the world still studies/works, so that makes\n> > trips much easier with areas being less busy when going abroad. 
If\n> > you want to limit commit activity during this period, the answer is\n> > simple then: require that all the committers live in Japan.\n>\n> Well, due to the Easter holiday being earlier this year, I adopted a\n> similar approach: Go on vacation the last week of March and watch the\n> rest from the pool. :) So far I feel this was better for my well-being.\n\nI usually aim to have my major work for the release code complete by\napproximately September and committed by December or January. Then if\nit slips, I still have a chance of finishing before the freeze, and if\nit doesn't, then I don't have to deal with the mad flurry of activity\nat the end, and perhaps there's even time for some follow-up work\nafterwards (as in the case of IB), or just time to review some other\npatches.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Apr 2024 10:15:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Mon, Apr 8, 2024 at 10:12 PM Tom Lane <[email protected]> wrote:\n> I have another one that I'm not terribly happy about:\n>\n> Author: Alexander Korotkov <[email protected]>\n> Branch: master [72bd38cc9] 2024-04-08 01:27:52 +0300\n>\n> Transform OR clauses to ANY expression\n\nI realize that this has been reverted now, but what's really\nfrustrating about this case is that I reviewed this patch before and\ngave feedback similar to some of the feedback you gave, and it just\ndidn't matter, and the patch was committed anyway.\n\n> I don't know that I'd call it scary exactly, but I do think it\n> was premature. A week ago there was no consensus that it was\n> ready to commit, but Alexander pushed it (or half of it, anyway)\n> despite that. A few concrete concerns:\n>\n> * Yet another planner GUC. Do we really need or want that?\n\nIMHO, no, and I said so in\nhttps://www.postgresql.org/message-id/CA%2BTgmob%3DebuCHFSw327b55DJzE3JtOuZ5owxob%2BMgErb4me_Ag%40mail.gmail.com\n\n> * What the medical community would call off-label usage of\n> query jumbling. I'm not sure this is even correct as-used,\n> and for sure it's using that code for something never intended.\n> Nor is the added code adequately (as in, at all) documented.\n\nAnd I raised this point here:\nhttps://www.postgresql.org/message-id/CA%2BTgmoZCgP6FrBQEusn4yaWm02XU8OPeoEMk91q7PRBgwaAkFw%40mail.gmail.com\n\n> * Patch refuses to group anything but Consts into the SAOP\n> transformation. I realize that if you want to produce an\n> array Const you need Const inputs, but I wonder why it\n> wasn't considered to produce an ARRAY[] construct if there\n> are available clauses with pseudo-constant (eg Param)\n> comparison values.\n>\n> * I really, really dislike jamming this logic into prepqual.c,\n> where it has no business being. I note that it was shoved\n> into process_duplicate_ors without even the courtesy of\n> expanding the header comment:\n>\n> * process_duplicate_ors\n> * Given a list of exprs which are ORed together, try to apply\n> * the inverse OR distributive law.\n>\n> Another reason to think this wasn't a very well chosen place is\n> that the file's list of #include's went from 4 entries to 11.\n> Somebody should have twigged to the idea that this was off-topic\n> for prepqual.c.\n\nAll of this seems like it might be related to my comments in the above\nemail about the transformation being done too early.\n\n> * OrClauseGroupKey is not a Node type, so why does it have\n> a NodeTag? 
I wonder what value will appear in that field,\n> and what will happen if the struct is passed to any code\n> that expects real Nodes.\n\nI don't think I raised this issue.\n\n> I could probably find some other nits if I spent more time\n> on it, but I think these are sufficient to show that this\n> was not commit-ready.\n\nJust imagine if someone had taken time to give similar feedback before\nthe commit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Apr 2024 10:26:55 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "\n\nOn 4/10/24 09:50, Michael Paquier wrote:\n> On Wed, Apr 10, 2024 at 09:29:38AM +1000, David Steele wrote:\n>> Even so, only keeping WAL for the last backup is a dangerous move in any\n>> case. Lots of things can happen to a backup (other than bugs in the\n>> software) so keeping WAL back to the last full (or for all backups) is\n>> always an excellent idea.\n> \n> Yeah, that's an excellent practive, but is why I'm less worried for\n> this feature. The docs at [1] caution about \"not to remove earlier\n> backups if they might be needed when restoring later incremental\n> backups\". Like Alvaro said, should we insist a bit more about the WAL\n> retention part in this section of the docs, down to the last full\n> backup?\n\nI think that would make sense in general. But if we are doing it because \nwe lack confidence in the incremental backup feature maybe that's a sign \nthat the feature should be released as experimental (or not released at \nall).\n\nRegards,\n-David\n\n\n", "msg_date": "Thu, 11 Apr 2024 09:36:11 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": ">\n> > Yeah, that's an excellent practive, but is why I'm less worried for\n> > this feature. The docs at [1] caution about \"not to remove earlier\n> > backups if they might be needed when restoring later incremental\n> > backups\". Like Alvaro said, should we insist a bit more about the WAL\n> > retention part in this section of the docs, down to the last full\n> > backup?\n>\n> I think that would make sense in general. But if we are doing it because\n> we lack confidence in the incremental backup feature maybe that's a sign\n> that the feature should be released as experimental (or not released at\n> all).\n>\n>\nThe extensive Beta process we have can be used to build confidence we need\nin a feature that has extensive review and currently has no known issues or\noutstanding objections.\n\n\n\n> Regards,\n> -David\n>\n>\n>\n\n-- \nThomas John Kincaid\n\n\n> Yeah, that's an excellent practive, but is why I'm less worried for\n> this feature.  The docs at [1] caution about \"not to remove earlier\n> backups if they might be needed when restoring later incremental\n> backups\".  Like Alvaro said, should we insist a bit more about the WAL\n> retention part in this section of the docs, down to the last full\n> backup?\n\nI think that would make sense in general. But if we are doing it because \nwe lack confidence in the incremental backup feature maybe that's a sign \nthat the feature should be released as experimental (or not released at \nall).\nThe extensive Beta process we have can be used to build confidence we need in a feature that has extensive review and currently has no known issues or outstanding objections. 
\nRegards,\n-David\n\n\n-- Thomas John Kincaid", "msg_date": "Wed, 10 Apr 2024 20:23:19 -0400", "msg_from": "Tom Kincaid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On 4/11/24 10:23, Tom Kincaid wrote:\n> \n> The extensive Beta process we have can be used to build confidence we \n> need in a feature that has extensive review and currently has no known \n> issues or outstanding objections.\n\nI did have objections, here [1] and here [2]. I think the complexity, \nspace requirements, and likely performance issues involved in restores \nare going to be a real problem for users. Some of these can be addressed \nin future releases, but I can't escape the feeling that what we are \nreleasing here is half-baked.\n\nAlso, there are outstanding issues here [3] and now here [4].\n\nRegards,\n-David\n\n[1] \nhttps://www.postgresql.org/message-id/590e3017-da1f-4af6-9bf0-1679511ca7e5%40pgmasters.net\n[2] \nhttps://www.postgresql.org/message-id/11b38a96-6ded-4668-b772-40f992132797%40pgmasters.net\n[3] \nhttps://www.postgresql.org/message-id/flat/05fb32c9-18d8-4f72-9af3-f41576c33119%40pgmasters.net#bb04b896f0f0147c10cee944a1391c1e\n[4] \nhttps://www.postgresql.org/message-id/flat/9badd24d-5bd9-4c35-ba85-4c38a2feb73e%40pgmasters.net\n\n\n", "msg_date": "Thu, 11 Apr 2024 11:52:00 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "\n\nOn 4/11/24 03:52, David Steele wrote:\n> On 4/11/24 10:23, Tom Kincaid wrote:\n>>\n>> The extensive Beta process we have can be used to build confidence we\n>> need in a feature that has extensive review and currently has no known\n>> issues or outstanding objections.\n> \n> I did have objections, here [1] and here [2]. I think the complexity,\n> space requirements, and likely performance issues involved in restores\n> are going to be a real problem for users. Some of these can be addressed\n> in future releases, but I can't escape the feeling that what we are\n> releasing here is half-baked.\n> \n\nI haven't been part of those discussions, and that part of the thread is\na couple months old already, so I'll share my view here instead.\n\nI do not think it's half-baked. I certainly agree there are limitations,\nand there's all kinds of bells and whistles we could add, but I think\nthe fundamental infrastructure is corrent and a meaningful step forward.\nWould I wish it to handle .tar for example? Sure I would. But I think\nit's something we can add in the future - if we require all of this to\nhappen in a single release, it'll never happen.\n\nFWIW that discussion also mentions stuff that I think the feature should\nnot do. In particular, I don't think the ambition was (or should be) to\nmake pg_basebackup into a stand-alone tool. 
I always saw pg_basebackup\nmore as an interface to \"backup steps\" correctly rather than a complete\nbackup solution that'd manage backup registry, retention, etc.\n\n> Also, there are outstanding issues here [3] and now here [4].\n> \n\nI agree with some of this, I'll respond in the threads.\n\n\nregards\nTomas\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 11 Apr 2024 12:26:38 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "Hi!\n\nI also worked with this patch and until your explanation I didn’t fully \nunderstand the reasons why it was wrong to have this implementation when \nremoving duplicate OR expressions.\n\nThank you, now I understand it!\n\nI agree with explanation of Andrei Lepikhov regarding the fact that \nthere were difficulties in moving the patch to another place and\nthe explanation of Alexander Korotkov and Peter Geoghegan regarding the \nneed to apply this transformation.\n\nLet me just add that initially this patch tried to solve a problem where \n50,000 OR expressions were generated and\nthere was a problem running that query using a plan with BitmapScan. I \nwrote more about this here [0].\n\nIf the patch can be improved and you can tell me how, because I don’t \nquite understand how to do it yet, to be honest, then I’ll be happy to \nwork on it too.\n\n[0] \nhttps://www.postgresql.org/message-id/052172e4-6d75-8069-3179-26de339dca03%40postgrespro.ru\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 11 Apr 2024 17:12:29 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On 4/11/24 20:26, Tomas Vondra wrote:\n> \n> On 4/11/24 03:52, David Steele wrote:\n>> On 4/11/24 10:23, Tom Kincaid wrote:\n>>>\n>>> The extensive Beta process we have can be used to build confidence we\n>>> need in a feature that has extensive review and currently has no known\n>>> issues or outstanding objections.\n>>\n>> I did have objections, here [1] and here [2]. I think the complexity,\n>> space requirements, and likely performance issues involved in restores\n>> are going to be a real problem for users. Some of these can be addressed\n>> in future releases, but I can't escape the feeling that what we are\n>> releasing here is half-baked.\n> \n> I haven't been part of those discussions, and that part of the thread is\n> a couple months old already, so I'll share my view here instead.\n> \n> I do not think it's half-baked. I certainly agree there are limitations,\n> and there's all kinds of bells and whistles we could add, but I think\n> the fundamental infrastructure is corrent and a meaningful step forward.\n> Would I wish it to handle .tar for example? Sure I would. But I think\n> it's something we can add in the future - if we require all of this to\n> happen in a single release, it'll never happen.\n\nFair enough, but the current release is extremely limited and it would \nbe best if that was well understood by users.\n\n> FWIW that discussion also mentions stuff that I think the feature should\n> not do. In particular, I don't think the ambition was (or should be) to\n> make pg_basebackup into a stand-alone tool. 
I always saw pg_basebackup\n> more as an interface to \"backup steps\" correctly rather than a complete\n> backup solution that'd manage backup registry, retention, etc.\n\nRight -- this is exactly my issue. pg_basebackup was never easy to use \nas a backup solution and this feature makes it significantly more \ncomplicated. Complicated enough that it would be extremely difficult for \nmost users to utilize in a meaningful way.\n\nBut they'll try because it is a new pg_basebackup feature and they'll \nassume it is there to be used. Maybe it would be a good idea to make it \nclear in the documentation that significant tooling will be required to \nmake it work.\n\nRegards,\n-David\n\n\n", "msg_date": "Fri, 12 Apr 2024 07:48:12 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Thu, Apr 11, 2024 at 5:48 PM David Steele <[email protected]> wrote:\n> But they'll try because it is a new pg_basebackup feature and they'll\n> assume it is there to be used. Maybe it would be a good idea to make it\n> clear in the documentation that significant tooling will be required to\n> make it work.\n\nI don't agree with that idea. LOTS of what we ship takes a significant\namount of effort to make it work. You may well need a connection\npooler. You may well need a failover manager which may or may not be\nseparate from your connection pooler. You need a backup tool. You need\na replication management tool which may or may not be separate from\nyour backup tool and may or may not be separate from your failover\ntool. You probably need various out-of-core connections for the\nprogramming languages you need. You may need a management tool, and\nyou probably need a monitoring tool. Some of the tools you might\nchoose to do all that stuff themselves have a whole bunch of complex\ndependencies. It's a mess.\n\nNow, if someone were to say that we ought to talk about these issues\nin our documentation and maybe give people some ideas about how to get\nstarted, I would likely be in favor of that, modulo the small\npolitical problem that various people would want their solution to be\nthe canonical one to which everyone gets referred. But I think it's\nwrong to pretend like this feature is somehow special, that it's\nsomehow more raw or unfinished than tons of other things. I actually\nthink it's significantly *better* than a lot of other things. If we\nadd a disclaimer to the documentation saying \"hey, this new\nincremental backup feature is half-finished garbage!\", and meanwhile\nthe documentation still says \"hey, you can use cp as your\narchive_command,\" then we have completely lost our minds.\n\nI also think that you're being more negative about this than the facts\njustify. As I said to several colleagues today, I *fully* acknowledge\nthat you have a lot more practical experience in this area than I do,\nand a bunch of good ideas. I was really pleased to see you talking\nabout how it would be good if these tools worked on tar files - and I\ncompletely agree, and I hope that will happen, and I hope to help in\nmaking that happen. I think there are a bunch of other problems too,\nonly some of which I can guess at. However, I think saying that this\nfeature is not realistically intended to be used by end-users or that\nthey will not be able to do so is over the top, and is actually kind\nof insulting. 
There has been more enthusiasm for this feature on this\nmailing list and elsewhere than I've gotten for anything I've\ndeveloped in years. And I don't think that's because all of the people\nwho have expressed enthusiasm are silly geese who don't understand how\nterrible it is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Apr 2024 22:15:27 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "\n\nOn 4/12/24 12:15, Robert Haas wrote:\n> On Thu, Apr 11, 2024 at 5:48 PM David Steele <[email protected]> wrote:\n>> But they'll try because it is a new pg_basebackup feature and they'll\n>> assume it is there to be used. Maybe it would be a good idea to make it\n>> clear in the documentation that significant tooling will be required to\n>> make it work.\n> \n> I don't agree with that idea. LOTS of what we ship takes a significant\n> amount of effort to make it work. You may well need a connection\n> pooler. You may well need a failover manager which may or may not be\n> separate from your connection pooler. You need a backup tool. You need\n> a replication management tool which may or may not be separate from\n> your backup tool and may or may not be separate from your failover\n> tool. You probably need various out-of-core connections for the\n> programming languages you need. You may need a management tool, and\n> you probably need a monitoring tool. Some of the tools you might\n> choose to do all that stuff themselves have a whole bunch of complex\n> dependencies. It's a mess.\n\nThe difference here is you *can* use Postgres without a connection \npooler (I have many times) or failover (if downtime is acceptable) but \nmost people would agree that you really *need* backup.\n\nThe backup tool should be clear and easy to use or misery will \ninevitably result. pg_basebackup is difficult enough to use and automate \nbecause it has no notion of a repository, no expiration, and no WAL \nhandling just to name a few things. Now there is an even more advanced \nfeature that is even harder to use. So, no, I really don't think this \nfeature is practically usable by the vast majority of end users.\n\n> Now, if someone were to say that we ought to talk about these issues\n> in our documentation and maybe give people some ideas about how to get\n> started, I would likely be in favor of that, modulo the small\n> political problem that various people would want their solution to be\n> the canonical one to which everyone gets referred. But I think it's\n> wrong to pretend like this feature is somehow special, that it's\n> somehow more raw or unfinished than tons of other things. I actually\n> think it's significantly *better* than a lot of other things. If we\n> add a disclaimer to the documentation saying \"hey, this new\n> incremental backup feature is half-finished garbage!\", and meanwhile\n> the documentation still says \"hey, you can use cp as your\n> archive_command,\" then we have completely lost our minds.\n\nFair point on cp, but that just points to an overall lack in our \ndocumentation and built-in backup/recovery tools in general.\n\n> I also think that you're being more negative about this than the facts\n> justify. As I said to several colleagues today, I *fully* acknowledge\n> that you have a lot more practical experience in this area than I do,\n> and a bunch of good ideas. 
I was really pleased to see you talking\n> about how it would be good if these tools worked on tar files - and I\n> completely agree, and I hope that will happen, and I hope to help in\n> making that happen. I think there are a bunch of other problems too,\n> only some of which I can guess at. However, I think saying that this\n> feature is not realistically intended to be used by end-users or that\n> they will not be able to do so is over the top, and is actually kind\n> of insulting. \n\nIt is not meant to be insulting, but I still believe it to be true. \nAfter years of working with users on backup problems I think I have a \npretty good bead on what the vast majority of admins are capable of \nand/or willing to do. Making this feature work is pretty high above that \nbar.\n\nIf the primary motivation is to provide a feature that can be integrated \nwith third party tools, as Tomas suggests, then I guess usability is \nsomewhat moot. But you are insisting that is not the case and I just \ndon't see it that way.\n\n> There has been more enthusiasm for this feature on this\n> mailing list and elsewhere than I've gotten for anything I've\n> developed in years. And I don't think that's because all of the people\n> who have expressed enthusiasm are silly geese who don't understand how\n> terrible it is.\n\nNo doubt there is enthusiasm. It's a great feature to have. In \nparticular I think the WAL summarizer is cool. But I do think the \nshortcomings are significant and that will become very apparent when \npeople start to implement. The last minute effort to add COW support is \nan indication of problems that people will see in the field.\n\nFurther, I do think some less that ideal design decisions were made. In \nparticular, I think sidelining manifests, i.e. making them optional, is \nnot a good choice. This has led directly to the issue we see in [1]. If \nwe require a manifest to make an incremental backup, why make it \noptional for combine?\n\nThis same design decision has led us to have \"marker files\" for \nzero-length files and unchanged files, which just seems extremely \nwasteful when these could be noted in the manifest. There are good \nreasons for writing everything out in a full backup, but for an \nincremental that can only be reconstructed using our tool the manifest \nshould be sufficient.\n\nMaybe all of this can be improved in a future release, along with tar \nreading, but none of those potential future improvements help me to \nbelieve that this is a user-friendly feature in this release.\n\nRegards,\n-David\n\n---\n\n[1] \nhttps://www.postgresql.org/message-id/flat/9badd24d-5bd9-4c35-ba85-4c38a2feb73e%40pgmasters.net\n\n\n", "msg_date": "Fri, 12 Apr 2024 13:11:09 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On 4/11/24 20:26, Tomas Vondra wrote:\n> On 4/11/24 03:52, David Steele wrote:\n>> On 4/11/24 10:23, Tom Kincaid wrote:\n>>>\n>>> The extensive Beta process we have can be used to build confidence we\n>>> need in a feature that has extensive review and currently has no known\n>>> issues or outstanding objections.\n>>\n>> I did have objections, here [1] and here [2]. I think the complexity,\n>> space requirements, and likely performance issues involved in restores\n>> are going to be a real problem for users. 
Some of these can be addressed\n>> in future releases, but I can't escape the feeling that what we are\n>> releasing here is half-baked.\n>>\n> I do not think it's half-baked. I certainly agree there are limitations,\n> and there's all kinds of bells and whistles we could add, but I think\n> the fundamental infrastructure is corrent and a meaningful step forward.\n> Would I wish it to handle .tar for example? Sure I would. But I think\n> it's something we can add in the future - if we require all of this to\n> happen in a single release, it'll never happen.\n\nI'm not sure that I really buy this argument, anyway. It is not uncommon \nfor significant features to spend years in development before they are \ncommitted. This feature went from first introduction to commit in just \nover six months. Obviously Robert had been working on it for a while, \nbut for a feature this large six months is a sprint.\n\nRegards,\n-David\n\n\n", "msg_date": "Fri, 12 Apr 2024 16:42:13 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "\n\nOn 4/11/24 23:48, David Steele wrote:\n> On 4/11/24 20:26, Tomas Vondra wrote:\n>>\n>> On 4/11/24 03:52, David Steele wrote:\n>>> On 4/11/24 10:23, Tom Kincaid wrote:\n>>>>\n>>>> The extensive Beta process we have can be used to build confidence we\n>>>> need in a feature that has extensive review and currently has no known\n>>>> issues or outstanding objections.\n>>>\n>>> I did have objections, here [1] and here [2]. I think the complexity,\n>>> space requirements, and likely performance issues involved in restores\n>>> are going to be a real problem for users. Some of these can be addressed\n>>> in future releases, but I can't escape the feeling that what we are\n>>> releasing here is half-baked.\n>>\n>> I haven't been part of those discussions, and that part of the thread is\n>> a couple months old already, so I'll share my view here instead.\n>>\n>> I do not think it's half-baked. I certainly agree there are limitations,\n>> and there's all kinds of bells and whistles we could add, but I think\n>> the fundamental infrastructure is corrent and a meaningful step forward.\n>> Would I wish it to handle .tar for example? Sure I would. But I think\n>> it's something we can add in the future - if we require all of this to\n>> happen in a single release, it'll never happen.\n> \n> Fair enough, but the current release is extremely limited and it would\n> be best if that was well understood by users.\n> \n>> FWIW that discussion also mentions stuff that I think the feature should\n>> not do. In particular, I don't think the ambition was (or should be) to\n>> make pg_basebackup into a stand-alone tool. I always saw pg_basebackup\n>> more as an interface to \"backup steps\" correctly rather than a complete\n>> backup solution that'd manage backup registry, retention, etc.\n> \n> Right -- this is exactly my issue. pg_basebackup was never easy to use\n> as a backup solution and this feature makes it significantly more\n> complicated. Complicated enough that it would be extremely difficult for\n> most users to utilize in a meaningful way.\n> \n\nPerhaps, I agree we could/should try to do better job to do backups, no\nargument there. But I still don't quite see why introducing such\ninfrastructure to \"manage backups\" should be up to the patch adding\nincremental backups. 
I see it as something to build on top of\npg_basebackup/pg_combinebackup, not into those tools.\n\n> But they'll try because it is a new pg_basebackup feature and they'll\n> assume it is there to be used. Maybe it would be a good idea to make it\n> clear in the documentation that significant tooling will be required to\n> make it work.\n> \n\nSure, I'm not against making it clearer pg_combinebackup is not a\ncomplete backup solution, and documenting the existing restrictions.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 12 Apr 2024 14:12:34 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "\n\nOn 4/12/24 08:42, David Steele wrote:\n> On 4/11/24 20:26, Tomas Vondra wrote:\n>> On 4/11/24 03:52, David Steele wrote:\n>>> On 4/11/24 10:23, Tom Kincaid wrote:\n>>>>\n>>>> The extensive Beta process we have can be used to build confidence we\n>>>> need in a feature that has extensive review and currently has no known\n>>>> issues or outstanding objections.\n>>>\n>>> I did have objections, here [1] and here [2]. I think the complexity,\n>>> space requirements, and likely performance issues involved in restores\n>>> are going to be a real problem for users. Some of these can be addressed\n>>> in future releases, but I can't escape the feeling that what we are\n>>> releasing here is half-baked.\n>>>\n>> I do not think it's half-baked. I certainly agree there are limitations,\n>> and there's all kinds of bells and whistles we could add, but I think\n>> the fundamental infrastructure is corrent and a meaningful step forward.\n>> Would I wish it to handle .tar for example? Sure I would. But I think\n>> it's something we can add in the future - if we require all of this to\n>> happen in a single release, it'll never happen.\n> \n> I'm not sure that I really buy this argument, anyway. It is not uncommon\n> for significant features to spend years in development before they are\n> committed. This feature went from first introduction to commit in just\n> over six months. Obviously Robert had been working on it for a while,\n> but for a feature this large six months is a sprint.\n> \n\nSure, but it's also not uncommon for significant features to be\ndeveloped incrementally, over multiple releases, introducing the basic\ninfrastructure first, and then expanding the capabilities later. I'd\ncite logical decoding/replication and parallel query as examples of this\napproach.\n\nIt's possible there's some fundamental flaw in the WAL summarization?\nSure, I can't rule that out, although I find it unlikely. Could there be\nbugs? Sure, that's possible, but that applies to all code.\n\nBut it seems to me all the comments are about the client side, not about\nthe infrastructure. 
Which is fair, I certainly agree it'd be nice to\nhandle more use cases with less effort, but I still think the patch is a\nmeaningful step forward.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 12 Apr 2024 14:27:16 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Wed, Apr 10, 2024 at 5:27 PM Robert Haas <[email protected]> wrote:\n>\n> On Mon, Apr 8, 2024 at 10:12 PM Tom Lane <[email protected]> wrote:\n> > I have another one that I'm not terribly happy about:\n> >\n> > Author: Alexander Korotkov <[email protected]>\n> > Branch: master [72bd38cc9] 2024-04-08 01:27:52 +0300\n> >\n> > Transform OR clauses to ANY expression\n>\n> I realize that this has been reverted now, but what's really\n> frustrating about this case is that I reviewed this patch before and\n> gave feedback similar to some of the feedback you gave, and it just\n> didn't matter, and the patch was committed anyway.\n>\n> > I don't know that I'd call it scary exactly, but I do think it\n> > was premature. A week ago there was no consensus that it was\n> > ready to commit, but Alexander pushed it (or half of it, anyway)\n> > despite that. A few concrete concerns:\n> >\n> > * Yet another planner GUC. Do we really need or want that?\n>\n> IMHO, no, and I said so in\n> https://www.postgresql.org/message-id/CA%2BTgmob%3DebuCHFSw327b55DJzE3JtOuZ5owxob%2BMgErb4me_Ag%40mail.gmail.com\n>\n> > * What the medical community would call off-label usage of\n> > query jumbling. I'm not sure this is even correct as-used,\n> > and for sure it's using that code for something never intended.\n> > Nor is the added code adequately (as in, at all) documented.\n>\n> And I raised this point here:\n> https://www.postgresql.org/message-id/CA%2BTgmoZCgP6FrBQEusn4yaWm02XU8OPeoEMk91q7PRBgwaAkFw%40mail.gmail.com\n>\n> > * Patch refuses to group anything but Consts into the SAOP\n> > transformation. I realize that if you want to produce an\n> > array Const you need Const inputs, but I wonder why it\n> > wasn't considered to produce an ARRAY[] construct if there\n> > are available clauses with pseudo-constant (eg Param)\n> > comparison values.\n> >\n> > * I really, really dislike jamming this logic into prepqual.c,\n> > where it has no business being. I note that it was shoved\n> > into process_duplicate_ors without even the courtesy of\n> > expanding the header comment:\n> >\n> > * process_duplicate_ors\n> > * Given a list of exprs which are ORed together, try to apply\n> > * the inverse OR distributive law.\n> >\n> > Another reason to think this wasn't a very well chosen place is\n> > that the file's list of #include's went from 4 entries to 11.\n> > Somebody should have twigged to the idea that this was off-topic\n> > for prepqual.c.\n>\n> All of this seems like it might be related to my comments in the above\n> email about the transformation being done too early.\n>\n> > * OrClauseGroupKey is not a Node type, so why does it have\n> > a NodeTag? 
I wonder what value will appear in that field,\n> > and what will happen if the struct is passed to any code\n> > that expects real Nodes.\n>\n> I don't think I raised this issue.\n>\n> > I could probably find some other nits if I spent more time\n> > on it, but I think these are sufficient to show that this\n> > was not commit-ready.\n>\n> Just imagine if someone had taken time to give similar feedback before\n> the commit.\n\nFWIW, I made my conclusion that it isn't worth to commit stuff like\nthis without explicit consent from Tom. As well as it isn't worth to\ncommit table AM changes without explicit consent from Andres. And it\nisn't worth it to postpone large features to the last CF (it's better\nto postpone to the next release then).\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Fri, 12 Apr 2024 17:54:29 +0300", "msg_from": "Alexander Korotkov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On 4/12/24 22:12, Tomas Vondra wrote:\n> On 4/11/24 23:48, David Steele wrote:\n>> On 4/11/24 20:26, Tomas Vondra wrote:\n >>\n>>> FWIW that discussion also mentions stuff that I think the feature should\n>>> not do. In particular, I don't think the ambition was (or should be) to\n>>> make pg_basebackup into a stand-alone tool. I always saw pg_basebackup\n>>> more as an interface to \"backup steps\" correctly rather than a complete\n>>> backup solution that'd manage backup registry, retention, etc.\n>>\n>> Right -- this is exactly my issue. pg_basebackup was never easy to use\n>> as a backup solution and this feature makes it significantly more\n>> complicated. Complicated enough that it would be extremely difficult for\n>> most users to utilize in a meaningful way.\n> \n> Perhaps, I agree we could/should try to do better job to do backups, no\n> argument there. But I still don't quite see why introducing such\n> infrastructure to \"manage backups\" should be up to the patch adding\n> incremental backups. I see it as something to build on top of\n> pg_basebackup/pg_combinebackup, not into those tools.\n\nI'm not saying that managing backups needs to be part of pg_basebackup, \nbut I am saying without that it is not a complete backup solution. \nTherefore introducing advanced features that the user then has to figure \nout how to manage puts a large burden on them. Implementing \npg_combinebackup inefficiently out of the gate just makes their life harder.\n\n>> But they'll try because it is a new pg_basebackup feature and they'll\n>> assume it is there to be used. Maybe it would be a good idea to make it\n>> clear in the documentation that significant tooling will be required to\n>> make it work.\n> \n> Sure, I'm not against making it clearer pg_combinebackup is not a\n> complete backup solution, and documenting the existing restrictions.\n\nLet's do that then. I think it would make sense to add caveats to the \npg_combinebackup docs including space requirements, being explicit about \nthe format required (e.g. 
plain), and also possible mitigation with COW \nfilesystems.\n\nRegards,\n-David\n\n\n", "msg_date": "Sat, 13 Apr 2024 09:03:28 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On 4/12/24 22:27, Tomas Vondra wrote:\n> \n> \n> On 4/12/24 08:42, David Steele wrote:\n>> On 4/11/24 20:26, Tomas Vondra wrote:\n>>> On 4/11/24 03:52, David Steele wrote:\n>>>> On 4/11/24 10:23, Tom Kincaid wrote:\n>>>>>\n>>>>> The extensive Beta process we have can be used to build confidence we\n>>>>> need in a feature that has extensive review and currently has no known\n>>>>> issues or outstanding objections.\n>>>>\n>>>> I did have objections, here [1] and here [2]. I think the complexity,\n>>>> space requirements, and likely performance issues involved in restores\n>>>> are going to be a real problem for users. Some of these can be addressed\n>>>> in future releases, but I can't escape the feeling that what we are\n>>>> releasing here is half-baked.\n>>>>\n>>> I do not think it's half-baked. I certainly agree there are limitations,\n>>> and there's all kinds of bells and whistles we could add, but I think\n>>> the fundamental infrastructure is corrent and a meaningful step forward.\n>>> Would I wish it to handle .tar for example? Sure I would. But I think\n>>> it's something we can add in the future - if we require all of this to\n>>> happen in a single release, it'll never happen.\n>>\n>> I'm not sure that I really buy this argument, anyway. It is not uncommon\n>> for significant features to spend years in development before they are\n>> committed. This feature went from first introduction to commit in just\n>> over six months. Obviously Robert had been working on it for a while,\n>> but for a feature this large six months is a sprint.\n>>\n> \n> Sure, but it's also not uncommon for significant features to be\n> developed incrementally, over multiple releases, introducing the basic\n> infrastructure first, and then expanding the capabilities later. I'd\n> cite logical decoding/replication and parallel query as examples of this\n> approach.\n> \n> It's possible there's some fundamental flaw in the WAL summarization?\n> Sure, I can't rule that out, although I find it unlikely. Could there be\n> bugs? Sure, that's possible, but that applies to all code.\n> \n> But it seems to me all the comments are about the client side, not about\n> the infrastructure. Which is fair, I certainly agree it'd be nice to\n> handle more use cases with less effort, but I still think the patch is a\n> meaningful step forward.\n\nYes, my comments are all about the client code. I like the \nimplementation of the WAL summarizer a lot. I don't think there is a \nfundamental flaw in the design, either, but I wouldn't be surprised if \nthere are bugs. That's life in software development biz.\n\nEven for the summarizer, though, I do worry about the complexity of \nmaintaining it over time. It seems like it would be very easy to \nintroduce a bug and have it go unnoticed until it causes problems in the \nfield. A lot of testing was done outside of the test suite for this \nfeature and I'm not sure if we can rely on that focus with every release.\n\nFor me an incremental approach would be to introduce the WAL summarizer \nfirst. There are already plenty of projects that do page-level \nincremental (WAL-G, pg_probackup, pgBackRest) and could help shake out \nthe bugs. Then introduce the client tools later when they are more \nrobust. 
Or, release the client tools now but mark them as experimental \nor something so people know that changes are coming and they don't get \nblindsided by that in the next release. Or, at the very least, make the \ncaveats very clear so users can make an informed choice.\n\nRegards,\n-David\n\n\n", "msg_date": "Sat, 13 Apr 2024 09:23:25 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "\n\nOn 4/13/24 01:03, David Steele wrote:\n> On 4/12/24 22:12, Tomas Vondra wrote:\n>> On 4/11/24 23:48, David Steele wrote:\n>>> On 4/11/24 20:26, Tomas Vondra wrote:\n>>>\n>>>> FWIW that discussion also mentions stuff that I think the feature\n>>>> should\n>>>> not do. In particular, I don't think the ambition was (or should be) to\n>>>> make pg_basebackup into a stand-alone tool. I always saw pg_basebackup\n>>>> more as an interface to \"backup steps\" correctly rather than a complete\n>>>> backup solution that'd manage backup registry, retention, etc.\n>>>\n>>> Right -- this is exactly my issue. pg_basebackup was never easy to use\n>>> as a backup solution and this feature makes it significantly more\n>>> complicated. Complicated enough that it would be extremely difficult for\n>>> most users to utilize in a meaningful way.\n>>\n>> Perhaps, I agree we could/should try to do better job to do backups, no\n>> argument there. But I still don't quite see why introducing such\n>> infrastructure to \"manage backups\" should be up to the patch adding\n>> incremental backups. I see it as something to build on top of\n>> pg_basebackup/pg_combinebackup, not into those tools.\n> \n> I'm not saying that managing backups needs to be part of pg_basebackup,\n> but I am saying without that it is not a complete backup solution.\n> Therefore introducing advanced features that the user then has to figure\n> out how to manage puts a large burden on them. Implementing\n> pg_combinebackup inefficiently out of the gate just makes their life\n> harder.\n> \n\nI agree with this in general, but I fail to see how it'd be the fault of\nthis patch. It merely extends what pg_basebackup did before, so if it's\nnot a complete solution now, it wasn't a complete solution before.\n\nSure, I 100% agree it'd be great to have a more efficient\npg_combinebackup with fewer restrictions. But if we make these\nlimitations clear, I think it's better to have this than having nothing.\n\n>>> But they'll try because it is a new pg_basebackup feature and they'll\n>>> assume it is there to be used. Maybe it would be a good idea to make it\n>>> clear in the documentation that significant tooling will be required to\n>>> make it work.\n>>\n>> Sure, I'm not against making it clearer pg_combinebackup is not a\n>> complete backup solution, and documenting the existing restrictions.\n> \n> Let's do that then. I think it would make sense to add caveats to the\n> pg_combinebackup docs including space requirements, being explicit about\n> the format required (e.g. plain), and also possible mitigation with COW\n> filesystems.\n> \n\nOK. 
I'll add this as an open item, for me and Robert.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 13 Apr 2024 12:18:08 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "\n\nOn 4/13/24 01:23, David Steele wrote:\n> On 4/12/24 22:27, Tomas Vondra wrote:\n>>\n>>\n>> On 4/12/24 08:42, David Steele wrote:\n>>> On 4/11/24 20:26, Tomas Vondra wrote:\n>>>> On 4/11/24 03:52, David Steele wrote:\n>>>>> On 4/11/24 10:23, Tom Kincaid wrote:\n>>>>>>\n>>>>>> The extensive Beta process we have can be used to build confidence we\n>>>>>> need in a feature that has extensive review and currently has no\n>>>>>> known\n>>>>>> issues or outstanding objections.\n>>>>>\n>>>>> I did have objections, here [1] and here [2]. I think the complexity,\n>>>>> space requirements, and likely performance issues involved in restores\n>>>>> are going to be a real problem for users. Some of these can be\n>>>>> addressed\n>>>>> in future releases, but I can't escape the feeling that what we are\n>>>>> releasing here is half-baked.\n>>>>>\n>>>> I do not think it's half-baked. I certainly agree there are\n>>>> limitations,\n>>>> and there's all kinds of bells and whistles we could add, but I think\n>>>> the fundamental infrastructure is corrent and a meaningful step\n>>>> forward.\n>>>> Would I wish it to handle .tar for example? Sure I would. But I think\n>>>> it's something we can add in the future - if we require all of this to\n>>>> happen in a single release, it'll never happen.\n>>>\n>>> I'm not sure that I really buy this argument, anyway. It is not uncommon\n>>> for significant features to spend years in development before they are\n>>> committed. This feature went from first introduction to commit in just\n>>> over six months. Obviously Robert had been working on it for a while,\n>>> but for a feature this large six months is a sprint.\n>>>\n>>\n>> Sure, but it's also not uncommon for significant features to be\n>> developed incrementally, over multiple releases, introducing the basic\n>> infrastructure first, and then expanding the capabilities later. I'd\n>> cite logical decoding/replication and parallel query as examples of this\n>> approach.\n>>\n>> It's possible there's some fundamental flaw in the WAL summarization?\n>> Sure, I can't rule that out, although I find it unlikely. Could there be\n>> bugs? Sure, that's possible, but that applies to all code.\n>>\n>> But it seems to me all the comments are about the client side, not about\n>> the infrastructure. Which is fair, I certainly agree it'd be nice to\n>> handle more use cases with less effort, but I still think the patch is a\n>> meaningful step forward.\n> \n> Yes, my comments are all about the client code. I like the\n> implementation of the WAL summarizer a lot. I don't think there is a\n> fundamental flaw in the design, either, but I wouldn't be surprised if\n> there are bugs. That's life in software development biz.\n> \n\nAgreed.\n\n> Even for the summarizer, though, I do worry about the complexity of\n> maintaining it over time. It seems like it would be very easy to\n> introduce a bug and have it go unnoticed until it causes problems in the\n> field. A lot of testing was done outside of the test suite for this\n> feature and I'm not sure if we can rely on that focus with every release.\n> \n\nI'm not sure there's a simpler way to implement this. 
I haven't really\nworked on that part (not until the CoW changes a couple weeks ago), but\nI think Robert was very conscious of the complexity.\n\nI don't think expect this code to change very often, but I agree it's\nnot great to rely on testing outside the regular regression test suite.\nBut I'm not sure how much more we can do, really - for example my\ntesting was very much \"randomized stress testing\" with a lot of data and\nlong runs, looking for unexpected stuff. That's not something we could\ndo in the usual regression tests, I think.\n\nBut if you have suggestions how to extend the testing ...\n\n> For me an incremental approach would be to introduce the WAL summarizer\n> first. There are already plenty of projects that do page-level\n> incremental (WAL-G, pg_probackup, pgBackRest) and could help shake out\n> the bugs. Then introduce the client tools later when they are more\n> robust. Or, release the client tools now but mark them as experimental\n> or something so people know that changes are coming and they don't get\n> blindsided by that in the next release. Or, at the very least, make the\n> caveats very clear so users can make an informed choice.\n> \n\nI don't think introducing just the summarizer, without any client tools,\nwould really work. How would we even test the summarizer, for example?\nIf the only users of that code are external tools, we'd do only some\nvery rudimentary tests. But the more complex tests would happen in the\nexternal tools, which means it wouldn't be covered by cfbot, buildfarm\nand so on. Considering the external tools are likely a bit behind, It's\nnot clear to me how I would do the stress testing, for example.\n\nIMHO we should aim to have in-tree clients when possible, even if some\nexternal tools can do more advanced stuff etc.\n\nThis however reminds me my question is the summarizer provides the right\ninterface(s) for the external tools. One option is to do pg_basebackup\nand then parse the incremental files, but is that suitable for the\nexternal tools, or should there be a more convenient way?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 13 Apr 2024 13:02:03 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On 4/13/24 21:02, Tomas Vondra wrote:\n> On 4/13/24 01:23, David Steele wrote:\n> \n>> Even for the summarizer, though, I do worry about the complexity of\n>> maintaining it over time. It seems like it would be very easy to\n>> introduce a bug and have it go unnoticed until it causes problems in the\n>> field. A lot of testing was done outside of the test suite for this\n>> feature and I'm not sure if we can rely on that focus with every release.\n>>\n> \n> I'm not sure there's a simpler way to implement this. I haven't really\n> worked on that part (not until the CoW changes a couple weeks ago), but\n> I think Robert was very conscious of the complexity.\n> \n> I don't think expect this code to change very often, but I agree it's\n> not great to rely on testing outside the regular regression test suite.\n> But I'm not sure how much more we can do, really - for example my\n> testing was very much \"randomized stress testing\" with a lot of data and\n> long runs, looking for unexpected stuff. 
That's not something we could\n> do in the usual regression tests, I think.\n> \n> But if you have suggestions how to extend the testing ...\n\nDoing stress testing in the regular test suite is obviously a problem \ndue to runtime, but it would still be great to see tests for issues that \nwere found during external stress testing.\n\nFor example, the issue you and Jakub found was fixed in 55a5ee30 but \nthere is no accompanying test and no existing test was broken by the change.\n\n>> For me an incremental approach would be to introduce the WAL summarizer\n>> first. There are already plenty of projects that do page-level\n>> incremental (WAL-G, pg_probackup, pgBackRest) and could help shake out\n>> the bugs. Then introduce the client tools later when they are more\n>> robust. Or, release the client tools now but mark them as experimental\n>> or something so people know that changes are coming and they don't get\n>> blindsided by that in the next release. Or, at the very least, make the\n>> caveats very clear so users can make an informed choice.\n>>\n> \n> I don't think introducing just the summarizer, without any client tools,\n> would really work. How would we even test the summarizer, for example?\n> If the only users of that code are external tools, we'd do only some\n> very rudimentary tests. But the more complex tests would happen in the\n> external tools, which means it wouldn't be covered by cfbot, buildfarm\n> and so on. Considering the external tools are likely a bit behind, It's\n> not clear to me how I would do the stress testing, for example.\n> \n> IMHO we should aim to have in-tree clients when possible, even if some\n> external tools can do more advanced stuff etc.\n> \n> This however reminds me my question is the summarizer provides the right\n> interface(s) for the external tools. One option is to do pg_basebackup\n> and then parse the incremental files, but is that suitable for the\n> external tools, or should there be a more convenient way?\n\nRunning a pg_basebackup to get the incremental changes would not be at \nall satisfactory. Luckily there are the \npg_wal_summary_contents()/pg_available_wal_summaries() functions, which \nseem to provide the required information. I have not played with them \nmuch but I think they will do the trick.\n\nThey are pretty awkward to work with since they are essentially \ntime-series data but what you'd really want, I think, is the ability to \nget page changes for a particular relfileid/segment.\n\nRegards,\n-David\n\n\n", "msg_date": "Mon, 15 Apr 2024 13:17:51 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "Hi,\n\nOn Saturday, April 13th, 2024 at 12:18 PM, Tomas Vondra wrote:\n> On 4/13/24 01:03, David Steele wrote:\n> > On 4/12/24 22:12, Tomas Vondra wrote:\n> > > On 4/11/24 23:48, David Steele wrote:\n> > > > On 4/11/24 20:26, Tomas Vondra wrote:\n> > > > \n> > > > > FWIW that discussion also mentions stuff that I think the feature\n> > > > > should\n> > > > > not do. In particular, I don't think the ambition was (or should be) to\n> > > > > make pg_basebackup into a stand-alone tool. I always saw pg_basebackup\n> > > > > more as an interface to \"backup steps\" correctly rather than a complete\n> > > > > backup solution that'd manage backup registry, retention, etc.\n> > > > \n> > > > Right -- this is exactly my issue. 
pg_basebackup was never easy to use\n> > > > as a backup solution and this feature makes it significantly more\n> > > > complicated. Complicated enough that it would be extremely difficult for\n> > > > most users to utilize in a meaningful way.\n\npg_basebackup has its own use-cases IMHO. It is very handy to simply take a copy of the running cluster, thanks to its ability to carry on the needed WAL segs without the user even needing to think about archive_command/pg_receivewal. And for this kind of tasks, the new incremental thing will not really be interesting and won't make things more complicated.\n\nI totally agree that we don't have any complete backup solution in core though. And adding more tools to the picture (pg_basebackup, pg_combinebackup, pg_receivewal, pg_verifybackup,...) will increase the need of on-top orchestration. But that's not new. And for people already having such orchestration, having the incremental feature will help.\n\n> > > Perhaps, I agree we could/should try to do better job to do backups, no\n> > > argument there. But I still don't quite see why introducing such\n> > > infrastructure to \"manage backups\" should be up to the patch adding\n> > > incremental backups. I see it as something to build on top of\n> > > pg_basebackup/pg_combinebackup, not into those tools.\n> > \n> > I'm not saying that managing backups needs to be part of pg_basebackup,\n> > but I am saying without that it is not a complete backup solution.\n> > Therefore introducing advanced features that the user then has to figure\n> > out how to manage puts a large burden on them. Implementing\n> > pg_combinebackup inefficiently out of the gate just makes their life\n> > harder.\n> \n> I agree with this in general, but I fail to see how it'd be the fault of\n> this patch. It merely extends what pg_basebackup did before, so if it's\n> not a complete solution now, it wasn't a complete solution before.\n\n+1. We can see it as a step to having a better backup solution in core for the future, but we definitely shouldn't rule out the fact that lots of people already developed such orchestration (home-made or relying to pgbarman, pgbackrest, wal-g,...). IMHO, if we're trying to extend the in core features, we should also aim at giving more lights and features for those tools (like adding more fields to the backup functions,...).\n\n> > > Sure, I'm not against making it clearer pg_combinebackup is not a\n> > > complete backup solution, and documenting the existing restrictions.\n> > \n> > Let's do that then. I think it would make sense to add caveats to the\n> > pg_combinebackup docs including space requirements, being explicit about\n> > the format required (e.g. plain), and also possible mitigation with COW\n> > filesystems.\n> \n> OK. I'll add this as an open item, for me and Robert.\n\nThanks for this! It's probably not up to core docs to state all the steps that would be needed to develop a complete backup solution but documenting the links between the tools and mostly all the caveats (the \"don't use INCREMENTAL.* filenames\",...) will help users not be caught off guard. 
And as I mentioned in [1], IMO the earlier we can catch a potential issue (wrong filename, missing file,...), the better for the user.\n\nThank you all for working on this.\nKind Regards,\n--\nStefan FERCOT\nData Egret (https://dataegret.com)\n\n[1] https://www.postgresql.org/message-id/vJnnuiaye5rNnCPN8h3xN1Y3lyUDESIgEQnR-Urat9_ld_fozShSJbEk8JDM_3K6BVt5HXT-CatWpSfEZkYVeymlrxKO2_kfKmVZNWyCuJc%3D%40protonmail.com\n\n\n", "msg_date": "Tue, 16 Apr 2024 08:47:14 +0000", "msg_from": "Stefan Fercot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Mon, 2024-04-08 at 15:47 -0400, Robert Haas wrote:\n> - I couldn't understand why the \"Operate\n> XLogCtl->log{Write,Flush}Result with atomics\" code was correct when I\n> read it.\n\nI reviewed ee1cbe806d. It followed a good process of discussion and\nreview. It was a bit close to the feature freeze for comfort, but did\nnot feel rushed, to me.\n\nAdditional eyes are always welcome. There are also some related commits\nin that area like f3ff7bf83b, c9920a9068 and 91f2cae7a4.\n\n> But, a spinlock\n> protecting two variables together guarantees more than atomic access\n> to each of those variables separately.\n\nWe maintain the invariant:\n\n XLogCtl->logFlushResult <= XLogCtl->logWriteResult\n\nand the non-shared version:\n\n LogwrtResult.Flush <= LogwrtResult.Write\n\nand that the requests don't fall behind the results:\n\n XLogCtl->LogwrtRqst.Write >= XLogCtl->logWriteResult &&\n XLogCtl->LogwrtRqst.Flush >= XLogCtl->logFlushResult\n\nAre you concerned that:\n\n (a) you aren't convinced that we're maintaining the invariants\nproperly? or\n (b) you aren't convinced that the invariant is strong enough, and\nthat there are other variables that we aren't considering?\n\nAnd if you think this is right after all, where could we add some\nclarification?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 16 Apr 2024 16:20:25 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "On Tue, Apr 16, 2024 at 7:20 PM Jeff Davis <[email protected]> wrote:\n> We maintain the invariant:\n>\n> XLogCtl->logFlushResult <= XLogCtl->logWriteResult\n>\n> and the non-shared version:\n>\n> LogwrtResult.Flush <= LogwrtResult.Write\n>\n> and that the requests don't fall behind the results:\n\nI had missed the fact that this commit added a bunch of read and write\nbarriers; and I think that's the main reason why I was concerned. The\nexistence of those barriers, and the invariants they are intended to\nmaintain, makes me feel better about it.\n\nThanks for writing back about this!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Apr 2024 15:19:15 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "Hi! Unfortunately,Iwas notableto fullyunderstandyourmessage.Couldyou \nexplainit to meplease?\n\nOn 09.04.2024 16:20, Andrei Lepikhov wrote:\n>\n> Moreover, it helps even SeqScan: attempting to find a value in the \n> hashed array is much faster than cycling a long-expression on each \n> incoming tuple.\n\nAsIunderstandit,youtalkedaboutspeedingupSeqScan \nbyfasterre-searchingthroughthe useof a hashtable. 
Atthe same time, \nwehaveto builditbeforethat,whenthere was the initiallookuptuples,right?\n\nIfoundthisinformationinthe ExecEvalHashedScalarArrayOp \nfunction,andIassumeyoumeantthisfunctioninyourmessage.\n\nBut I couldn't find information, when you told about cycling a \nlong-expression on each incoming tuple. Could you ask me what function \nyou were talking about or maybe functionality? I saw ExecSeqScan \nfunction, but I didn't see it.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\nHi! Unfortunately, I was not able to fully understand your message. Could you explain it to me please?\nOn 09.04.2024 16:20, Andrei Lepikhov\n wrote:\n\n \n Moreover, it helps even SeqScan: attempting to find a value in the\n hashed array is much faster than cycling a long-expression on each\n incoming tuple.\n \n\nAs I understand it, you talked about speeding up SeqScan by faster re-searching through the use of a hash table. At the same time, we have to build it before that, when there was the initial lookup tuples, right?\nI found this information in the ExecEvalHashedScalarArrayOp function, and I assume you meant this function in your message.\nBut I couldn't find information, when you told about cycling a\n long-expression on each incoming tuple. Could you ask me what\n function you were talking about or maybe functionality? I saw\n ExecSeqScan function, but I didn't see it.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 18 Jun 2024 13:05:23 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" }, { "msg_contents": "Sorry, I've just noticed that the letter is shown incorrectly. I rewrote \nit below.\n\nAs I understand it, you talked about speeding up SeqScan by faster \nre-searching through the use of a hash table. At the same time, we have \nto build it before that, when there was the initial lookup tuples, right?\n\nI found this information in the ExecEvalHashedScalarArrayOp function, \nand I assume you meant this function in your message, right?\n\nBut I couldn't find information, when you told about cycling a \nlong-expression on each incoming tuple. Could you ask me what function \nyou were talking about or maybe functionality? I saw ExecSeqScan \nfunction, but I didn't see it.\n\nOn 18.06.2024 13:05, Alena Rybakina wrote:\n>\n> Hi! Unfortunately,Iwas notableto fullyunderstandyourmessage.Couldyou \n> explainit to meplease?\n>\n> On 09.04.2024 16:20, Andrei Lepikhov wrote:\n>>\n>> Moreover, it helps even SeqScan: attempting to find a value in the \n>> hashed array is much faster than cycling a long-expression on each \n>> incoming tuple.\n>\n> AsIunderstandit,youtalkedaboutspeedingupSeqScan \n> byfasterre-searchingthroughthe useof a hashtable. Atthe same time, \n> wehaveto builditbeforethat,whenthere was the initiallookuptuples,right?\n>\n> Ifoundthisinformationinthe ExecEvalHashedScalarArrayOp \n> function,andIassumeyoumeantthisfunctioninyourmessage.\n>\n> But I couldn't find information, when you told about cycling a \n> long-expression on each incoming tuple. Could you ask me what function \n> you were talking about or maybe functionality? 
I saw ExecSeqScan \n> function, but I didn't see it.\n>\n> -- \n> Regards,\n> Alena Rybakina\n> Postgres Professional:http://www.postgrespro.com\n> The Russian Postgres Company\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional:http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 18 Jun 2024 14:56:34 +0300", "msg_from": "Alena Rybakina <[email protected]>", "msg_from_op": false, "msg_subject": "Re: post-freeze damage control" } ]
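The thread above leans on pg_available_wal_summaries() and pg_wal_summary_contents(), whose raw output is per-summary (time-series style) rather than per-relation. As a rough, untested sketch of how an external tool might roll that output up into per-relation, per-segment changed-block counts: the column names (tli, start_lsn, end_lsn, relfilenode, relblocknumber, is_limit_block, ...) are assumed from the PostgreSQL 17 documentation, the 131072 blocks-per-segment figure assumes the default 8kB block size and 1GB segment size, and a real tool would additionally restrict the summaries to the LSN range since its last backup.

    SELECT c.reldatabase, c.reltablespace, c.relfilenode, c.relforknumber,
           c.relblocknumber / 131072 AS segment_no,
           count(*) AS changed_blocks
    FROM pg_available_wal_summaries() AS s,
         LATERAL pg_wal_summary_contents(s.tli, s.start_lsn, s.end_lsn) AS c
    WHERE NOT c.is_limit_block
    GROUP BY 1, 2, 3, 4, 5
    ORDER BY 1, 2, 3, 4, 5;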
[ { "msg_contents": "Hi Postgres hackers,\n\nI'm reaching out to gather some comments on enhancing the efficiency of \nmigrating particularly large tables with significant data volumes in \nPostgreSQL.\n\nWhen migrating a particularly large table with a significant amount of \ndata, users sometimes tend to split the table into multiple segments and \nutilize multiple sessions to process data from different segments in \nparallel, aiming to enhance efficiency. When segmenting a large table, \nit's challenging if the table lacks fields suitable for segmentation or \nif the data distribution is uneven. I believe that the data volume in \neach block should be relatively balanced when vacuum is enabled. \nTherefore, the ctid can be used to segment a large table, and I am \nthinking the entire process can be outlined as follows:\n1) determine the minimum and maximum ctid.\n2) calculate the number of data blocks based on the maximum and minimum \nctid.\n3) generate multiple SQL queries, such as SELECT * FROM tbl WHERE ctid \n >= '(xx,1)' AND ctid < '(xxx,1)'.\n\nHowever, when executing SELECT min(ctid) and max(ctid), it performs a \nSeq Scan, which can be slow for a large table. Is there a way to \nretrieve the minimum and maximum ctid other than using the system \nfunctions min() and max()?\n\nSince the minimum and maximum ctid are in order, theoretically, it \nshould start searching from the first block and can stop as soon as it \nfinds the first available one when retrieving the minimum ctid. \nSimilarly, it should start searching in reverse order from the last \nblock and stop upon finding the first occurrence when retrieving the \nmaximum ctid. Here's a piece of code snippet:\n\n         /* scan the relation for minimum or maximum ctid */\n         if (find_max_ctid)\n             dir = BackwardScanDirection;\n         else\n             dir = ForwardScanDirection;\n\n         while ((tuple = heap_getnext(scan, dir)) != NULL)\n         ...\n\nThe attached is a simple POC by referring to the extension pgstattuple. \nAny feedback, suggestions, or alternative solutions from the community \nwould be greatly appreciated.\n\nThank you,\n\nDavid", "msg_date": "Mon, 8 Apr 2024 14:52:13 -0700", "msg_from": "David Zhang <[email protected]>", "msg_from_op": true, "msg_subject": "enhance the efficiency of migrating particularly large tables" }, { "msg_contents": "On Tue, 9 Apr 2024 at 09:52, David Zhang <[email protected]> wrote:\n> However, when executing SELECT min(ctid) and max(ctid), it performs a\n> Seq Scan, which can be slow for a large table. Is there a way to\n> retrieve the minimum and maximum ctid other than using the system\n> functions min() and max()?\n\nFinding the exact ctid seems overkill for what you need. Why you\ncould just find the maximum block with:\n\nN = pg_relation_size('name_of_your_table'::regclass) /\ncurrent_Setting('block_size')::int;\n\nand do WHERE ctid < '(N,1)';\n\nIf we wanted to optimise this in PostgreSQL, the way to do it would\nbe, around set_plain_rel_pathlist(), check if the relation's ctid is a\nrequired PathKey by the same means as create_index_paths() does, then\nif found, create another seqscan path without synchronize_seqscans *\nand tag that with the ctid PathKey sending the scan direction\naccording to the PathKey direction. 
nulls_first does not matter since\nctid cannot be NULL.\n\nMin(ctid) query should be able to make use of this as the planner\nshould rewrite those to subqueries with a ORDER BY ctid LIMIT 1.\n\n* We'd need to invent an actual Path type for SeqScanPath as I see\ncreate_seqscan_path() just uses the base struct Path.\nsynchronize_seqscans would have to become a property of that new Path\ntype and it would need to be carried forward into the plan and looked\nat in the executor so that we always start a scan at the first or last\nblock.\n\nUnsure if such a feature is worthwhile. I think maybe not for just\nmin(ctid)/max(ctid). However, there could be other reasons, such as\nthe transform OR to UNION stuff that Tom worked on a few years ago.\nThat needed to eliminate duplicate rows that matched both OR branches\nand that was done using ctid.\n\nDavid\n\n\n", "msg_date": "Tue, 9 Apr 2024 10:23:28 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: enhance the efficiency of migrating particularly large tables" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> Unsure if such a feature is worthwhile. I think maybe not for just\n> min(ctid)/max(ctid). However, there could be other reasons, such as\n> the transform OR to UNION stuff that Tom worked on a few years ago.\n> That needed to eliminate duplicate rows that matched both OR branches\n> and that was done using ctid.\n\nI'm kind of allergic to adding features that fundamentally depend on\nctid, seeing that there's so much activity around non-heap table AMs\nthat may not have any such concept, or may have a row ID that looks\ntotally different. (That's one reason why I stopped working on that\nOR-to-UNION patch.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Apr 2024 19:02:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: enhance the efficiency of migrating particularly large tables" }, { "msg_contents": "On Tue, 9 Apr 2024 at 11:02, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > Unsure if such a feature is worthwhile. I think maybe not for just\n> > min(ctid)/max(ctid). However, there could be other reasons, such as\n> > the transform OR to UNION stuff that Tom worked on a few years ago.\n> > That needed to eliminate duplicate rows that matched both OR branches\n> > and that was done using ctid.\n>\n> I'm kind of allergic to adding features that fundamentally depend on\n> ctid, seeing that there's so much activity around non-heap table AMs\n> that may not have any such concept, or may have a row ID that looks\n> totally different. 
(That's one reason why I stopped working on that\n> OR-to-UNION patch.)\n\nI understand that point of view, however, I think if we were to\nmaintain it as a policy that we'd likely miss out on various\noptimisations that future AMs could provide.\n\nWhen I pushed TID Range Scans a few years ago, I added \"amflags\" and\nwe have AMFLAG_HAS_TID_RANGE so the planner can check the AM supports\nthat before adding the Path.\n\nAnyway, I'm not saying let's do the non-sync scan SeqScanPath thing,\nI'm just saying that blocking optimisations as some future AM might\nnot support it might mean we're missing out on some great speedups.\n\nDavid\n\n\n", "msg_date": "Tue, 9 Apr 2024 11:49:42 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: enhance the efficiency of migrating particularly large tables" }, { "msg_contents": "Thanks a lot David Rowley for your suggestion in details.\n\nOn 2024-04-08 3:23 p.m., David Rowley wrote:\n> On Tue, 9 Apr 2024 at 09:52, David Zhang<[email protected]> wrote:\n> Finding the exact ctid seems overkill for what you need. Why you\n> could just find the maximum block with:\n>\n> N = pg_relation_size('name_of_your_table'::regclass) /\n> current_Setting('block_size')::int;\n>\n> and do WHERE ctid < '(N,1)';\nWe experienced this approach using pg_relation_size and tried to compare \nthe performance. Below are some simple timing results for 100 million \nrecords in a table:\n\nUsing system function max():\nSELECT max(ctid) from t;\nTime: 2126.680 ms (00:02.127)\n\nUsing pg_relation_size and where condition:\nSELECT pg_relation_size('t'::regclass) / current_setting('block_size')::int;\nTime: 0.561 ms\n\nUsing the experimental function introduced in previous patch:\nSELECT ctid from get_ctid('t', 1);\nTime: 0.452 ms\n\n\nDelete about 1/3 records from the end of the table:\nSELECT max(ctid) from t;\nTime: 1552.975 ms (00:01.553)\n\nSELECT pg_relation_size('t'::regclass) / current_setting('block_size')::int;\nTime: 0.533 m\nBut before vacuum, pg_relation_size always return the same value as \nbefore and this relation_size may not be so accurate.\n\nSELECT ctid from get_ctid('t', 1);\nTime: 251.105 m\n\nAfter vacuum:\nSELECT ctid from get_ctid('t', 1);\nTime: 0.478 ms\n\n\nBelow are the comparison between system function min() and the \nexperimental function:\n\nSELECT min(ctid) from t;\nTime: 1932.554 ms (00:01.933)\n\nSELECT ctid from get_ctid('t', 0);\nTime: 0.478 ms\n\nAfter deleted about 1/3 records from the beginning of the table:\nSELECT min(ctid) from t;\nTime: 1305.799 ms (00:01.306)\n\nSELECT ctid from get_ctid('t', 0);\nTime: 244.336 ms\n\nAfter vacuum:\nSELECT ctid from get_ctid('t', 0);\nTime: 0.468 ms\n\n> If we wanted to optimise this in PostgreSQL, the way to do it would\n> be, around set_plain_rel_pathlist(), check if the relation's ctid is a\n> required PathKey by the same means as create_index_paths() does, then\n> if found, create another seqscan path without synchronize_seqscans *\n> and tag that with the ctid PathKey sending the scan direction\n> according to the PathKey direction. 
nulls_first does not matter since\n> ctid cannot be NULL.\n>\n> Min(ctid) query should be able to make use of this as the planner\n> should rewrite those to subqueries with a ORDER BY ctid LIMIT 1.\n\nIs there a simple way to get the min of ctid faster than using min(), \nbut similar to get the max of ctid using pg_relation_size?\n\n\nThank you,\n\nDavid Zhang\n\n\n\n\n\n\nThanks a lot David Rowley for your suggestion in details.\n\nOn 2024-04-08 3:23 p.m., David Rowley\n wrote:\n\n\nOn Tue, 9 Apr 2024 at 09:52, David Zhang <[email protected]> wrote:\n\n\n\nFinding the exact ctid seems overkill for what you need. Why you\ncould just find the maximum block with:\n\nN = pg_relation_size('name_of_your_table'::regclass) /\ncurrent_Setting('block_size')::int;\n\nand do WHERE ctid < '(N,1)';\n\n We experienced this approach using pg_relation_size and tried to\n compare the performance. Below are some simple timing results for\n 100 million records in a table:\n\n Using system function max():\n SELECT max(ctid) from t;\n Time: 2126.680 ms (00:02.127)\n\n Using pg_relation_size and where condition:\n SELECT pg_relation_size('t'::regclass) /\n current_setting('block_size')::int;\n Time: 0.561 ms\n\n Using the experimental function introduced in previous patch:\n SELECT ctid from get_ctid('t', 1);\n Time: 0.452 ms\n\n\n Delete about 1/3 records from the end of the table:\n SELECT max(ctid) from t;\n Time: 1552.975 ms (00:01.553)\n\n SELECT pg_relation_size('t'::regclass) /\n current_setting('block_size')::int;\n Time: 0.533 m\n But before vacuum, pg_relation_size always return the same value as\n before and this relation_size may not be so accurate.\n\n SELECT ctid from get_ctid('t', 1);\n Time: 251.105 m\n\n After vacuum:\n SELECT ctid from get_ctid('t', 1);\n Time: 0.478 ms\n\n\n Below are the comparison between system function min() and the\n experimental function:\n\n SELECT min(ctid) from t;\n Time: 1932.554 ms (00:01.933)\n\n SELECT ctid from get_ctid('t', 0);\n Time: 0.478 ms\n\n After deleted about 1/3 records from the beginning of the table:\n SELECT min(ctid) from t;\n Time: 1305.799 ms (00:01.306)\n\n SELECT ctid from get_ctid('t', 0);\n Time: 244.336 ms\n\n After vacuum:\n SELECT ctid from get_ctid('t', 0);\n Time: 0.468 ms\n\n\nIf we wanted to optimise this in PostgreSQL, the way to do it would\nbe, around set_plain_rel_pathlist(), check if the relation's ctid is a\nrequired PathKey by the same means as create_index_paths() does, then\nif found, create another seqscan path without synchronize_seqscans *\nand tag that with the ctid PathKey sending the scan direction\naccording to the PathKey direction. 
nulls_first does not matter since\nctid cannot be NULL.\n\nMin(ctid) query should be able to make use of this as the planner\nshould rewrite those to subqueries with a ORDER BY ctid LIMIT 1.\n\nIs there a simple way to get the min of ctid faster than using\n min(), but similar to get the max of ctid using pg_relation_size?\n\n\nThank you,\nDavid Zhang", "msg_date": "Thu, 2 May 2024 14:33:41 -0700", "msg_from": "David Zhang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: enhance the efficiency of migrating particularly large tables" }, { "msg_contents": "On Fri, 3 May 2024 at 09:33, David Zhang <[email protected]> wrote:\n> Is there a simple way to get the min of ctid faster than using min(), but similar to get the max of ctid using pg_relation_size?\n\nThe equivalent approximation is always '(0,1)'.\n\nDavid\n\n\n", "msg_date": "Fri, 3 May 2024 10:55:06 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: enhance the efficiency of migrating particularly large tables" } ]
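The block-range splitting discussed in the thread above (upper bound N = pg_relation_size / block_size, lower bound '(0,1)') can be sketched as below. This is only an illustration: the table name, the nblocks value, and the worker count are placeholders; in a real migration nblocks would come from running the pg_relation_size query shown earlier, and each generated statement would be handed to a separate session.

/*
 * Sketch of splitting a table's block range into per-worker ctid
 * predicates, following the approximation discussed above:
 *   upper block bound  N = pg_relation_size(tbl) / block_size
 *   lower ctid bound   = '(0,1)'
 * The nblocks value in main() is a stand-in for the result of that query.
 */
#include <stdio.h>

static void
emit_ctid_ranges(const char *table, long nblocks, int nworkers)
{
    long        per_worker = (nblocks + nworkers - 1) / nworkers;   /* ceiling */

    for (int w = 0; w < nworkers; w++)
    {
        long        lo = (long) w * per_worker;
        long        hi = lo + per_worker;

        if (lo >= nblocks)
            break;
        if (w == nworkers - 1 || hi >= nblocks)
            /* final slice is open-ended so late-appended blocks are not missed */
            printf("SELECT * FROM %s WHERE ctid >= '(%ld,1)';\n", table, lo);
        else
            printf("SELECT * FROM %s WHERE ctid >= '(%ld,1)' AND ctid < '(%ld,1)';\n",
                   table, lo, hi);
    }
}

int
main(void)
{
    /* hypothetical numbers: a ~8 GB table with 8 kB blocks, 4 sessions */
    emit_ctid_ranges("big_tbl", 1048576, 4);
    return 0;
}

Leaving the last slice open-ended is deliberate: the relation size is only a snapshot, and rows inserted into newly appended blocks after the size check would otherwise fall outside every range.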
[ { "msg_contents": "Similar to f736e188c, I've attached a patch that fixes up a few\nmisusages of the StringInfo functions. These just swap one function\ncall for another function that is more suited to the use case.\n\nI've also attached the patch that I used to find these. That's not\nintended for commit.\n\nI feel like it's a good idea to fix these soon while they're new\nrather than wait N years and make backpatching bug fixes possibly\nharder.\n\nI've attached a file with git blame commands to identify where all of\nthese were introduced. All are new to PG17.\n\nAny objections?\n\nDavid", "msg_date": "Tue, 9 Apr 2024 12:53:21 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Fixup some StringInfo usages" }, { "msg_contents": "On Tue, Apr 09, 2024 at 12:53:21PM +1200, David Rowley wrote:\n> Similar to f736e188c, I've attached a patch that fixes up a few\n> misusages of the StringInfo functions. These just swap one function\n> call for another function that is more suited to the use case.\n> \n> I've also attached the patch that I used to find these. That's not\n> intended for commit.\n> \n> I feel like it's a good idea to fix these soon while they're new\n> rather than wait N years and make backpatching bug fixes possibly\n> harder.\n> \n> I've attached a file with git blame commands to identify where all of\n> these were introduced. All are new to PG17.\n> \n> Any objections?\n\nLooks reasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 8 Apr 2024 20:34:12 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixup some StringInfo usages" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> Similar to f736e188c, I've attached a patch that fixes up a few\n> misusages of the StringInfo functions. These just swap one function\n> call for another function that is more suited to the use case.\n\n> I feel like it's a good idea to fix these soon while they're new\n> rather than wait N years and make backpatching bug fixes possibly\n> harder.\n\n+1. Looks good in a quick eyeball scan.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Apr 2024 22:27:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixup some StringInfo usages" }, { "msg_contents": "On Tue, 9 Apr 2024 at 14:27, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > Similar to f736e188c, I've attached a patch that fixes up a few\n> > misusages of the StringInfo functions. These just swap one function\n> > call for another function that is more suited to the use case.\n>\n> > I feel like it's a good idea to fix these soon while they're new\n> > rather than wait N years and make backpatching bug fixes possibly\n> > harder.\n>\n> +1. Looks good in a quick eyeball scan.\n\nThank you to both of you for having a look. Pushed.\n\nDavid\n\n\n", "msg_date": "Wed, 10 Apr 2024 11:57:20 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fixup some StringInfo usages" } ]
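For reference, the kind of substitution such a cleanup patch makes usually falls into a few recurring patterns: a format string with no specifiers, a lone "%s" argument, or a single character. A sketch of those patterns is below; these are generic examples of the preferred calls, not necessarily the exact call sites the commit touched.

/*
 * Typical StringInfo substitutions of the kind described above; generic
 * examples only, shown as backend-style code using lib/stringinfo.h.
 */
#include "postgres.h"
#include "lib/stringinfo.h"

void
stringinfo_examples(const char *name)
{
    StringInfoData buf;

    initStringInfo(&buf);

    /* No format specifiers: appendStringInfoString instead of appendStringInfo */
    appendStringInfoString(&buf, "CREATE TABLE ");

    /* A single "%s" argument: appendStringInfoString skips the printf machinery */
    appendStringInfoString(&buf, name);

    /* A single character: appendStringInfoChar is cheaper than a 1-byte string */
    appendStringInfoChar(&buf, '(');

    pfree(buf.data);
}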
[ { "msg_contents": "Attached is a patch which adjusts the copyright years of 2023 that\nhave crept in this year from patches that were written last year and\ncommitted without adjusting this to 2024.\n\nThe patch isn't produced by src/tools/copyright.pl as that'll\ntransform files which are new and only contain \"2023\" to become\n\"2023-2024\", which I don't believe is what we want in this case.\n\nNo other matches aside from .po files from:\ngit grep -e \"Copyright\" --and --not -e \"2024\" --and -e \"PostgreSQL\nGlobal Development Group\"\n\nShould we do this and is this a good time to?\n\nDavid", "msg_date": "Tue, 9 Apr 2024 14:21:42 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Fixup a few 2023 copyright years" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> Attached is a patch which adjusts the copyright years of 2023 that\n> have crept in this year from patches that were written last year and\n> committed without adjusting this to 2024.\n\n> The patch isn't produced by src/tools/copyright.pl as that'll\n> transform files which are new and only contain \"2023\" to become\n> \"2023-2024\", which I don't believe is what we want in this case.\n\nAgreed, copyright.pl is not quite the right tool, although you could\nuse its output as a starting point and manually adjust any wrong\nchanges.\n\n> Should we do this and is this a good time to?\n\nWe *should* do this sometime before branching v17, but I'm not\nin any hurry. My thought here is that some of these late changes\nmight end up getting reverted, in which case touching those files\nwould add a bit more complexity to the revert. We can do this\nsort of mechanical cleanup after the probability of reversion has\ndeclined a bit.\n\n(On the same logic, I'm resisting the temptation to do a tree-wide\npgindent right away. Yeah, the tree is indent-clean according to\nthe current contents of src/tools/pgindent/typedefs.list, but that\nhasn't been maintained with great accuracy, so we'll need an\nupdate and then a pgindent run at some point.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Apr 2024 22:36:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixup a few 2023 copyright years" }, { "msg_contents": "On Mon, Apr 08, 2024 at 10:36:41PM -0400, Tom Lane wrote:\n> We *should* do this sometime before branching v17, but I'm not\n> in any hurry. My thought here is that some of these late changes\n> might end up getting reverted, in which case touching those files\n> would add a bit more complexity to the revert. We can do this\n> sort of mechanical cleanup after the probability of reversion has\n> declined a bit.\n> \n> (On the same logic, I'm resisting the temptation to do a tree-wide\n> pgindent right away. 
Yeah, the tree is indent-clean according to\n> the current contents of src/tools/pgindent/typedefs.list, but that\n> hasn't been maintained with great accuracy, so we'll need an\n> update and then a pgindent run at some point.)\n\nThe perl code in the tree has gathered dust, on the contrary ;)\n\nI would suggest to also wait until we're clearer with the situation\nfor all these mechanical changes, which I suspect is going to take 1~2\nweeks at least.\n--\nMichael", "msg_date": "Tue, 9 Apr 2024 12:25:43 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixup a few 2023 copyright years" }, { "msg_contents": "On Tue, 9 Apr 2024 at 15:26, Michael Paquier <[email protected]> wrote:\n> I would suggest to also wait until we're clearer with the situation\n> for all these mechanical changes, which I suspect is going to take 1~2\n> weeks at least.\n\nSince the Oid resequencing and pgindent run is now done, I've pushed this patch.\n\nDavid\n\n\n", "msg_date": "Wed, 15 May 2024 15:03:00 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fixup a few 2023 copyright years" }, { "msg_contents": "On Wed, May 15, 2024 at 03:03:00PM +1200, David Rowley wrote:\n> On Tue, 9 Apr 2024 at 15:26, Michael Paquier <[email protected]> wrote:\n>> I would suggest to also wait until we're clearer with the situation\n>> for all these mechanical changes, which I suspect is going to take 1~2\n>> weeks at least.\n> \n> Since the Oid resequencing and pgindent run is now done, I've pushed this patch.\n\nThanks, that looks correct.\n\nWhile running src/tools/copyright.pl, I have noticed that that a\nnewline was missing at the end of index_including.sql, as an effect of\nthe test added by you in a63224be49b8. I've cleaned up that while on\nit, as it was getting added automatically, and we tend to clean these\nlike in 3f1197191685 or more recently c2df2ed90a82.\n--\nMichael", "msg_date": "Wed, 15 May 2024 14:32:22 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixup a few 2023 copyright years" }, { "msg_contents": "On Wed, 15 May 2024 at 17:32, Michael Paquier <[email protected]> wrote:\n> While running src/tools/copyright.pl, I have noticed that that a\n> newline was missing at the end of index_including.sql, as an effect of\n> the test added by you in a63224be49b8. I've cleaned up that while on\n> it, as it was getting added automatically, and we tend to clean these\n> like in 3f1197191685 or more recently c2df2ed90a82.\n\nThanks for fixing that. I'm a little surprised that pgindent does not\nfix that sort of thing.\n\nDavid\n\n\n", "msg_date": "Wed, 15 May 2024 23:25:39 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fixup a few 2023 copyright years" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Wed, 15 May 2024 at 17:32, Michael Paquier <[email protected]> wrote:\n>> While running src/tools/copyright.pl, I have noticed that that a\n>> newline was missing at the end of index_including.sql, as an effect of\n>> the test added by you in a63224be49b8. I've cleaned up that while on\n>> it, as it was getting added automatically, and we tend to clean these\n>> like in 3f1197191685 or more recently c2df2ed90a82.\n\n> Thanks for fixing that. 
I'm a little surprised that pgindent does not\n> fix that sort of thing.\n\npgindent does not touch anything but .c and .h files.\n\nI do recommend running \"git diff --check\" (with --staged if you\nalready git-added your changes) before you're ready to commit\nsomething. That does find generic whitespace issues, and I\nbelieve it would've found this one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 May 2024 10:30:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fixup a few 2023 copyright years" } ]
[ { "msg_contents": "Hi,\n\nHere is an experimental patch for read_stream.c. The basic idea is\nthat when read_stream_next_buffer() gives you a page P1, it should\nalso tell the CPU to prefetch the header of the next page P2, and so\non. However, I recognise that its lack of timing control may be a\nfundamental flaw (see below). It just ... happens to work sometimes.\nCan we do better?\n\nOne thought is that this might really belong in the AM, along with\nDavid Rowley's work on intra-page memory prefetching[1]. Stepping\nback a bit to explain why I wonder about that...\n\nWhat I observe is that sometimes the attached patch makes a wide\nvariety of well-cached table scans measurably faster, unscientifically\nsomething like 10-15%ish. Other times it doesn't, but it doesn't\nhurt.\n\nWhich ones and when on which CPUs, I'm not yet sure. One observation\nis that if you naively pg_prewarm an empty buffer pool the table is\ncontiguous in memory and the hardware prefetcher[2] can perhaps detect\na sequential memory scan (?), but that's not like real life: if you\nrun a system for a while its buffer pool turns into scrambled eggs,\nand then I guess the hardware prefetcher can't predict much (I dunno,\nwe aren't passing around addresses for it to speculate about, just\nbuffer IDs, but maybe it's not beyond the realms of array index\ndetection (?), I dunno). I don't know if that is a factor, and I\ndon't know enough about CPUs to have specific ideas yet.\n\nThe problem with all this is that it's obviously hard to reason about\ntiming down here in lower level code with our tuple-at-a-time-volcano\nexecutor design. I've tangled with variants of this stuff before for\nhash joins[3] (uncommitted, sadly), and that was actually really quite\nsuccessful!, but that tried to tackle the timing problem head on: it\ncreated its own batching opportunities, instead of letting the program\ncounter escape into the wilderness. Part of the reason I got stuck\nwith that project is that I started wanting general batch mode support\nand batch mode tuple-lifetime instead of localised hacks, and then the\nquicksand got me.\n\nConcretely, if the caller does so much work with P1 that by the time\nit fetches P2, P2's header has fallen out of [some level of] cache,\nthen it won't help [at that level at least]. In theory it has also\ncommitted the crime of cache pollution in that case, ie possibly\nkicked something useful out of [some level of] cache. It is hard for\nread_stream.c to know what the caller is doing between calls, which\nmight include running up and down a volcano 100 times. Perhaps that\nmeans the whole problem should be punted to the caller -- if heap\nscans would like to prefetch the next page's memory, then they should\nsimply call read_stream_next_buffer() slightly before they finish\nprocessing P1, so they can prefetch whatever they want themselves, and\nmicro-manage the timing better. Or some more complex reordering of\nwork.\n\nThat leads to the idea that it might somehow actually fit in better\nwith David's earlier work; can we treat the inter-page hop as just one\nof the many hops in the general stream of memory accesses that he was\nalready hacking on for prefetching fragments of heap pages? Or\nsomething. Maybe the attached always leaves the header in at least L2\neven in the worst real cases and so it's worth considering! 
I dunno.\nBut I couldn't look at a > 10% performance increase from 2 lines of\ncode and not share, hence this hand-wavy nerd-snipy email.\n\n[1] https://www.postgresql.org/message-id/flat/CAApHDvpTRx7hqFZGiZJ%3Dd9JN4h1tzJ2%3Dxt7bM-9XRmpVj63psQ%40mail.gmail.com\n[2] https://compas.cs.stonybrook.edu/~nhonarmand/courses/sp18/cse502/slides/13-prefetch.pdf\n[3] https://www.postgresql.org/message-id/flat/CAEepm%3D2y9HM9QP%2BHhRZdQ3pU6FShSMyu%3DV1uHXhQ5gG-dketHg%40mail.gmail.com", "msg_date": "Tue, 9 Apr 2024 14:55:42 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Experimental prefetching of buffer memory" } ]
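At its core, the inter-page prefetch described above comes down to issuing a CPU prefetch hint for the next page's header while the caller is still busy with the current page. The standalone sketch below shows that access pattern using GCC/Clang's __builtin_prefetch; it is not the read_stream.c patch, and the page array, sizes, and amount of per-page work are stand-ins.

/*
 * Minimal standalone illustration of "prefetch the next page's header
 * while the caller still works on the current page".
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE   8192
#define NPAGES      1024

int
main(void)
{
    char       *pages[NPAGES];
    long        sum = 0;

    for (int i = 0; i < NPAGES; i++)
    {
        pages[i] = malloc(PAGE_SIZE);
        memset(pages[i], i & 0xff, PAGE_SIZE);
    }

    for (int i = 0; i < NPAGES; i++)
    {
        /*
         * Before handing page i to the caller, hint the CPU to start
         * pulling the header of page i+1 into cache (read access,
         * moderate temporal locality).  The hint is harmless if wrong.
         */
        if (i + 1 < NPAGES)
            __builtin_prefetch(pages[i + 1], 0, 2);

        /* the caller's work on the current page: touch its header bytes */
        for (int j = 0; j < 64; j++)
            sum += pages[i][j];
    }

    printf("%ld\n", sum);

    for (int i = 0; i < NPAGES; i++)
        free(pages[i]);
    return 0;
}

The timing concern raised above is visible even in this toy: if the per-page work between iterations grows large, the prefetched header may be evicted again before it is used.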
[ { "msg_contents": "Hi\n\nThis idea is due to Robert Haas, who complained that he feared that\nthe streaming I/O API already worked like this. It doesn't, but it\ncould! Here is a concept patch to try it out.\n\nNormally, read_stream_next_buffer() spits out buffers in the order\nthat the user's callback generated block numbers. This option says\nthat any order would be OK.\n\nI had been assuming that this sort of thing would come with real\nasynchronous I/O: if I/Os complete out of order and the caller\nexplicitly said she doesn't care about block order, we can stream them\nas the completion events arrive. But even with synchronous I/O, we\ncould stream already-in-cache blocks before ones that require I/O.\nLetting the caller chew on blocks that are already available maximises\nthe time between fadvise() and preadv() for misses, which minimises\nthe likelihood that the process will have to go to \"D\" sleep.\n\nThe patch is pretty trivial: if started with the\nREAD_STREAM_OUT_OF_ORDER flag, \"hit\" buffers are allowed to jump in\nfront of \"miss\" buffers in the queue. The attached coding may not be\noptimal, it's just a proof of concept.\n\nANALYZE benefits from this, for example, with certain kinds of\npartially cached initial states and fast disks (?). I'm not sure how\ngenerally useful it is though. I'm posting it because I wonder if it\ncould be interesting for the streaming bitmap heapscan project, and I\nwonder how many other things don't care about the order.", "msg_date": "Tue, 9 Apr 2024 16:18:28 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Streaming relation data out of order" }, { "msg_contents": "On Tue, Apr 9, 2024 at 12:19 AM Thomas Munro <[email protected]> wrote:\n> This idea is due to Robert Haas, who complained that he feared that\n> the streaming I/O API already worked like this. It doesn't, but it\n> could! Here is a concept patch to try it out.\n\nOh, no, it's all my fault!\n\nMy favorite part of this patch is the comment added to read_stream.h,\nwhich IMHO makes things a whole lot clearer.\n\nI have no opinion on the rest of it; I don't understand this code.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Apr 2024 10:37:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Streaming relation data out of order" } ]
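As a toy model of the ordering policy described above -- with the out-of-order flag, already-cached blocks are handed out ahead of blocks that still need I/O, lengthening the window between issuing advice for a miss and actually waiting on it -- here is a minimal C sketch. It models only the scheduling decision; the block list and the cached[] map are made up, and none of this is the actual read_stream.c queue logic.

/*
 * Toy model of the delivery order: hits before misses when the caller
 * says order does not matter, strict order otherwise.
 */
#include <stdio.h>
#include <stdbool.h>

#define NBLOCKS 8

static void
stream_blocks(const int *blocks, const bool *cached, bool out_of_order)
{
    if (out_of_order)
    {
        /* first pass: cheap hits keep the caller busy ... */
        for (int i = 0; i < NBLOCKS; i++)
            if (cached[i])
                printf("deliver block %d (hit)\n", blocks[i]);

        /*
         * ... second pass: misses, which by now have had longer for any
         * posted advice or asynchronous read to complete.
         */
        for (int i = 0; i < NBLOCKS; i++)
            if (!cached[i])
                printf("deliver block %d (miss)\n", blocks[i]);
    }
    else
    {
        for (int i = 0; i < NBLOCKS; i++)
            printf("deliver block %d (%s)\n", blocks[i],
                   cached[i] ? "hit" : "miss");
    }
}

int
main(void)
{
    int         blocks[NBLOCKS] = {0, 1, 2, 3, 4, 5, 6, 7};
    bool        cached[NBLOCKS] = {true, false, true, true, false, false, true, false};

    stream_blocks(blocks, cached, true);
    return 0;
}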
[ { "msg_contents": "Typo. fix:\n\n- attempted first. If the server ejectes GSS encryption, SSL is\n+ attempted first. If the server rejects GSS encryption, SSL is\n\nErik", "msg_date": "Tue, 9 Apr 2024 06:40:05 +0200", "msg_from": "Erik Rijkers <[email protected]>", "msg_from_op": true, "msg_subject": "libpq.sgml: \"server ejectes GSS\" -> server rejects GSS" }, { "msg_contents": "On 09/04/2024 07:40, Erik Rijkers wrote:\n> Typo. fix:\n> \n> - attempted first. If the server ejectes GSS encryption, SSL is\n> + attempted first. If the server rejects GSS encryption, SSL is\n\nFixed, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 9 Apr 2024 08:04:49 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq.sgml: \"server ejectes GSS\" -> server rejects GSS" } ]
[ { "msg_contents": "hi all,\n We have an interesting problem, where PG went to PANIC due to stuck\nspinlock case.\nOn careful analysis and hours of trying to reproduce this(something that\nshowed up in production after almost 2 weeks of stress run), I did some\nstatistical analysis on the RNG generator that PG uses to create the\nbackoff for the spin locks.\n\n My expectation is that, this RNG will be fair to all backends.\nAfter studying the underlying hash function that PG uses,\"xoroshiro\" which\nis from the LFSR family, it seems, that it has failed a bunch of\nstatistical tests and has been mentioned in various blogs.\nThe one that caught my attention was :\nhttps://www.pcg-random.org/posts/xoshiro-repeat-flaws.html\nThis mentions the zeroland problem and the big belt when in low Hamming\nindex. What that means, is that when the status reaches a low hamming index\nstate(state where it is mainly all 0's and less 1's), it takes a while to\nget back to regular entropy.\n\nHere is the code from s_lock.c that shows how we add delay for the backoff,\nbefore the TAS is tried again.\nAlso, there is a bug here in all likelihood, as the author wanted to round\noff the value coming out of the equation by adding 0.5. Integer casting in\nc/c++ will drop the decimal part out((int)12.9999 = 12).\nAnd if the RNG keeps producing values close to 0, the delay will never\nincrease(as it will keep integer casting to 0) and end up spinning 1000\ntimes within 1second itself. That is one of the reasons why we could\nreproduce this by forcing low RNG numbers in our test cases.\nSo, steps that I would recommend\n1. Firstly, use ceiling for round off\n2. cur_delay should increase by max(1000, RNG generated value)\n3. Change the hashing function\n4. Capture the new 0 state, and then try to reseed.\n\n /* increase delay by a random fraction between 1X and 2X */\n status->cur_delay += (int) (status->cur_delay *\n pg_prng_double(&pg_global_prng_state) +\n0.5);\n\nhi all,  We have an interesting problem, where PG went to PANIC due to stuck spinlock case. On careful analysis and hours of trying to reproduce this(something that showed up in production after almost 2 weeks of stress run), I did some statistical analysis on the RNG generator that PG uses to create the backoff for the spin locks.  My expectation is that, this RNG will be fair to all backends.After studying the underlying hash function that PG uses,\"xoroshiro\" which is from the LFSR family, it seems, that it has failed a bunch of statistical tests and has been mentioned in various blogs. The one that caught my attention was :https://www.pcg-random.org/posts/xoshiro-repeat-flaws.htmlThis mentions the zeroland problem and the big belt when in low Hamming index. What that means, is that when the status reaches a low hamming index state(state where it is mainly all 0's and less 1's), it takes a while to get back to regular entropy. Here is the code from s_lock.c that shows how we add delay for the backoff, before the TAS is tried again.Also, there is a bug here in all likelihood, as the author wanted to round off the value coming out of the equation by adding 0.5. Integer casting in c/c++ will drop the decimal part out((int)12.9999 = 12). And if the RNG keeps producing values close to 0, the delay will never increase(as it will keep integer casting to 0) and end up spinning 1000 times within 1second itself. That is one of the reasons why we could reproduce this by forcing low RNG numbers in our test cases. So, steps that I would recommend1. 
Firstly, use ceiling for round off2. cur_delay should increase by max(1000, RNG generated value)3. Change the hashing function4. Capture the new 0 state, and then try to reseed.        /* increase delay by a random fraction between 1X and 2X */        status->cur_delay += (int) (status->cur_delay *                                    pg_prng_double(&pg_global_prng_state) + 0.5);", "msg_date": "Mon, 8 Apr 2024 22:52:09 -0700", "msg_from": "Parag Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-08 22:52:09 -0700, Parag Paul wrote:\n> We have an interesting problem, where PG went to PANIC due to stuck\n> spinlock case.\n> On careful analysis and hours of trying to reproduce this(something that\n> showed up in production after almost 2 weeks of stress run), I did some\n> statistical analysis on the RNG generator that PG uses to create the\n> backoff for the spin locks.\n\nISTM that the fix here is to not use a spinlock for whatever the contention is\non, rather than improve the RNG.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 Apr 2024 14:05:21 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "On Tue, Apr 9, 2024 at 5:05 PM Andres Freund <[email protected]> wrote:\n> ISTM that the fix here is to not use a spinlock for whatever the contention is\n> on, rather than improve the RNG.\n\nI'm not convinced that we should try to improve the RNG, but surely we\nneed to put parentheses around pg_prng_double(&pg_global_prng_state) +\n0.5. IIUC, the current logic is making us multiply the spin delay by a\nvalue between 0 and 1 when what was intended was that it should be\nmultiplied by a value between 0.5 and 1.5.\n\nIf I'm reading this correctly, this was introduced here:\n\ncommit 59bb147353ba274e0836d06f429176d4be47452c\nAuthor: Bruce Momjian <[email protected]>\nDate: Fri Feb 3 12:45:47 2006 +0000\n\n Update random() usage so ranges are inclusive/exclusive as required.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Apr 2024 10:43:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi Andres,\nThis is a little bit more complex than that. The spinlocks are taken in the\nLWLock(Mutex) code, when the lock is not available right away.\nThe spinlock is taken to attach the current backend to the wait list of the\nLWLock. This means, that this cannot be controlled.\nThe repro when it happens, it affects any mutex or LWLock code path, since\nthe low hamming index can cause problems by removing fairness from the\nsystem.\n\nAlso, I believe the rounding off error still remains within the RNG. 
I will\nsend a patch today.\n\nThanks for the response.\n-Parag\n\nOn Tue, Apr 9, 2024 at 2:05 PM Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On 2024-04-08 22:52:09 -0700, Parag Paul wrote:\n> > We have an interesting problem, where PG went to PANIC due to stuck\n> > spinlock case.\n> > On careful analysis and hours of trying to reproduce this(something that\n> > showed up in production after almost 2 weeks of stress run), I did some\n> > statistical analysis on the RNG generator that PG uses to create the\n> > backoff for the spin locks.\n>\n> ISTM that the fix here is to not use a spinlock for whatever the\n> contention is\n> on, rather than improve the RNG.\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nHi Andres,This is a little bit more complex than that. The spinlocks are taken in the LWLock(Mutex) code, when the lock is not available right away. The spinlock is taken to attach the current backend to the wait list of the LWLock. This means, that this cannot be controlled. The repro when it happens, it affects any mutex or LWLock code path, since the low hamming index can cause problems by removing fairness from the system. Also, I believe the rounding off error still remains within the RNG. I will send a patch today.Thanks for the response.-ParagOn Tue, Apr 9, 2024 at 2:05 PM Andres Freund <[email protected]> wrote:Hi,\n\nOn 2024-04-08 22:52:09 -0700, Parag Paul wrote:\n>  We have an interesting problem, where PG went to PANIC due to stuck\n> spinlock case.\n> On careful analysis and hours of trying to reproduce this(something that\n> showed up in production after almost 2 weeks of stress run), I did some\n> statistical analysis on the RNG generator that PG uses to create the\n> backoff for the spin locks.\n\nISTM that the fix here is to not use a spinlock for whatever the contention is\non, rather than improve the RNG.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 10 Apr 2024 07:55:16 -0700", "msg_from": "Parag Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Thank you Robert.\nI am in the process of patching this.\n-Parag\n\nOn Wed, Apr 10, 2024 at 7:43 AM Robert Haas <[email protected]> wrote:\n\n> On Tue, Apr 9, 2024 at 5:05 PM Andres Freund <[email protected]> wrote:\n> > ISTM that the fix here is to not use a spinlock for whatever the\n> contention is\n> > on, rather than improve the RNG.\n>\n> I'm not convinced that we should try to improve the RNG, but surely we\n> need to put parentheses around pg_prng_double(&pg_global_prng_state) +\n> 0.5. 
IIUC, the current logic is making us multiply the spin delay by a\n> value between 0 and 1 when what was intended was that it should be\n> multiplied by a value between 0.5 and 1.5.\n>\n> If I'm reading this correctly, this was introduced here:\n>\n> commit 59bb147353ba274e0836d06f429176d4be47452c\n> Author: Bruce Momjian <[email protected]>\n> Date: Fri Feb 3 12:45:47 2006 +0000\n>\n> Update random() usage so ranges are inclusive/exclusive as required.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\nThank you Robert.I am in the process of patching this.-ParagOn Wed, Apr 10, 2024 at 7:43 AM Robert Haas <[email protected]> wrote:On Tue, Apr 9, 2024 at 5:05 PM Andres Freund <[email protected]> wrote:\n> ISTM that the fix here is to not use a spinlock for whatever the contention is\n> on, rather than improve the RNG.\n\nI'm not convinced that we should try to improve the RNG, but surely we\nneed to put parentheses around pg_prng_double(&pg_global_prng_state) +\n0.5. IIUC, the current logic is making us multiply the spin delay by a\nvalue between 0 and 1 when what was intended was that it should be\nmultiplied by a value between 0.5 and 1.5.\n\nIf I'm reading this correctly, this was introduced here:\n\ncommit 59bb147353ba274e0836d06f429176d4be47452c\nAuthor: Bruce Momjian <[email protected]>\nDate:   Fri Feb 3 12:45:47 2006 +0000\n\n    Update random() usage so ranges are inclusive/exclusive as required.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 10 Apr 2024 07:55:41 -0700", "msg_from": "Parag Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> I'm not convinced that we should try to improve the RNG, but surely we\n> need to put parentheses around pg_prng_double(&pg_global_prng_state) +\n> 0.5. IIUC, the current logic is making us multiply the spin delay by a\n> value between 0 and 1 when what was intended was that it should be\n> multiplied by a value between 0.5 and 1.5.\n\nNo, I think you are misreading it, because the assignment is += not =.\nThe present coding is\n\n /* increase delay by a random fraction between 1X and 2X */\n status->cur_delay += (int) (status->cur_delay *\n pg_prng_double(&pg_global_prng_state) + 0.5);\n\nwhich looks fine to me. The +0.5 is so that the conversion to integer\nrounds rather than truncating.\n\nIn any case, I concur with Andres: if this behavior is anywhere near\ncritical then the right fix is to not be using spinlocks.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 11:09:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "hi Tom,\n First of all thanks for you response. I did not misread it. The 0.5 is\nadded to the result of the multiplication which then uses C integer\ncasting, which does not round off, but just drops the decimal portion.\n\n\nstatus->cur_delay += (int) (status->cur_delay *\n pg_prng_double(&pg_global_prng_state) +\n0.5);\n\n\nSo, if RNG generated 0.0000001 and cur_delay =1000.\nResult will be\n1000 + int(1000*0.000001 + 5) = (int)(1000 + (0.1+.5)) = (int)1000.6 = 1000\n<-- back to the same value\nand there is no change after that starts happening, if the RNG is in the\nlow hamming index state. 
If avalanche happens soon, then it will correct it\nself, but in the mean time, we have a stuck_spin_lock PANIC that could\nbring down a production server.\n-Parag\n\nOn Wed, Apr 10, 2024 at 8:09 AM Tom Lane <[email protected]> wrote:\n\n> Robert Haas <[email protected]> writes:\n> > I'm not convinced that we should try to improve the RNG, but surely we\n> > need to put parentheses around pg_prng_double(&pg_global_prng_state) +\n> > 0.5. IIUC, the current logic is making us multiply the spin delay by a\n> > value between 0 and 1 when what was intended was that it should be\n> > multiplied by a value between 0.5 and 1.5.\n>\n> No, I think you are misreading it, because the assignment is += not =.\n> The present coding is\n>\n> /* increase delay by a random fraction between 1X and 2X */\n> status->cur_delay += (int) (status->cur_delay *\n> pg_prng_double(&pg_global_prng_state)\n> + 0.5);\n>\n> which looks fine to me. The +0.5 is so that the conversion to integer\n> rounds rather than truncating.\n>\n> In any case, I concur with Andres: if this behavior is anywhere near\n> critical then the right fix is to not be using spinlocks.\n>\n> regards, tom lane\n>\n\nhi Tom,  First of all thanks for you response. I did not misread it. The 0.5 is added to the result of the multiplication which then uses C integer casting, which does not round off, but just drops the decimal portion.status->cur_delay += (int) (status->cur_delay *                                    pg_prng_double(&pg_global_prng_state) + 0.5);So, if RNG generated 0.0000001 and cur_delay =1000.Result will be1000 + int(1000*0.000001 + 5) = (int)(1000 + (0.1+.5)) = (int)1000.6 = 1000 <--  back to the same valueand there is no change after that starts happening, if the RNG is in the low hamming index state. If avalanche happens soon, then it will correct it self, but in the mean time, we have a stuck_spin_lock PANIC that could bring down a production server. -ParagOn Wed, Apr 10, 2024 at 8:09 AM Tom Lane <[email protected]> wrote:Robert Haas <[email protected]> writes:\n> I'm not convinced that we should try to improve the RNG, but surely we\n> need to put parentheses around pg_prng_double(&pg_global_prng_state) +\n> 0.5. IIUC, the current logic is making us multiply the spin delay by a\n> value between 0 and 1 when what was intended was that it should be\n> multiplied by a value between 0.5 and 1.5.\n\nNo, I think you are misreading it, because the assignment is += not =.\nThe present coding is\n\n        /* increase delay by a random fraction between 1X and 2X */\n        status->cur_delay += (int) (status->cur_delay *\n                                    pg_prng_double(&pg_global_prng_state) + 0.5);\n\nwhich looks fine to me.  The +0.5 is so that the conversion to integer\nrounds rather than truncating.\n\nIn any case, I concur with Andres: if this behavior is anywhere near\ncritical then the right fix is to not be using spinlocks.\n\n                        regards, tom lane", "msg_date": "Wed, 10 Apr 2024 08:26:39 -0700", "msg_from": "Parag Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "On Wed, Apr 10, 2024 at 11:09 AM Tom Lane <[email protected]> wrote:\n> No, I think you are misreading it, because the assignment is += not =.\n> The present coding is\n>\n> /* increase delay by a random fraction between 1X and 2X */\n> status->cur_delay += (int) (status->cur_delay *\n> pg_prng_double(&pg_global_prng_state) + 0.5);\n>\n> which looks fine to me. 
The +0.5 is so that the conversion to integer\n> rounds rather than truncating.\n\nOh, yeah ... right. But then why does the comment say that it's\nincreasing the delay between a random fraction between 1X and 2X?\nIsn't this a random fraction between 0X and 1X? Or am I still full of\ngarbage?\n\n> In any case, I concur with Andres: if this behavior is anywhere near\n> critical then the right fix is to not be using spinlocks.\n\nIt would certainly be interesting to know which spinlocks were at\nissue, here. But it also seems to me that it's reasonable to be\nunhappy about the possibility of this increasing cur_delay by exactly\n0. Storms of spinlock contention can and do happen on real-world\nproduction servers, and trying to guess which spinlocks might be\naffected before it happens is difficult. Until we fully eliminate all\nspinlocks from our code base, the theoretical possibility of such\nstorms remains, and if making sure that the increment here is >0\nmitigates that, then I think we should. Mind you, I don't think that\nthe original poster has necessarily provided convincing or complete\nevidence to justify a change here, but I find it odd that you and\nAndres seem to want to dismiss it out of hand.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Apr 2024 11:54:22 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Parag Paul <[email protected]> writes:\n> So, if RNG generated 0.0000001 and cur_delay =1000.\n> Result will be\n> 1000 + int(1000*0.000001 + 5) = (int)(1000 + (0.1+.5)) = (int)1000.6 = 1000\n> <-- back to the same value\n\nYes, with a sufficiently small RNG result, the sleep delay will not\nincrease that time through the loop. So what? It's supposed to be\na random amount of backoff, and I don't see why \"random\" shouldn't\noccasionally include \"zero\". Why would requiring the delay to\nincrease improve matters at all?\n\nNow, if the xoroshiro RNG is so bad that there's a strong likelihood\nthat multiple backends will sit on cur_delay = MIN_DELAY_USEC for\na long time, then yeah we could have a thundering-herd problem with\ncontention for the spinlock. You've provided no reason to think\nthat the probability of that is more than microscopic, though.\n(If it is more than microscopic, then we are going to have\nnonrandomness problems in a lot of other places, so I'd lean\ntowards revising the RNG not band-aiding s_lock. We've seen\nnothing to suggest such problems, though.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 11:55:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> Oh, yeah ... right. But then why does the comment say that it's\n> increasing the delay between a random fraction between 1X and 2X?\n\nI think the comment is meant to say that the new delay value will be\n1X to 2X the old value. If you want to suggest different phrasing,\nfeel free.\n\n> It would certainly be interesting to know which spinlocks were at\n> issue, here. But it also seems to me that it's reasonable to be\n> unhappy about the possibility of this increasing cur_delay by exactly\n> 0.\n\nAs I said to Parag, I see exactly no reason to believe that that's a\nproblem, unless it happens *a lot*, like hundreds of times in a row.\nIf it does, that's an RNG problem not s_lock's fault. 
Now, I'm not\ngoing to say that xoroshiro can't potentially do that, because I've\nnot studied it. But if it is likely to do that, I'd think we'd\nhave noticed the nonrandomness in other ways.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 12:02:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-10 07:55:16 -0700, Parag Paul wrote:\n> This is a little bit more complex than that. The spinlocks are taken in the\n> LWLock(Mutex) code, when the lock is not available right away.\n> The spinlock is taken to attach the current backend to the wait list of the\n> LWLock. This means, that this cannot be controlled.\n> The repro when it happens, it affects any mutex or LWLock code path, since\n> the low hamming index can cause problems by removing fairness from the\n> system.\n\nPlease provide a profile of a problematic workload. I'm fairly suspicious\nof the claim that RNG unfairness is a real part of the problem here.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 10 Apr 2024 09:09:31 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Actually ... Parag mentioned that this was specifically about\nlwlock.c's usage of spinlocks. It doesn't really use a spinlock,\nbut it does use s_lock.c's delay logic, and I think it's got the\nusage pattern wrong:\n\n while (true)\n {\n /* always try once to acquire lock directly */\n old_state = pg_atomic_fetch_or_u32(&lock->state, LW_FLAG_LOCKED);\n if (!(old_state & LW_FLAG_LOCKED))\n break; /* got lock */\n\n /* and then spin without atomic operations until lock is released */\n {\n SpinDelayStatus delayStatus;\n\n init_local_spin_delay(&delayStatus);\n\n while (old_state & LW_FLAG_LOCKED)\n {\n perform_spin_delay(&delayStatus);\n old_state = pg_atomic_read_u32(&lock->state);\n }\n#ifdef LWLOCK_STATS\n delays += delayStatus.delays;\n#endif\n finish_spin_delay(&delayStatus);\n }\n\n /*\n * Retry. The lock might obviously already be re-acquired by the time\n * we're attempting to get it again.\n */\n }\n\nI don't think it's correct to re-initialize the SpinDelayStatus each\ntime around the outer loop. That state should persist through the\nentire acquire operation, as it does in a regular spinlock acquire.\nAs this stands, it resets the delay to minimum each time around the\nouter loop, and I bet it is that behavior not the RNG that's to blame\nfor what he's seeing.\n\n(One should still wonder what is the LWLock usage pattern that is\ncausing this spot to become so heavily contended.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 12:28:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "On Wed, Apr 10, 2024 at 12:02 PM Tom Lane <[email protected]> wrote:\n> As I said to Parag, I see exactly no reason to believe that that's a\n> problem, unless it happens *a lot*, like hundreds of times in a row.\n> If it does, that's an RNG problem not s_lock's fault. Now, I'm not\n> going to say that xoroshiro can't potentially do that, because I've\n> not studied it. 
But if it is likely to do that, I'd think we'd\n> have noticed the nonrandomness in other ways.\n\nThe blog post to which Parag linked includes this histogram as an\nexample of a low-Hamming-weight situation:\n\nReps | Value\n-----+--------\n 84 | 0x00\n 10 | 0x2d\n 6 | 0xa0\n 6 | 0x68\n 6 | 0x40\n 6 | 0x02\n 5 | 0x0b\n 4 | 0x05\n 4 | 0x01\n 3 | 0xe1\n 3 | 0x5a\n 3 | 0x41\n 3 | 0x20\n 3 | 0x16\n\nThat's certainly a lot more zeroes than you'd like to see in 192 bytes\nof output, but it's not hundreds in a row either.\n\nAlso, the blog post says \"On the one hand, from a practical\nperspective, having vastly, vastly more close repeats than it should\nisn't likely to be an issue that users will ever detect in practice.\nXoshiro256's large state space means it is too big to fail any\nstatistical tests that look for close repeats.\" So your theory that we\nhave a bug elsewhere in the code seems like it might be the real\nanswer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Apr 2024 12:37:12 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "hi Tom,\n Sorry for the delayed response. I was collecting of the data from my\nproduction servers.\n\nThe reason why this could be a problem is a flaw in the RNG with the\nenlarged Hamming belt.\nI attached an image here, with the RNG outputs from 2 backends. I ran our\ncode for weeks, and collected ther\nvalues generated by the RNG over many backends. The one in Green (say\nbackend id 600), stopped flapping values and\nonly produced low (near 0 ) values for half an hour, whereas the Blue(say\nbackend 700), kept generating good values and had\na range between [0-1)\nDuring this period, the backed 600 suffered and ended up with spinlock\nstuck condition.\n\nExtensive analysis here -\nhttps://www.pcg-random.org/posts/xoshiro-repeat-flaws.html\n-Parag\n\n\nOn Wed, Apr 10, 2024 at 9:28 AM Tom Lane <[email protected]> wrote:\n\n> Actually ... Parag mentioned that this was specifically about\n> lwlock.c's usage of spinlocks. It doesn't really use a spinlock,\n> but it does use s_lock.c's delay logic, and I think it's got the\n> usage pattern wrong:\n>\n> while (true)\n> {\n> /* always try once to acquire lock directly */\n> old_state = pg_atomic_fetch_or_u32(&lock->state, LW_FLAG_LOCKED);\n> if (!(old_state & LW_FLAG_LOCKED))\n> break; /* got lock */\n>\n> /* and then spin without atomic operations until lock is released\n> */\n> {\n> SpinDelayStatus delayStatus;\n>\n> init_local_spin_delay(&delayStatus);\n>\n> while (old_state & LW_FLAG_LOCKED)\n> {\n> perform_spin_delay(&delayStatus);\n> old_state = pg_atomic_read_u32(&lock->state);\n> }\n> #ifdef LWLOCK_STATS\n> delays += delayStatus.delays;\n> #endif\n> finish_spin_delay(&delayStatus);\n> }\n>\n> /*\n> * Retry. The lock might obviously already be re-acquired by the\n> time\n> * we're attempting to get it again.\n> */\n> }\n>\n> I don't think it's correct to re-initialize the SpinDelayStatus each\n> time around the outer loop. 
That state should persist through the\n> entire acquire operation, as it does in a regular spinlock acquire.\n> As this stands, it resets the delay to minimum each time around the\n> outer loop, and I bet it is that behavior not the RNG that's to blame\n> for what he's seeing.\n>\n> (One should still wonder what is the LWLock usage pattern that is\n> causing this spot to become so heavily contended.)\n>\n> regards, tom lane\n>", "msg_date": "Wed, 10 Apr 2024 09:39:45 -0700", "msg_from": "Parag Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "hi Robert,\nWe are using xoroshiro128 and not moved to the next state of art.\nWe did see a lot of low values as put in my last message.\n-Parag\n\nOn Wed, Apr 10, 2024 at 9:37 AM Robert Haas <[email protected]> wrote:\n\n> On Wed, Apr 10, 2024 at 12:02 PM Tom Lane <[email protected]> wrote:\n> > As I said to Parag, I see exactly no reason to believe that that's a\n> > problem, unless it happens *a lot*, like hundreds of times in a row.\n> > If it does, that's an RNG problem not s_lock's fault. Now, I'm not\n> > going to say that xoroshiro can't potentially do that, because I've\n> > not studied it. But if it is likely to do that, I'd think we'd\n> > have noticed the nonrandomness in other ways.\n>\n> The blog post to which Parag linked includes this histogram as an\n> example of a low-Hamming-weight situation:\n>\n> Reps | Value\n> -----+--------\n> 84 | 0x00\n> 10 | 0x2d\n> 6 | 0xa0\n> 6 | 0x68\n> 6 | 0x40\n> 6 | 0x02\n> 5 | 0x0b\n> 4 | 0x05\n> 4 | 0x01\n> 3 | 0xe1\n> 3 | 0x5a\n> 3 | 0x41\n> 3 | 0x20\n> 3 | 0x16\n>\n> That's certainly a lot more zeroes than you'd like to see in 192 bytes\n> of output, but it's not hundreds in a row either.\n>\n> Also, the blog post says \"On the one hand, from a practical\n> perspective, having vastly, vastly more close repeats than it should\n> isn't likely to be an issue that users will ever detect in practice.\n> Xoshiro256's large state space means it is too big to fail any\n> statistical tests that look for close repeats.\" So your theory that we\n> have a bug elsewhere in the code seems like it might be the real\n> answer.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\nhi Robert,We are using xoroshiro128 and not moved to the next state of art. We did see a lot of low values as put in my last message.-ParagOn Wed, Apr 10, 2024 at 9:37 AM Robert Haas <[email protected]> wrote:On Wed, Apr 10, 2024 at 12:02 PM Tom Lane <[email protected]> wrote:\n> As I said to Parag, I see exactly no reason to believe that that's a\n> problem, unless it happens *a lot*, like hundreds of times in a row.\n> If it does, that's an RNG problem not s_lock's fault.  Now, I'm not\n> going to say that xoroshiro can't potentially do that, because I've\n> not studied it.  
But if it is likely to do that, I'd think we'd\n> have noticed the nonrandomness in other ways.\n\nThe blog post to which Parag linked includes this histogram as an\nexample of a low-Hamming-weight situation:\n\nReps | Value\n-----+--------\n  84 | 0x00\n  10 | 0x2d\n   6 | 0xa0\n   6 | 0x68\n   6 | 0x40\n   6 | 0x02\n   5 | 0x0b\n   4 | 0x05\n   4 | 0x01\n   3 | 0xe1\n   3 | 0x5a\n   3 | 0x41\n   3 | 0x20\n   3 | 0x16\n\nThat's certainly a lot more zeroes than you'd like to see in 192 bytes\nof output, but it's not hundreds in a row either.\n\nAlso, the blog post says \"On the one hand, from a practical\nperspective, having vastly, vastly more close repeats than it should\nisn't likely to be an issue that users will ever detect in practice.\nXoshiro256's large state space means it is too big to fail any\nstatistical tests that look for close repeats.\" So your theory that we\nhave a bug elsewhere in the code seems like it might be the real\nanswer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 10 Apr 2024 09:42:03 -0700", "msg_from": "Parag Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "On Wed, Apr 10, 2024 at 12:40 PM Parag Paul <[email protected]> wrote:\n> The reason why this could be a problem is a flaw in the RNG with the enlarged Hamming belt.\n> I attached an image here, with the RNG outputs from 2 backends. I ran our code for weeks, and collected ther\n> values generated by the RNG over many backends. The one in Green (say backend id 600), stopped flapping values and\n> only produced low (near 0 ) values for half an hour, whereas the Blue(say backend 700), kept generating good values and had\n> a range between [0-1)\n> During this period, the backed 600 suffered and ended up with spinlock stuck condition.\n\nThis is a very vague description of a test procedure. If you provide a\nreproducible series of steps that causes a stuck spinlock, I imagine\neveryone will be on board with doing something about it. But this\ngraph is not going to convince anyone of anything.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Apr 2024 12:43:00 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Yes, the probability of this happening is astronomical, but in production\nwith 128 core servers with 7000 max_connections, with petabyte scale data,\nthis did repro 2 times in the last month. We had to move to a local\napproach to manager our ratelimiting counters.\nThis is not reproducible very easily. I feel that we should at least shield\nourselves with the following change, so that we at least increase the delay\nby 1000us every time. We will follow a linear back off, but better than no\nbackoff.\n status->cur_delay += max(1000, (int) (status->cur_delay *\n pg_prng_double(&pg_global_prng_state) +\n0.5));\n\nOn Wed, Apr 10, 2024 at 9:43 AM Robert Haas <[email protected]> wrote:\n\n> On Wed, Apr 10, 2024 at 12:40 PM Parag Paul <[email protected]> wrote:\n> > The reason why this could be a problem is a flaw in the RNG with the\n> enlarged Hamming belt.\n> > I attached an image here, with the RNG outputs from 2 backends. I ran\n> our code for weeks, and collected ther\n> > values generated by the RNG over many backends. 
The one in Green (say\n> backend id 600), stopped flapping values and\n> > only produced low (near 0 ) values for half an hour, whereas the\n> Blue(say backend 700), kept generating good values and had\n> > a range between [0-1)\n> > During this period, the backed 600 suffered and ended up with spinlock\n> stuck condition.\n>\n> This is a very vague description of a test procedure. If you provide a\n> reproducible series of steps that causes a stuck spinlock, I imagine\n> everyone will be on board with doing something about it. But this\n> graph is not going to convince anyone of anything.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\nYes, the probability of this happening is astronomical, but in production with 128 core servers with 7000 max_connections, with petabyte scale data, this did repro 2 times in the last month. We had to move to a local approach to manager our ratelimiting counters. This is not reproducible very easily. I feel that we should at least shield ourselves with the following change, so that we at least increase the delay by 1000us every time. We will follow a linear back off, but better than no backoff.      status->cur_delay += max(1000, (int) (status->cur_delay *                                    pg_prng_double(&pg_global_prng_state) + 0.5));On Wed, Apr 10, 2024 at 9:43 AM Robert Haas <[email protected]> wrote:On Wed, Apr 10, 2024 at 12:40 PM Parag Paul <[email protected]> wrote:\n> The reason why this could be a problem is a flaw in the RNG with the enlarged Hamming belt.\n> I attached an image here, with the RNG outputs from 2 backends. I ran our code for weeks, and collected ther\n> values generated by the RNG over many backends. The one in Green (say backend id 600), stopped flapping values and\n> only produced low (near 0 ) values for half an hour, whereas the Blue(say backend 700), kept generating good values and had\n> a range between [0-1)\n> During this period, the backed 600 suffered and ended up with spinlock stuck condition.\n\nThis is a very vague description of a test procedure. If you provide a\nreproducible series of steps that causes a stuck spinlock, I imagine\neveryone will be on board with doing something about it. But this\ngraph is not going to convince anyone of anything.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 10 Apr 2024 09:48:42 -0700", "msg_from": "Parag Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "I wrote:\n> I don't think it's correct to re-initialize the SpinDelayStatus each\n> time around the outer loop. That state should persist through the\n> entire acquire operation, as it does in a regular spinlock acquire.\n> As this stands, it resets the delay to minimum each time around the\n> outer loop, and I bet it is that behavior not the RNG that's to blame\n> for what he's seeing.\n\nAfter thinking about this some more, it is fairly clear that that *is*\na mistake that can cause a thundering-herd problem. Assume we have\ntwo or more backends waiting in perform_spin_delay, and for whatever\nreason the scheduler wakes them up simultaneously. They see the\nLW_FLAG_LOCKED bit clear (because whoever had the lock when they went\nto sleep is long gone) and iterate the outer loop. One gets the lock;\nthe rest go back to sleep. And they will all sleep exactly\nMIN_DELAY_USEC, because they all reset their SpinDelayStatus.\nLather, rinse, repeat. 
If the s_lock code were being used as\nintended, they would soon acquire different cur_delay settings;\nbut that never happens. That is not the RNG's fault.\n\nSo I think we need something like the attached.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 10 Apr 2024 13:03:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-10 09:48:42 -0700, Parag Paul wrote:\n> Yes, the probability of this happening is astronomical, but in production\n> with 128 core servers with 7000 max_connections, with petabyte scale data,\n> this did repro 2 times in the last month. We had to move to a local\n> approach to manager our ratelimiting counters.\n\nWhat version of PG was this? I think it's much more likely that you're\nhitting a bug that caused a lot more contention inside lwlocks. That was fixed\nfor 16+ in a4adc31f690 on 2022-11-20, but only backpatched to 12-15 on\n2024-01-18.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 10 Apr 2024 10:03:43 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-10 12:28:10 -0400, Tom Lane wrote:\n> Actually ... Parag mentioned that this was specifically about\n> lwlock.c's usage of spinlocks. It doesn't really use a spinlock,\n> but it does use s_lock.c's delay logic, and I think it's got the\n> usage pattern wrong:\n> \n> while (true)\n> {\n> /* always try once to acquire lock directly */\n> old_state = pg_atomic_fetch_or_u32(&lock->state, LW_FLAG_LOCKED);\n> if (!(old_state & LW_FLAG_LOCKED))\n> break; /* got lock */\n> \n> /* and then spin without atomic operations until lock is released */\n> {\n> SpinDelayStatus delayStatus;\n> \n> init_local_spin_delay(&delayStatus);\n> \n> while (old_state & LW_FLAG_LOCKED)\n> {\n> perform_spin_delay(&delayStatus);\n> old_state = pg_atomic_read_u32(&lock->state);\n> }\n> #ifdef LWLOCK_STATS\n> delays += delayStatus.delays;\n> #endif\n> finish_spin_delay(&delayStatus);\n> }\n> \n> /*\n> * Retry. The lock might obviously already be re-acquired by the time\n> * we're attempting to get it again.\n> */\n> }\n> \n> I don't think it's correct to re-initialize the SpinDelayStatus each\n> time around the outer loop. That state should persist through the\n> entire acquire operation, as it does in a regular spinlock acquire.\n> As this stands, it resets the delay to minimum each time around the\n> outer loop, and I bet it is that behavior not the RNG that's to blame\n> for what he's seeing.\n\nHm, yea, that's not right. Normally this shouldn't be heavily contended enough\nto matter. I don't think just pulling out the spin delay would be the right\nthing though, because that'd mean we'd initialize it even in the much much\nmore common case of there not being any contention. 
I think we'll have to add\na separate fetch_or before the outer loop.\n\n\n> (One should still wonder what is the LWLock usage pattern that is\n> causing this spot to become so heavily contended.)\n\nMy suspicion is that it's a4adc31f690 which we only recently backpatched to\n< 16.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 10 Apr 2024 10:08:46 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Parag Paul <[email protected]> writes:\n> Yes, the probability of this happening is astronomical, but in production\n> with 128 core servers with 7000 max_connections, with petabyte scale data,\n> this did repro 2 times in the last month. We had to move to a local\n> approach to manager our ratelimiting counters.\n> This is not reproducible very easily. I feel that we should at least shield\n> ourselves with the following change, so that we at least increase the delay\n> by 1000us every time. We will follow a linear back off, but better than no\n> backoff.\n\nI still say you are proposing to band-aid the wrong thing. Moreover:\n\n* the proposed patch will cause the first few cur_delay values to grow\nmuch faster than before, with direct performance impact to everyone,\nwhether they are on 128-core servers or not;\n\n* if we are in a regime where xoroshiro repeatedly returns zero\nacross multiple backends, your patch doesn't improve the situation\nAFAICS, because the backends will still choose the same series\nof cur_delay values and thus continue to exhibit thundering-herd\nbehavior. Indeed, as coded I think the patch makes it *more*\nlikely that the same series of cur_delay values would be chosen\nby multiple backends.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 13:12:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> Hi,\n> On 2024-04-10 12:28:10 -0400, Tom Lane wrote:\n>> I don't think it's correct to re-initialize the SpinDelayStatus each\n>> time around the outer loop. That state should persist through the\n>> entire acquire operation, as it does in a regular spinlock acquire.\n>> As this stands, it resets the delay to minimum each time around the\n>> outer loop, and I bet it is that behavior not the RNG that's to blame\n>> for what he's seeing.\n\n> Hm, yea, that's not right. Normally this shouldn't be heavily contended enough\n> to matter. I don't think just pulling out the spin delay would be the right\n> thing though, because that'd mean we'd initialize it even in the much much\n> more common case of there not being any contention. I think we'll have to add\n> a separate fetch_or before the outer loop.\n\nAgreed, and I did that in my draft patch. 
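To make that shape concrete, a simplified sketch of what's being described
(illustrative only -- not the actual draft patch, with LWLOCK_STATS and the
usual asserts omitted) could look like:

    static void
    LWLockWaitListLock(LWLock *lock)
    {
        uint32      old_state;

        /* always try once to acquire the lock directly */
        old_state = pg_atomic_fetch_or_u32(&lock->state, LW_FLAG_LOCKED);
        if (!(old_state & LW_FLAG_LOCKED))
            return;                 /* got lock on the uncontended path */

        {
            SpinDelayStatus delayStatus;

            /* delay state is set up once and survives the whole acquisition */
            init_local_spin_delay(&delayStatus);

            for (;;)
            {
                /* spin without atomic operations until lock looks free */
                while (old_state & LW_FLAG_LOCKED)
                {
                    perform_spin_delay(&delayStatus);
                    old_state = pg_atomic_read_u32(&lock->state);
                }

                /* retry; somebody else may have re-acquired it meanwhile */
                old_state = pg_atomic_fetch_or_u32(&lock->state, LW_FLAG_LOCKED);
                if (!(old_state & LW_FLAG_LOCKED))
                    break;          /* got lock */
            }

            finish_spin_delay(&delayStatus);
        }
    }
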
AFAICS we can also avoid\nthe LWLOCK_STATS overhead if the initial attempt succeeds, not that\nthat is something you'd really be worried about.\n\n>> (One should still wonder what is the LWLock usage pattern that is\n>> causing this spot to become so heavily contended.)\n\n> My suspicion is that it's a4adc31f690 which we only recently backpatched to\n> < 16.\n\nSeems like a plausible theory.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 13:15:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-10 13:03:05 -0400, Tom Lane wrote:\n> After thinking about this some more, it is fairly clear that that *is*\n> a mistake that can cause a thundering-herd problem.\n\n> Assume we have two or more backends waiting in perform_spin_delay, and for\n> whatever reason the scheduler wakes them up simultaneously.\n\nThat's not really possible, at least not repeatably. Multiple processes\nobviously can't be scheduled concurrently on one CPU and scheduling something\non another core entails interrupting that CPU with an inter processor\ninterrupt or that other CPU scheduling on its own, without coordination.\n\nThat obviously isn't a reason to not fix the delay logic in lwlock.c.\n\n\nLooks like the wrong logic was introduced by me in\n\ncommit 008608b9d51061b1f598c197477b3dc7be9c4a64\nAuthor: Andres Freund <[email protected]>\nDate: 2016-04-10 20:12:32 -0700\n\n Avoid the use of a separate spinlock to protect a LWLock's wait queue.\n\nLikely because I was trying to avoid the overhead of init_local_spin_delay(),\nwithout duplicating the few lines to acquire the \"spinlock\".\n\n\n> So I think we need something like the attached.\n\nLGTM.\n\nI think it might be worth breaking LWLockWaitListLock() into two pieces, a\nfastpath to be inlined into a caller, and a slowpath, but that's separate work\nfrom a bugfix.\n\n\nI looked around and the other uses of init_local_spin_delay() look correct\nfrom this angle. However LockBufHdr() is more expensive than it needs to be,\nbecause it always initializes SpinDelayStatus. IIRC I've seen that show up in\nprofiles before, but never got around to writing a nice-enough patch. But\nthat's also something separate from a bugfix.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 10 Apr 2024 10:32:42 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-04-10 13:03:05 -0400, Tom Lane wrote:\n>> So I think we need something like the attached.\n\n> LGTM.\n\nOn third thought ... while I still think this is a misuse of\nperform_spin_delay and we should change it, I'm not sure it'll do\nanything to address Parag's problem, because IIUC he's seeing actual\n\"stuck spinlock\" reports. That implies that the inner loop of\nLWLockWaitListLock slept NUM_DELAYS times without ever seeing\nLW_FLAG_LOCKED clear. What I'm suggesting would change the triggering\ncondition to \"NUM_DELAYS sleeps without acquiring the lock\", which is\nstrictly more likely to happen, so it's not going to help him. 
It's\ncertainly still well out in we-shouldn't-get-there territory, though.\n\nAlso, fooling around with the cur_delay adjustment doesn't affect\nthis at all: \"stuck spinlock\" is still going to be raised after\nNUM_DELAYS failures to observe the lock clear or obtain the lock.\nIncreasing cur_delay won't change that, it'll just spread the\nfixed number of attempts over a longer period; and there's no\nreason to believe that does anything except make it take longer\nto fail. Per the header comment for s_lock.c:\n\n * We time out and declare error after NUM_DELAYS delays (thus, exactly\n * that many tries). With the given settings, this will usually take 2 or\n * so minutes. It seems better to fix the total number of tries (and thus\n * the probability of unintended failure) than to fix the total time\n * spent.\n\nIf you believe that waiting processes can be awakened close enough to\nsimultaneously to hit the behavior I posited earlier, then encouraging\nthem to have different cur_delay values will help; but Andres doesn't\nbelieve it and I concede it seems like a stretch.\n\nSo I think fooling with the details in s_lock.c is pretty much beside\nthe point. The most likely bet is that Parag's getting bit by the\nbug fixed in a4adc31f690. It's possible he's seeing the effect of\nsome different issue that causes lwlock.c to hold that lock a long\ntime at scale, but that's where I'd look first.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 14:02:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> The blog post to which Parag linked includes this histogram as an\n> example of a low-Hamming-weight situation:\n\nThat's an interesting post indeed, but I'm not sure how relevant\nit is to us, because it is about Xoshiro not Xoroshiro, and the\nlatter is what we use. The last sentence of the blog post is\n\n In this case, a mix of practice and theory has shown that the\n structure of Xoshiro's state space is poorer than that of many\n competing generation schemes, including Blackman's gjrand and\n perhaps even Vigna and Blackman's earlier Xoroshiro scheme (which\n has smaller zeroland expanses and does not appear to have similar\n close-repeat problems), and its output functions are unable to\n completely conceal these issues.\n\nSo while pg_prng.c might have the issue posited here, this blog\npost is not evidence for that, and indeed might be evidence\nagainst it. Someone would have to do similar analysis on the\ncode we *actually* use to convince me that we need to worry.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 14:15:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-10 14:02:20 -0400, Tom Lane wrote:\n> On third thought ... while I still think this is a misuse of\n> perform_spin_delay and we should change it, I'm not sure it'll do\n> anything to address Parag's problem, because IIUC he's seeing actual\n> \"stuck spinlock\" reports. That implies that the inner loop of\n> LWLockWaitListLock slept NUM_DELAYS times without ever seeing\n> LW_FLAG_LOCKED clear. What I'm suggesting would change the triggering\n> condition to \"NUM_DELAYS sleeps without acquiring the lock\", which is\n> strictly more likely to happen, so it's not going to help him. 
It's\n> certainly still well out in we-shouldn't-get-there territory, though.\n\nI think it could exascerbate the issue. Parag reported ~7k connections on a\n128 core machine. The buffer replacement logic in < 16 tries to lock the old\nand new lock partitions at once. That can lead to quite bad \"chains\" of\ndependent lwlocks, occasionally putting all the pressure on a single lwlock.\nWith 7k waiters on a single spinlock, higher frequency of wakeups will make it\nmuch more likely that the process holding the spinlock will be put to sleep.\n\nThis is greatly exacerbated by the issue fixed in a4adc31f690, once the\nwaitqueue is long, the spinlock will be held for an extended amount of time.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 10 Apr 2024 12:08:21 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> I think it could exascerbate the issue. Parag reported ~7k connections on a\n> 128 core machine. The buffer replacement logic in < 16 tries to lock the old\n> and new lock partitions at once. That can lead to quite bad \"chains\" of\n> dependent lwlocks, occasionally putting all the pressure on a single lwlock.\n> With 7k waiters on a single spinlock, higher frequency of wakeups will make it\n> much more likely that the process holding the spinlock will be put to sleep.\n> This is greatly exacerbated by the issue fixed in a4adc31f690, once the\n> waitqueue is long, the spinlock will be held for an extended amount of time.\n\nYeah. So what's the conclusion? Leave it alone? Commit to\nHEAD only?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 16:05:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-10 16:05:21 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > I think it could exascerbate the issue. Parag reported ~7k connections on a\n> > 128 core machine. The buffer replacement logic in < 16 tries to lock the old\n> > and new lock partitions at once. That can lead to quite bad \"chains\" of\n> > dependent lwlocks, occasionally putting all the pressure on a single lwlock.\n> > With 7k waiters on a single spinlock, higher frequency of wakeups will make it\n> > much more likely that the process holding the spinlock will be put to sleep.\n> > This is greatly exacerbated by the issue fixed in a4adc31f690, once the\n> > waitqueue is long, the spinlock will be held for an extended amount of time.\n> \n> Yeah. So what's the conclusion? Leave it alone? Commit to\n> HEAD only?\n\nI think we should certainly fix it. I don't really have an opinion about\nbackpatching, it's just on the line between the two for me.\n\nHm. The next set of releases is still a bit away, and this is one of the\nperiod where HEAD is hopefully going to be more tested than usual, so I'd\nperhaps very softly lean towards backpatching. There'd have to be some very\nodd compiler behaviour to make it slower than before anyway.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 10 Apr 2024 13:32:13 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-04-10 16:05:21 -0400, Tom Lane wrote:\n>> Yeah. So what's the conclusion? Leave it alone? 
Commit to\n>> HEAD only?\n\n> I think we should certainly fix it. I don't really have an opinion about\n> backpatching, it's just on the line between the two for me.\n> Hm. The next set of releases is still a bit away, and this is one of the\n> period where HEAD is hopefully going to be more tested than usual, so I'd\n> perhaps very softly lean towards backpatching. There'd have to be some very\n> odd compiler behaviour to make it slower than before anyway.\n\nI'm not worried about it being slower, but about whether it could\nreport \"stuck spinlock\" in cases where the existing code succeeds.\nWhile that seems at least theoretically possible, it seems like\nif you hit it you have got problems that need to be fixed anyway.\nNonetheless, I'm kind of leaning to not back-patching. I do agree\non getting it into HEAD sooner not later though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 16:40:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "On Wed, Apr 10, 2024 at 4:40 PM Tom Lane <[email protected]> wrote:\n> I'm not worried about it being slower, but about whether it could\n> report \"stuck spinlock\" in cases where the existing code succeeds.\n> While that seems at least theoretically possible, it seems like\n> if you hit it you have got problems that need to be fixed anyway.\n> Nonetheless, I'm kind of leaning to not back-patching. I do agree\n> on getting it into HEAD sooner not later though.\n\nI just want to mention that I have heard of \"stuck spinlock\" happening\nin production just because the server was busy. And I think that's not\nintended. The timeout is supposed to be high enough that you only hit\nit if there's a bug in the code. At least AIUI. But it isn't.\n\nI know that's a separate issue, but I think it's an important one. It\nshouldn't happen that a system which was installed to defend against\nbugs in the code causes more problems than the bugs themselves.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 10 Apr 2024 20:35:38 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> I just want to mention that I have heard of \"stuck spinlock\" happening\n> in production just because the server was busy. And I think that's not\n> intended. The timeout is supposed to be high enough that you only hit\n> it if there's a bug in the code. At least AIUI. But it isn't.\n\nWell, certainly that's not supposed to happen, but anecdotal reports\nlike this provide little basis for discussing what we ought to do\nabout it. It seems likely that raising the timeout would do nothing\nexcept allow the stuck state to persist even longer before failing.\n(Keep in mind that the existing timeout of ~2 min is already several\ngeological epochs to a computer, so arguing that spinning another\ncouple minutes would have resulted in success seems more like wishful\nthinking than reality.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 21:33:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "I wrote:\n> I'm not worried about it being slower, but about whether it could\n> report \"stuck spinlock\" in cases where the existing code succeeds.\n\nOn fourth thought ... 
the number of tries to acquire the lock, or\nin this case number of tries to observe the lock free, is not\nNUM_DELAYS but NUM_DELAYS * spins_per_delay. Decreasing\nspins_per_delay should therefore increase the risk of unexpected\n\"stuck spinlock\" failures. And finish_spin_delay will decrement\nspins_per_delay in any cycle where we slept at least once.\nIt's plausible therefore that this coding with finish_spin_delay\ninside the main wait loop puts more downward pressure on\nspins_per_delay than the algorithm is intended to cause.\n\nI kind of wonder whether the premises finish_spin_delay is written\non even apply anymore, given that nobody except some buildfarm\ndinosaurs runs Postgres on single-processor hardware anymore.\nMaybe we should rip out the whole mechanism and hard-wire\nspins_per_delay at 1000 or so.\n\nLess drastically, I wonder if we should call finish_spin_delay\nat all in these off-label uses of perform_spin_delay. What\nwe're trying to measure there is the behavior of TAS() spin loops,\nand I'm not sure that what LWLockWaitListLock and the bufmgr\ncallers are doing should be assumed to have timing behavior\nidentical to that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 21:52:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "\n\n> On 10 Apr 2024, at 21:48, Parag Paul <[email protected]> wrote:\n> \n> Yes, the probability of this happening is astronomical, but in production with 128 core servers with 7000 max_connections, with petabyte scale data, this did repro 2 times in the last month. We had to move to a local approach to manager our ratelimiting counters. \n\nFWIW we observed such failure on this [0] LWLock two times too. Both cases were recent (February).\nWe have ~15k clusters with 8MTPS, so it’s kind of infrequent, but not astronomic. We decided to remove that lock.\n\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/munakoiso/logerrors/pull/25/files#diff-f8903c463a191f399b3e84c815ed6dc60adbbfc0fb0b2db490be1e58dc692146L85\n\n", "msg_date": "Thu, 11 Apr 2024 10:52:43 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "On Wed, Apr 10, 2024 at 9:53 PM Tom Lane <[email protected]> wrote:\n> Maybe we should rip out the whole mechanism and hard-wire\n> spins_per_delay at 1000 or so.\n\nOr, rip out the whole, whole mechanism and just don't PANIC.\n\nI'm not 100% sure that's better, but I think it's worth considering.\nThe thing is, if you're panicking regularly, that's almost as bad as\nbeing down, because you're going to lose all of your connections and\nhave to run recovery before they can be reestablished. And if you're\npanicking once in a blue moon, the PANIC actually prevents you from\neasily getting a stack trace that might help you figure out what's\nhappening. And of course if it's not really a stuck spinlock but just\na very slow system, which I think is an extremely high percentage of\nreal-world cases, then it's completely worthless at best.\n\nTo believe that the PANIC is the right idea, we have to suppose that\nwe have stuck-spinlock bugs that people actually hit, but that those\npeople don't hit them often enough to care, as long as the system\nresets when the spinlock gets stuck, instead of hanging. 
I can't\ncompletely rule out the existence of either such bugs or such people,\nbut I'm not aware of having encountered them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Apr 2024 15:24:28 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-11 15:24:28 -0400, Robert Haas wrote:\n> On Wed, Apr 10, 2024 at 9:53 PM Tom Lane <[email protected]> wrote:\n> > Maybe we should rip out the whole mechanism and hard-wire\n> > spins_per_delay at 1000 or so.\n> \n> Or, rip out the whole, whole mechanism and just don't PANIC.\n\nI continue believe that that'd be a quite bad idea.\n\n\nMy suspicion is that most of the false positives are caused by lots of signals\ninterrupting the pg_usleep()s. Because we measure the number of delays, not\nthe actual time since we've been waiting for the spinlock, signals\ninterrupting pg_usleep() trigger can very significantly shorten the amount of\ntime until we consider a spinlock stuck. We should fix that.\n\n\n\n> To believe that the PANIC is the right idea, we have to suppose that\n> we have stuck-spinlock bugs that people actually hit, but that those\n> people don't hit them often enough to care, as long as the system\n> resets when the spinlock gets stuck, instead of hanging. I can't\n> completely rule out the existence of either such bugs or such people,\n> but I'm not aware of having encountered them.\n\nI don't think that's a fair description of the situation. It supposes that the\nalternative to the PANIC is that the problem is detected and resolved some\nother way. But, depending on the spinlock, the problem will not be detected by\nautomated checks for the system being up. IME you end up with a system that's\ndegraded in a complicated hard to understand way, rather than one that's just\ndown.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 11 Apr 2024 12:52:11 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Wed, Apr 10, 2024 at 9:53 PM Tom Lane <[email protected]> wrote:\n>> Maybe we should rip out the whole mechanism and hard-wire\n>> spins_per_delay at 1000 or so.\n\n> Or, rip out the whole, whole mechanism and just don't PANIC.\n\nBy that you mean \"remove the NUM_DELAYS limit\", right? We still ought\nto sleep sometimes if we don't get a spinlock promptly, so that seems\nfairly orthogonal to the other points raised in this thread.\n\nHaving said that, it's not an insane suggestion. Our coding rules\nfor spinlocks are tight enough that a truly stuck spinlock should\nbasically never happen, and certainly it basically never happens in\ndeveloper testing (IME anyway, maybe other people break things at\nthat level more often). 
Besides, if it does happen it wouldn't be\ndifferent in a user's eyes from other infinite or near-infinite loops.\nI don't think we could risk putting in a CHECK_FOR_INTERRUPTS, so\ngetting out of it would require \"kill -9\" or so; but it's hardly\nunheard-of for other problematic loops to not have such a check.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Apr 2024 16:00:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-10 21:52:59 -0400, Tom Lane wrote:\n> Less drastically, I wonder if we should call finish_spin_delay\n> at all in these off-label uses of perform_spin_delay. What\n> we're trying to measure there is the behavior of TAS() spin loops,\n> and I'm not sure that what LWLockWaitListLock and the bufmgr\n> callers are doing should be assumed to have timing behavior\n> identical to that.\n\nI think the difference between individual spinlocks is bigger than between\nspinlocks and lwlock/buffer header locks.\n\n\nI think we probably should move away from having any spinlocks. I tried to\njust replace them with lwlocks, but today the overhead is too big. The issue\nisn't primarily the error handling or such, but the fact that rwlocks are more\nexpensive than simple exclusive locks. The main differences are:\n\n1) On x86 a spinlock release just needs a compiler barrier, but an rwlock\n needs an atomic op.\n\n2) Simple mutex acquisition is easily done with an atomic-exchange, which is\n much harder for an rwlock (as the lock has more states, so just setting to\n \"locked\" and checking the return value isn't sufficient). Once a lock is\n contended, a atomic compare-exchange ends up with lots of retries due to\n concurrent changes.\n\nIt might be that we can still get away with just removing spinlocks - to my\nknowledge we have one very heavily contended performance critical spinlock,\nXLogCtl->Insert->insertpos_lck. I think Thomas and I have come up with a way\nto do away with that spinlock.\n\n\nOTOH, there are plenty other lwlocks where we pay the price of lwlocks being\nan rwlock, but just use the exclusive mode. So perhaps we should just add a\nexclusive-only lwlock variant.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 11 Apr 2024 13:03:47 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-04-11 15:24:28 -0400, Robert Haas wrote:\n>> Or, rip out the whole, whole mechanism and just don't PANIC.\n\n> I continue believe that that'd be a quite bad idea.\n\nI'm warming to it myself.\n\n> My suspicion is that most of the false positives are caused by lots of signals\n> interrupting the pg_usleep()s. Because we measure the number of delays, not\n> the actual time since we've been waiting for the spinlock, signals\n> interrupting pg_usleep() trigger can very significantly shorten the amount of\n> time until we consider a spinlock stuck. We should fix that.\n\nWe wouldn't need to fix it, if we simply removed the NUM_DELAYS\nlimit. Whatever kicked us off the sleep doesn't matter, we might\nas well go check the spinlock.\n\nAlso, you propose in your other message replacing spinlocks with\nlwlocks. Whatever the other merits of that, I notice that we have\nno timeout or \"stuck lwlock\" detection. 
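To illustrate point (2) from Andres's note a few messages up, here is a
minimal, purely hypothetical sketch (not code from the tree) of why an
exclusive-only lock is cheaper to take than an rwlock-style lwlock: the
lock word has only two states, so a single unconditional exchange decides
the winner, whereas an rwlock has to loop on compare-exchange because
readers and waiters are encoded in the same word.

    typedef struct ExclusiveOnlyLock
    {
        pg_atomic_uint32 locked;    /* 0 = free, 1 = held */
    } ExclusiveOnlyLock;

    static inline bool
    ExclusiveOnlyLockTryAcquire(ExclusiveOnlyLock *lock)
    {
        /* one unconditional atomic exchange; the old value says who won */
        return pg_atomic_exchange_u32(&lock->locked, 1) == 0;
    }

    static inline void
    ExclusiveOnlyLockRelease(ExclusiveOnlyLock *lock)
    {
        /* barrier details glossed over for brevity */
        pg_atomic_write_u32(&lock->locked, 0);
    }
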
So that would basically\nremove the stuck-spinlock behavior in an indirect way, without\nadding any safety measures that would justify thinking that it's\nless likely we needed stuck-lock detection than before.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Apr 2024 16:11:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-11 16:11:40 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2024-04-11 15:24:28 -0400, Robert Haas wrote:\n> >> Or, rip out the whole, whole mechanism and just don't PANIC.\n>\n> > I continue believe that that'd be a quite bad idea.\n>\n> I'm warming to it myself.\n>\n> > My suspicion is that most of the false positives are caused by lots of signals\n> > interrupting the pg_usleep()s. Because we measure the number of delays, not\n> > the actual time since we've been waiting for the spinlock, signals\n> > interrupting pg_usleep() trigger can very significantly shorten the amount of\n> > time until we consider a spinlock stuck. We should fix that.\n>\n> We wouldn't need to fix it, if we simply removed the NUM_DELAYS\n> limit. Whatever kicked us off the sleep doesn't matter, we might\n> as well go check the spinlock.\n\nI suspect we should fix it regardless of whether we keep NUM_DELAYS. We\nshouldn't increase cur_delay faster just because a lot of signals are coming\nin. If it were just user triggered signals it'd probably not be worth\nworrying about, but we do sometimes send a lot of signals ourselves...\n\n\n> Also, you propose in your other message replacing spinlocks with lwlocks.\n> Whatever the other merits of that, I notice that we have no timeout or\n> \"stuck lwlock\" detection.\n\nTrue. And that's not great. But at least lwlocks can be identified in\npg_stat_activity, which does help some.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 11 Apr 2024 13:21:39 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-04-11 16:11:40 -0400, Tom Lane wrote:\n>> We wouldn't need to fix it, if we simply removed the NUM_DELAYS\n>> limit. Whatever kicked us off the sleep doesn't matter, we might\n>> as well go check the spinlock.\n\n> I suspect we should fix it regardless of whether we keep NUM_DELAYS. We\n> shouldn't increase cur_delay faster just because a lot of signals are coming\n> in.\n\nI'm unconvinced there's a problem there. Also, what would you do\nabout this that wouldn't involve adding kernel calls for gettimeofday?\nAdmittedly, if we only do that when we're about to sleep, maybe it's\nnot so awful; but it's still adding complexity that I'm unconvinced\nis warranted.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Apr 2024 16:46:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "On Thu, Apr 11, 2024 at 3:52 PM Andres Freund <[email protected]> wrote:\n> My suspicion is that most of the false positives are caused by lots of signals\n> interrupting the pg_usleep()s. Because we measure the number of delays, not\n> the actual time since we've been waiting for the spinlock, signals\n> interrupting pg_usleep() trigger can very significantly shorten the amount of\n> time until we consider a spinlock stuck. 
We should fix that.\n\nI mean, go nuts. But <dons asbestos underpants, asbestos regular\npants, 2 pair of asbestos socks, 3 asbestos shirts, 2 asbestos\njackets, and then hides inside of a flame-proof capsule at the bottom\nof the Pacific ocean> this is just another thing like query hints,\nwhere everybody says \"oh, the right thing to do is fix X or Y or Z and\nthen you won't need it\". But of course it never actually gets fixed\nwell enough that people stop having problems in the real world. And\neventually we look like a developer community that cares more about\nour own opinion about what is right than what the experience of real\nusers actually is.\n\n> I don't think that's a fair description of the situation. It supposes that the\n> alternative to the PANIC is that the problem is detected and resolved some\n> other way. But, depending on the spinlock, the problem will not be detected by\n> automated checks for the system being up. IME you end up with a system that's\n> degraded in a complicated hard to understand way, rather than one that's just\n> down.\n\nI'm interested to read that you've seen this actually happen and that\nyou got that result. What I would have thought would happen is that,\nwithin a relatively short period of time, every backend in the system\nwould pile up waiting for that spinlock and the whole system would\nbecome completely unresponsive. I mean, I know it depends on exactly\nwhich spinlock it is. But, I would have thought that if this was\nhappening, it would be happening because some regular backend died in\na weird way, and if that is indeed what happened, then it's likely\nthat the other backends are doing similar kinds of work, because\nthat's how application workloads typically behave, so they'll probably\nall hit the part of the code where they need that spinlock too, and\nnow everybody's just spinning.\n\nIf it's something like a WAL receiver mutex or the checkpointer mutex\nor even a parallel query mutex, then I guess it would look different.\nBut even then, what I'd expect to see is all backends of type X pile\nup on the stuck mutex, and when you check 'ps' or 'top', you go \"oh\nhmm, all my WAL receivers are at 100% CPU\" and you get a backtrace or\nan strace and you go \"hmm\". Now, I agree that in this kind of scenario\nwhere only some backends lock up, automated checks are not necessarily\ngoing to notice the problem - but a PANIC is hardly better. Now you\njust have a system that keeps PANICing, which liveness checks aren't\nnecessarily going to notice either.\n\nIn all seriousness, I'd really like to understand what experience\nyou've had that makes this check seem useful. Because I think all of\nmy experiences with it have been bad. If they weren't, the last good\none was a very long time ago.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Apr 2024 16:46:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-11 16:46:10 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2024-04-11 16:11:40 -0400, Tom Lane wrote:\n> >> We wouldn't need to fix it, if we simply removed the NUM_DELAYS\n> >> limit. Whatever kicked us off the sleep doesn't matter, we might\n> >> as well go check the spinlock.\n> \n> > I suspect we should fix it regardless of whether we keep NUM_DELAYS. 
We\n> > shouldn't increase cur_delay faster just because a lot of signals are coming\n> > in.\n> \n> I'm unconvinced there's a problem there.\n\nObviously that's a different aspect than efficiency, but in local, admittedly\nextreme, testing I've seen stuck spinlocks being detected in a fraction of the\nnormally expected time. A spinlock that ends up sleeping for close to a second\nafter a relatively short amount of time surely isn't good for predictable\nperformance.\n\nIIRC the bad case was on a hot standby, with some recovery conflict causing\nthe startup process to send a lot of signals.\n\n\n> Also, what would you do about this that wouldn't involve adding kernel calls\n> for gettimeofday? Admittedly, if we only do that when we're about to sleep,\n> maybe it's not so awful; but it's still adding complexity that I'm\n> unconvinced is warranted.\n\nAt least on !windows, pg_usleep() uses nanosleep(), which, when interrupted by\na signal, can return the remaining time until the experation of the timer.\n\nI suspect that on windows computing the time when a signal arrived wouldn't be\nexpensive, compared to all the other overhead implied by our signal handling\nemulation.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 11 Apr 2024 14:17:11 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-11 16:46:23 -0400, Robert Haas wrote:\n> On Thu, Apr 11, 2024 at 3:52 PM Andres Freund <[email protected]> wrote:\n> > My suspicion is that most of the false positives are caused by lots of signals\n> > interrupting the pg_usleep()s. Because we measure the number of delays, not\n> > the actual time since we've been waiting for the spinlock, signals\n> > interrupting pg_usleep() trigger can very significantly shorten the amount of\n> > time until we consider a spinlock stuck. We should fix that.\n> \n> I mean, go nuts. But <dons asbestos underpants, asbestos regular\n> pants, 2 pair of asbestos socks, 3 asbestos shirts, 2 asbestos\n> jackets, and then hides inside of a flame-proof capsule at the bottom\n> of the Pacific ocean> this is just another thing like query hints,\n> where everybody says \"oh, the right thing to do is fix X or Y or Z and\n> then you won't need it\". But of course it never actually gets fixed\n> well enough that people stop having problems in the real world. And\n> eventually we look like a developer community that cares more about\n> our own opinion about what is right than what the experience of real\n> users actually is.\n\nI don't think that's a particularly apt comparison. If you have spinlocks that\ncannot be acquired within tens of seconds, you're in a really bad situation,\nregardless of whether you crash-restart or not.\n\nWhereas with hints, you might actually be operating perfectly normally when\nusing hints. Never using the wrong plan is also just an order of magnitude\nharder and fuzzier problem than ensuring we don't wait for spinlocks for a\nlong time.\n\n\n> In all seriousness, I'd really like to understand what experience\n> you've had that makes this check seem useful. Because I think all of\n> my experiences with it have been bad. If they weren't, the last good\n> one was a very long time ago.\n\nBy far the most of the stuck spinlocks I've seen were due to bugs in\nout-of-core extensions. 
Absurdly enough, the next common thing probably is due\nto people using gdb to make an uninterruptible process break out of some code,\nwithout a crash-restart, accidentally doing so while a spinlock is held.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 11 Apr 2024 14:30:23 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "On Thu, Apr 11, 2024 at 5:30 PM Andres Freund <[email protected]> wrote:\n> I don't think that's a particularly apt comparison. If you have spinlocks that\n> cannot be acquired within tens of seconds, you're in a really bad situation,\n> regardless of whether you crash-restart or not.\n\nI agree with that. I just don't think panicking makes it better.\n\n> > In all seriousness, I'd really like to understand what experience\n> > you've had that makes this check seem useful. Because I think all of\n> > my experiences with it have been bad. If they weren't, the last good\n> > one was a very long time ago.\n>\n> By far the most of the stuck spinlocks I've seen were due to bugs in\n> out-of-core extensions. Absurdly enough, the next common thing probably is due\n> to people using gdb to make an uninterruptible process break out of some code,\n> without a crash-restart, accidentally doing so while a spinlock is held.\n\nHmm, interesting. I'm glad I haven't seen those extensions. But I\nthink I have seen cases of people attaching gdb to grab a backtrace to\ndebug some problem in production, and failing to detach it within 60\nseconds.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 11 Apr 2024 22:31:21 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Apr 11, 2024 at 5:30 PM Andres Freund <[email protected]> wrote:\n>> By far the most of the stuck spinlocks I've seen were due to bugs in\n>> out-of-core extensions. Absurdly enough, the next common thing probably is due\n>> to people using gdb to make an uninterruptible process break out of some code,\n>> without a crash-restart, accidentally doing so while a spinlock is held.\n\n> Hmm, interesting. I'm glad I haven't seen those extensions. But I\n> think I have seen cases of people attaching gdb to grab a backtrace to\n> debug some problem in production, and failing to detach it within 60\n> seconds.\n\nI don't doubt that there are extensions with bugs of this ilk\n(and I wouldn't bet an appendage that there aren't such bugs\nin core, either). But Robert's question remains: how does\nPANIC'ing after awhile make anything better? I flat out don't\nbelieve the idea that having a backend stuck on a spinlock\nwould otherwise go undetected.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Apr 2024 22:55:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "I wrote:\n> ... But Robert's question remains: how does\n> PANIC'ing after awhile make anything better? I flat out don't\n> believe the idea that having a backend stuck on a spinlock\n> would otherwise go undetected.\n\nOh, wait. After thinking a bit longer I believe I recall the argument\nfor this behavior: it automates recovery from a genuinely stuck\nspinlock. 
If we waited forever, the only way out of that is for a\nDBA to kill -9 the stuck process, which has exactly the same end\nresult as a PANIC, except that it takes a lot longer to put the system\nback in service and perhaps rousts somebody, or several somebodies,\nout of their warm beds to fix it. If you don't have a DBA on-call\n24x7 then that answer looks even worse.\n\nSo there's that. But that's not an argument that we need to be in a\nhurry to timeout; if the built-in reaction time is less than perhaps\n10 minutes you're still miles ahead of the manual solution.\n\nOn the third hand, it's still true that we have no comparable\nbehavior for any other source of system lockups, and it's difficult\nto make a case that stuck spinlocks really need more concern than\nother kinds of bugs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Apr 2024 23:15:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-11 23:15:38 -0400, Tom Lane wrote:\n> I wrote:\n> > ... But Robert's question remains: how does\n> > PANIC'ing after awhile make anything better? I flat out don't\n> > believe the idea that having a backend stuck on a spinlock\n> > would otherwise go undetected.\n> \n> Oh, wait. After thinking a bit longer I believe I recall the argument\n> for this behavior: it automates recovery from a genuinely stuck\n> spinlock. If we waited forever, the only way out of that is for a\n> DBA to kill -9 the stuck process, which has exactly the same end\n> result as a PANIC, except that it takes a lot longer to put the system\n> back in service and perhaps rousts somebody, or several somebodies,\n> out of their warm beds to fix it. If you don't have a DBA on-call\n> 24x7 then that answer looks even worse.\n\nPrecisely. And even if you have a DBA on call 24x7, they need to know that\nthey need to react to something.\n\nToday you can automate getting notified of crash-restarts, by using\n restart_after_crash = false\nand restarting somewhere outside of postgres.\n\nImo that's the only sensible setting for larger production environments,\nalthough I'm sure not everyone agrees with that.\n\n\n> So there's that. But that's not an argument that we need to be in a\n> hurry to timeout; if the built-in reaction time is less than perhaps\n> 10 minutes you're still miles ahead of the manual solution.\n\nThe current timeout is of a hard to determine total time, due to the\nincreasing and wrapping around wait times, but it's normally longer than 60s,\nunless you're interrupted by a lot of signals. 1000 sleeps between 1000 and\n1000000 us.\n\nI think we should make the timeout something predictable and probably somewhat\nlonger.\n\n\n> On the third hand, it's still true that we have no comparable\n> behavior for any other source of system lockups, and it's difficult\n> to make a case that stuck spinlocks really need more concern than\n> other kinds of bugs.\n\nSpinlocks are somewhat more finnicky though, compared to e.g. lwlocks that are\nreleased on error. Lwlocks also take e.g. 
care to hold interrupts so code\ndoesn't just jump out of a section with lwlocks held.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 11 Apr 2024 20:47:37 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-04-11 23:15:38 -0400, Tom Lane wrote:\n>> On the third hand, it's still true that we have no comparable\n>> behavior for any other source of system lockups, and it's difficult\n>> to make a case that stuck spinlocks really need more concern than\n>> other kinds of bugs.\n\n> Spinlocks are somewhat more finnicky though, compared to e.g. lwlocks that are\n> released on error. Lwlocks also take e.g. care to hold interrupts so code\n> doesn't just jump out of a section with lwlocks held.\n\nYeah. I don't think that unifying spinlocks with lwlocks is a great\nidea, precisely because lwlocks have these other small overheads\nin the name of bug detection/prevention. It seems to me that the\ndivision between spinlocks and lwlocks and heavyweight locks is\nbasically a good idea that matches up well with the actual\nrequirements of some different parts of our code. The underlying\nimplementations can change (and have), but the idea of successively\nincreasing amounts of protection against caller error seems sound\nto me.\n\nIf you grant that concept, then the idea that spinlocks need more\nprotection against stuck-ness than the higher-overhead levels do\nseems mighty odd.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Apr 2024 00:03:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-11 20:47:37 -0700, Andres Freund wrote:\n> > So there's that. But that's not an argument that we need to be in a\n> > hurry to timeout; if the built-in reaction time is less than perhaps\n> > 10 minutes you're still miles ahead of the manual solution.\n> \n> The current timeout is of a hard to determine total time, due to the\n> increasing and wrapping around wait times, but it's normally longer than 60s,\n> unless you're interrupted by a lot of signals. 1000 sleeps between 1000 and\n> 1000000 us.\n> \n> I think we should make the timeout something predictable and probably somewhat\n> longer.\n\nFWIW, I just reproduced the scenario with signals. I added tracking of the\ntotal time actually slept and lost to SpinDelayStatus, and added a function to\ntrigger a wait on a spinlock.\n\nTo wait less, I set max_standby_streaming_delay=0.1, but that's just for\neasier testing in isolation. In reality that could have been reached before\nthe spinlock is even acquired.\n\nOn a standby, while a recovery conflict is happening:\nPANIC: XX000: stuck spinlock detected at crashme, path/to/file:line, after 4.38s, lost 127.96s\n\n\nSo right now it's really not hard to trigger the stuck-spinlock logic\ncompletely spuriously. 
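(For reference, the "after ...s, lost ...s" figures above can be produced
with instrumentation roughly along these lines; this is a hypothetical
sketch, and total_slept_us / total_lost_us are assumed extra fields that do
not exist in the real SpinDelayStatus:

    #include <errno.h>
    #include <time.h>

    static void
    instrumented_spin_sleep(SpinDelayStatus *status)
    {
        struct timespec req,
                    rem;
        long        asked_us = status->cur_delay;
        long        slept_us = asked_us;

        req.tv_sec = asked_us / 1000000L;
        req.tv_nsec = (asked_us % 1000000L) * 1000L;

        /* nanosleep reports how much of the request was left unslept */
        if (nanosleep(&req, &rem) == -1 && errno == EINTR)
            slept_us = asked_us - (rem.tv_sec * 1000000L + rem.tv_nsec / 1000L);

        status->total_slept_us += slept_us;             /* assumed field */
        status->total_lost_us += asked_us - slept_us;   /* assumed field */
    }

No extra syscalls are needed for that, which is the nanosleep() point
below.)
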
This doesn't just happen with hot standby, there are\nplenty other sources of lots of signals being sent.\n\n\nTracking the total amount of time spent sleeping doesn't require any\nadditional syscalls, due to the nanosleep()...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 11 Apr 2024 21:41:39 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi Andres,\n\n12.04.2024 07:41, Andres Freund wrote:\n>\n> FWIW, I just reproduced the scenario with signals. I added tracking of the\n> total time actually slept and lost to SpinDelayStatus, and added a function to\n> trigger a wait on a spinlock.\n>\n> To wait less, I set max_standby_streaming_delay=0.1, but that's just for\n> easier testing in isolation. In reality that could have been reached before\n> the spinlock is even acquired.\n>\n> On a standby, while a recovery conflict is happening:\n> PANIC: XX000: stuck spinlock detected at crashme, path/to/file:line, after 4.38s, lost 127.96s\n>\n>\n> So right now it's really not hard to trigger the stuck-spinlock logic\n> completely spuriously. This doesn't just happen with hot standby, there are\n> plenty other sources of lots of signals being sent.\n\nI managed to trigger that logic when trying to construct a reproducer\nfor bug #18426.\n\nWith the following delays added:\n--- a/src/backend/storage/buffer/bufmgr.c\n+++ b/src/backend/storage/buffer/bufmgr.c\n@@ -1776,6 +1776,7 @@ retry:\n       */\n      if (BUF_STATE_GET_REFCOUNT(buf_state) != 0)\n      {\n+pg_usleep(300000L);\n          UnlockBufHdr(buf, buf_state);\n          LWLockRelease(oldPartitionLock);\n          /* safety check: should definitely not be our *own* pin */\n@@ -5549,6 +5550,7 @@ TerminateBufferIO(BufferDesc *buf, bool clear_dirty, uint32 set_flag_bits,\n\n      Assert(buf_state & BM_IO_IN_PROGRESS);\n\n+pg_usleep(300);\n      buf_state &= ~(BM_IO_IN_PROGRESS | BM_IO_ERROR);\n      if (clear_dirty && !(buf_state & BM_JUST_DIRTIED))\n          buf_state &= ~(BM_DIRTY | BM_CHECKPOINT_NEEDED);\n\nand /tmp/temp.config:\nbgwriter_delay = 10\n\nTEMP_CONFIG=/tmp/temp.config make -s check -C src/test/recovery PROVE_TESTS=\"t/032*\"\nfails for me on iterations 22, 23, 37:\n2024-04-12 05:00:17.981 UTC [762336] PANIC:  stuck spinlock detected at WaitBufHdrUnlocked, bufmgr.c:5726\n\nI haven't investigated this case yet.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 12 Apr 2024 08:05:05 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-11 21:41:39 -0700, Andres Freund wrote:\n> FWIW, I just reproduced the scenario with signals. I added tracking of the\n> total time actually slept and lost to SpinDelayStatus, and added a function to\n> trigger a wait on a spinlock.\n>\n> To wait less, I set max_standby_streaming_delay=0.1, but that's just for\n> easier testing in isolation. In reality that could have been reached before\n> the spinlock is even acquired.\n>\n> On a standby, while a recovery conflict is happening:\n> PANIC: XX000: stuck spinlock detected at crashme, path/to/file:line, after 4.38s, lost 127.96s\n>\n>\n> So right now it's really not hard to trigger the stuck-spinlock logic\n> completely spuriously. This doesn't just happen with hot standby, there are\n> plenty other sources of lots of signals being sent.\n\nOh my. 
There's a workload that completely trivially hits this, without even\ntrying hard. LISTEN/NOTIFY.\n\nPANIC: XX000: stuck spinlock detected at crashme, file:line, after 0.000072s, lost 133.027159s\n\nYes, it really triggered in less than 1ms. That was with just one session\ndoing NOTIFYs from a client. There's plenty users that send NOTIFY from\ntriggers, which afaics will result in much higher rates of signals being sent.\n\n\nEven with a bit less NOTIFY traffic, this very obviously gets into the\nterritory where plain scheduling delays will trigger the stuck spinlock logic.\n\nGreetings,\n\nAndres\n\n\n", "msg_date": "Thu, 11 Apr 2024 23:02:41 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> Oh my. There's a workload that completely trivially hits this, without even\n> trying hard. LISTEN/NOTIFY.\n\nHm. Bug in the NOTIFY logic perhaps? Sending that many signals\ncan't be good from a performance POV, whether or not it triggers\nspinlock issues.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 12 Apr 2024 09:43:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-12 09:43:46 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > Oh my. There's a workload that completely trivially hits this, without even\n> > trying hard. LISTEN/NOTIFY.\n> \n> Hm. Bug in the NOTIFY logic perhaps?\n\nI don't think it's a bug, async.c:SignalBackends() will signal once for every\nNOTIFY, regardless of whether the target has already been signaled. You're\ncertainly right that:\n\n> Sending that many signals can't be good from a performance POV, whether or\n> not it triggers spinlock issues.\n\nI wonder if we could switch this to latches, because with latches we'd only\nre-send a signal if the receiving end has already processed the signal. 
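As a strawman for the "don't re-signal a backend that already has a wakeup
pending" idea, something like the following could work; this is a
hypothetical sketch only (names invented, races not fully thought through,
and the SendProcSignal() argument types as in current sources):

    typedef struct ListenerSlot
    {
        pg_atomic_flag wakeup_pending;  /* set while a signal is in flight */
    } ListenerSlot;

    /* sender side, once per listening backend */
    static void
    SignalListenerIfNeeded(ListenerSlot *slot, pid_t pid, ProcNumber procno)
    {
        /* test_set succeeds only if no wakeup is already pending */
        if (pg_atomic_test_set_flag(&slot->wakeup_pending))
            (void) SendProcSignal(pid, PROCSIG_NOTIFY_INTERRUPT, procno);
    }

    /*
     * receiver side: clear the flag *before* starting to drain the queue,
     * so a notification arriving mid-drain still triggers a fresh signal
     */
    static void
    AckListenerWakeup(ListenerSlot *slot)
    {
        pg_atomic_clear_flag(&slot->wakeup_pending);
    }
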
Or\nalternatively, make procsignal.c do something similar, although that might\ntake a bit of work to be done race-free.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 12 Apr 2024 08:45:27 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "12.04.2024 08:05, Alexander Lakhin wrote:\n> 2024-04-12 05:00:17.981 UTC [762336] PANIC:  stuck spinlock detected at WaitBufHdrUnlocked, bufmgr.c:5726\n>\n\nIt looks like that spinlock issue caused by a race condition/deadlock.\nWhat I see when the test fails is:\nA client backend executing \"DROP DATABASE conflict_db\" performs\ndropdb() -> DropDatabaseBuffers() -> InvalidateBuffer()\nAt the same time, bgwriter performs (for the same buffer):\nBgBufferSync() -> SyncOneBuffer()\n\nWhen InvalidateBuffer() is called, the buffer refcount is zero,\nthen bgwriter pins the buffer, thus increases refcount;\nInvalidateBuffer() gets into the retry loop;\nbgwriter calls UnpinBuffer() -> UnpinBufferNoOwner() ->\n   WaitBufHdrUnlocked(), which waits for !BM_LOCKED state,\nwhile InvalidateBuffer() waits for the buffer refcount decrease.\n\nAs it turns out, it's not related to spinlocks' specifics or PRNG, just a\nserendipitous find.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Fri, 12 Apr 2024 19:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nGiven that I found several ways to spuriously trigger the stuck spinlock\nlogic, I think we need to backpatch a fix.\n\n\nOn 2024-04-11 21:41:39 -0700, Andres Freund wrote:\n> Tracking the total amount of time spent sleeping doesn't require any\n> additional syscalls, due to the nanosleep()...\n\nI did quickly implement this, and it works nicely on unix. Unfortunately I\ncouldn't find something similar for windows. Which would mean we'd need to\nrecord the time before every sleep.\n\nAnother issue with that approach is that we'd need to add a field to\nSpinDelayStatus, which'd be an ABI break. Not sure there's any external code\nusing it, but if there were, it'd be a fairly nasty, rarely hit, path.\n\nAfaict there's no padding that could be reused on 32 bit platforms [1].\n\n\nI wonder if, for easy backpatching, the easiest solution is to just reset\nerrno before calling pg_usleep(), and only increment status->delays if\nerrno != EINTR. Given our pg_usleep() implementations, that'd preven the stuck\nspinlock logic from triggering too fast.\n\n\nIf the ABI issue weren't there, we could record the current time the first\ntime we sleep during a spinlock acquisition (there's an existing branch for\nthat) and then only trigger the stuck spinlock detection when enough time has\npassed. That'd be fine from an efficiency POV.\n\n\nEven if we decide to do so, I don't think it'd be a good idea to remove the\nstuck logic alltogether in the backbranches. 
That might take out services that\nhave been running ok-ish for a long time.\n\nGreetings,\n\nAndres Freund\n\n\n[1] But we are wasting a bunch of space on 64bit platforms due to switching\nbetween pointers and 32bit integers, should probably fix that.\n\n\n", "msg_date": "Fri, 12 Apr 2024 11:33:17 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-12 11:33:17 -0700, Andres Freund wrote:\n> I wonder if, for easy backpatching, the easiest solution is to just reset\n> errno before calling pg_usleep(), and only increment status->delays if\n> errno != EINTR. Given our pg_usleep() implementations, that'd preven the stuck\n> spinlock logic from triggering too fast.\n\nHere's a patch implementing this approach. I confirmed that before we trigger\nthe stuck spinlock logic very quickly and after we don't. However, if most\nsleeps are interrupted, it can delay the stuck spinlock detection a good\nbit. But that seems much better than triggering it too quickly.\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 12 Apr 2024 12:33:02 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "On Fri, Apr 12, 2024 at 3:33 PM Andres Freund <[email protected]> wrote:\n> Here's a patch implementing this approach. I confirmed that before we trigger\n> the stuck spinlock logic very quickly and after we don't. However, if most\n> sleeps are interrupted, it can delay the stuck spinlock detection a good\n> bit. But that seems much better than triggering it too quickly.\n\n+1 for doing something about this. I'm not sure if it goes far enough,\nbut it definitely seems much better than doing nothing. Given your\nfindings, I'm honestly kind of surprised that I haven't seen problems\nof this type more frequently. And I think the general idea of not\ncounting the waits if they're interrupted makes sense. Sure, it's not\ngoing to be 100% accurate, but it's got to be way better for the timer\nto trigger too slowly than too quickly. Perhaps that's too glib of me,\ngiven that I'm not sure we should even have a timer, but even if we\nstipulate that the panic is useful in some cases, spurious panics are\nstill really bad.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Apr 2024 10:54:16 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "Hi,\n\nOn 2024-04-15 10:54:16 -0400, Robert Haas wrote:\n> On Fri, Apr 12, 2024 at 3:33 PM Andres Freund <[email protected]> wrote:\n> > Here's a patch implementing this approach. I confirmed that before we trigger\n> > the stuck spinlock logic very quickly and after we don't. However, if most\n> > sleeps are interrupted, it can delay the stuck spinlock detection a good\n> > bit. But that seems much better than triggering it too quickly.\n>\n> +1 for doing something about this. I'm not sure if it goes far enough,\n> but it definitely seems much better than doing nothing.\n\nOne thing I started to be worried about is whether a patch ought to prevent\nthe timeout used by perform_spin_delay() from increasing when\ninterrupted. 
Otherwise a few signals can trigger quite long waits.\n\nBut as a I can't quite see a way to make this accurate in the backbranches, I\nsuspect something like what I posted is still a good first version.\n\n\n> Given your findings, I'm honestly kind of surprised that I haven't seen\n> problems of this type more frequently.\n\nSame. I did a bunch of searches for the error, but found surprisingly\nlittle.\n\nI think in practice most spinlocks just aren't contended enough to reach\nperform_spin_delay(). And we have improved some on that over time.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 16 Apr 2024 08:54:01 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" }, { "msg_contents": "FWIW, yesterday we had one more reproduction of stuck spinlock panic which does not seem as a stuck spinlock.\n\nI don’t see any valuable diagnostic information. The reproduction happened on hot standby. There’s a message in logs on primary at the same time, but does not seem to be releated:\n\"process 3918804 acquired ShareLock on transaction 909261926 after 2716.594 ms\"\nPostgreSQL 14.11\nVM with this node does not seem heavily loaded, according to monitoring there were just 2 busy backends before panic shutdown.\n\n\n> On 16 Apr 2024, at 20:54, Andres Freund <[email protected]> wrote:\n> \n> Hi,\n> \n> On 2024-04-15 10:54:16 -0400, Robert Haas wrote:\n>> On Fri, Apr 12, 2024 at 3:33 PM Andres Freund <[email protected]> wrote:\n>>> Here's a patch implementing this approach. I confirmed that before we trigger\n>>> the stuck spinlock logic very quickly and after we don't. However, if most\n>>> sleeps are interrupted, it can delay the stuck spinlock detection a good\n>>> bit. But that seems much better than triggering it too quickly.\n>> \n>> +1 for doing something about this. I'm not sure if it goes far enough,\n>> but it definitely seems much better than doing nothing.\n> \n> One thing I started to be worried about is whether a patch ought to prevent\n> the timeout used by perform_spin_delay() from increasing when\n> interrupted. Otherwise a few signals can trigger quite long waits.\n> \n> But as a I can't quite see a way to make this accurate in the backbranches, I\n> suspect something like what I posted is still a good first version.\n> \n\n\nWhat kind of inaccuracy do you see?\nThe code in performa_spin_delay() does not seem to be much different across REL_11_STABLE..REL_12_STABLE.\nThe only difference I see is how random number is generated.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Tue, 11 Jun 2024 11:26:38 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Issue with the PRNG used by Postgres" } ]
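The fix discussed in the thread above boils down to counting only sleeps that ran to completion toward the stuck-spinlock limit. The following standalone C program is a minimal sketch of that errno/EINTR pattern only, assuming nothing beyond POSIX: it is not PostgreSQL source, and the constants, variable names and the SIGALRM-based "signal storm" are invented for illustration. An interval timer interrupts 1 ms sleeps, and interrupted sleeps are excluded from the counter that would feed the stuck-spinlock check.

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <time.h>

static void
on_sigalrm(int signo)
{
	(void) signo;				/* nothing to do; we only need the EINTR */
}

int
main(void)
{
	struct sigaction sa;
	struct itimerval itv;
	int			delays = 0;		/* sleeps that ran to completion */
	int			interrupted = 0;	/* sleeps cut short by a signal */
	int			i;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = on_sigalrm;	/* nanosleep() is never restarted, so each
								 * signal surfaces as EINTR */
	sigemptyset(&sa.sa_mask);
	sigaction(SIGALRM, &sa, NULL);

	/* fire SIGALRM every 300us to emulate a signal storm (recovery
	 * conflicts, LISTEN/NOTIFY traffic, ...) */
	memset(&itv, 0, sizeof(itv));
	itv.it_interval.tv_usec = 300;
	itv.it_value.tv_usec = 300;
	setitimer(ITIMER_REAL, &itv, NULL);

	for (i = 0; i < 2000; i++)
	{
		struct timespec ts = {0, 1000 * 1000};	/* 1 ms, in the ballpark of
												 * the shortest spin-delay
												 * sleep */

		errno = 0;
		if (nanosleep(&ts, NULL) == -1 && errno == EINTR)
			interrupted++;		/* not counted toward the "stuck" limit */
		else
			delays++;			/* only completed sleeps count */
	}

	printf("completed sleeps counted: %d, interrupted (ignored): %d\n",
		   delays, interrupted);
	return 0;
}

Run on Linux, this typically reports nearly every sleep as interrupted, which illustrates both halves of the discussion: spurious signals no longer inflate the counter, but detection of a genuinely stuck lock is correspondingly delayed when most sleeps are interrupted.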
[ { "msg_contents": "Hi, Hackers\nI have observed that there has been a paucity of discussion concerning the parallel replay of WAL logs. \nThe most recent discourse on this subject was during the PGCon event in 2023, where it was noted that PostgreSQL utilizes a single process for WAL replay. \nHowever, when configuring primary-secondary replication, there exists a tangible scenario where the primary accumulates an excessive backlog that the secondary cannot replay promptly.\nThis situation prompts me to question whether it is pertinent to consider integrating a parallel replay feature. \nSuch a functionality could potentially mitigate the risk of delayed WAL application on replicas and enhance overall system resilience and performance.\nI am keen to hear your thoughts on this issue and whether you share the view that parallel WAL replay is a necessity that we should collaboratively explore further.\nThank you for your attention to this matter.\nBest regards,\nDavid Fan\nHi, HackersI have observed that there has been a paucity of discussion concerning the parallel replay of WAL logs. The most recent discourse on this subject was during the PGCon event in 2023, where it was noted that PostgreSQL utilizes a single process for WAL replay. However, when configuring primary-secondary replication, there exists a tangible scenario where the primary accumulates an excessive backlog that the secondary cannot replay promptly.This situation prompts me to question whether it is pertinent to consider integrating a parallel replay feature. Such a functionality could potentially mitigate the risk of delayed WAL application on replicas and enhance overall system resilience and performance.I am keen to hear your thoughts on this issue and whether you share the view that parallel WAL replay is a necessity that we should collaboratively explore further.Thank you for your attention to this matter.Best regards,David Fan", "msg_date": "Tue, 9 Apr 2024 15:16:24 +0800 (CST)", "msg_from": "=?GBK?B?t7bI89Tz?= <[email protected]>", "msg_from_op": true, "msg_subject": "Parallel Recovery in PostgreSQL" }, { "msg_contents": "On Tue, 9 Apr 2024 at 13:12, 范润泽 <[email protected]> wrote:\n>\n> Hi, Hackers\n> I have observed that there has been a paucity of discussion concerning the parallel replay of WAL logs.\n> The most recent discourse on this subject was during the PGCon event in 2023, where it was noted that PostgreSQL utilizes a single process for WAL replay.\n> However, when configuring primary-secondary replication, there exists a tangible scenario where the primary accumulates an excessive backlog that the secondary cannot replay promptly.\n> This situation prompts me to question whether it is pertinent to consider integrating a parallel replay feature.\n> Such a functionality could potentially mitigate the risk of delayed WAL application on replicas and enhance overall system resilience and performance.\n> I am keen to hear your thoughts on this issue and whether you share the view that parallel WAL replay is a necessity that we should collaboratively explore further.\n\nI think we should definitely explore this further, yes.\n\nNote that parallel WAL replay is not the only thing we can improve\nhere: A good part of WAL redo time is spent in reading and validating\nthe WAL records. 
If we can move that part to a parallel worker/thread,\nthat could probably improve the throughput by a good margin.\n\nThen there is another issue with parallel recovery that I also called\nout at PGCon 2023: You can't just reorder WAL records in a simple\npage- or relation-based parallelization approach.\n\nIndexes often contain transient information, of which replay isn't\neasily parallelizable with other relation's redo due to their\ndependencies on other relation's pages while guaranteeing correct\nquery results.\n\nE.g., the index insertion of page 2 in one backend's operations order\n{ 1) Page 1: heap insertion, 2) Page 2: index insert, 3) commit }\ncannot be reordered to before the heap page insertion at 1, because\nthat'd allow a concurrent index access after replay of 2), but before\nreplay of 1), to see an \"all-visible\" page 1 in its index scan, while\nthe heap tuple of the index entry wasn't even inserted yet. Index-only\nscans could thus return invalid results.\n\nSee also the wiki page [0] on parallel recovery, and Koichi-san's\nrepository [1] with his code for parallel replay based on PG 14.6.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://wiki.postgresql.org/wiki/Parallel_Recovery\n[1] https://github.com/koichi-szk/postgres/commits/parallel_replay_14_6/\n\n\n", "msg_date": "Tue, 9 Apr 2024 14:27:32 +0200", "msg_from": "Matthias van de Meent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Recovery in PostgreSQL" } ]
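As a concrete illustration of the "move reading and validating into its own worker" idea mentioned above, and only of that idea, here is a minimal two-thread C sketch. It is not PostgreSQL code; the record layout, queue size and all names are invented for illustration. One thread "reads and validates" records into a bounded queue while the main thread applies them strictly in LSN order, so none of the index/heap reordering hazards described by Matthias arise.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define QUEUE_LEN	64
#define NUM_RECORDS	100000

typedef struct DecodedRecord
{
	long		lsn;			/* stand-in for a real WAL pointer */
	int			payload;		/* stand-in for decoded, CRC-checked data */
} DecodedRecord;

static DecodedRecord queue[QUEUE_LEN];
static int	q_head = 0,
			q_tail = 0,
			q_count = 0;
static bool reader_done = false;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t q_not_full = PTHREAD_COND_INITIALIZER;

/* stand-in for "read the next record from pg_wal and verify its CRC" */
static bool
read_and_validate(long lsn, DecodedRecord *rec)
{
	if (lsn >= NUM_RECORDS)
		return false;
	rec->lsn = lsn;
	rec->payload = (int) (lsn % 13);
	return true;
}

static void *
reader_thread(void *arg)
{
	DecodedRecord rec;
	long		lsn = 0;

	(void) arg;
	while (read_and_validate(lsn++, &rec))
	{
		pthread_mutex_lock(&q_lock);
		while (q_count == QUEUE_LEN)
			pthread_cond_wait(&q_not_full, &q_lock);
		queue[q_tail] = rec;
		q_tail = (q_tail + 1) % QUEUE_LEN;
		q_count++;
		pthread_cond_signal(&q_not_empty);
		pthread_mutex_unlock(&q_lock);
	}
	pthread_mutex_lock(&q_lock);
	reader_done = true;
	pthread_cond_signal(&q_not_empty);
	pthread_mutex_unlock(&q_lock);
	return NULL;
}

int
main(void)
{
	pthread_t	reader;
	long		applied = 0;

	pthread_create(&reader, NULL, reader_thread, NULL);

	/* the redo side: still applies records one by one, in LSN order */
	for (;;)
	{
		DecodedRecord rec;

		pthread_mutex_lock(&q_lock);
		while (q_count == 0 && !reader_done)
			pthread_cond_wait(&q_not_empty, &q_lock);
		if (q_count == 0)
		{
			pthread_mutex_unlock(&q_lock);
			break;
		}
		rec = queue[q_head];
		q_head = (q_head + 1) % QUEUE_LEN;
		q_count--;
		pthread_cond_signal(&q_not_full);
		pthread_mutex_unlock(&q_lock);

		applied++;				/* stand-in for actually replaying rec */
	}

	pthread_join(reader, NULL);
	printf("applied %ld records in order\n", applied);
	return 0;
}

Compile with -pthread. The point is only the shape of the pipeline: decoding and validation overlap with apply work while the apply order stays serial, which is the less invasive of the two improvements discussed in this thread.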
[ { "msg_contents": "hi.\n`\n | NESTED [ PATH ] json_path_specification [ AS json_path_name ]\nCOLUMNS ( json_table_column [, ...] )\nNESTED [ PATH ] json_path_specification [ AS json_path_name ] COLUMNS\n( json_table_column [, ...] )\n`\n \"json_path_specification\" should be \"path_expression\"?\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\ndrop table s1;\ncreate or replace function random_text_1000() returns text\nas $$select string_agg(md5(random()::text),'') from\ngenerate_Series(1,1000) s $$ LANGUAGE SQL;\n\ncreate unlogged table s1(a int GENERATED BY DEFAULT AS IDENTITY, js jsonb);\ninsert into s1(js)\nselect jsonb ('{\"a\":{\"za\":[{\"z1\": [11,2222]},{\"z21\": [22, 234,' || g\n|| ']},{\"z22\": [32, 204,145]}]},\"c\": ' || g\n|| ',\"id\": \"' || random_text_1000() || '\"}')\nfrom generate_series(1_000_000, 1_000_000) g;\ninsert into s1(js)\nselect jsonb ('{\"a\":{\"za\":[{\"z1\": [11,2222]},{\"z21\": [22, 234,' || g\n|| ']},{\"z22\": [32, 204,145]}]},\"c\": ' || g\n|| ',\"id\": \"' || random_text_1000() || '\"}')\nfrom generate_series(235, 235 + 200000,1) g;\n\nselect count(*), pg_size_pretty(pg_total_relation_size('s1')) from s1;\n count | pg_size_pretty\n--------+----------------\n 200002 | 6398 MB\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nexplain(analyze, costs off,buffers, timing)\nSELECT sub.*, s.a as s_a FROM s,\n(values(23)) x(x),\ngenerate_series(13, 13) y,\nJSON_TABLE(js, '$' AS c1 PASSING x AS x, y AS y\nCOLUMNS (\nxx1 int PATH '$.c',\nNESTED PATH '$.a.za[2]' COLUMNS (NESTED PATH '$.z22[*]' as z22 COLUMNS\n(c int PATH '$')),\nNESTED PATH '$.a.za[1]' COLUMNS (d json PATH '$ ? (@.z21[*] == ($\"x\" -1))'),\nNESTED PATH '$.a.za[0]' COLUMNS (NESTED PATH '$.z1[*] ? (@ >= ($\"x\"\n-2))' as z1 COLUMNS (a int PATH '$')),\nNESTED PATH '$.a.za[1]' COLUMNS\n(NESTED PATH '$.z21[*] ? (@ >= ($\"y\" +121))' as z21 COLUMNS (b int\nPATH '$ ? (@ <= ($\"x\" + 999976))' default -1000 ON EMPTY))\n)) sub;\n\nfor one jsonb, it can expand to 7 rows, the above query will return\naround 1.4 million rows.\ni use the above query, and pg_log_backend_memory_contexts in another\nsession to check the memory usage.\ndidn't find memory over consumed issue.\n\nbut I am not sure this is the right way to check the memory consumption.\n----------------------------------------------------------------------------------------------------------------------\nbegin;\nSELECT sub.*, s.a as s_a FROM s,\n(values(23)) x(x),\ngenerate_series(13, 13) y,\nJSON_TABLE(js, '$' AS c1 PASSING x AS x, y AS y\nCOLUMNS (\nxx1 int PATH '$.c',\nNESTED PATH '$.a.za[2]' COLUMNS (NESTED PATH '$.z22[*]' as z22 COLUMNS\n(c int PATH '$')),\nNESTED PATH '$.a.za[1]' COLUMNS (d json PATH '$ ? (@.z21[*] == ($\"x\" -1))'),\nNESTED PATH '$.a.za[0]' COLUMNS (NESTED PATH '$.z1[*] ? (@ >= ($\"x\"\n-2))' as z1 COLUMNS (a int PATH '$')),\nNESTED PATH '$.a.za[1]' COLUMNS\n(NESTED PATH '$.z21[*] ? (@ >= ($\"y\" +121))' as z21 COLUMNS (b int\nPATH '$ ? 
(@ <= ($\"x\" + 999976))' error ON EMPTY))\n)) sub;\nrollback;\n\nonly the last row will fail, because of \"error ON EMPTY\", (1_000_000\n<= 23 + 999976) is false.\nI remember the very previous patch, because of error cleanup, it took\na lot of resources.\ndoes our current implementation, only the very last row fail, will it\nbe easy to clean up the transaction?\n\n\nthe last query error message is:\n`\nERROR: no SQL/JSON item\n`\n\nwe are in ExecEvalJsonExprPath, can we output it to be:\n`\nERROR: after applying json_path \"5s\", no SQL/JSON item found\n`\nin a json_table query, we can have multiple path_expressions, like the\nabove query.\nit's not easy to know applying which path_expression failed.\n\n\n", "msg_date": "Tue, 9 Apr 2024 15:47:03 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "sql/json remaining issue" }, { "msg_contents": "Hi,\n\nOn Tue, Apr 9, 2024 at 4:47 PM jian he <[email protected]> wrote:\n>\n> hi.\n> `\n> | NESTED [ PATH ] json_path_specification [ AS json_path_name ]\n> COLUMNS ( json_table_column [, ...] )\n> NESTED [ PATH ] json_path_specification [ AS json_path_name ] COLUMNS\n> ( json_table_column [, ...] )\n> `\n> \"json_path_specification\" should be \"path_expression\"?\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> drop table s1;\n> create or replace function random_text_1000() returns text\n> as $$select string_agg(md5(random()::text),'') from\n> generate_Series(1,1000) s $$ LANGUAGE SQL;\n>\n> create unlogged table s1(a int GENERATED BY DEFAULT AS IDENTITY, js jsonb);\n> insert into s1(js)\n> select jsonb ('{\"a\":{\"za\":[{\"z1\": [11,2222]},{\"z21\": [22, 234,' || g\n> || ']},{\"z22\": [32, 204,145]}]},\"c\": ' || g\n> || ',\"id\": \"' || random_text_1000() || '\"}')\n> from generate_series(1_000_000, 1_000_000) g;\n> insert into s1(js)\n> select jsonb ('{\"a\":{\"za\":[{\"z1\": [11,2222]},{\"z21\": [22, 234,' || g\n> || ']},{\"z22\": [32, 204,145]}]},\"c\": ' || g\n> || ',\"id\": \"' || random_text_1000() || '\"}')\n> from generate_series(235, 235 + 200000,1) g;\n>\n> select count(*), pg_size_pretty(pg_total_relation_size('s1')) from s1;\n> count | pg_size_pretty\n> --------+----------------\n> 200002 | 6398 MB\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> explain(analyze, costs off,buffers, timing)\n> SELECT sub.*, s.a as s_a FROM s,\n> (values(23)) x(x),\n> generate_series(13, 13) y,\n> JSON_TABLE(js, '$' AS c1 PASSING x AS x, y AS y\n> COLUMNS (\n> xx1 int PATH '$.c',\n> NESTED PATH '$.a.za[2]' COLUMNS (NESTED PATH '$.z22[*]' as z22 COLUMNS\n> (c int PATH '$')),\n> NESTED PATH '$.a.za[1]' COLUMNS (d json PATH '$ ? (@.z21[*] == ($\"x\" -1))'),\n> NESTED PATH '$.a.za[0]' COLUMNS (NESTED PATH '$.z1[*] ? (@ >= ($\"x\"\n> -2))' as z1 COLUMNS (a int PATH '$')),\n> NESTED PATH '$.a.za[1]' COLUMNS\n> (NESTED PATH '$.z21[*] ? (@ >= ($\"y\" +121))' as z21 COLUMNS (b int\n> PATH '$ ? 
(@ <= ($\"x\" + 999976))' default -1000 ON EMPTY))\n> )) sub;\n>\n> for one jsonb, it can expand to 7 rows, the above query will return\n> around 1.4 million rows.\n> i use the above query, and pg_log_backend_memory_contexts in another\n> session to check the memory usage.\n> didn't find memory over consumed issue.\n>\n> but I am not sure this is the right way to check the memory consumption.\n> ----------------------------------------------------------------------------------------------------------------------\n> begin;\n> SELECT sub.*, s.a as s_a FROM s,\n> (values(23)) x(x),\n> generate_series(13, 13) y,\n> JSON_TABLE(js, '$' AS c1 PASSING x AS x, y AS y\n> COLUMNS (\n> xx1 int PATH '$.c',\n> NESTED PATH '$.a.za[2]' COLUMNS (NESTED PATH '$.z22[*]' as z22 COLUMNS\n> (c int PATH '$')),\n> NESTED PATH '$.a.za[1]' COLUMNS (d json PATH '$ ? (@.z21[*] == ($\"x\" -1))'),\n> NESTED PATH '$.a.za[0]' COLUMNS (NESTED PATH '$.z1[*] ? (@ >= ($\"x\"\n> -2))' as z1 COLUMNS (a int PATH '$')),\n> NESTED PATH '$.a.za[1]' COLUMNS\n> (NESTED PATH '$.z21[*] ? (@ >= ($\"y\" +121))' as z21 COLUMNS (b int\n> PATH '$ ? (@ <= ($\"x\" + 999976))' error ON EMPTY))\n> )) sub;\n> rollback;\n>\n> only the last row will fail, because of \"error ON EMPTY\", (1_000_000\n> <= 23 + 999976) is false.\n> I remember the very previous patch, because of error cleanup, it took\n> a lot of resources.\n> does our current implementation, only the very last row fail, will it\n> be easy to clean up the transaction?\n\nI am not sure I understand your concern. Could you please rephrase\nit? Which previous patch are you referring to and what problem did it\ncause with respect to error cleanup?\n\nPer-row memory allocated for each successful output row JSON_TABLE()\ndoesn't pile up, because it's allocated in a context that is reset\nafter evaluating each row; see tfuncLoadRows(). But again I may be\nmisunderstanding your concern.\n\n> the last query error message is:\n> `\n> ERROR: no SQL/JSON item\n> `\n>\n> we are in ExecEvalJsonExprPath, can we output it to be:\n> `\n> ERROR: after applying json_path \"5s\", no SQL/JSON item found\n> `\n> in a json_table query, we can have multiple path_expressions, like the\n> above query.\n> it's not easy to know applying which path_expression failed.\n\nHmm, I'm not so sure about mentioning the details of the path because\npath names are optional and printing path expression itself is not a\ngood idea. Perhaps, we could mention the column name which would\nalways be there, but we'd then need to add a new field column_name\nthat's optionally set to JsonFuncExpr and JsonExpr, that is, when they\nare being set up for JSON_TABLE() columns. 
As shown in the attached.\nWith the patch you'll get:\n\nERROR: no SQL/JSON item found for column \"b\"\n\n--\nThanks, Amit Langote", "msg_date": "Tue, 9 Apr 2024 20:37:29 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql/json remaining issue" }, { "msg_contents": "On Tue, Apr 9, 2024 at 8:37 PM Amit Langote <[email protected]> wrote:\n> On Tue, Apr 9, 2024 at 4:47 PM jian he <[email protected]> wrote:\n> > the last query error message is:\n> > `\n> > ERROR: no SQL/JSON item\n> > `\n> >\n> > we are in ExecEvalJsonExprPath, can we output it to be:\n> > `\n> > ERROR: after applying json_path \"5s\", no SQL/JSON item found\n> > `\n> > in a json_table query, we can have multiple path_expressions, like the\n> > above query.\n> > it's not easy to know applying which path_expression failed.\n>\n> Hmm, I'm not so sure about mentioning the details of the path because\n> path names are optional and printing path expression itself is not a\n> good idea. Perhaps, we could mention the column name which would\n> always be there, but we'd then need to add a new field column_name\n> that's optionally set to JsonFuncExpr and JsonExpr, that is, when they\n> are being set up for JSON_TABLE() columns. As shown in the attached.\n> With the patch you'll get:\n>\n> ERROR: no SQL/JSON item found for column \"b\"\n\nAttached is a bit more polished version of that, which also addresses\nthe error messages in JsonPathQuery() and JsonPathValue(). I noticed\nthat there was comment I had written at one point during JSON_TABLE()\nhacking that said that we should be doing this.\n\nI've also added an open item for this.\n\n-- \nThanks, Amit Langote", "msg_date": "Wed, 10 Apr 2024 17:39:21 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql/json remaining issue" }, { "msg_contents": "On Wed, Apr 10, 2024 at 4:39 PM Amit Langote <[email protected]> wrote:\n>\n>\n> Attached is a bit more polished version of that, which also addresses\n> the error messages in JsonPathQuery() and JsonPathValue(). I noticed\n> that there was comment I had written at one point during JSON_TABLE()\n> hacking that said that we should be doing this.\n>\n> I've also added an open item for this.\n>\n\n`\n | NESTED [ PATH ] json_path_specification [ AS json_path_name ]\nCOLUMNS ( json_table_column [, ...] )\nNESTED [ PATH ] json_path_specification [ AS json_path_name ] COLUMNS\n( json_table_column [, ...] 
)\n`\n \"json_path_specification\" should be \"path_expression\"?\n\nyour explanation about memory usage is clear to me!\n\n\nThe following are minor cosmetic issues while applying v2.\n+errmsg(\"JSON path expression in JSON_VALUE should return singleton\nscalar item\")));\n\"singleton\" is not intuitive to me.\nThen I looked around.\nhttps://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=singleton\nThere is only one appearance of \"singleton\" in the manual.\nthen I wonder what's the difference between\n22038 ERRCODE_SINGLETON_SQL_JSON_ITEM_REQUIRED\n2203F ERRCODE_SQL_JSON_SCALAR_REQUIRED\n\ni assume '{\"hello\":\"world\"}' is a singleton, but not a scalar item?\nif so, then I think the error message within the \"if (count > 1)\"\nbranch in JsonPathValue\nshould use ERRCODE_SINGLETON_SQL_JSON_ITEM_REQUIRED\nwithin the \"if (!IsAJsonbScalar(res))\" branch should use\nERRCODE_SQL_JSON_SCALAR_REQUIRED\n?\n\n\nerrhint(\"Use WITH WRAPPER clause to wrap SQL/JSON item sequence into array.\")));\nmaybe\nerrhint(\"Use WITH WRAPPER clause to wrap SQL/JSON items into array.\")));\nor\nerrhint(\"Use WITH WRAPPER clause to wrap SQL/JSON item sequences into\narray.\")));\n\n\n+ if (column_name)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_MORE_THAN_ONE_SQL_JSON_ITEM),\n+ errmsg(\"JSON path expression for column \\\"%s\\\" should return\nsingleton scalar item\",\n+ column_name)));\n+ else\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SQL_JSON_SCALAR_REQUIRED),\n+ errmsg(\"JSON path expression in JSON_VALUE should return singleton\nscalar item\")));\nthe error message seems similar, but the error code is different?\nboth within \"if (count > 1)\" and \"if (!IsAJsonbScalar(res))\" branch.\n\n\nin src/include/utils/jsonpath.h, comments\n/* SQL/JSON item */\nshould be\n/* SQL/JSON query functions */\n\n\nelog(ERROR, \"unrecognized json wrapper %d\", wrapper);\nshould be\nelog(ERROR, \"unrecognized json wrapper %d\", (int) wrapper);\n\n\n", "msg_date": "Thu, 11 Apr 2024 11:01:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sql/json remaining issue" }, { "msg_contents": "On Thu, Apr 11, 2024 at 12:02 PM jian he <[email protected]> wrote:\n> On Wed, Apr 10, 2024 at 4:39 PM Amit Langote <[email protected]> wrote:\n> > Attached is a bit more polished version of that, which also addresses\n> > the error messages in JsonPathQuery() and JsonPathValue(). I noticed\n> > that there was comment I had written at one point during JSON_TABLE()\n> > hacking that said that we should be doing this.\n> >\n> > I've also added an open item for this.\n>\n> `\n> | NESTED [ PATH ] json_path_specification [ AS json_path_name ]\n> COLUMNS ( json_table_column [, ...] )\n> NESTED [ PATH ] json_path_specification [ AS json_path_name ] COLUMNS\n> ( json_table_column [, ...] )\n> `\n> \"json_path_specification\" should be \"path_expression\"?\n\nFixed in 0002.\n\n> your explanation about memory usage is clear to me!\n>\n>\n> The following are minor cosmetic issues while applying v2.\n> +errmsg(\"JSON path expression in JSON_VALUE should return singleton\n> scalar item\")));\n> \"singleton\" is not intuitive to me.\n> Then I looked around.\n> https://www.postgresql.org/search/?u=%2Fdocs%2F16%2F&q=singleton\n> There is only one appearance of \"singleton\" in the manual.\n\nYes, singleton is a term used a lot in the source code but let's keep\nit out of error messages and docs. 
So fixed.\n\n> errhint(\"Use WITH WRAPPER clause to wrap SQL/JSON item sequence into array.\")));\n> maybe\n> errhint(\"Use WITH WRAPPER clause to wrap SQL/JSON items into array.\")));\n> or\n> errhint(\"Use WITH WRAPPER clause to wrap SQL/JSON item sequences into\n> array.\")));\n\nChanged to use \"SQL/JSON items into array.\".\n\n> then I wonder what's the difference between\n> 22038 ERRCODE_SINGLETON_SQL_JSON_ITEM_REQUIRED\n> 2203F ERRCODE_SQL_JSON_SCALAR_REQUIRED\n>\n> i assume '{\"hello\":\"world\"}' is a singleton, but not a scalar item?\n> if so, then I think the error message within the \"if (count > 1)\"\n> branch in JsonPathValue\n> should use ERRCODE_SINGLETON_SQL_JSON_ITEM_REQUIRED\n> within the \"if (!IsAJsonbScalar(res))\" branch should use\n> ERRCODE_SQL_JSON_SCALAR_REQUIRED\n> ?\n> + if (column_name)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_MORE_THAN_ONE_SQL_JSON_ITEM),\n> + errmsg(\"JSON path expression for column \\\"%s\\\" should return\n> singleton scalar item\",\n> + column_name)));\n> + else\n> + ereport(ERROR,\n> + (errcode(ERRCODE_SQL_JSON_SCALAR_REQUIRED),\n> + errmsg(\"JSON path expression in JSON_VALUE should return singleton\n> scalar item\")));\n> the error message seems similar, but the error code is different?\n> both within \"if (count > 1)\" and \"if (!IsAJsonbScalar(res))\" branch.\n\nUsing different error codes for the same error is a copy-paste mistake\non my part. Fixed.\n\n> in src/include/utils/jsonpath.h, comments\n> /* SQL/JSON item */\n> should be\n> /* SQL/JSON query functions */\n>\n>\n> elog(ERROR, \"unrecognized json wrapper %d\", wrapper);\n> should be\n> elog(ERROR, \"unrecognized json wrapper %d\", (int) wrapper);\n\nFixed in 0003.\n\n-- \nThanks, Amit Langote", "msg_date": "Fri, 12 Apr 2024 18:43:57 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql/json remaining issue" }, { "msg_contents": "On Fri, Apr 12, 2024 at 5:44 PM Amit Langote <[email protected]> wrote:\n>\n> > elog(ERROR, \"unrecognized json wrapper %d\", wrapper);\n> > should be\n> > elog(ERROR, \"unrecognized json wrapper %d\", (int) wrapper);\n>\n> Fixed in 0003.\n>\nthe fix seems not in 0003?\nother than that, everything looks fine.\n\n\n<programlisting>\nSELECT * FROM JSON_TABLE (\n'{\"favorites\":\n {\"movies\":\n [{\"name\": \"One\", \"director\": \"John Doe\"},\n {\"name\": \"Two\", \"director\": \"Don Joe\"}],\n \"books\":\n [{\"name\": \"Mystery\", \"authors\": [{\"name\": \"Brown Dan\"}]},\n {\"name\": \"Wonder\", \"authors\": [{\"name\": \"Jun Murakami\"},\n{\"name\":\"Craig Doe\"}]}]\n}}'::json, '$.favs[*]'\nCOLUMNS (user_id FOR ORDINALITY,\n NESTED '$.movies[*]'\n COLUMNS (\n movie_id FOR ORDINALITY,\n mname text PATH '$.name',\n director text),\n NESTED '$.books[*]'\n COLUMNS (\n book_id FOR ORDINALITY,\n bname text PATH '$.name',\n NESTED '$.authors[*]'\n COLUMNS (\n author_id FOR ORDINALITY,\n author_name text PATH '$.name'))));\n</programlisting>\n\nI actually did run the query, it returns null.\n'$.favs[*]'\nshould be\n'$.favorites[*]'\n\n\n\none more minor thing, I previously mentioned in getJsonPathVariable\nereport(ERROR,\n(errcode(ERRCODE_UNDEFINED_OBJECT),\nerrmsg(\"could not find jsonpath variable \\\"%s\\\"\",\npnstrdup(varName, varNameLength))));\n\ndo we need to remove pnstrdup?\n\n\n", "msg_date": "Sat, 13 Apr 2024 22:12:47 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sql/json remaining issue" }, { "msg_contents": 
"hi.\nhttps://wiki.postgresql.org/wiki/PostgreSQL_17_Open_Items#Open_Issues\nissue: Problems with deparsed SQL/JSON query function\n\noriginal the bug report link:\nhttps://postgr.es/m/CACJufxEqhqsfrg_p7EMyo5zak3d767iFDL8vz_4%[email protected]\n\nforgive me for putting it in the new email thread.\nI made the following change, added several tests on it.\n\n--- a/src/backend/parser/parse_expr.c\n+++ b/src/backend/parser/parse_expr.c\n@@ -4636,10 +4636,10 @@ transformJsonBehavior(ParseState *pstate,\nJsonBehavior *behavior,\n {\n expr = transformExprRecurse(pstate, behavior->expr);\n if (!IsA(expr, Const) && !IsA(expr, FuncExpr) &&\n- !IsA(expr, OpExpr))\n+ !IsA(expr, OpExpr) && !IsA(expr, CoerceViaIO) && !IsA(expr, CoerceToDomain))\n ereport(ERROR,\n (errcode(ERRCODE_DATATYPE_MISMATCH),\n- errmsg(\"can only specify a constant, non-aggregate function, or\noperator expression for DEFAULT\"),\n+ errmsg(\"can only specify a constant, non-aggregate function, or\noperator expression or cast expression for DEFAULT\"),\n parser_errposition(pstate, exprLocation(expr))));\n if (contain_var_clause(expr))\n ereport(ERROR,\n\nthese two expression node also looks like Const:\nCoerceViaIO: \"foo1\"'::jsonb::text\nCoerceToDomain: 'foo'::jsonb_test_domain\n\nwe need to deal with these two, otherwise we cannot use domain type in\nDEFAULT expression.\nalso the following should not fail:\nSELECT JSON_VALUE(jsonb '{\"d1\": \"foo\"}', '$.a2' returning text DEFAULT\n'\"foo1\"'::text::json::text ON ERROR);\n\n\nwe have `if (contain_var_clause(expr))` further check it,\nso it should be fine?", "msg_date": "Mon, 15 Apr 2024 13:03:30 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sql/json remaining issue" }, { "msg_contents": "Hi,\n\nOn Sat, Apr 13, 2024 at 11:12 PM jian he <[email protected]> wrote:\n> On Fri, Apr 12, 2024 at 5:44 PM Amit Langote <[email protected]> wrote:\n> >\n> > > elog(ERROR, \"unrecognized json wrapper %d\", wrapper);\n> > > should be\n> > > elog(ERROR, \"unrecognized json wrapper %d\", (int) wrapper);\n> >\n> > Fixed in 0003.\n> >\n> the fix seems not in 0003?\n> other than that, everything looks fine.\n>\n>\n> <programlisting>\n> SELECT * FROM JSON_TABLE (\n> '{\"favorites\":\n> {\"movies\":\n> [{\"name\": \"One\", \"director\": \"John Doe\"},\n> {\"name\": \"Two\", \"director\": \"Don Joe\"}],\n> \"books\":\n> [{\"name\": \"Mystery\", \"authors\": [{\"name\": \"Brown Dan\"}]},\n> {\"name\": \"Wonder\", \"authors\": [{\"name\": \"Jun Murakami\"},\n> {\"name\":\"Craig Doe\"}]}]\n> }}'::json, '$.favs[*]'\n> COLUMNS (user_id FOR ORDINALITY,\n> NESTED '$.movies[*]'\n> COLUMNS (\n> movie_id FOR ORDINALITY,\n> mname text PATH '$.name',\n> director text),\n> NESTED '$.books[*]'\n> COLUMNS (\n> book_id FOR ORDINALITY,\n> bname text PATH '$.name',\n> NESTED '$.authors[*]'\n> COLUMNS (\n> author_id FOR ORDINALITY,\n> author_name text PATH '$.name'))));\n> </programlisting>\n>\n> I actually did run the query, it returns null.\n> '$.favs[*]'\n> should be\n> '$.favorites[*]'\n\nOops, fixed.\n\nI've combined these patches into one -- attached 0001. 
Will push tomorrow.\n\n> one more minor thing, I previously mentioned in getJsonPathVariable\n> ereport(ERROR,\n> (errcode(ERRCODE_UNDEFINED_OBJECT),\n> errmsg(\"could not find jsonpath variable \\\"%s\\\"\",\n> pnstrdup(varName, varNameLength))));\n>\n> do we need to remove pnstrdup?\n\nLooking at this again, it seems like that's necessary because varName,\nbeing a string extracted from JsonPathItem, is not necessarily\nnull-terminated. There are many pndstrdup()s in jsonpath_exec.c\nbecause of that aspect.\n\nNow studying the JsonBehavior DEFAULT expression issue and your patch.\n\n-- \nThanks, Amit Langote", "msg_date": "Mon, 15 Apr 2024 21:46:31 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql/json remaining issue" }, { "msg_contents": "On Mon, Apr 15, 2024 at 9:46 PM Amit Langote <[email protected]> wrote:\n> On Sat, Apr 13, 2024 at 11:12 PM jian he <[email protected]> wrote:\n> > On Fri, Apr 12, 2024 at 5:44 PM Amit Langote <[email protected]> wrote:\n> > >\n> > > > elog(ERROR, \"unrecognized json wrapper %d\", wrapper);\n> > > > should be\n> > > > elog(ERROR, \"unrecognized json wrapper %d\", (int) wrapper);\n> > >\n> > > Fixed in 0003.\n> > >\n> > the fix seems not in 0003?\n> > other than that, everything looks fine.\n\nOops, really fixed now in 0002.\n\n> I've combined these patches into one -- attached 0001. Will push tomorrow.\n\nDecided to break the error message improvement patch into its own\nafter all -- attached 0001.\n\n> Now studying the JsonBehavior DEFAULT expression issue and your patch.\n\nI found some more coercion-related expression nodes that must also be\nchecked along with CoerceViaIO and CoerceToDomain. Also, after fixing\nthe code to allow them, I found that we'd need to also check\nrecursively whether their argument expression is also one of the\nsupported expression nodes. Also, I decided that it's not necessary\nto include \"cast\" in the error message as one of the supported\nexpressions.\n\nWill push all today.\n\n-- \nThanks, Amit Langote", "msg_date": "Thu, 18 Apr 2024 09:33:23 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql/json remaining issue" }, { "msg_contents": "On Thu, Apr 18, 2024 at 9:33 AM Amit Langote <[email protected]> wrote:\n> On Mon, Apr 15, 2024 at 9:46 PM Amit Langote <[email protected]> wrote:\n> > On Sat, Apr 13, 2024 at 11:12 PM jian he <[email protected]> wrote:\n> > > On Fri, Apr 12, 2024 at 5:44 PM Amit Langote <[email protected]> wrote:\n> > > >\n> > > > > elog(ERROR, \"unrecognized json wrapper %d\", wrapper);\n> > > > > should be\n> > > > > elog(ERROR, \"unrecognized json wrapper %d\", (int) wrapper);\n> > > >\n> > > > Fixed in 0003.\n> > > >\n> > > the fix seems not in 0003?\n> > > other than that, everything looks fine.\n>\n> Oops, really fixed now in 0002.\n>\n> > I've combined these patches into one -- attached 0001. Will push tomorrow.\n>\n> Decided to break the error message improvement patch into its own\n> after all -- attached 0001.\n>\n> > Now studying the JsonBehavior DEFAULT expression issue and your patch.\n>\n> I found some more coercion-related expression nodes that must also be\n> checked along with CoerceViaIO and CoerceToDomain. Also, after fixing\n> the code to allow them, I found that we'd need to also check\n> recursively whether their argument expression is also one of the\n> supported expression nodes. 
Also, I decided that it's not necessary\n> to include \"cast\" in the error message as one of the supported\n> expressions.\n>\n> Will push all today.\n\nTotally forgot to drop a note here that I pushed those and marked the\n2 open items as resolved.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Fri, 26 Apr 2024 12:34:10 +0900", "msg_from": "Amit Langote <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sql/json remaining issue" } ]
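For readers who want to see the improved diagnostics from this thread, the following tiny libpq client is a sketch only: it assumes a PostgreSQL 17 development build containing the patches discussed above, the connection string is a placeholder, and the exact error wording is taken from the thread rather than guaranteed. It runs a JSON_TABLE() column with ERROR ON EMPTY whose path matches nothing, so the server should name the offending column in the error.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
	/* assumed connection string; adjust for your environment */
	PGconn	   *conn = PQconnectdb("dbname=postgres");
	PGresult   *res;

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		PQfinish(conn);
		return 1;
	}

	/*
	 * '$.b' matches nothing and ERROR ON EMPTY asks for an error, so with
	 * the patches above the server should report something like:
	 *   ERROR:  no SQL/JSON item found for column "b"
	 */
	res = PQexec(conn,
				 "SELECT * FROM JSON_TABLE(jsonb '{\"a\": 1}', '$' "
				 "COLUMNS (b int PATH '$.b' ERROR ON EMPTY))");

	if (PQresultStatus(res) != PGRES_TUPLES_OK)
		printf("server said: %s", PQresultErrorMessage(res));
	else
		printf("query unexpectedly succeeded\n");

	PQclear(res);
	PQfinish(conn);
	return 0;
}

Build with something like: cc demo.c -I$(pg_config --includedir) -L$(pg_config --libdir) -lpq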
[ { "msg_contents": "Hello,\n\nWhile doing some work/research on the new incremental backup feature\nsome limitations were not listed in the docs. Mainly the fact that\npg_combienbackup works with plain format and not tar.\n\nAround the same time, Tomas Vondra tested incremental backups with a\ncluster where he enabled checksums after taking the previous full\nbackup. After combining the backups the synthetic backup had pages\nwith checksums and other pages without checksums which ended in\nchecksum errors.\n\nI've attached two patches, the first one is just neat-picking things I\nfound when I first read the docs. The second has a note on the two\nlimitations listed above. The limitation on incremental backups of a\ncluster that had checksums enabled after the previous backup, I was\nnot sure if that should go in pg_basebackup or pg_combienbackup\nreference documentation. Or maybe somewhere else.\n\nKind regards, Martín\n\n-- \nMartín Marqués\nIt’s not that I have something to hide,\nit’s that I have nothing I want you to see", "msg_date": "Tue, 9 Apr 2024 09:59:07 +0200", "msg_from": "=?UTF-8?B?TWFydMOtbiBNYXJxdcOpcw==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Add notes to pg_combinebackup docs" }, { "msg_contents": "\n\nOn 4/9/24 09:59, Martín Marqués wrote:\n> Hello,\n> \n> While doing some work/research on the new incremental backup feature\n> some limitations were not listed in the docs. Mainly the fact that\n> pg_combienbackup works with plain format and not tar.\n> \n\nRight. The docs mostly imply this by talking about output directory and\nbackup directories, but making it more explicit would not hurt.\n\nFWIW it'd be great if we could make incremental backups work with tar\nformat in the future too. People probably don't want to keep around the\nexpanded data directory or extract everything before combining the\nbackups is not very convenient. Reading and writing the tar would make\nthis simpler.\n\n> Around the same time, Tomas Vondra tested incremental backups with a\n> cluster where he enabled checksums after taking the previous full\n> backup. After combining the backups the synthetic backup had pages\n> with checksums and other pages without checksums which ended in\n> checksum errors.\n> \n\nI'm not sure just documenting this limitation is sufficient. We can't\nmake the incremental backups work in this case (it's as if someone\nmesses with cluster without writing stuff into WAL), but I think we\nshould do better than silently producing (seemingly) corrupted backups.\n\nI say seemingly, because the backup is actually fine, the only problem\nis it has checksums enabled in the controlfile, but the pages from the\nfull backup (and the early incremental backups) have no checksums.\n\nWhat we could do is detect this in pg_combinebackup, and either just\ndisable checksums with a warning and hint to maybe enable them again. Or\nmaybe just print that the user needs to disable them.\n\nI was thinking maybe we could detect this while taking the backups, and\nforce taking a full backup if checksums got enabled since the last\nbackup. But we can't do that because we only have the manifest from the\nlast backup, and the manifest does not include info about checksums.\n\nIt's a bit unfortunate we don't have a way to enable checksums online.\nThat'd not have this problem IIRC, because it writes proper WAL. Maybe\nit's time to revive that idea ... 
I recall there were some concerns\nabout tracking progress to allow resuming stuff, but maybe not having\nanything because in some (rare?) cases it'd need to do more work does\nnot seem like a great trade off.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 9 Apr 2024 11:44:57 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "On 4/9/24 19:44, Tomas Vondra wrote:\n> \n> On 4/9/24 09:59, Martín Marqués wrote:\n>> Hello,\n>>\n>> While doing some work/research on the new incremental backup feature\n>> some limitations were not listed in the docs. Mainly the fact that\n>> pg_combienbackup works with plain format and not tar.\n>>\n> \n> Right. The docs mostly imply this by talking about output directory and\n> backup directories, but making it more explicit would not hurt.\n> \n> FWIW it'd be great if we could make incremental backups work with tar\n> format in the future too. People probably don't want to keep around the\n> expanded data directory or extract everything before combining the\n> backups is not very convenient. Reading and writing the tar would make\n> this simpler.\n\nI have a hard time seeing this feature as being very useful, especially \nfor large databases, until pg_combinebackup works on tar (and compressed \ntar). Right now restoring an incremental requires at least twice the \nspace of the original cluster, which is going to take a lot of users by \nsurprise.\n\nI know you have made some improvements here for COW filesystems, but my \nexperience is that Postgres is generally not run on such filesystems, \nthough that is changing a bit.\n\n>> Around the same time, Tomas Vondra tested incremental backups with a\n>> cluster where he enabled checksums after taking the previous full\n>> backup. After combining the backups the synthetic backup had pages\n>> with checksums and other pages without checksums which ended in\n>> checksum errors.\n> \n> I'm not sure just documenting this limitation is sufficient. We can't\n> make the incremental backups work in this case (it's as if someone\n> messes with cluster without writing stuff into WAL), but I think we\n> should do better than silently producing (seemingly) corrupted backups.\n> \n> I say seemingly, because the backup is actually fine, the only problem\n> is it has checksums enabled in the controlfile, but the pages from the\n> full backup (and the early incremental backups) have no checksums.\n> \n> What we could do is detect this in pg_combinebackup, and either just\n> disable checksums with a warning and hint to maybe enable them again. Or\n> maybe just print that the user needs to disable them.\n> \n> I was thinking maybe we could detect this while taking the backups, and\n> force taking a full backup if checksums got enabled since the last\n> backup. But we can't do that because we only have the manifest from the\n> last backup, and the manifest does not include info about checksums.\n\nI'd say making a new full backup is the right thing to do in this case. 
\nIt should be easy enough to store the checksum state of the cluster in \nthe manifest.\n\nRegards,\n-David\n\n\n", "msg_date": "Thu, 11 Apr 2024 10:01:00 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "On 4/11/24 02:01, David Steele wrote:\n> On 4/9/24 19:44, Tomas Vondra wrote:\n>>\n>> On 4/9/24 09:59, Martín Marqués wrote:\n>>> Hello,\n>>>\n>>> While doing some work/research on the new incremental backup feature\n>>> some limitations were not listed in the docs. Mainly the fact that\n>>> pg_combienbackup works with plain format and not tar.\n>>>\n>>\n>> Right. The docs mostly imply this by talking about output directory and\n>> backup directories, but making it more explicit would not hurt.\n>>\n>> FWIW it'd be great if we could make incremental backups work with tar\n>> format in the future too. People probably don't want to keep around the\n>> expanded data directory or extract everything before combining the\n>> backups is not very convenient. Reading and writing the tar would make\n>> this simpler.\n> \n> I have a hard time seeing this feature as being very useful, especially\n> for large databases, until pg_combinebackup works on tar (and compressed\n> tar). Right now restoring an incremental requires at least twice the\n> space of the original cluster, which is going to take a lot of users by\n> surprise.\n> \n\nI do agree it'd be nice if pg_combinebackup worked with .tar directly,\nwithout having to extract the directories first. No argument there, but\nas I said in the other thread, I believe that's something we can add\nlater. That's simply how incremental development works.\n\nI can certainly imagine other ways to do pg_combinebackup, e.g. by\n\"merging\" the increments into the data directory, instead of creating a\ncopy. But again, I don't think that has to be in v1.\n\n> I know you have made some improvements here for COW filesystems, but my\n> experience is that Postgres is generally not run on such filesystems,\n> though that is changing a bit.\n> \n\nI'd say XFS is a pretty common choice, for example. And it's one of the\nfilesystems that work great with pg_combinebackup.\n\nHowever, who says this has to be the filesystem the Postgres instance\nruns on? Who in their right mind put backups on the same volume as the\ninstance anyway? At which point it can be a different filesystem, even\nif it's not ideal for running the database.\n\nFWIW I think it's fine to tell users that to minimize the disk space\nrequirements, they should use a CoW filesystem and --copy-file-range.\nThe docs don't say that currently, that's true.\n\nAll of this also depends on how people do the restore. With the CoW\nstuff they can do a quick (and small) copy on the backup server, and\nthen copy the result to the actual instance. Or they can do restore on\nthe target directly (e.g. by mounting a r/o volume with backups), in\nwhich case the CoW won't really help.\n\nBut yeah, having to keep the backups as expanded directories is not\ngreat, I'd love to have .tar. Not necessarily because of the disk space\n(in my experience the compression in filesystems works quite well for\nthis purpose), but mostly because it's more compact and allows working\nwith backups as a single piece of data (e.g. 
it's much cleared what the\nchecksum of a single .tar is, compared to a directory).\n\n>>> Around the same time, Tomas Vondra tested incremental backups with a\n>>> cluster where he enabled checksums after taking the previous full\n>>> backup. After combining the backups the synthetic backup had pages\n>>> with checksums and other pages without checksums which ended in\n>>> checksum errors.\n>>\n>> I'm not sure just documenting this limitation is sufficient. We can't\n>> make the incremental backups work in this case (it's as if someone\n>> messes with cluster without writing stuff into WAL), but I think we\n>> should do better than silently producing (seemingly) corrupted backups.\n>>\n>> I say seemingly, because the backup is actually fine, the only problem\n>> is it has checksums enabled in the controlfile, but the pages from the\n>> full backup (and the early incremental backups) have no checksums.\n>>\n>> What we could do is detect this in pg_combinebackup, and either just\n>> disable checksums with a warning and hint to maybe enable them again. Or\n>> maybe just print that the user needs to disable them.\n>>\n>> I was thinking maybe we could detect this while taking the backups, and\n>> force taking a full backup if checksums got enabled since the last\n>> backup. But we can't do that because we only have the manifest from the\n>> last backup, and the manifest does not include info about checksums.\n> \n> I'd say making a new full backup is the right thing to do in this case.\n> It should be easy enough to store the checksum state of the cluster in\n> the manifest.\n> \n\nAgreed.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 11 Apr 2024 12:51:28 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "\n\nOn 4/11/24 20:51, Tomas Vondra wrote:\n> On 4/11/24 02:01, David Steele wrote:\n>>\n>> I have a hard time seeing this feature as being very useful, especially\n>> for large databases, until pg_combinebackup works on tar (and compressed\n>> tar). Right now restoring an incremental requires at least twice the\n>> space of the original cluster, which is going to take a lot of users by\n>> surprise.\n> \n> I do agree it'd be nice if pg_combinebackup worked with .tar directly,\n> without having to extract the directories first. No argument there, but\n> as I said in the other thread, I believe that's something we can add\n> later. That's simply how incremental development works.\n\nOK, sure, but if the plan is to make it practical later doesn't that \nmake the feature something to be avoided now?\n\n>> I know you have made some improvements here for COW filesystems, but my\n>> experience is that Postgres is generally not run on such filesystems,\n>> though that is changing a bit.\n> \n> I'd say XFS is a pretty common choice, for example. And it's one of the\n> filesystems that work great with pg_combinebackup.\n\nXFS has certainly advanced more than I was aware.\n\n> However, who says this has to be the filesystem the Postgres instance\n> runs on? Who in their right mind put backups on the same volume as the\n> instance anyway? At which point it can be a different filesystem, even\n> if it's not ideal for running the database.\n\nMy experience is these days backups are generally placed in object \nstores. Sure, people are still using NFS but admins rarely have much \ncontrol over those volumes. 
They may or not be COW filesystems.\n\n> FWIW I think it's fine to tell users that to minimize the disk space\n> requirements, they should use a CoW filesystem and --copy-file-range.\n> The docs don't say that currently, that's true.\n\nThat would probably be a good addition to the docs.\n\n> All of this also depends on how people do the restore. With the CoW\n> stuff they can do a quick (and small) copy on the backup server, and\n> then copy the result to the actual instance. Or they can do restore on\n> the target directly (e.g. by mounting a r/o volume with backups), in\n> which case the CoW won't really help.\n\nAnd again, this all requires a significant amount of setup and tooling. \nObviously I believe good backup requires effort but doing this right \ngets very complicated due to the limitations of the tool.\n\n> But yeah, having to keep the backups as expanded directories is not\n> great, I'd love to have .tar. Not necessarily because of the disk space\n> (in my experience the compression in filesystems works quite well for\n> this purpose), but mostly because it's more compact and allows working\n> with backups as a single piece of data (e.g. it's much cleared what the\n> checksum of a single .tar is, compared to a directory).\n\nBut again, object stores are commonly used for backup these days and \nbilling is based on data stored rather than any compression that can be \ndone on the data. Of course, you'd want to store the compressed tars in \nthe object store, but that does mean storing an expanded copy somewhere \nto do pg_combinebackup.\n\nBut if the argument is that all this can/will be fixed in the future, I \nguess the smart thing for users to do is wait a few releases for \nincremental backups to become a practical feature.\n\nRegards,\n-David\n\n\n", "msg_date": "Fri, 12 Apr 2024 08:14:27 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "On Fri, Apr 12, 2024 at 12:14 AM David Steele <[email protected]> wrote:\n\n>\n>\n> On 4/11/24 20:51, Tomas Vondra wrote:\n> > On 4/11/24 02:01, David Steele wrote:\n> >>\n> >> I have a hard time seeing this feature as being very useful, especially\n> >> for large databases, until pg_combinebackup works on tar (and compressed\n> >> tar). Right now restoring an incremental requires at least twice the\n> >> space of the original cluster, which is going to take a lot of users by\n> >> surprise.\n> >\n> > I do agree it'd be nice if pg_combinebackup worked with .tar directly,\n> > without having to extract the directories first. No argument there, but\n> > as I said in the other thread, I believe that's something we can add\n> > later. That's simply how incremental development works.\n>\n> OK, sure, but if the plan is to make it practical later doesn't that\n> make the feature something to be avoided now?\n>\n\nThat could be said for any feature. When we shipped streaming replication,\nthe plan was to support synchronous in the future. Should we not have\nshipped it, or told people to avoid it?\n\nSure, the current state limits it's uses in some cases. But it still leaves\na bunch of other cases where it works just fine.\n\n\n\n\n> >> I know you have made some improvements here for COW filesystems, but my\n> >> experience is that Postgres is generally not run on such filesystems,\n> >> though that is changing a bit.\n> >\n> > I'd say XFS is a pretty common choice, for example. 
And it's one of the\n> > filesystems that work great with pg_combinebackup.\n>\n> XFS has certainly advanced more than I was aware.\n>\n\nAnd it happens to be the default on at least one of our most common\nplatforms.\n\n\n> However, who says this has to be the filesystem the Postgres instance\n> > runs on? Who in their right mind put backups on the same volume as the\n> > instance anyway? At which point it can be a different filesystem, even\n> > if it's not ideal for running the database.\n>\n> My experience is these days backups are generally placed in object\n> stores. Sure, people are still using NFS but admins rarely have much\n> control over those volumes. They may or not be COW filesystems.\n>\n\nIf it's mounted through NFS I assume pg_combinebackup won't actually be\nable to use the COW features? Or does that actually work through NFS?\n\nMounted LUNs on a SAN I find more common today though, and there it would\ndo a fine job.\n\n\n>\n> > FWIW I think it's fine to tell users that to minimize the disk space\n> > requirements, they should use a CoW filesystem and --copy-file-range.\n> > The docs don't say that currently, that's true.\n>\n> That would probably be a good addition to the docs.\n>\n\n+1, that would be a good improvement.\n\n\n> All of this also depends on how people do the restore. With the CoW\n> > stuff they can do a quick (and small) copy on the backup server, and\n> > then copy the result to the actual instance. Or they can do restore on\n> > the target directly (e.g. by mounting a r/o volume with backups), in\n> > which case the CoW won't really help.\n>\n> And again, this all requires a significant amount of setup and tooling.\n> Obviously I believe good backup requires effort but doing this right\n> gets very complicated due to the limitations of the tool.\n>\n\nIt clearly needs to be documented that there are space needs. But\ntemporarily getting space for something like that is not very complicated\nin most environments. But you do have to be aware of it.\n\nGenerally speaking it's already the case that the \"restore experience\" with\npg_basebackup is far from great. We don't have a \"pg_baserestore\". You\nstill have to deal with archive_command and restore_command, which we all\nknow can be easy to get wrong. I don't see how this is fundamentally worse\nthan that.\n\nPersonally, I tend to recommend that \"if you want PITR and thus need to\nmess with archive_command etc, you should use a backup tool like\npg_backrest. If you're fine with just daily backups or whatnot, use\npg_basebackup\". The incremental backup story fits somewhere in between, but\nI'd still say this is (today) primarily a tool directed at those that don't\nneed full PITR.\n\n\n> But yeah, having to keep the backups as expanded directories is not\n> > great, I'd love to have .tar. Not necessarily because of the disk space\n> > (in my experience the compression in filesystems works quite well for\n> > this purpose), but mostly because it's more compact and allows working\n> > with backups as a single piece of data (e.g. it's much cleared what the\n> > checksum of a single .tar is, compared to a directory).\n>\n> But again, object stores are commonly used for backup these days and\n> billing is based on data stored rather than any compression that can be\n> done on the data. Of course, you'd want to store the compressed tars in\n> the object store, but that does mean storing an expanded copy somewhere\n> to do pg_combinebackup.\n>\n\nObject stores are definitely getting more common. 
I wish they were getting\na lot more common than they actually are, because they simplify a lot. But\nthey're in my experience still very far from being a majority.\n\n\nBut if the argument is that all this can/will be fixed in the future, I\n> guess the smart thing for users to do is wait a few releases for\n> incremental backups to become a practical feature.\n>\n\nThere's always going to be another set of goalposts further ahead. I think\nit can still be practical for quite a few people.\n\nI'm more worried about the issue you raised in the other thread about\nmissing files, for example...\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Apr 12, 2024 at 12:14 AM David Steele <[email protected]> wrote:\n\nOn 4/11/24 20:51, Tomas Vondra wrote:\n> On 4/11/24 02:01, David Steele wrote:\n>>\n>> I have a hard time seeing this feature as being very useful, especially\n>> for large databases, until pg_combinebackup works on tar (and compressed\n>> tar). Right now restoring an incremental requires at least twice the\n>> space of the original cluster, which is going to take a lot of users by\n>> surprise.\n> \n> I do agree it'd be nice if pg_combinebackup worked with .tar directly,\n> without having to extract the directories first. No argument there, but\n> as I said in the other thread, I believe that's something we can add\n> later. That's simply how incremental development works.\n\nOK, sure, but if the plan is to make it practical later doesn't that \nmake the feature something to be avoided now?That could be said for any feature. When we shipped streaming replication, the plan was to support synchronous in the future. Should we not have shipped it, or told people to avoid it? Sure, the current state limits it's uses in some cases. But it still leaves a bunch of other cases where it works just fine. \n>> I know you have made some improvements here for COW filesystems, but my\n>> experience is that Postgres is generally not run on such filesystems,\n>> though that is changing a bit.\n> \n> I'd say XFS is a pretty common choice, for example. And it's one of the\n> filesystems that work great with pg_combinebackup.\n\nXFS has certainly advanced more than I was aware.And it happens to be the default on at least one of our most common platforms.\n> However, who says this has to be the filesystem the Postgres instance\n> runs on? Who in their right mind put backups on the same volume as the\n> instance anyway? At which point it can be a different filesystem, even\n> if it's not ideal for running the database.\n\nMy experience is these days backups are generally placed in object \nstores. Sure, people are still using NFS but admins rarely have much \ncontrol over those volumes. They may or not be COW filesystems.If it's mounted through NFS I assume pg_combinebackup won't actually be able to use the COW features? Or does that actually work through NFS?Mounted LUNs on a SAN I find more common today though, and there it would do a fine job. \n\n> FWIW I think it's fine to tell users that to minimize the disk space\n> requirements, they should use a CoW filesystem and --copy-file-range.\n> The docs don't say that currently, that's true.\n\nThat would probably be a good addition to the docs.+1, that would be a good improvement.\n> All of this also depends on how people do the restore. 
With the CoW\n> stuff they can do a quick (and small) copy on the backup server, and\n> then copy the result to the actual instance. Or they can do restore on\n> the target directly (e.g. by mounting a r/o volume with backups), in\n> which case the CoW won't really help.\n\nAnd again, this all requires a significant amount of setup and tooling. \nObviously I believe good backup requires effort but doing this right \ngets very complicated due to the limitations of the tool.It clearly needs to be documented that there are space needs. But temporarily getting space for something like that is not very complicated in most environments. But you do have to be aware of it.Generally speaking it's already the case that the \"restore experience\" with pg_basebackup is far from great. We don't have a \"pg_baserestore\". You still have to deal with archive_command and restore_command, which we all know can be easy to get wrong. I don't see how this is fundamentally worse than that.Personally, I tend to recommend that \"if you want PITR and thus need to mess with archive_command etc, you should use a backup tool like pg_backrest. If you're fine with just daily backups or whatnot, use pg_basebackup\". The incremental backup story fits somewhere in between, but I'd still say this is (today) primarily a tool directed at those that don't need full PITR. \n> But yeah, having to keep the backups as expanded directories is not\n> great, I'd love to have .tar. Not necessarily because of the disk space\n> (in my experience the compression in filesystems works quite well for\n> this purpose), but mostly because it's more compact and allows working\n> with backups as a single piece of data (e.g. it's much cleared what the\n> checksum of a single .tar is, compared to a directory).\n\nBut again, object stores are commonly used for backup these days and \nbilling is based on data stored rather than any compression that can be \ndone on the data. Of course, you'd want to store the compressed tars in \nthe object store, but that does mean storing an expanded copy somewhere \nto do pg_combinebackup.Object stores are definitely getting more common. I wish they were getting a lot more common than they actually are, because they simplify a lot.  But they're in my experience still very far from being a majority.\nBut if the argument is that all this can/will be fixed in the future, I \nguess the smart thing for users to do is wait a few releases for \nincremental backups to become a practical feature.There's always going to be another set of goalposts further ahead. I think it can still be practical for quite a few people.I'm more worried about the issue you raised in the other thread about missing files, for example... --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Fri, 12 Apr 2024 11:09:11 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "On Tue, Apr 9, 2024 at 11:46 AM Tomas Vondra <[email protected]>\nwrote:\n\n>\n>\n> On 4/9/24 09:59, Martín Marqués wrote:\n> > Hello,\n> >\n> > While doing some work/research on the new incremental backup feature\n> > some limitations were not listed in the docs. Mainly the fact that\n> > pg_combienbackup works with plain format and not tar.\n> >\n>\n> Right. 
The docs mostly imply this by talking about output directory and\n> backup directories, but making it more explicit would not hurt.\n>\n> FWIW it'd be great if we could make incremental backups work with tar\n> format in the future too. People probably don't want to keep around the\n> expanded data directory or extract everything before combining the\n> backups is not very convenient. Reading and writing the tar would make\n> this simpler.\n>\n> > Around the same time, Tomas Vondra tested incremental backups with a\n> > cluster where he enabled checksums after taking the previous full\n> > backup. After combining the backups the synthetic backup had pages\n> > with checksums and other pages without checksums which ended in\n> > checksum errors.\n> >\n>\n> I'm not sure just documenting this limitation is sufficient. We can't\n> make the incremental backups work in this case (it's as if someone\n> messes with cluster without writing stuff into WAL), but I think we\n> should do better than silently producing (seemingly) corrupted backups\n\n\n+1. I think that should be an open item that needs to get sorted.\n\n\nI say seemingly, because the backup is actually fine, the only problem\n> is it has checksums enabled in the controlfile, but the pages from the\n> full backup (and the early incremental backups) have no checksums.\n>\n> What we could do is detect this in pg_combinebackup, and either just\n> disable checksums with a warning and hint to maybe enable them again. Or\n> maybe just print that the user needs to disable them.\n>\n\nI don't think either of these should be done automatically. Something like\npg_combinebackup simply failing and requiring you to say\n\"--disable-checksums\" to have it do that would be the way to go, IMO.\n(once we can reliably detect it of course)\n\n\nI was thinking maybe we could detect this while taking the backups, and\n> force taking a full backup if checksums got enabled since the last\n> backup. But we can't do that because we only have the manifest from the\n> last backup, and the manifest does not include info about checksums.\n>\n\nCan we forcibly read and parse it out of pg_control?\n\n\nIt's a bit unfortunate we don't have a way to enable checksums online.\n> That'd not have this problem IIRC, because it writes proper WAL. Maybe\n> it's time to revive that idea ... I recall there were some concerns\n> about tracking progress to allow resuming stuff, but maybe not having\n> anything because in some (rare?) cases it'd need to do more work does\n> not seem like a great trade off.\n>\n>\nFor that one I still think it would be perfectly acceptable to have no\nresume at all, but that's a whole different topic :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Tue, Apr 9, 2024 at 11:46 AM Tomas Vondra <[email protected]> wrote:\n\nOn 4/9/24 09:59, Martín Marqués wrote:\n> Hello,\n> \n> While doing some work/research on the new incremental backup feature\n> some limitations were not listed in the docs. Mainly the fact that\n> pg_combienbackup works with plain format and not tar.\n> \n\nRight. The docs mostly imply this by talking about output directory and\nbackup directories, but making it more explicit would not hurt.\n\nFWIW it'd be great if we could make incremental backups work with tar\nformat in the future too. 
People probably don't want to keep around the\nexpanded data directory or extract everything before combining the\nbackups is not very convenient. Reading and writing the tar would make\nthis simpler.\n\n> Around the same time, Tomas Vondra tested incremental backups with a\n> cluster where he enabled checksums after taking the previous full\n> backup. After combining the backups the synthetic backup had pages\n> with checksums and other pages without checksums which ended in\n> checksum errors.\n> \n\nI'm not sure just documenting this limitation is sufficient. We can't\nmake the incremental backups work in this case (it's as if someone\nmesses with cluster without writing stuff into WAL), but I think we\nshould do better than silently producing (seemingly) corrupted backups+1. I think that should be an open item that needs to get sorted.\nI say seemingly, because the backup is actually fine, the only problem\nis it has checksums enabled in the controlfile, but the pages from the\nfull backup (and the early incremental backups) have no checksums.\n\nWhat we could do is detect this in pg_combinebackup, and either just\ndisable checksums with a warning and hint to maybe enable them again. Or\nmaybe just print that the user needs to disable them.I don't think either of these should be done automatically. Something like pg_combinebackup simply failing and requiring you to say \"--disable-checksums\" to have it do that would be the way to go, IMO.  (once we can reliably detect it of course)\nI was thinking maybe we could detect this while taking the backups, and\nforce taking a full backup if checksums got enabled since the last\nbackup. But we can't do that because we only have the manifest from the\nlast backup, and the manifest does not include info about checksums.Can we forcibly read and parse it out of pg_control?\nIt's a bit unfortunate we don't have a way to enable checksums online.\nThat'd not have this problem IIRC, because it writes proper WAL. Maybe\nit's time to revive that idea ... I recall there were some concerns\nabout tracking progress to allow resuming stuff, but maybe not having\nanything because in some (rare?) cases it'd need to do more work does\nnot seem like a great trade off.\nFor that one I still think it would be perfectly acceptable to have no resume at all, but that's a whole different topic :)--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Fri, 12 Apr 2024 11:12:46 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "On 4/12/24 19:09, Magnus Hagander wrote:\n> On Fri, Apr 12, 2024 at 12:14 AM David Steele <[email protected] \n> \n> OK, sure, but if the plan is to make it practical later doesn't that\n> make the feature something to be avoided now?\n> \n> \n> That could be said for any feature. When we shipped streaming \n> replication, the plan was to support synchronous in the future. Should \n> we not have shipped it, or told people to avoid it?\n\nThis doesn't seem like a great example. Synchronous rep is by far the \nmore used mode in my experience. I actively dissuade people from using \nsync rep because of the downsides. More people think they need it than \nactually need it.\n\n> > However, who says this has to be the filesystem the Postgres instance\n> > runs on? Who in their right mind put backups on the same volume\n> as the\n> > instance anyway? 
At which point it can be a different filesystem,\n> even\n> > if it's not ideal for running the database.\n> \n> My experience is these days backups are generally placed in object\n> stores. Sure, people are still using NFS but admins rarely have much\n> control over those volumes. They may or not be COW filesystems.\n> \n> \n> If it's mounted through NFS I assume pg_combinebackup won't actually be \n> able to use the COW features? Or does that actually work through NFS?\n\nPretty sure it won't work via NFS, but I was wrong about XFS, so...\n\n> Mounted LUNs on a SAN I find more common today though, and there it \n> would do a fine job.\n\nHuh, interesting. This is a case I almost never see anymore.\n\n> > All of this also depends on how people do the restore. With the CoW\n> > stuff they can do a quick (and small) copy on the backup server, and\n> > then copy the result to the actual instance. Or they can do\n> restore on\n> > the target directly (e.g. by mounting a r/o volume with backups), in\n> > which case the CoW won't really help.\n> \n> And again, this all requires a significant amount of setup and tooling.\n> Obviously I believe good backup requires effort but doing this right\n> gets very complicated due to the limitations of the tool.\n> \n> It clearly needs to be documented that there are space needs. But \n> temporarily getting space for something like that is not very \n> complicated in most environments. But you do have to be aware of it.\n\nWe find many environments ridiculously tight on space. There is a \nconstant fight with customers/users to even get the data/WAL volumes \nsized correctly.\n\nFor small databases it is probably not an issue, but this feature really \nshines with very large databases.\n\n> Generally speaking it's already the case that the \"restore experience\" \n> with pg_basebackup is far from great. We don't have a \"pg_baserestore\". \n> You still have to deal with archive_command and restore_command, which \n> we all know can be easy to get wrong. I don't see how this is \n> fundamentally worse than that.\n\nI pretty much agree with this statement. pg_basebackup is already hard \nto use effectively. Now it is just optionally harder.\n\n> Personally, I tend to recommend that \"if you want PITR and thus need to \n> mess with archive_command etc, you should use a backup tool like \n> pg_backrest. If you're fine with just daily backups or whatnot, use \n> pg_basebackup\". The incremental backup story fits somewhere in between, \n> but I'd still say this is (today) primarily a tool directed at those \n> that don't need full PITR.\n\nYeah, there are certainly cases where PITR is not required, but they \nstill seem to be in the minority. PITR cannot be disabled for the most \nrecent backup in pgBackRest and we've had few complaints about that overall.\n\n> > But yeah, having to keep the backups as expanded directories is not\n> > great, I'd love to have .tar. Not necessarily because of the disk\n> space\n> > (in my experience the compression in filesystems works quite well for\n> > this purpose), but mostly because it's more compact and allows\n> working\n> > with backups as a single piece of data (e.g. it's much cleared\n> what the\n> > checksum of a single .tar is, compared to a directory).\n> \n> But again, object stores are commonly used for backup these days and\n> billing is based on data stored rather than any compression that can be\n> done on the data. 
Of course, you'd want to store the compressed tars in\n> the object store, but that does mean storing an expanded copy somewhere\n> to do pg_combinebackup.\n> \n> Object stores are definitely getting more common. I wish they were \n> getting a lot more common than they actually are, because they simplify \n> a lot.  But they're in my experience still very far from being a majority.\n\nI see it the other way, especially the last few years. The majority seem \nto be object stores followed up closely by NFS. Directly mounted storage \non the backup host appears to be rarer.\n\n> But if the argument is that all this can/will be fixed in the future, I\n> guess the smart thing for users to do is wait a few releases for\n> incremental backups to become a practical feature.\n> \n> There's always going to be another set of goalposts further ahead. I \n> think it can still be practical for quite a few people.\n\nSince barman uses pg_basebackup in certain cases I imagine that will end \nup being the way most users access this feature.\n\n> I'm more worried about the issue you raised in the other thread about \n> missing files, for example...\n\nMe, too.\n\nRegards,\n-David\n\n\n", "msg_date": "Fri, 12 Apr 2024 19:50:59 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "On 4/12/24 11:50, David Steele wrote:\n> On 4/12/24 19:09, Magnus Hagander wrote:\n>> On Fri, Apr 12, 2024 at 12:14 AM David Steele <[email protected]\n>>\n>> ...>>\n>>      > But yeah, having to keep the backups as expanded directories is\n>> not\n>>      > great, I'd love to have .tar. Not necessarily because of the disk\n>>     space\n>>      > (in my experience the compression in filesystems works quite\n>> well for\n>>      > this purpose), but mostly because it's more compact and allows\n>>     working\n>>      > with backups as a single piece of data (e.g. it's much cleared\n>>     what the\n>>      > checksum of a single .tar is, compared to a directory).\n>>\n>>     But again, object stores are commonly used for backup these days and\n>>     billing is based on data stored rather than any compression that\n>> can be\n>>     done on the data. Of course, you'd want to store the compressed\n>> tars in\n>>     the object store, but that does mean storing an expanded copy\n>> somewhere\n>>     to do pg_combinebackup.\n>>\n>> Object stores are definitely getting more common. I wish they were\n>> getting a lot more common than they actually are, because they\n>> simplify a lot.  But they're in my experience still very far from\n>> being a majority.\n> \n> I see it the other way, especially the last few years. The majority seem\n> to be object stores followed up closely by NFS. Directly mounted storage\n> on the backup host appears to be rarer.\n> \n\nOne thing I'd mention is that not having built-in support for .tar and\n.tgz backups does not mean it's impossible to use pg_combinebackup with\narchives. You can mount them using e.g. \"ratarmount\" and then use that\nas source directories for pg_combinebackup.\n\nIt's not entirely friction-less because AFAICS it's necessary to do the\nbackup in plain format and then do the .tar to have the expected \"flat\"\ndirectory structure (and not manifest + 2x tar). 
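\n\nTo be concrete, this is roughly what I tried (paths are made up, and the\norder of the directories is full backup first, newest increment last):\n\n    ratarmount full.tar /mnt/full\n    ratarmount incr.tar /mnt/incr\n    pg_combinebackup -o /path/to/restored /mnt/full /mnt/incr\n\n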
But other than that it\nseems to work fine (based on my limited testing).\n\n\nFWIW the \"archivemount\" performs terribly, so adding this capability\ninto pg_combinebackup is clearly far from trivial.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 12 Apr 2024 14:40:36 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "\n\nOn 4/12/24 11:12, Magnus Hagander wrote:\n> On Tue, Apr 9, 2024 at 11:46 AM Tomas Vondra <[email protected]>\n> wrote:\n> \n>>\n>>\n>> On 4/9/24 09:59, Martín Marqués wrote:\n>>> Hello,\n>>>\n>>> While doing some work/research on the new incremental backup feature\n>>> some limitations were not listed in the docs. Mainly the fact that\n>>> pg_combienbackup works with plain format and not tar.\n>>>\n>>\n>> Right. The docs mostly imply this by talking about output directory and\n>> backup directories, but making it more explicit would not hurt.\n>>\n>> FWIW it'd be great if we could make incremental backups work with tar\n>> format in the future too. People probably don't want to keep around the\n>> expanded data directory or extract everything before combining the\n>> backups is not very convenient. Reading and writing the tar would make\n>> this simpler.\n>>\n>>> Around the same time, Tomas Vondra tested incremental backups with a\n>>> cluster where he enabled checksums after taking the previous full\n>>> backup. After combining the backups the synthetic backup had pages\n>>> with checksums and other pages without checksums which ended in\n>>> checksum errors.\n>>>\n>>\n>> I'm not sure just documenting this limitation is sufficient. We can't\n>> make the incremental backups work in this case (it's as if someone\n>> messes with cluster without writing stuff into WAL), but I think we\n>> should do better than silently producing (seemingly) corrupted backups\n> \n> \n> +1. I think that should be an open item that needs to get sorted.\n> \n> \n> I say seemingly, because the backup is actually fine, the only problem\n>> is it has checksums enabled in the controlfile, but the pages from the\n>> full backup (and the early incremental backups) have no checksums.\n>>\n>> What we could do is detect this in pg_combinebackup, and either just\n>> disable checksums with a warning and hint to maybe enable them again. Or\n>> maybe just print that the user needs to disable them.\n>>\n> \n> I don't think either of these should be done automatically. Something like\n> pg_combinebackup simply failing and requiring you to say\n> \"--disable-checksums\" to have it do that would be the way to go, IMO.\n> (once we can reliably detect it of course)\n> \n\nYou mean pg_combinebackup would have \"--disable-checksums\" switch? Yeah,\nthat'd work, I think. It's probably better than producing a backup that\nwould seem broken when the user tries to start the instance.\n\nAlso, I realized the user probably can't disable the checksums without\nstarting the instance to finish recovery. And if there are invalid\nchecksums, I'm not sure that would actually work.\n\n> \n> I was thinking maybe we could detect this while taking the backups, and\n>> force taking a full backup if checksums got enabled since the last\n>> backup. 
But we can't do that because we only have the manifest from the\n>> last backup, and the manifest does not include info about checksums.\n>>\n> \n> Can we forcibly read and parse it out of pg_control?\n> \n\nYou mean when taking the backup, or during pg_combinebackup?\n\nDuring backup, it depends. For the instance we should be able to just\nget that from the instance, no need to get it from pg_control. But for\nthe backup (used as a baseline for the increment) we can't read the\npg_control - the only thing we have is the manifest.\n\nDuring pg_combinebackup we obviously can read pg_control for all the\nbackups to combine, but at that point it feels a bit too late - it does\nnot seem great to do backups, and then at recovery to tell the user the\nbackups are actually not valid.\n\nI think it'd be better to detect this while taking the basebackup.\n\n> \n> It's a bit unfortunate we don't have a way to enable checksums online.\n>> That'd not have this problem IIRC, because it writes proper WAL. Maybe\n>> it's time to revive that idea ... I recall there were some concerns\n>> about tracking progress to allow resuming stuff, but maybe not having\n>> anything because in some (rare?) cases it'd need to do more work does\n>> not seem like a great trade off.\n>>\n>>\n> For that one I still think it would be perfectly acceptable to have no\n> resume at all, but that's a whole different topic :)\n> \n\nI very much agree.\n\nIt seems a bit strange that given three options to enable checksums\n\n1) offline\n2) online without resume\n3) online with resume\n\nthe initial argument was that we need to allow resuming the process\nbecause on large systems it might take a lot of time, and we'd lose all\nthe work if the system restarts. But then we concluded that it's too\ncomplex and it's better if the large systems have to do an extended\noutage to enable checksums ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 12 Apr 2024 15:01:48 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "On Fri, Apr 12, 2024 at 3:01 PM Tomas Vondra <[email protected]>\nwrote:\n\n>\n>\n> On 4/12/24 11:12, Magnus Hagander wrote:\n> > On Tue, Apr 9, 2024 at 11:46 AM Tomas Vondra <\n> [email protected]>\n> > wrote:\n> >\n> >>\n> >>\n> >> On 4/9/24 09:59, Martín Marqués wrote:\n> >>> Hello,\n> >>>\n> >>> While doing some work/research on the new incremental backup feature\n> >>> some limitations were not listed in the docs. Mainly the fact that\n> >>> pg_combienbackup works with plain format and not tar.\n> >>>\n> >>\n> >> Right. The docs mostly imply this by talking about output directory and\n> >> backup directories, but making it more explicit would not hurt.\n> >>\n> >> FWIW it'd be great if we could make incremental backups work with tar\n> >> format in the future too. People probably don't want to keep around the\n> >> expanded data directory or extract everything before combining the\n> >> backups is not very convenient. Reading and writing the tar would make\n> >> this simpler.\n> >>\n> >>> Around the same time, Tomas Vondra tested incremental backups with a\n> >>> cluster where he enabled checksums after taking the previous full\n> >>> backup. 
After combining the backups the synthetic backup had pages\n> >>> with checksums and other pages without checksums which ended in\n> >>> checksum errors.\n> >>>\n> >>\n> >> I'm not sure just documenting this limitation is sufficient. We can't\n> >> make the incremental backups work in this case (it's as if someone\n> >> messes with cluster without writing stuff into WAL), but I think we\n> >> should do better than silently producing (seemingly) corrupted backups\n> >\n> >\n> > +1. I think that should be an open item that needs to get sorted.\n> >\n> >\n> > I say seemingly, because the backup is actually fine, the only problem\n> >> is it has checksums enabled in the controlfile, but the pages from the\n> >> full backup (and the early incremental backups) have no checksums.\n> >>\n> >> What we could do is detect this in pg_combinebackup, and either just\n> >> disable checksums with a warning and hint to maybe enable them again. Or\n> >> maybe just print that the user needs to disable them.\n> >>\n> >\n> > I don't think either of these should be done automatically. Something\n> like\n> > pg_combinebackup simply failing and requiring you to say\n> > \"--disable-checksums\" to have it do that would be the way to go, IMO.\n> > (once we can reliably detect it of course)\n> >\n>\n> You mean pg_combinebackup would have \"--disable-checksums\" switch? Yeah,\n> that'd work, I think. It's probably better than producing a backup that\n> would seem broken when the user tries to start the instance.\n>\n> Also, I realized the user probably can't disable the checksums without\n> starting the instance to finish recovery. And if there are invalid\n> checksums, I'm not sure that would actually work.\n>\n> >\n> > I was thinking maybe we could detect this while taking the backups, and\n> >> force taking a full backup if checksums got enabled since the last\n> >> backup. But we can't do that because we only have the manifest from the\n> >> last backup, and the manifest does not include info about checksums.\n> >>\n> >\n> > Can we forcibly read and parse it out of pg_control?\n> >\n>\n> You mean when taking the backup, or during pg_combinebackup?\n>\n\nYes. That way combining the backups into something that doesn't have proper\nchecksums (either by actually turning them off or as today just breaking\nthem and forcing you to turn it off yourself) can only happen\nintentionally. And if you weren't aware of the problem, it turns into a\nhard error, so you will notice before it's too late.\n\n\nDuring backup, it depends. For the instance we should be able to just\n> get that from the instance, no need to get it from pg_control. But for\n> the backup (used as a baseline for the increment) we can't read the\n> pg_control - the only thing we have is the manifest.\n\n\n> During pg_combinebackup we obviously can read pg_control for all the\n> backups to combine, but at that point it feels a bit too late - it does\n> not seem great to do backups, and then at recovery to tell the user the\n> backups are actually not valid.\n\n\n> I think it'd be better to detect this while taking the basebackup.\n>\n\nAgreed. In the end, we might want to do *both*, but the earlier the better.\n\nBut to do that what we'd need is to add a flag to the initial manifest that\nsays \"this cluster is supposed to have checksum = <on/off>\" and then refuse\nto take an inc if it changes? 
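\n\nSketching it, that could be as small as one extra top-level key in the\nmanifest (the key name is entirely hypothetical, nothing like it exists\ntoday):\n\n    { \"PostgreSQL-Backup-Manifest-Version\": 1,\n      \"Data-Checksums\": \"on\",\n      \"Files\": [ ... ],\n      ... }\n\nand pg_basebackup would compare that against what pg_control says for the\nrunning cluster before agreeing to take the incremental.\n\n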
It doesn't seem like the end of the world to\nadd that to it?\n\n\n\n> > It's a bit unfortunate we don't have a way to enable checksums online.\n> >> That'd not have this problem IIRC, because it writes proper WAL. Maybe\n> >> it's time to revive that idea ... I recall there were some concerns\n> >> about tracking progress to allow resuming stuff, but maybe not having\n> >> anything because in some (rare?) cases it'd need to do more work does\n> >> not seem like a great trade off.\n> >>\n> >>\n> > For that one I still think it would be perfectly acceptable to have no\n> > resume at all, but that's a whole different topic :)\n> >\n>\n> I very much agree.\n>\n> It seems a bit strange that given three options to enable checksums\n>\n> 1) offline\n> 2) online without resume\n> 3) online with resume\n>\n> the initial argument was that we need to allow resuming the process\n> because on large systems it might take a lot of time, and we'd lose all\n> the work if the system restarts. But then we concluded that it's too\n> complex and it's better if the large systems have to do an extended\n> outage to enable checksums ...\n>\n\nIndeed. Or the workaround that still scares the crap out of me where you\nuse a switchover-and-switchback to a replica to \"do the offline thing\nalmost online\". To me that seems a lot scarier than the original option as\nwell.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Fri, Apr 12, 2024 at 3:01 PM Tomas Vondra <[email protected]> wrote:\n\nOn 4/12/24 11:12, Magnus Hagander wrote:\n> On Tue, Apr 9, 2024 at 11:46 AM Tomas Vondra <[email protected]>\n> wrote:\n> \n>>\n>>\n>> On 4/9/24 09:59, Martín Marqués wrote:\n>>> Hello,\n>>>\n>>> While doing some work/research on the new incremental backup feature\n>>> some limitations were not listed in the docs. Mainly the fact that\n>>> pg_combienbackup works with plain format and not tar.\n>>>\n>>\n>> Right. The docs mostly imply this by talking about output directory and\n>> backup directories, but making it more explicit would not hurt.\n>>\n>> FWIW it'd be great if we could make incremental backups work with tar\n>> format in the future too. People probably don't want to keep around the\n>> expanded data directory or extract everything before combining the\n>> backups is not very convenient. Reading and writing the tar would make\n>> this simpler.\n>>\n>>> Around the same time, Tomas Vondra tested incremental backups with a\n>>> cluster where he enabled checksums after taking the previous full\n>>> backup. After combining the backups the synthetic backup had pages\n>>> with checksums and other pages without checksums which ended in\n>>> checksum errors.\n>>>\n>>\n>> I'm not sure just documenting this limitation is sufficient. We can't\n>> make the incremental backups work in this case (it's as if someone\n>> messes with cluster without writing stuff into WAL), but I think we\n>> should do better than silently producing (seemingly) corrupted backups\n> \n> \n> +1. I think that should be an open item that needs to get sorted.\n> \n> \n> I say seemingly, because the backup is actually fine, the only problem\n>> is it has checksums enabled in the controlfile, but the pages from the\n>> full backup (and the early incremental backups) have no checksums.\n>>\n>> What we could do is detect this in pg_combinebackup, and either just\n>> disable checksums with a warning and hint to maybe enable them again. 
Or\n>> maybe just print that the user needs to disable them.\n>>\n> \n> I don't think either of these should be done automatically. Something like\n> pg_combinebackup simply failing and requiring you to say\n> \"--disable-checksums\" to have it do that would be the way to go, IMO.\n> (once we can reliably detect it of course)\n> \n\nYou mean pg_combinebackup would have \"--disable-checksums\" switch? Yeah,\nthat'd work, I think. It's probably better than producing a backup that\nwould seem broken when the user tries to start the instance.\n\nAlso, I realized the user probably can't disable the checksums without\nstarting the instance to finish recovery. And if there are invalid\nchecksums, I'm not sure that would actually work.\n\n> \n> I was thinking maybe we could detect this while taking the backups, and\n>> force taking a full backup if checksums got enabled since the last\n>> backup. But we can't do that because we only have the manifest from the\n>> last backup, and the manifest does not include info about checksums.\n>>\n> \n> Can we forcibly read and parse it out of pg_control?\n> \n\nYou mean when taking the backup, or during pg_combinebackup?Yes. That way combining the backups into something that doesn't have proper checksums (either by actually turning them off or as today just breaking them and forcing you to turn it off yourself) can only happen intentionally. And if you weren't aware of the problem, it turns into a hard error, so you will notice before it's too late.\nDuring backup, it depends. For the instance we should be able to just\nget that from the instance, no need to get it from pg_control. But for\nthe backup (used as a baseline for the increment) we can't read the\npg_control - the only thing we have is the manifest. \n\nDuring pg_combinebackup we obviously can read pg_control for all the\nbackups to combine, but at that point it feels a bit too late - it does\nnot seem great to do backups, and then at recovery to tell the user the\nbackups are actually not valid. \n\nI think it'd be better to detect this while taking the basebackup.Agreed. In the end, we might want to do *both*, but the earlier the better.But to do that what we'd need is to add a flag to the initial manifest that says \"this cluster is supposed to have checksum = <on/off>\" and then refuse to take an inc if it changes? It doesn't seem like the end of the world to add that to it? \n> It's a bit unfortunate we don't have a way to enable checksums online.\n>> That'd not have this problem IIRC, because it writes proper WAL. Maybe\n>> it's time to revive that idea ... I recall there were some concerns\n>> about tracking progress to allow resuming stuff, but maybe not having\n>> anything because in some (rare?) cases it'd need to do more work does\n>> not seem like a great trade off.\n>>\n>>\n> For that one I still think it would be perfectly acceptable to have no\n> resume at all, but that's a whole different topic :)\n> \n\nI very much agree.\n\nIt seems a bit strange that given three options to enable checksums\n\n1) offline\n2) online without resume\n3) online with resume\n\nthe initial argument was that we need to allow resuming the process\nbecause on large systems it might take a lot of time, and we'd lose all\nthe work if the system restarts. But then we concluded that it's too\ncomplex and it's better if the large systems have to do an extended\noutage to enable checksums ...Indeed. 
Or the workaround that still scares the crap out of me where you use a switchover-and-switchback to a replica to \"do the offline thing almost online\". To me that seems a lot scarier than the original option as well. --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Fri, 12 Apr 2024 16:21:31 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "\n\nOn 4/12/24 22:40, Tomas Vondra wrote:\n> On 4/12/24 11:50, David Steele wrote:\n>> On 4/12/24 19:09, Magnus Hagander wrote:\n>>> On Fri, Apr 12, 2024 at 12:14 AM David Steele <[email protected]\n>>>\n>>> ...>>\n>>>      > But yeah, having to keep the backups as expanded directories is\n>>> not\n>>>      > great, I'd love to have .tar. Not necessarily because of the disk\n>>>     space\n>>>      > (in my experience the compression in filesystems works quite\n>>> well for\n>>>      > this purpose), but mostly because it's more compact and allows\n>>>     working\n>>>      > with backups as a single piece of data (e.g. it's much cleared\n>>>     what the\n>>>      > checksum of a single .tar is, compared to a directory).\n>>>\n>>>     But again, object stores are commonly used for backup these days and\n>>>     billing is based on data stored rather than any compression that\n>>> can be\n>>>     done on the data. Of course, you'd want to store the compressed\n>>> tars in\n>>>     the object store, but that does mean storing an expanded copy\n>>> somewhere\n>>>     to do pg_combinebackup.\n>>>\n>>> Object stores are definitely getting more common. I wish they were\n>>> getting a lot more common than they actually are, because they\n>>> simplify a lot.  But they're in my experience still very far from\n>>> being a majority.\n>>\n>> I see it the other way, especially the last few years. The majority seem\n>> to be object stores followed up closely by NFS. Directly mounted storage\n>> on the backup host appears to be rarer.\n>>\n> \n> One thing I'd mention is that not having built-in support for .tar and\n> .tgz backups does not mean it's impossible to use pg_combinebackup with\n> archives. You can mount them using e.g. \"ratarmount\" and then use that\n> as source directories for pg_combinebackup.\n> \n> It's not entirely friction-less because AFAICS it's necessary to do the\n> backup in plain format and then do the .tar to have the expected \"flat\"\n> directory structure (and not manifest + 2x tar). But other than that it\n> seems to work fine (based on my limited testing).\n\nWell, that's certainly convoluted and doesn't really help a lot in terms \nof space consumption, it just shifts the additional space required to \nthe backup side. I doubt this is something we'd be willing to add to our \ndocumentation so it would be up to the user to figure out and script.\n\n> FWIW the \"archivemount\" performs terribly, so adding this capability\n> into pg_combinebackup is clearly far from trivial.\n\nI imagine this would perform pretty badly. And yes, doing it efficiently \nis not trivial but certainly doable. Scanning the tar file and matching \nto entries in the manifest is one way, but I would prefer to store the \noffsets into the tar file in the manifest then assemble an ordered list \nof work to do on each tar file. 
But of course the latter requires a \nmanifest-centric approach, which is not what we have right now.\n\nRegards,\n-David\n\n\n", "msg_date": "Sat, 13 Apr 2024 08:44:29 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "On Tue, Apr 9, 2024 at 4:00 AM Martín Marqués <[email protected]> wrote:\n> I've attached two patches, the first one is just neat-picking things I\n> found when I first read the docs.\n\nI pushed a commit to remove the spurious \"the\" that you found, but I\ndon't really agree with the other changes. They all seem grammatically\ncorrect, but I think they're grammatically correct now, too, and I\nfind it more readable the way it is.\n\nI'll write a separate email about the checksum issue.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Apr 2024 13:08:49 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "On Tue, Apr 9, 2024 at 5:46 AM Tomas Vondra\n<[email protected]> wrote:\n> What we could do is detect this in pg_combinebackup, and either just\n> disable checksums with a warning and hint to maybe enable them again. Or\n> maybe just print that the user needs to disable them.\n>\n> I was thinking maybe we could detect this while taking the backups, and\n> force taking a full backup if checksums got enabled since the last\n> backup. But we can't do that because we only have the manifest from the\n> last backup, and the manifest does not include info about checksums.\n\nSo, as I see it, we have at least five options here:\n\n1. Document that this doesn't work. By \"this doesn't work\" I think\nwhat we mean is: (A) If any of the backups you'd need to combine were\ntaken with checksums off but the last one was taken with checksums on,\nthen you might get checksum failures on the cluster written by\npg_combinebackup. (B) Therefore, after enabling checksums, you should\ntake a new full backup and make future incrementals dependent\nthereupon. (C) But if you don't, then you can (I presume) run recovery\nwith ignore_checksum_failure=true to bring the cluster to a consistent\nstate, stop the database, shut checksums off, optionally also turn\nthem on again, and restart the database. Then presumably everything\nshould be OK, except that you'll have wiped out any real checksum\nfailures that weren't artifacts of the reconstruction process along\nwith the fake ones.\n\n2. As (1), but make check_control_files() emit a warning message when\nthe problem case is detected.\n\n3. As (2), but also add a command-line option to pg_combinebackup to\nflip the checksum flag to false in the control file. Then, if you have\nthe problem case, instead of following the procedure described above,\nyou can just use this option, and enable checksums afterward if you\nwant. It still has the same disadvantage as the procedure described\nabove: any \"real\" checksum failures will be suppressed, too. From a\ndesign perspective, this feels like kind of an ugly wart to me: hey,\nin this one scenario, you have to add --do-something-random or it\ndoesn't work! But I see it's got some votes already, so maybe it's the\nright answer.\n\n4. Add the checksum state to the backup manifest. Then, if someone\ntries to take an incremental backup with checksums on and the\nprecursor backup had checksums off, we could fail. 
A strength of this\nproposal is that it actually stops the problem from happening at\nbackup time, which in general is a whole lot nicer than not noticing a\nproblem until restore time. A possible weakness is that it stops you\nfrom doing something that is ... actually sort of OK. I mean, in the\nstrict sense, the incremental backup isn't valid, because it's going\nto cause checksum failures after reconstruction, but it's valid apart\nfrom the checksums, and those are fixable. I wonder whether users who\nencounter this error message will say \"oh, I'm glad PostgreSQL\nprevented me from doing that\" or \"oh, I'm annoyed that PostgreSQL\nprevented me from doing that.\"\n\n5. At reconstruction time, notice which backups have checksums\nenabled. If the final backup in the chain has them enabled, then\nwhenever we take a block from an earlier backup with checksums\ndisabled, re-checksum the block. As opposed to any of the previous\noptions, this creates a fully correct result, so there's no real need\nto document any restrictions on what you're allowed to do. We might\nneed to document the performance consequences, though: fast file copy\nmethods will have to be disabled, and we'll have to go block by block\nwhile paying the cost of checksum calculation for each one. You might\nbe sad to find out that your reconstruction is a lot slower than you\nwere expecting.\n\nWhile I'm probably willing to implement any of these, I have some\nreservations about attempting (4) or especially (5) after feature\nfreeze. I think there's a pretty decent chance that those fixes will\nturn out to have issues of their own which we'll then need to fix in\nturn. We could perhaps consider doing (2) for now and (5) for a future\nrelease, or something like that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Apr 2024 14:11:14 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "> On 18 Apr 2024, at 20:11, Robert Haas <[email protected]> wrote:\n\n> 2. As (1), but make check_control_files() emit a warning message when\n> the problem case is detected.\n\nBeing in the post-freeze part of the cycle, this seems like the best option.\n\n> 3. As (2), but also add a command-line option to pg_combinebackup to\n> flip the checksum flag to false in the control file.\n\nI don't think this is the way to go, such an option will be plastered on to\nhelpful tutorials which users will copy/paste from even when they don't need\nthe option at all, only to disable checksums when there was no reason for doing\nso.\n\n> We could perhaps consider doing (2) for now and (5) for a future\n> release, or something like that.\n\n+1\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 18 Apr 2024 21:25:24 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "On Thu, Apr 18, 2024 at 3:25 PM Daniel Gustafsson <[email protected]> wrote:\n> I don't think this is the way to go, such an option will be plastered on to\n> helpful tutorials which users will copy/paste from even when they don't need\n> the option at all, only to disable checksums when there was no reason for doing\n> so.\n\nThat's actually a really good point. 
I mean, it's such an obscure\nscenario, maybe it wouldn't make it into the tutorials, but if it\ndoes, it'll never get taken out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Apr 2024 15:29:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "On Thu, Apr 18, 2024 at 3:25 PM Daniel Gustafsson <[email protected]> wrote:\n> > On 18 Apr 2024, at 20:11, Robert Haas <[email protected]> wrote:\n> > 2. As (1), but make check_control_files() emit a warning message when\n> > the problem case is detected.\n>\n> Being in the post-freeze part of the cycle, this seems like the best option.\n\nHere is a patch for that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 22 Apr 2024 14:52:58 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "On Mon, Apr 22, 2024 at 2:52 PM Robert Haas <[email protected]> wrote:\n> On Thu, Apr 18, 2024 at 3:25 PM Daniel Gustafsson <[email protected]> wrote:\n> > > On 18 Apr 2024, at 20:11, Robert Haas <[email protected]> wrote:\n> > > 2. As (1), but make check_control_files() emit a warning message when\n> > > the problem case is detected.\n> >\n> > Being in the post-freeze part of the cycle, this seems like the best option.\n>\n> Here is a patch for that.\n\nIs someone willing to review this patch? cfbot seems happy enough with\nit, and I'd like to get it committed since it's an open item, but I'd\nbe happy to have some review before I do that.\n\nIf nobody reviews or indicates an intent to do so in the next couple\nof days, I'll go ahead and commit this on my own.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Apr 2024 13:16:30 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "> On 22 Apr 2024, at 20:52, Robert Haas <[email protected]> wrote:\n> \n> On Thu, Apr 18, 2024 at 3:25 PM Daniel Gustafsson <[email protected]> wrote:\n>>> On 18 Apr 2024, at 20:11, Robert Haas <[email protected]> wrote:\n>>> 2. As (1), but make check_control_files() emit a warning message when\n>>> the problem case is detected.\n>> \n>> Being in the post-freeze part of the cycle, this seems like the best option.\n> \n> Here is a patch for that.\n\nLGTM; only one small comment which you can ignore if you feel it's not worth\nthe extra words.\n\n+ <literal>pg_combinebackup</literal> when the checksum status of the\n+ cluster has been changed; see\n\nI would have preferred that this sentence included the problematic period for\nthe change, perhaps \"..has been changed after the initial backup.\" or ideally\nsomething even better. In other words, clarifying that if checksums were\nenabled before any backups were taken this limitation is not in play. 
It's not\ncritical as the link aptly documents this, it just seems like the sentence is\ncut short.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 24 Apr 2024 21:08:27 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "On Wed, Apr 24, 2024 at 3:08 PM Daniel Gustafsson <[email protected]> wrote:\n> LGTM; only one small comment which you can ignore if you feel it's not worth\n> the extra words.\n>\n> + <literal>pg_combinebackup</literal> when the checksum status of the\n> + cluster has been changed; see\n>\n> I would have preferred that this sentence included the problematic period for\n> the change, perhaps \"..has been changed after the initial backup.\" or ideally\n> something even better. In other words, clarifying that if checksums were\n> enabled before any backups were taken this limitation is not in play. It's not\n> critical as the link aptly documents this, it just seems like the sentence is\n> cut short.\n\nThis was somewhat deliberate. The phraseology that you propose doesn't\nexactly seem incorrect to me. However, consider the scenario where\nsomeone takes a full backup A, an incremental backup B based on A, and\nanother incremental backup C based on B. Then, they combine A with B\nto produce X, remove A and B, and later combine X with C. When we talk\nabout the \"initial backup\", are we talking about A or X? It doesn't\nquite matter, in the end, because if X has a problem it must be\nbecause A had a similar problem, and it's also sort of meaningless to\ntalk about when X was taken, because it wasn't ever taken from the\norigin server; it was reconstructed. And it doesn't matter what was\nhappening on the origin server at the time it was reconstructed, but\nrather what was happening on the origin server at the time its inputs\nwere taken. Or, in the case of its first input, that could also be a\nreconstruction, in which case the time of that reconstruction doesn't\nmatter either; there can be any number of levels here.\n\nI feel that all of this makes it a little murky to talk about what\nhappened after the \"initial backup\". The actual original full backup\nneed not even exist any more at the time of reconstruction, as in the\nexample above. Now, I think the user will probably still get the\npoint, but I also think they'll probably get the point without the\nextra verbiage. I think that it will be natural for people to imagine\nthat what matters is not whether the checksum status has ever changed,\nbut whether it has changed within the relevant time period, whatever\nthat is exactly. 
If they have a more complicated situation where it's\nhard to reason about what the relevant time period is, my hope is that\nthey'll click on the link and that the longer text they see on the\nother end will help them think the situation through.\n\nAgain, this is not to say that what you're proposing is necessarily\nwrong; I'm just explaining my own thinking.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 12:16:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" }, { "msg_contents": "> On 25 Apr 2024, at 18:16, Robert Haas <[email protected]> wrote:\n> \n> On Wed, Apr 24, 2024 at 3:08 PM Daniel Gustafsson <[email protected]> wrote:\n>> LGTM; only one small comment which you can ignore if you feel it's not worth\n>> the extra words.\n>> \n>> + <literal>pg_combinebackup</literal> when the checksum status of the\n>> + cluster has been changed; see\n>> \n>> I would have preferred that this sentence included the problematic period for\n>> the change, perhaps \"..has been changed after the initial backup.\" or ideally\n>> something even better. In other words, clarifying that if checksums were\n>> enabled before any backups were taken this limitation is not in play. It's not\n>> critical as the link aptly documents this, it just seems like the sentence is\n>> cut short.\n> \n> This was somewhat deliberate.\n\nGiven your reasoning below, +1 on the patch you proposed.\n\n> I think the user will probably still get the\n> point, but I also think they'll probably get the point without the\n> extra verbiage. I think that it will be natural for people to imagine\n> that what matters is not whether the checksum status has ever changed,\n> but whether it has changed within the relevant time period, whatever\n> that is exactly.\n\nFair enough.\n\n> Again, this is not to say that what you're proposing is necessarily\n> wrong; I'm just explaining my own thinking.\n\nGotcha, much appreciated.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 25 Apr 2024 19:57:26 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add notes to pg_combinebackup docs" } ]
[ { "msg_contents": "In b262ad440e we introduced an optimization that drops IS NOT NULL quals\non a NOT NULL column, and reduces IS NULL quals on a NOT NULL column to\nconstant-FALSE. I happened to notice that this is not working correctly\nfor traditional inheritance parents. Traditional inheritance parents\nmight have NOT NULL constraints marked NO INHERIT, while their child\ntables do not have NOT NULL constraints. In such a case, we would have\nproblems when we have removed redundant IS NOT NULL restriction clauses\nof the parent rel, as this could cause NULL values from child tables to\nnot be filtered out, or when we have reduced IS NULL restriction clauses\nof the parent rel to constant-FALSE, as this could cause NULL values\nfrom child tables to not be selected out. As an example, consider\n\ncreate table p (a int);\ncreate table c () inherits (p);\n\nalter table only p alter a set not null;\n\ninsert into c values (null);\n\n-- The IS NOT NULL qual is droped, causing the NULL value from 'c' to\n-- not be filtered out\nexplain (costs off) select * from p where a is not null;\n QUERY PLAN\n-------------------------\n Append\n -> Seq Scan on p p_1\n -> Seq Scan on c p_2\n(3 rows)\n\nselect * from p where a is not null;\n a\n---\n\n(1 row)\n\n-- The IS NULL qual is reduced to constant-FALSE, causing the NULL value\n-- from 'c' to not be selected out\nexplain (costs off) select * from p where a is null;\n QUERY PLAN\n--------------------------\n Result\n One-Time Filter: false\n(2 rows)\n\nselect * from p where a is null;\n a\n---\n(0 rows)\n\n\nTo fix this issue, I think we can avoid calculating notnullattnums for\ninheritance parents in get_relation_info(). Meanwhile, when we populate\nchildrel's base restriction quals from parent rel's quals, we check if\neach qual can be proven always false/true, to apply the optimization we\nhave in b262ad440e to each child. Something like attached.\n\nThis can also be beneficial to partitioned tables in cases where the\nparent table does not have NOT NULL constraints, while some of its child\ntables do. Previously, the optimization introduced in b262ad440e was\nnot applicable in this case. With this change, the optimization can now\nbe applied to each child table that has the right NOT NULL constraints.\n\nThoughts?\n\nThanks\nRichard", "msg_date": "Tue, 9 Apr 2024 17:54:41 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Incorrect handling of IS [NOT] NULL quals on inheritance parents" }, { "msg_contents": "On Tue, 9 Apr 2024 at 21:55, Richard Guo <[email protected]> wrote:\n>\n> In b262ad440e we introduced an optimization that drops IS NOT NULL quals\n> on a NOT NULL column, and reduces IS NULL quals on a NOT NULL column to\n> constant-FALSE. I happened to notice that this is not working correctly\n> for traditional inheritance parents. Traditional inheritance parents\n> might have NOT NULL constraints marked NO INHERIT, while their child\n> tables do not have NOT NULL constraints. 
In such a case, we would have\n> problems when we have removed redundant IS NOT NULL restriction clauses\n> of the parent rel, as this could cause NULL values from child tables to\n> not be filtered out, or when we have reduced IS NULL restriction clauses\n> of the parent rel to constant-FALSE, as this could cause NULL values\n> from child tables to not be selected out.\n\nhmm, yeah, inheritance tables were overlooked.\n\nI looked at the patch and I don't think it's a good idea to skip\nrecording NOT NULL constraints to fix based on the fact that it\nhappens to result in this particular optimisation working correctly.\nIt seems that just makes this work in favour of possibly being wrong\nfor some future optimisation where we have something else that looks\nat the RelOptInfo.notnullattnums and makes some decision that assumes\nthe lack of corresponding notnullattnums member means the column is\nNULLable.\n\nI think a better fix is just to not apply the optimisation for\ninheritance RTEs in add_base_clause_to_rel(). If we do it this way,\nit's only the inh==true RTE that we skip. Remember that there are two\nRangeTblEntries for an inheritance parent. The other one will have\ninh==false, and we can still have the optimisation as that's the one\nthat'll be used in the final plan. It'll be the inh==true one that we\ncopy the quals from in apply_child_basequals(), so we've no need to\nworry about missing baserestrictinfos when applying the base quals to\nthe child.\n\nFor partitioned tables, there's only a single RTE with inh==true.\nWe're free to include the redundant quals there to be applied or\nskipped in apply_child_basequals(). The corresponding RangeTblEntry\nis not going to be scanned in the final plan, so it does not matter\nabout the extra qual.\n\nThe revised patch is attached.\n\nDavid", "msg_date": "Wed, 10 Apr 2024 17:13:22 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of IS [NOT] NULL quals on inheritance parents" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> I think a better fix is just to not apply the optimisation for\n> inheritance RTEs in add_base_clause_to_rel().\n\nIs it worth paying attention to whether the constraint is marked\nconnoinherit? If that involves an extra syscache fetch, I'd tend to\nagree that it's not worth it; but if we can get that info for free\nit seems worthwhile to not break this for inheritance cases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 01:18:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of IS [NOT] NULL quals on inheritance parents" }, { "msg_contents": "On Wed, 10 Apr 2024 at 17:18, Tom Lane <[email protected]> wrote:\n>\n> David Rowley <[email protected]> writes:\n> > I think a better fix is just to not apply the optimisation for\n> > inheritance RTEs in add_base_clause_to_rel().\n>\n> Is it worth paying attention to whether the constraint is marked\n> connoinherit? If that involves an extra syscache fetch, I'd tend to\n> agree that it's not worth it; but if we can get that info for free\n> it seems worthwhile to not break this for inheritance cases.\n\nI think everything should be optimised as we like without that.\nEffectively get_relation_info() looks at the pg_attribute.attnotnull\ncolumn for the relation in question. 
We never look at the\npg_constraint record to figure out the nullability of the column.\n\nDavid\n\n\n", "msg_date": "Wed, 10 Apr 2024 17:35:37 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of IS [NOT] NULL quals on inheritance parents" }, { "msg_contents": "On Wed, Apr 10, 2024 at 1:13 PM David Rowley <[email protected]> wrote:\n\n> I looked at the patch and I don't think it's a good idea to skip\n> recording NOT NULL constraints to fix based on the fact that it\n> happens to result in this particular optimisation working correctly.\n> It seems that just makes this work in favour of possibly being wrong\n> for some future optimisation where we have something else that looks\n> at the RelOptInfo.notnullattnums and makes some decision that assumes\n> the lack of corresponding notnullattnums member means the column is\n> NULLable.\n\n\nHmm, I have thought about your point, but I may have a different\nperspective. I think the oversight discussed here occurred because we\nmistakenly recorded NOT NULL columns that are actually nullable for\ntraditional inheritance parents. Take the query from my first email as\nan example. There are three RTEs: p(inh), p(non-inh) and c(non-inh).\nAnd we've added a NOT NULL constraint on the column a of 'p' but not of\n'c'. So it seems to me that while we can mark column a of p(non-inh) as\nnon-nullable, we cannot mark column a of p(inh) as non-nullable, because\nthere might be NULL values in 'c' and that makes column a of p(inh)\nnullable.\n\nAnd I think recording NOT NULL columns for traditional inheritance\nparents can be error-prone for some future optimization where we look\nat an inheritance parent's notnullattnums and make decisions based on\nthe assumption that the included columns are non-nullable. The issue\ndiscussed here serves as an example of this potential problem.\n\nThanks\nRichard\n\nOn Wed, Apr 10, 2024 at 1:13 PM David Rowley <[email protected]> wrote:\nI looked at the patch and I don't think it's a good idea to skip\nrecording NOT NULL constraints to fix based on the fact that it\nhappens to result in this particular optimisation working correctly.\nIt seems that just makes this work in favour of possibly being wrong\nfor some future optimisation where we have something else that looks\nat the RelOptInfo.notnullattnums and makes some decision that assumes\nthe lack of corresponding notnullattnums member means the column is\nNULLable.Hmm, I have thought about your point, but I may have a differentperspective.  I think the oversight discussed here occurred because wemistakenly recorded NOT NULL columns that are actually nullable fortraditional inheritance parents.  Take the query from my first email asan example.  There are three RTEs: p(inh), p(non-inh) and c(non-inh).And we've added a NOT NULL constraint on the column a of 'p' but not of'c'.  So it seems to me that while we can mark column a of p(non-inh) asnon-nullable, we cannot mark column a of p(inh) as non-nullable, becausethere might be NULL values in 'c' and that makes column a of p(inh)nullable.And I think recording NOT NULL columns for traditional inheritanceparents can be error-prone for some future optimization where we lookat an inheritance parent's notnullattnums and make decisions based onthe assumption that the included columns are non-nullable.  
The issuediscussed here serves as an example of this potential problem.ThanksRichard", "msg_date": "Wed, 10 Apr 2024 15:12:24 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect handling of IS [NOT] NULL quals on inheritance parents" }, { "msg_contents": "On Wed, 10 Apr 2024 at 19:12, Richard Guo <[email protected]> wrote:\n> And I think recording NOT NULL columns for traditional inheritance\n> parents can be error-prone for some future optimization where we look\n> at an inheritance parent's notnullattnums and make decisions based on\n> the assumption that the included columns are non-nullable. The issue\n> discussed here serves as an example of this potential problem.\n\nI admit that it seems more likely that having a member set in\nnotnullattnums for an inheritance parent is more likely to cause\nfuture bugs than if we just leave them blank. But I also don't\nbelieve leaving them all blank is the right thing unless we document\nthat the field isn't populated for traditional inheritance parent\ntables and if the code needs to not the NOT NULL status of a column\nfor that table ONLY, then the code should look at the RelOptInfo\ncorresponding to the inh==false RangeTblEntry for that relation. If we\ndon't document the fact that we don't set the notnullattnums field\nthen someone might write some code thinking we correctly populate it.\n If the parent and all children have NOT NULL constraints for a\ncolumn, then unless we document we don't populate notnullattnums, it\nseems reasonable to assume that's a bug.\n\nIf we skip populating notnullattnums for inh==true non-partitioned\ntables, I think we also still need to skip applying the NOT NULL qual\noptimisation for inh==true RTEs as my version of the code did.\nReasons being: 1) it's a pointless exercise since we'll always end up\nadding the RestrictInfo without modification to the RelOptInfo's\nbaserestrictinfo, and 2) The optimisation in question would be looking\nat the notnullattnums that isn't correctly populated.\n\nResulting patch attached.\n\nDavid", "msg_date": "Thu, 11 Apr 2024 14:23:31 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of IS [NOT] NULL quals on inheritance parents" }, { "msg_contents": "On Thu, Apr 11, 2024 at 10:23 AM David Rowley <[email protected]> wrote:\n\n> On Wed, 10 Apr 2024 at 19:12, Richard Guo <[email protected]> wrote:\n> > And I think recording NOT NULL columns for traditional inheritance\n> > parents can be error-prone for some future optimization where we look\n> > at an inheritance parent's notnullattnums and make decisions based on\n> > the assumption that the included columns are non-nullable. The issue\n> > discussed here serves as an example of this potential problem.\n>\n> I admit that it seems more likely that having a member set in\n> notnullattnums for an inheritance parent is more likely to cause\n> future bugs than if we just leave them blank. But I also don't\n> believe leaving them all blank is the right thing unless we document\n> that the field isn't populated for traditional inheritance parent\n> tables and if the code needs to not the NOT NULL status of a column\n> for that table ONLY, then the code should look at the RelOptInfo\n> corresponding to the inh==false RangeTblEntry for that relation. 
If we\n> don't document the fact that we don't set the notnullattnums field\n> then someone might write some code thinking we correctly populate it.\n> If the parent and all children have NOT NULL constraints for a\n> column, then unless we document we don't populate notnullattnums, it\n> seems reasonable to assume that's a bug.\n\n\nFair point. I agree that we should document that we don't populate\nnotnullattnums for traditional inheritance parents.\n\n\n> If we skip populating notnullattnums for inh==true non-partitioned\n> tables, I think we also still need to skip applying the NOT NULL qual\n> optimisation for inh==true RTEs as my version of the code did.\n> Reasons being: 1) it's a pointless exercise since we'll always end up\n> adding the RestrictInfo without modification to the RelOptInfo's\n> baserestrictinfo, and 2) The optimisation in question would be looking\n> at the notnullattnums that isn't correctly populated.\n\n\nI agree with both of your points. But I also think we do not need to\nskip applying the NOT NULL qual optimization for partitioned tables.\nFor partitioned tables, if the parent is marked NOT NULL, then all its\nchildren must also be marked NOT NULL. And we've already populated\nnotnullattnums for partitioned tables in get_relation_info. Applying\nthis optimization for partitioned tables can help save some cycles in\napply_child_basequals if we've reduced or skipped some restriction\nclauses for a partitioned table. This means in add_base_clause_to_rel\nwe need to also check rte->relkind:\n\n- if (!rte->inh)\n+ if (!rte->inh || rte->relkind == RELKIND_PARTITIONED_TABLE)\n\nI also think we should update the related comments for\napply_child_basequals and its caller, as my v1 patch does, since now we\nmight reduce or skip some of the resulting clauses.\n\nAttached is a revised patch.\n\nThanks\nRichard", "msg_date": "Thu, 11 Apr 2024 14:48:48 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect handling of IS [NOT] NULL quals on inheritance parents" }, { "msg_contents": "On Thu, 11 Apr 2024 at 18:49, Richard Guo <[email protected]> wrote:\n> I agree with both of your points. But I also think we do not need to\n> skip applying the NOT NULL qual optimization for partitioned tables.\n> For partitioned tables, if the parent is marked NOT NULL, then all its\n> children must also be marked NOT NULL. And we've already populated\n> notnullattnums for partitioned tables in get_relation_info. Applying\n> this optimization for partitioned tables can help save some cycles in\n> apply_child_basequals if we've reduced or skipped some restriction\n> clauses for a partitioned table. This means in add_base_clause_to_rel\n> we need to also check rte->relkind:\n>\n> - if (!rte->inh)\n> + if (!rte->inh || rte->relkind == RELKIND_PARTITIONED_TABLE)\n\nI was skipping this on purpose as I wasn't sure that we'd never expand\nrestriction_is_always_false() and restriction_is_always_true() to also\nhandle partition constraints. However, after thinking about that\nmore, the partition constraint can only become more strict the deeper\ndown the partition levels you go. 
If it was possible to insert a row\ndirectly into a leaf partition that wouldn't be allowed when inserting\nvia the parent then there's a bug in the partition constraint.\n\nI also considered if we'd ever add support to remove redundant quals\nin CHECK constraints and if there was room for problematic mismatches\nbetween partitioned table and leaf partitions, but I see we've thought\nabout that:\n\npostgres=# alter table only p add constraint p_check check (a = 0);\nERROR: constraint must be added to child tables too\n\n> I also think we should update the related comments for\n> apply_child_basequals and its caller, as my v1 patch does, since now we\n> might reduce or skip some of the resulting clauses.\n\nI felt the comments you wanted to add at the call site of\napply_child_basequals() knew too much about what lies within that\nfunction. The caller needn't know anything about what optimisations\nare applied in apply_child_basequals(). In my book, that's just as bad\nas documenting things about the calling function from within a\nfunction. Since functions are designed to be reused, you're just\nasking for such comments to become outdated as soon as we teach\napply_child_basequals() some new tricks. In this case, all the caller\nneeds to care about is properly handling a false return value.\n\nAfter further work on the comments, I pushed the result.\n\nThanks for working on this.\n\nDavid\n\n\n", "msg_date": "Fri, 12 Apr 2024 20:19:32 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect handling of IS [NOT] NULL quals on inheritance parents" }, { "msg_contents": "On Fri, Apr 12, 2024 at 4:19 PM David Rowley <[email protected]> wrote:\n\n> After further work on the comments, I pushed the result.\n>\n> Thanks for working on this.\n\n\nThanks for pushing!\n\nBTW, I noticed a typo in the comment of add_base_clause_to_rel.\n\n--- a/src/backend/optimizer/plan/initsplan.c\n+++ b/src/backend/optimizer/plan/initsplan.c\n@@ -2644,7 +2644,7 @@ add_base_clause_to_rel(PlannerInfo *root, Index relid,\n * apply_child_basequals() sees, whereas the inh==false one is what's\nused\n * for the scan node in the final plan.\n *\n- * We make an exception to this is for partitioned tables. For these,\nwe\n+ * We make an exception to this for partitioned tables. For these, we\n\nThanks\nRichard\n\nOn Fri, Apr 12, 2024 at 4:19 PM David Rowley <[email protected]> wrote:\nAfter further work on the comments, I pushed the result.\n\nThanks for working on this.Thanks for pushing!BTW, I noticed a typo in the comment of add_base_clause_to_rel.--- a/src/backend/optimizer/plan/initsplan.c+++ b/src/backend/optimizer/plan/initsplan.c@@ -2644,7 +2644,7 @@ add_base_clause_to_rel(PlannerInfo *root, Index relid,     * apply_child_basequals() sees, whereas the inh==false one is what's used     * for the scan node in the final plan.     *-    * We make an exception to this is for partitioned tables.  For these, we+    * We make an exception to this for partitioned tables.  For these, weThanksRichard", "msg_date": "Fri, 12 Apr 2024 17:11:25 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect handling of IS [NOT] NULL quals on inheritance parents" } ]
[ { "msg_contents": "At least not for me. According to various things I found on the\nInternet, it's now required that you codesign your binaries and give\nthem an entitlement in order to generate core dumps:\n\nhttps://nasa.github.io/trick/howto_guides/How-to-dump-core-file-on-MacOS.html\n\nBut according to previous on-list discussion, code-signing a binary\nmeans that DYLD_* will be ignored:\n\nhttp://postgr.es/m/[email protected]\n\nNow, if DYLD_* is ignored, then our regression tests won't work\nproperly. But if core dumps are not enabled, then how am I supposed to\ndebug things that can only be debugged with a core dump?\n\nUnless I'm missing something, this is positively rage-inducing. It's\nreasonable, on Apple's part, to want to install more secure defaults,\nbut having no practical way of overriding the defaults is not\nreasonable. And by \"practical,\" I mean ideally (a) doesn't require a\nmacOS-specific patch to the PostgreSQL source repository but at least\n(b) can be done somehow without breaking other important things that\nalso need to work.\n\nAnyone have any ideas?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Apr 2024 13:35:51 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "macOS Ventura won't generate core dumps" }, { "msg_contents": "Hi,\n\nOn 2024-04-09 13:35:51 -0400, Robert Haas wrote:\n> Now, if DYLD_* is ignored, then our regression tests won't work\n> properly. But if core dumps are not enabled, then how am I supposed to\n> debug things that can only be debugged with a core dump?\n\nFWIW, I posted a patch a while back to make meson builds support relative\nrpaths on some platforms, including macos. I.e. each library/binary finds the\nlocation of the needed libraries relative to its own location. That gets rid\nof the need to use DYLD_ at all, because the temporary install for the tests\ncan find the library location without a problem.\n\nPerhaps it's worth picking that up again?\n\nhttps://github.com/anarazel/postgres/tree/meson-rpath\nhttps://github.com/anarazel/postgres/commit/46f1963fee7525c3cc3837ef8423cbf6cb08d10a\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 Apr 2024 10:48:34 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: macOS Ventura won't generate core dumps" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> At least not for me. According to various things I found on the\n> Internet, it's now required that you codesign your binaries and give\n> them an entitlement in order to generate core dumps:\n\nStill works for me, at least on machines where I have SIP turned\noff. Admittedly, Apple's busy making that a less and less desirable\nchoice.\n\n> Unless I'm missing something, this is positively rage-inducing.\n\nI don't disagree.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Apr 2024 14:37:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: macOS Ventura won't generate core dumps" }, { "msg_contents": "On Tue, Apr 9, 2024 at 2:37 PM Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n> > At least not for me. According to various things I found on the\n> > Internet, it's now required that you codesign your binaries and give\n> > them an entitlement in order to generate core dumps:\n>\n> Still works for me, at least on machines where I have SIP turned\n> off. 
Admittedly, Apple's busy making that a less and less desirable\n> choice.\n\nWhat exact version are you running?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Apr 2024 14:46:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: macOS Ventura won't generate core dumps" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Apr 9, 2024 at 2:37 PM Tom Lane <[email protected]> wrote:\n>> Still works for me, at least on machines where I have SIP turned\n>> off. Admittedly, Apple's busy making that a less and less desirable\n>> choice.\n\n> What exact version are you running?\n\nWorks for me on Sonoma 14.4.1 and Ventura 13.6.6, and has done\nin many versions before those.\n\nThe usual gotchas apply: you need to have started the postmaster\nunder \"ulimit -c unlimited\", and the /cores directory has to be\nwritable by whatever user the postmaster is running as. I have\noccasionally seen system updates reduce the privileges on /cores,\nalthough not recently.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Apr 2024 15:44:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: macOS Ventura won't generate core dumps" }, { "msg_contents": "On Tue, Apr 9, 2024 at 3:44 PM Tom Lane <[email protected]> wrote:\n> Works for me on Sonoma 14.4.1 and Ventura 13.6.6, and has done\n> in many versions before those.\n>\n> The usual gotchas apply: you need to have started the postmaster\n> under \"ulimit -c unlimited\", and the /cores directory has to be\n> writable by whatever user the postmaster is running as. I have\n> occasionally seen system updates reduce the privileges on /cores,\n> although not recently.\n\nInteresting. I'm on Ventura 13.6.2. I think I've checked all of the\nstuff you mention, but I'll do some more investigation.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Apr 2024 15:57:28 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: macOS Ventura won't generate core dumps" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Apr 9, 2024 at 3:44 PM Tom Lane <[email protected]> wrote:\n>> The usual gotchas apply: you need to have started the postmaster\n>> under \"ulimit -c unlimited\", and the /cores directory has to be\n>> writable by whatever user the postmaster is running as. I have\n>> occasionally seen system updates reduce the privileges on /cores,\n>> although not recently.\n\n> Interesting. I'm on Ventura 13.6.2. I think I've checked all of the\n> stuff you mention, but I'll do some more investigation.\n\nHuh, that's odd. One idea that comes to mind is that the core files\nare frickin' large, for me typically around 3.5GB-4GB even for a\npostmaster running with default shared memory sizes. (I don't know\nwhy, although it seems to me they were less ridiculous a few years\nago.) Is it possible you're running out of space, or hitting a\nquota of some sort?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 09 Apr 2024 16:34:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: macOS Ventura won't generate core dumps" }, { "msg_contents": "On Tue, Apr 9, 2024 at 4:34 PM Tom Lane <[email protected]> wrote:\n> Huh, that's odd. One idea that comes to mind is that the core files\n> are frickin' large, for me typically around 3.5GB-4GB even for a\n> postmaster running with default shared memory sizes. 
(I don't know\n> why, although it seems to me they were less ridiculous a few years\n> ago.) Is it possible you're running out of space, or hitting a\n> quota of some sort?\n\nJust to close the loop here, it turns out that this was caused by some\nEDB-installed software, not macOS itself. The software in question\nwasn't expected to cause this to happen, but did anyway.\n\nWe also discovered that core dumps weren't *completely* disabled. They\nstill occurred for segmentation faults, but not for abort traps.\n\nSo, if anyone's having a similar problem, check with your IT\ndepartment or try with a fresh OS install before blaming Apple. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Sep 2024 12:11:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: macOS Ventura won't generate core dumps" } ]
[ { "msg_contents": "If using \"SH_SCOPE static\" with simplehash.h, it causes a bunch of\nwarnings about functions that are defined but not used. It's simple\nenough to fix by appending pg_attribute_unused() to the declarations\n(attached).\n\nThere are currently no callers that use \"SH_SCOPE static\", but I'm\nsuggesting its use in the thread below as a cleanup to a recently-\ncommitted feature:\n\nhttps://www.postgresql.org/message-id/[email protected]\n\nThe reason I'm suggesting it there is because the hash table is used\nonly for the indexed binary heap, not an ordinary binary heap, so I'd\nlike to leave it up to the compiler whether to do any inlining or not.\n\nIf someone thinks the attached patch is a good change to commit now,\nplease let me know. Otherwise, I'll recommend \"static inline\" in the\nabove thread and leave the attached patch to be considered for v18.\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 09 Apr 2024 11:10:15 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "simplehash.h: \"SH_SCOPE static\" causes warnings" }, { "msg_contents": "On Tue, Apr 9, 2024 at 2:10 PM Jeff Davis <[email protected]> wrote:\n> If using \"SH_SCOPE static\" with simplehash.h, it causes a bunch of\n> warnings about functions that are defined but not used. It's simple\n> enough to fix by appending pg_attribute_unused() to the declarations\n> (attached).\n\nHmm. I'm pretty sure that I've run into this problem, but I concluded\nthat I should use either \"static inline\" or \"extern\" and didn't think\nany more of it. I'm not sure that I like the idea of just ignoring the\nwarnings, for fear that the compiler might not actually remove the\ncode for the unused functions from the resulting binary. But I'm not\nan expert in this area either, so maybe I'm wrong.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Apr 2024 14:49:43 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simplehash.h: \"SH_SCOPE static\" causes warnings" }, { "msg_contents": "Hi,\n\nOn 2024-04-09 11:10:15 -0700, Jeff Davis wrote:\n> The reason I'm suggesting it there is because the hash table is used\n> only for the indexed binary heap, not an ordinary binary heap, so I'd\n> like to leave it up to the compiler whether to do any inlining or not.\n\nFWIW, with just about any modern-ish compiler just using \"inline\" doesn't\nactually force inlining, it just changes the cost model to make it more\nlikely.\n\n\n> If someone thinks the attached patch is a good change to commit now,\n> please let me know. Otherwise, I'll recommend \"static inline\" in the\n> above thread and leave the attached patch to be considered for v18.\n\nI'm not opposed. I'd however at least add a comment explaining why this is\nbeing used. Arguably it doesn't make sense to add it to *_create(), as without\nthat there really isn't a point in having a simplehash instantiation. Might\nmake it slightly easier to notice obsoleted uses of simplehash.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 9 Apr 2024 11:56:16 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simplehash.h: \"SH_SCOPE static\" causes warnings" }, { "msg_contents": "On Tue, 2024-04-09 at 14:49 -0400, Robert Haas wrote:\n> Hmm. 
I'm pretty sure that I've run into this problem, but I concluded\n> that I should use either \"static inline\" or \"extern\" and didn't think\n> any more of it.\n\nPages of warnings is not ideal, though. We should either support\n\"SH_SCOPE static\", or have some kind of useful #error that makes it\nclear that we don't support it (and/or don't think it's a good idea).\n\n> I'm not sure that I like the idea of just ignoring the\n> warnings, for fear that the compiler might not actually remove the\n> code for the unused functions from the resulting binary. But I'm not\n> an expert in this area either, so maybe I'm wrong.\n\nIn a simple \"hello world\" test with an unreferenced static function, it\ndoesn't seem to be a problem at -O2. I suppose it could be with some\ncompiler somewhere, or perhaps in a more complex scenario, but it would\nseem strange to me.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 09 Apr 2024 12:30:36 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: simplehash.h: \"SH_SCOPE static\" causes warnings" }, { "msg_contents": "Hi,\n\nOn Tue, 2024-04-09 at 11:56 -0700, Andres Freund wrote:\n> FWIW, with just about any modern-ish compiler just using \"inline\"\n> doesn't\n> actually force inlining, it just changes the cost model to make it\n> more\n> likely.\n\nOK.\n\nIn the linked thread, I didn't see a good reason to encourage the\ncompiler to inline the code. Only one caller uses the hash table, so my\ninstinct would be that the code for maniuplating it should not be\ninlined. But \"extern\" (which is the scope now) is certainly not right,\nso \"static\" made the most sense to me.\n\n> \n> I'm not opposed. I'd however at least add a comment explaining why\n> this is\n> being used. Arguably it doesn't make sense to add it to *_create(),\n> as without\n> that there really isn't a point in having a simplehash instantiation.\n> Might\n> make it slightly easier to notice obsoleted uses of simplehash.\n\nThat's a good idea that preserves some utility for the warnings.\n\nShould I go ahead and commit something like that now, or hold it until\nthe other thread concludes, or hold it until the July CF?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 09 Apr 2024 12:33:04 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: simplehash.h: \"SH_SCOPE static\" causes warnings" }, { "msg_contents": "On Tue, Apr 9, 2024 at 3:30 PM Jeff Davis <[email protected]> wrote:\n> Pages of warnings is not ideal, though. We should either support\n> \"SH_SCOPE static\", or have some kind of useful #error that makes it\n> clear that we don't support it (and/or don't think it's a good idea).\n\nFair.\n\n> > I'm not sure that I like the idea of just ignoring the\n> > warnings, for fear that the compiler might not actually remove the\n> > code for the unused functions from the resulting binary. But I'm not\n> > an expert in this area either, so maybe I'm wrong.\n>\n> In a simple \"hello world\" test with an unreferenced static function, it\n> doesn't seem to be a problem at -O2. 
I suppose it could be with some\n> compiler somewhere, or perhaps in a more complex scenario, but it would\n> seem strange to me.\n\nOK.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Apr 2024 15:53:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simplehash.h: \"SH_SCOPE static\" causes warnings" }, { "msg_contents": "On Tue, Apr 9, 2024 at 3:33 PM Jeff Davis <[email protected]> wrote:\n> Should I go ahead and commit something like that now, or hold it until\n> the other thread concludes, or hold it until the July CF?\n\nI think it's fine to commit it now if it makes it usefully easier to\nfix an open item, and otherwise it should wait until next cycle.\n\nBut I also wonder if we shouldn't just keep using static inline\neverywhere. I'm guessing if we do this people are just going to make\nrandom decisions about which one to use every time this comes up.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Apr 2024 15:56:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: simplehash.h: \"SH_SCOPE static\" causes warnings" } ]
[ { "msg_contents": "Hi all,\n\nI have been doing some checks with page masking and WAL consistency\nnow that we are in feature freeze, and there are no failures on HEAD:\ncd src/test/recovery/ && \\\n PG_TEST_EXTRA=wal_consistency_checking \\\n PROVE_TESTS=t/027_stream_regress.pl make check\n\nIt's been on my TODO list to automate that in one of my buildfarm\nanimals, and never got down to do it. I've looked at the current\nanimal fleet, and it looks that we don't have one yet. Perhaps I've\njust missed something?\n\nThanks,\n--\nMichael", "msg_date": "Wed, 10 Apr 2024 08:34:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "wal_consistemcy_checking clean on HEAD" }, { "msg_contents": "On Tue, Apr 9, 2024 at 7:35 PM Michael Paquier <[email protected]> wrote:\n> It's been on my TODO list to automate that in one of my buildfarm\n> animals, and never got down to do it. I've looked at the current\n> animal fleet, and it looks that we don't have one yet. Perhaps I've\n> just missed something?\n\nwal_consistency_checking is very useful in general. I find myself\nusing it fairly regularly.\n\nThat's probably why it's not finding anything now: most people working\non something that touches WAL already know that testing their patch\nwith wal_consistency_checking early is a good idea. Of course it also\nwouldn't be a bad idea to have a BF animal for that, especially\nbecause we already have BF animals that test things far more niche\nthan this.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 9 Apr 2024 19:40:57 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wal_consistemcy_checking clean on HEAD" }, { "msg_contents": "On Tue, Apr 09, 2024 at 07:40:57PM -0400, Peter Geoghegan wrote:\n> That's probably why it's not finding anything now: most people working\n> on something that touches WAL already know that testing their patch\n> with wal_consistency_checking early is a good idea. Of course it also\n> wouldn't be a bad idea to have a BF animal for that, especially\n> because we already have BF animals that test things far more niche\n> than this.\n\nwal_consistency_checking has been enabled a couple of days ago on\nbatta, and the runs are clean:\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=batta&br=HEAD\n\nRecovery tests take a bit longer, but that's still OK on this host.\nFor now, this mode only runs on HEAD.\n--\nMichael", "msg_date": "Mon, 15 Apr 2024 14:19:40 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: wal_consistemcy_checking clean on HEAD" } ]
[ { "msg_contents": "Hi\n\nPostgreSQL 18 will ship after these vacuum horizon systems reach EOL[1]:\n\nanimal | arch | llvm_version | os | os_release | end_of_support\n---------------+---------+--------------+--------+------------+----------------\nbranta | s390x | 10.0.0 | Ubuntu | 20.04 | 2025-04-01\nsplitfin | aarch64 | 10.0.0 | Ubuntu | 20.04 | 2025-04-01\nurutau | s390x | 10.0.0 | Ubuntu | 20.04 | 2025-04-01\nmassasauga | aarch64 | 11.1.0 | Amazon | 2 | 2025-06-30\nsnakefly | aarch64 | 11.1.0 | Amazon | 2 | 2025-06-30\n\nTherefore, some time after the tree re-opens for hacking, we could rip\nout a bunch of support code for LLVM 10-13, and then rip out support\nfor pre-opaque-pointer mode. Please see attached.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2B-g61yq7Ce4aoZtBDO98b4GXH8Cu3zxVk-Zn1Vh7TKpA%40mail.gmail.com", "msg_date": "Wed, 10 Apr 2024 13:38:33 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Requiring LLVM 14+ in PostgreSQL 18" }, { "msg_contents": "On Wed, Apr 10, 2024 at 1:38 PM Thomas Munro <[email protected]> wrote:\n> Therefore, some time after the tree re-opens for hacking, we could rip\n> out a bunch of support code for LLVM 10-13, and then rip out support\n> for pre-opaque-pointer mode. Please see attached.\n\n... or of course closer to the end of the cycle if that's what people\nprefer for some reason, I don't mind too much as long as it happens.\n\nI added this to the commitfest app, and it promptly failed for cfbot.\nThat's expected: CI is still using Debian 11 \"bullseye\", which only\nhas LLVM 11. It became what Debian calls \"oldstable\" last year, and\nreaches the end of oldstable in a couple of months from now. Debian\n12 \"bookworm\" is the current stable release, and it has LLVM 14, so we\nshould probably go and update those CI images...\n\n\n", "msg_date": "Thu, 11 Apr 2024 14:16:39 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Requiring LLVM 14+ in PostgreSQL 18" }, { "msg_contents": "Rebased over ca89db5f.\n\nI looked into whether we could drop the \"old pass manager\" code\ntoo[1]. Almost, but nope, even the C++ API lacks a way to set the\ninline threshold before LLVM 16, so that would cause a regression.\nAlthough we just hard-code the threshold to 512 with a comment that\nsounds like it's pretty arbitrary, a change to the default (225?)\nwould be unjustifiable just for code cleanup. Oh well.\n\n[1] https://github.com/macdice/postgres/commit/0d40abdf1feb75210c3a3d2a35e3d6146185974c", "msg_date": "Wed, 24 Apr 2024 11:43:12 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Requiring LLVM 14+ in PostgreSQL 18" }, { "msg_contents": "On 24.04.24 01:43, Thomas Munro wrote:\n> Rebased over ca89db5f.\n\nThese patches look fine to me. The new cut-off makes sense, and it does \nsave quite a bit of code. We do need to get the Cirrus CI Debian images \nupdated first, as you had already written.\n\nAs part of this patch, you also sneak in support for LLVM 18 \n(llvm-config-18, clang-18 in configure). Should this be a separate patch?\n\nAnd as I'm looking up how this was previously handled, I notice that \nthis list of clang-NN versions was last updated equally sneakily as part \nof your patch to trim off LLVM <10 (820b5af73dc). 
I wonder if the \noriginal intention of that configure code was that maintaining the \nversioned list above clang-7/llvm-config-7 was not needed, because the \nunversioning programs could be used, or maybe because pkg-config could \nbe used. It would be nice if we could get rid of having to update that.\n\n\n\n", "msg_date": "Sun, 12 May 2024 16:32:59 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Requiring LLVM 14+ in PostgreSQL 18" }, { "msg_contents": "On Mon, May 13, 2024 at 2:33 AM Peter Eisentraut <[email protected]> wrote:\n> These patches look fine to me. The new cut-off makes sense, and it does\n> save quite a bit of code. We do need to get the Cirrus CI Debian images\n> updated first, as you had already written.\n\nThanks for looking!\n\n> As part of this patch, you also sneak in support for LLVM 18\n> (llvm-config-18, clang-18 in configure). Should this be a separate patch?\n\nYeah, right, I didn't really think too hard about why we have that,\nand now that you question it...\n\n> And as I'm looking up how this was previously handled, I notice that\n> this list of clang-NN versions was last updated equally sneakily as part\n> of your patch to trim off LLVM <10 (820b5af73dc). I wonder if the\n> original intention of that configure code was that maintaining the\n> versioned list above clang-7/llvm-config-7 was not needed, because the\n> unversioning programs could be used, or maybe because pkg-config could\n> be used. It would be nice if we could get rid of having to update that.\n\nI probably misunderstood why we were doing that, perhaps something to\ndo with the way some distro (Debian?) was doing things with older\nversions, and yeah I see that we went a long time after 7 without\ntouching it and nobody cared. Yeah, it would be nice to get rid of\nit. Here's a patch. Meson didn't have that.", "msg_date": "Wed, 15 May 2024 16:21:25 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Requiring LLVM 14+ in PostgreSQL 18" }, { "msg_contents": "On 15.05.24 06:21, Thomas Munro wrote:\n>> And as I'm looking up how this was previously handled, I notice that\n>> this list of clang-NN versions was last updated equally sneakily as part\n>> of your patch to trim off LLVM <10 (820b5af73dc). I wonder if the\n>> original intention of that configure code was that maintaining the\n>> versioned list above clang-7/llvm-config-7 was not needed, because the\n>> unversioning programs could be used, or maybe because pkg-config could\n>> be used. It would be nice if we could get rid of having to update that.\n> I probably misunderstood why we were doing that, perhaps something to\n> do with the way some distro (Debian?) was doing things with older\n> versions, and yeah I see that we went a long time after 7 without\n> touching it and nobody cared. Yeah, it would be nice to get rid of\n> it. Here's a patch. Meson didn't have that.\n\nYes, let's get that v3-0001 patch into PG17.\n\n\n\n", "msg_date": "Wed, 15 May 2024 07:20:09 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Requiring LLVM 14+ in PostgreSQL 18" }, { "msg_contents": "On Wed, May 15, 2024 at 5:20 PM Peter Eisentraut <[email protected]> wrote:\n> Yes, let's get that v3-0001 patch into PG17.\n\nDone.\n\nBilal recently created the CI images for Debian Bookworm[1]. 
You can\ntry them with s/bullseye/bookworm/ in .cirrus.tasks.yml, but it looks\nlike he is still wrestling with a perl installation problem[2] in the\n32 bit build, so here is a temporary patch to do that and also delete\nthe 32 bit tests for now. This way cfbot should succeed with the\nremaining patches. Parked here for v18.\n\n[1] https://github.com/anarazel/pg-vm-images/commit/685ca7ccb7b3adecb11d948ac677d54cd9599e6c\n[2] https://cirrus-ci.com/task/5459439048720384", "msg_date": "Thu, 16 May 2024 14:33:37 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Requiring LLVM 14+ in PostgreSQL 18" }, { "msg_contents": "Hi,\n\nOn Thu, 16 May 2024 at 05:34, Thomas Munro <[email protected]> wrote:\n>\n> On Wed, May 15, 2024 at 5:20 PM Peter Eisentraut <[email protected]> wrote:\n> > Yes, let's get that v3-0001 patch into PG17.\n>\n> Done.\n>\n> Bilal recently created the CI images for Debian Bookworm[1]. You can\n> try them with s/bullseye/bookworm/ in .cirrus.tasks.yml, but it looks\n> like he is still wrestling with a perl installation problem[2] in the\n> 32 bit build, so here is a temporary patch to do that and also delete\n> the 32 bit tests for now. This way cfbot should succeed with the\n> remaining patches. Parked here for v18.\n\nActually, 32 bit builds are working but the Perl version needs to be\nupdated to 'perl5.36-i386-linux-gnu' in .cirrus.tasks.yml. I changed\n0001 with the working version of 32 bit builds [1] and the rest is the\nsame. All tests pass now [2].\n\n[1] postgr.es/m/CAN55FZ0fY5EFHXLKCO_=p4pwFmHRoVom_qSE_7B48gpchfAqzw@mail.gmail.com\n[2] https://cirrus-ci.com/task/4969910856581120\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Thu, 16 May 2024 18:17:08 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Requiring LLVM 14+ in PostgreSQL 18" }, { "msg_contents": "On Fri, May 17, 2024 at 3:17 AM Nazir Bilal Yavuz <[email protected]> wrote:\n> Actually, 32 bit builds are working but the Perl version needs to be\n> updated to 'perl5.36-i386-linux-gnu' in .cirrus.tasks.yml. I changed\n> 0001 with the working version of 32 bit builds [1] and the rest is the\n> same. All tests pass now [2].\n\nAhh, right, thanks! I will look at committing your CI/fixup patches.\n\n\n", "msg_date": "Fri, 17 May 2024 10:54:45 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Requiring LLVM 14+ in PostgreSQL 18" }, { "msg_contents": "On Wed, May 15, 2024 at 07:20:09AM +0200, Peter Eisentraut wrote:\n> Yes, let's get that v3-0001 patch into PG17.\n\nUpon seeing this get committed in 4dd29b6833, I noticed that the docs\nstill advertise the llvm-config-$version search dance. That's still\ncorrect for Meson-based builds since we use their config-tool machinery,\nbut no longer holds for configure-based builds. 
The attached patch\nupdates the docs accordingly.\n\n-- \nOle Peder Brandtzæg\nIn any case, these nights just ain't getting any easier\nAnd who could judge us\nFor seeking comfort in the hazy counterfeit land of memory", "msg_date": "Sun, 19 May 2024 00:46:01 +0200", "msg_from": "Ole Peder =?utf-8?Q?Brandtz=C3=A6g?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Requiring LLVM 14+ in PostgreSQL 18" }, { "msg_contents": "On Sun, May 19, 2024 at 10:46 AM Ole Peder Brandtzæg\n<[email protected]> wrote:\n> On Wed, May 15, 2024 at 07:20:09AM +0200, Peter Eisentraut wrote:\n> > Yes, let's get that v3-0001 patch into PG17.\n>\n> Upon seeing this get committed in 4dd29b6833, I noticed that the docs\n> still advertise the llvm-config-$version search dance. That's still\n> correct for Meson-based builds since we use their config-tool machinery,\n> but no longer holds for configure-based builds. The attached patch\n> updates the docs accordingly.\n\nOops, right I didn't know we had that documented. Thanks. Will hold\noff doing anything until the thaw.\n\nHmm, I also didn't know that Meson had its own list like our just-removed one:\n\nhttps://github.com/mesonbuild/meson/blob/master/mesonbuild/environment.py#L183\n\nUnsurprisingly, it suffers from maintenance lag, priority issues etc\n(new major versions pop out every 6 months):\n\nhttps://github.com/mesonbuild/meson/issues/10483\n\n\n", "msg_date": "Sun, 19 May 2024 11:05:49 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Requiring LLVM 14+ in PostgreSQL 18" }, { "msg_contents": "On Sun, May 19, 2024 at 11:05:49AM +1200, Thomas Munro wrote:\n> Oops, right I didn't know we had that documented. Thanks. Will hold\n> off doing anything until the thaw.\n\nNo worries, thanks!\n\n> Hmm, I also didn't know that Meson had its own list like our just-removed one:\n> \n> https://github.com/mesonbuild/meson/blob/master/mesonbuild/environment.py#L183\n\nI didn't either before writing the doc patch, which led me to\ninvestigate why it *just works* when doing meson setup and then I saw\nthe 40 odd \"Trying a default llvm-config fallback…\" lines in\nmeson-log.txt =) \n\n-- \nOle Peder Brandtzæg\nIt's raining triple sec in Tchula\nand the radio plays \"Crazy Train\"\n\n\n", "msg_date": "Sun, 19 May 2024 01:16:52 +0200", "msg_from": "Ole Peder =?utf-8?Q?Brandtz=C3=A6g?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Requiring LLVM 14+ in PostgreSQL 18" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> Oops, right I didn't know we had that documented. Thanks. Will hold\n> off doing anything until the thaw.\n\nFWIW, I don't think the release freeze precludes docs-only fixes.\nBut if you prefer to sit on this, that's fine too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 May 2024 19:16:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Requiring LLVM 14+ in PostgreSQL 18" }, { "msg_contents": "On 19.05.24 00:46, Ole Peder Brandtzæg wrote:\n> On Wed, May 15, 2024 at 07:20:09AM +0200, Peter Eisentraut wrote:\n>> Yes, let's get that v3-0001 patch into PG17.\n> \n> Upon seeing this get committed in 4dd29b6833, I noticed that the docs\n> still advertise the llvm-config-$version search dance. That's still\n> correct for Meson-based builds since we use their config-tool machinery,\n> but no longer holds for configure-based builds. 
The attached patch\n> updates the docs accordingly.\n\ncommitted\n\n\n\n", "msg_date": "Wed, 21 Aug 2024 15:19:58 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Requiring LLVM 14+ in PostgreSQL 18" }, { "msg_contents": "On 17.05.24 00:54, Thomas Munro wrote:\n> On Fri, May 17, 2024 at 3:17 AM Nazir Bilal Yavuz <[email protected]> wrote:\n>> Actually, 32 bit builds are working but the Perl version needs to be\n>> updated to 'perl5.36-i386-linux-gnu' in .cirrus.tasks.yml. I changed\n>> 0001 with the working version of 32 bit builds [1] and the rest is the\n>> same. All tests pass now [2].\n> \n> Ahh, right, thanks! I will look at committing your CI/fixup patches.\n\nThe CI images have been updated, so this should be ready to go now. I \ngave the remaining two patches a try on CI, and it all looks okay to me. \n (needed some gentle rebasing)\n\n\n\n", "msg_date": "Wed, 21 Aug 2024 16:00:40 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Requiring LLVM 14+ in PostgreSQL 18" } ]
[ { "msg_contents": "Dears,\n\n\nthis is a developer of postgresql and currently engaged in research of parallel dml. I read lot of mail list about postgres parallel insert select or other related developing jobs, \nbut i could not understand why should we determine parallel-safety of partition relations in parallel dml even just allow parallel query in insert select statement , \ni consider that modify job do in leader process and partition exprs check also in leader node, isn't that safty enough. in the other words, if we only build a parallel query planner in insert select and forbit parallel insert jobs, should we skip any parallel safty check about target relation and build parallel query plan directly ?\n\n\ncould anyone give me some suggestion about this confusions.\n\n\nBest Regards\n\nTomas Ji\n\n\n\n\n\n\n\n\n\n\n\n\n| |\njiye\n|\n|\[email protected]\n|\n\n\n\n\n\n\n\n\n\n\n\nDears,\n this is a developer of postgresql and currently engaged in research of  parallel dml. I read lot of mail list about postgres parallel insert select or other related developing jobs, but i could not understand why should we determine parallel-safety of partition relations in parallel dml even just allow parallel query in insert select statement , i consider that modify job do in leader process and partition exprs check also in leader node, isn't that safty enough. in the other words, if we only build a parallel query planner in insert select and forbit parallel insert jobs, should we skip any parallel safty check about target relation and build parallel query plan directly ?could anyone give me some suggestion about this confusions.Best RegardsTomas Ji\n\n\n\n\n\n\[email protected]", "msg_date": "Wed, 10 Apr 2024 11:25:08 +0800 (GMT+08:00)", "msg_from": "jiye <[email protected]>", "msg_from_op": true, "msg_subject": "some confusion about parallel insert select in postgres parallel\n dml develop" } ]
[ { "msg_contents": "I noticed some error messages in the split partition code that are not\nup to par. Such as:\n\n\"new partitions not have value %s but split partition has\"\n\nhow about we revise it to:\n\n\"new partitions do not have value %s but split partition does\"\n\nAnother one is:\n\n\"any partition in the list should be DEFAULT because split partition is\nDEFAULT\"\n\nhow about we revise it to:\n\n\"all partitions in the list should be DEFAULT because split partition is\nDEFAULT\"\n\nAnother problem I noticed is that in the test files partition_split.sql\nand partition_merge.sql, there are comments specifying the expected\nerror messages for certain test queries. However, in some cases, the\nerror message mentioned in the comment does not match the error message\nactually generated by the query. Such as:\n\n-- ERROR: invalid partitions order, partition \"sales_mar2022\" can not be\nmerged\n-- (space between sections sales_jan2022 and sales_mar2022)\nALTER TABLE sales_range MERGE PARTITIONS (sales_jan2022, sales_mar2022)\nINTO sales_jan_mar2022;\nERROR: lower bound of partition \"sales_mar2022\" conflicts with upper bound\nof previous partition \"sales_jan2022\"\n\nI'm not sure if it's a good practice to specify the expected error\nmessage in the comment. But if we choose to do so, I think we at least\nneed to ensure that the specified error message in the comment remains\nconsistent with the error message produced by the query.\n\nAlso there are some comments containing grammatical issues. Such as:\n\n-- no error: bounds of sales_noerror equals to lower and upper bounds of\nsales_dec2022 and sales_feb2022\n\nAttached is a patch to fix the issues I've observed. I suspect there\nmay be more to be found.\n\nThanks\nRichard", "msg_date": "Wed, 10 Apr 2024 19:32:21 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": true, "msg_subject": "Revise some error messages in split partition code" }, { "msg_contents": "Richard Guo <[email protected]> 于2024年4月10日周三 19:32写道:\n\n> I noticed some error messages in the split partition code that are not\n> up to par. Such as:\n>\n> \"new partitions not have value %s but split partition has\"\n>\n> how about we revise it to:\n>\n> \"new partitions do not have value %s but split partition does\"\n>\n> Another one is:\n>\n> \"any partition in the list should be DEFAULT because split partition is\n> DEFAULT\"\n>\n> how about we revise it to:\n>\n> \"all partitions in the list should be DEFAULT because split partition is\n> DEFAULT\"\n>\n> Another problem I noticed is that in the test files partition_split.sql\n> and partition_merge.sql, there are comments specifying the expected\n> error messages for certain test queries. However, in some cases, the\n> error message mentioned in the comment does not match the error message\n> actually generated by the query. Such as:\n>\n> -- ERROR: invalid partitions order, partition \"sales_mar2022\" can not be\n> merged\n> -- (space between sections sales_jan2022 and sales_mar2022)\n> ALTER TABLE sales_range MERGE PARTITIONS (sales_jan2022, sales_mar2022)\n> INTO sales_jan_mar2022;\n> ERROR: lower bound of partition \"sales_mar2022\" conflicts with upper\n> bound of previous partition \"sales_jan2022\"\n>\n> I'm not sure if it's a good practice to specify the expected error\n> message in the comment. 
But if we choose to do so, I think we at least\n> need to ensure that the specified error message in the comment remains\n> consistent with the error message produced by the query.\n>\n> Also there are some comments containing grammatical issues. Such as:\n>\n> -- no error: bounds of sales_noerror equals to lower and upper bounds of\n> sales_dec2022 and sales_feb2022\n>\n> Attached is a patch to fix the issues I've observed. I suspect there\n> may be more to be found.\n>\n\nYeah. The day before yesterday I found some grammer errors from error\nmessages and code comments [1] .\nExcept those issues, @Alexander Lakhin <[email protected]> has found\nsome bugs [2]\nI have some concerns that whether this patch is ready to commit.\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAHewXNkGMPU50QG7V6Q60JGFORfo8LfYO1_GCkCa0VWbmB-fEw%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/dbc8b96c-3cf0-d1ee-860d-0e491da20485%40gmail.com\n\n-- \nTender Wang\nOpenPie: https://en.openpie.com/\n\nRichard Guo <[email protected]> 于2024年4月10日周三 19:32写道:I noticed some error messages in the split partition code that are notup to par.  Such as:\"new partitions not have value %s but split partition has\"how about we revise it to:\"new partitions do not have value %s but split partition does\"Another one is:\"any partition in the list should be DEFAULT because split partition isDEFAULT\"how about we revise it to:\"all partitions in the list should be DEFAULT because split partition isDEFAULT\"Another problem I noticed is that in the test files partition_split.sqland partition_merge.sql, there are comments specifying the expectederror messages for certain test queries.  However, in some cases, theerror message mentioned in the comment does not match the error messageactually generated by the query.  Such as:-- ERROR:  invalid partitions order, partition \"sales_mar2022\" can not be merged-- (space between sections sales_jan2022 and sales_mar2022)ALTER TABLE sales_range MERGE PARTITIONS (sales_jan2022, sales_mar2022) INTO sales_jan_mar2022;ERROR:  lower bound of partition \"sales_mar2022\" conflicts with upper bound of previous partition \"sales_jan2022\"I'm not sure if it's a good practice to specify the expected errormessage in the comment.  But if we choose to do so, I think we at leastneed to ensure that the specified error message in the comment remainsconsistent with the error message produced by the query.Also there are some comments containing grammatical issues.  Such as:-- no error: bounds of sales_noerror equals to lower and upper bounds of sales_dec2022 and sales_feb2022Attached is a patch to fix the issues I've observed.  I suspect theremay be more to be found.Yeah. The day before yesterday I found some grammer errors from error messages and code comments [1] .Except  those issues, @Alexander Lakhin  has found some bugs [2]I have some concerns that whether this patch is ready to commit.[1] https://www.postgresql.org/message-id/CAHewXNkGMPU50QG7V6Q60JGFORfo8LfYO1_GCkCa0VWbmB-fEw%40mail.gmail.com[2] https://www.postgresql.org/message-id/dbc8b96c-3cf0-d1ee-860d-0e491da20485%40gmail.com-- Tender WangOpenPie:  https://en.openpie.com/", "msg_date": "Wed, 10 Apr 2024 21:32:32 +0800", "msg_from": "Tender Wang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Revise some error messages in split partition code" } ]
[ { "msg_contents": "Hi,\n\nWe updated $SUBJECT in back branches to make it clear (see commit\nf6f61a4bd), so I would like to propose to do so in HEAD as well for\nconsistency. Attached is a patch for that.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Wed, 10 Apr 2024 20:51:34 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": true, "msg_subject": "Comment about handling of asynchronous requests in postgres_fdw.c" }, { "msg_contents": "On Wed, Apr 10, 2024 at 8:51 PM Etsuro Fujita <[email protected]> wrote:\n> We updated $SUBJECT in back branches to make it clear (see commit\n> f6f61a4bd), so I would like to propose to do so in HEAD as well for\n> consistency. Attached is a patch for that.\n\nPushed.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 11 Apr 2024 19:45:59 +0900", "msg_from": "Etsuro Fujita <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Comment about handling of asynchronous requests in postgres_fdw.c" } ]
[ { "msg_contents": "Running \"\\d tablename\" from psql could take multiple seconds when\nrunning on a system with 100k+ tables. The reason for this was that\na sequence scan on pg_class takes place, due to regex matching being\nused.\n\nRegex matching is obviously unnecessary when we're looking for an exact\nmatch. This checks for this (common) case and starts using plain\nequality in that case.", "msg_date": "Wed, 10 Apr 2024 17:50:42 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "psql: Greatly speed up \"\\d tablename\" when not using regexes" }, { "msg_contents": "Patch looks good to me. Great idea overall, that forced regex has always\nbugged me.\n\n+ char *regexChars = \"|*+?()[]{}.^$\\\\\";\n\n\nOne super minor optimization is that we technically do not need to scan for\n')' and ']'. If they appear without their partner, the query will fail\nanyway. :)\nCheers,\nGreg\n\nPatch looks good to me. Great idea overall, that forced regex has always bugged me.+       char       *regexChars = \"|*+?()[]{}.^$\\\\\";One super minor optimization is that we technically do not need to scan for ')' and ']'. If they appear without their partner, the query will fail anyway. :)Cheers,Greg", "msg_date": "Wed, 10 Apr 2024 13:31:56 -0400", "msg_from": "Greg Sabino Mullane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Greatly speed up \"\\d tablename\" when not using regexes" }, { "msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> Running \"\\d tablename\" from psql could take multiple seconds when\n> running on a system with 100k+ tables. The reason for this was that\n> a sequence scan on pg_class takes place, due to regex matching being\n> used.\n\n> Regex matching is obviously unnecessary when we're looking for an exact\n> match. This checks for this (common) case and starts using plain\n> equality in that case.\n\nReally? ISTM this argument is ignoring an optimization the backend\nhas understood for a long time.\n\nregression=# explain select * from pg_class where relname ~ '^foo$';\n QUERY PLAN \n \n--------------------------------------------------------------------------------\n-------------\n Index Scan using pg_class_relname_nsp_index on pg_class (cost=0.28..8.30 rows=\n1 width=739)\n Index Cond: (relname = 'foo'::text)\n Filter: (relname ~ '^foo$'::text)\n(3 rows)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 14:06:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Greatly speed up \"\\d tablename\" when not using regexes" }, { "msg_contents": "Hi\n\n\n> Regex matching is obviously unnecessary when we're looking for an exact\n> match. 
This checks for this (common) case and starts using plain\n> equality in that case.\n\n+1\n\n> + appendPQExpBuffer(buf, \"(%s OPERATOR(pg_catalog.=) \", namevar);\n> + appendStringLiteralConn(buf, &namebuf.data[2], conn);\n> + appendPQExpBuffer(buf, \"\\n OR %s OPERATOR(pg_catalog.=) \",\n> + altnamevar);\n> + appendStringLiteralConn(buf, &namebuf.data[2], conn);\n> + appendPQExpBufferStr(buf, \")\\n\");\n\nDo we need to force Collaction here like in other branches?\nif (PQserverVersion(conn) >= 120000)\n appendPQExpBufferStr(buf, \" COLLATE pg_catalog.default\");\n\n\n", "msg_date": "Wed, 10 Apr 2024 23:20:48 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Greatly speed up \"\\d tablename\" when not using regexes" }, { "msg_contents": "On Wed, 10 Apr 2024 at 20:06, Tom Lane <[email protected]> wrote:\n> Really? ISTM this argument is ignoring an optimization the backend\n> has understood for a long time.\n\nInteresting. I didn't know about that optimization. I can't check\nright now, but probably the COLLATE breaks that optimization.\n\n\n", "msg_date": "Wed, 10 Apr 2024 20:31:57 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql: Greatly speed up \"\\d tablename\" when not using regexes" }, { "msg_contents": "On Wed, 10 Apr 2024 at 20:21, Kirill Reshke <[email protected]> wrote:\n> Do we need to force Collaction here like in other branches?\n> if (PQserverVersion(conn) >= 120000)\n> appendPQExpBufferStr(buf, \" COLLATE pg_catalog.default\");\n\nAccording to the commit and codecomment that introduced the COLLATE,\nit was specifically added for correct regex matching (e.g. \\w). So I\ndon't think it's necessary, and I'm pretty sure adding it will cause\nthe index scan not to be used anymore.\n\n\n", "msg_date": "Wed, 10 Apr 2024 20:36:57 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql: Greatly speed up \"\\d tablename\" when not using regexes" }, { "msg_contents": "On Wed, 10 Apr 2024, 23:37 Jelte Fennema-Nio, <[email protected]> wrote:\n\n> On Wed, 10 Apr 2024 at 20:21, Kirill Reshke <[email protected]>\n> wrote:\n> > Do we need to force Collaction here like in other branches?\n> > if (PQserverVersion(conn) >= 120000)\n> > appendPQExpBufferStr(buf, \" COLLATE pg_catalog.default\");\n>\n> According to the commit and codecomment that introduced the COLLATE,\n> it was specifically added for correct regex matching (e.g. \\w). So I\n> don't think it's necessary, and I'm pretty sure adding it will cause\n> the index scan not to be used anymore.\n>\n\nOk, thanks for the clarification. If all of this is actually true, and\npatch is really does speedup, maybe we need to state this in the comments?\n\n>\n\nOn Wed, 10 Apr 2024, 23:37 Jelte Fennema-Nio, <[email protected]> wrote:On Wed, 10 Apr 2024 at 20:21, Kirill Reshke <[email protected]> wrote:\n> Do we need to force Collaction here like in other branches?\n> if (PQserverVersion(conn) >= 120000)\n>    appendPQExpBufferStr(buf, \" COLLATE pg_catalog.default\");\n\nAccording to the commit and codecomment that introduced the COLLATE,\nit was specifically added for correct regex matching (e.g. \\w). So I\ndon't think it's necessary, and I'm pretty sure adding it will cause\nthe index scan not to be used anymore.Ok, thanks for the clarification. 
If all of this is actually true, and patch is really does speedup, maybe we need to state this in the comments?", "msg_date": "Wed, 10 Apr 2024 23:39:48 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Greatly speed up \"\\d tablename\" when not using regexes" }, { "msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> On Wed, 10 Apr 2024 at 20:06, Tom Lane <[email protected]> wrote:\n>> Really? ISTM this argument is ignoring an optimization the backend\n>> has understood for a long time.\n\n> Interesting. I didn't know about that optimization. I can't check\n> right now, but probably the COLLATE breaks that optimization.\n\nNot for me.\n\n# explain select * from pg_class where relname ~ '^(foo)$' collate \"en_US\";\n QUERY PLAN \n---------------------------------------------------------------------------------------------\n Index Scan using pg_class_relname_nsp_index on pg_class (cost=0.27..8.29 rows=1 width=263)\n Index Cond: (relname = 'foo'::text)\n Filter: (relname ~ '^(foo)$'::text COLLATE \"en_US\")\n(3 rows)\n\nAlso, using -E:\n\n# \\d foo\n/******** QUERY *********/\nSELECT c.oid,\n n.nspname,\n c.relname\nFROM pg_catalog.pg_class c\n LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace\nWHERE c.relname OPERATOR(pg_catalog.~) '^(foo)$' COLLATE pg_catalog.default\n AND pg_catalog.pg_table_is_visible(c.oid)\nORDER BY 2, 3;\n/************************/\n\n# explain SELECT c.oid,\n n.nspname,\n c.relname\nFROM pg_catalog.pg_class c\n LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace\nWHERE c.relname OPERATOR(pg_catalog.~) '^(foo)$' COLLATE pg_catalog.default\n AND pg_catalog.pg_table_is_visible(c.oid)\nORDER BY 2, 3;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------\n Sort (cost=9.42..9.42 rows=1 width=132)\n Sort Key: n.nspname, c.relname\n -> Nested Loop Left Join (cost=0.27..9.41 rows=1 width=132)\n Join Filter: (n.oid = c.relnamespace)\n -> Index Scan using pg_class_relname_nsp_index on pg_class c (cost=0.27..8.32 rows=1 width=72)\n Index Cond: (relname = 'foo'::text)\n Filter: ((relname ~ '^(foo)$'::text) AND pg_table_is_visible(oid))\n -> Seq Scan on pg_namespace n (cost=0.00..1.04 rows=4 width=68)\n(8 rows)\n\n\nThere may be an argument for psql to do what you suggest,\nbut so far it seems like duplicative complication.\n\nIf there's a case you can demonstrate where \"\\d foo\" doesn't optimize\ninto an indexscan, we should look into exactly why that's happening,\nbecause I think the cause must be more subtle than this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 16:11:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Greatly speed up \"\\d tablename\" when not using regexes" }, { "msg_contents": "On Wed, 10 Apr 2024 at 22:11, Tom Lane <[email protected]> wrote:\n> There may be an argument for psql to do what you suggest,\n> but so far it seems like duplicative complication.\n>\n> If there's a case you can demonstrate where \"\\d foo\" doesn't optimize\n> into an indexscan, we should look into exactly why that's happening,\n> because I think the cause must be more subtle than this.\n\nHmm, okay so I took a closer look and you're completely right: It's\nquite a lot more subtle than I initially thought. The query from \"\\d\nfoo\" is fast as long as you don't have Citus installed. 
It turns out\nthat Citus breaks this regex index search optimization somehow by\nadding \"NOT relation_is_a_known_shard(c.oid)\" to the securityQuals of\nthe rangeTableEntry for pg_class in its planner hook. Citus does this\nto filter out the underlying shards of a table for every query on\npg_class. The reason is that these underlying shards cluttered the\noutput of \\d and PgAdmin etc. Users also tended to get confused by\nthem, sometimes badly enough to remove them (and thus requiring\nrestore from backup).\n\nWe have a GUC to turn this filtering off for advanced users:\nSET citus.show_shards_for_app_name_prefixes = '*';\n\nIf you set that the index is used and the query is fast again. Just\nlike what is happening for you. Not using the regex search also worked\nas a way to trigger an index scan.\n\nI'll think/research a bit tomorrow and try some stuff out to see if\nthis is fixable in Citus. That would definitely be preferable to me as\nit would solve this issue on all Postgres/psql versions that citus\nsupports.\n\nIf I cannot think of a way to address this in Citus, would it be\npossible to still consider to merge this patch (assuming comments\nexplaining that Citus is the reason)? Because this planner issue that\nCitus its behaviour introduces is fixed by the change I proposed in my\nPatch too (I'm not yet sure how exactly).\n\n\n", "msg_date": "Wed, 10 Apr 2024 23:36:00 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql: Greatly speed up \"\\d tablename\" when not using regexes" }, { "msg_contents": "Jelte Fennema-Nio <[email protected]> writes:\n> On Wed, 10 Apr 2024 at 22:11, Tom Lane <[email protected]> wrote:\n>> If there's a case you can demonstrate where \"\\d foo\" doesn't optimize\n>> into an indexscan, we should look into exactly why that's happening,\n>> because I think the cause must be more subtle than this.\n\n> Hmm, okay so I took a closer look and you're completely right: It's\n> quite a lot more subtle than I initially thought. The query from \"\\d\n> foo\" is fast as long as you don't have Citus installed. It turns out\n> that Citus breaks this regex index search optimization somehow by\n> adding \"NOT relation_is_a_known_shard(c.oid)\" to the securityQuals of\n> the rangeTableEntry for pg_class in its planner hook. Citus does this\n> to filter out the underlying shards of a table for every query on\n> pg_class.\n\nHuh. Okay, but then why does it work for a simple comparison?\n\nActually, I bet I know why: texteq is marked leakproof which lets\nit drop below a security qual, while regex matches aren't leakproof.\n\nIs it really necessary for Citus' filter to be a security qual rather\nthan a plain ol' filter condition?\n\nThe direction I'd be inclined to think about if this can't be fixed\ninside Citus is whether we can generate the derived indexqual\ncondition despite the regex being backed up behind a security qual.\nThat is, as long as the derived condition is leakproof, there's no\nreason not to let it go before the security qual. 
We're probably\nfailing to consider generating derived quals for anything that isn't\nqualified to become an indexqual, and this example shows that that's\nleaving money on the table.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 10 Apr 2024 17:51:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: psql: Greatly speed up \"\\d tablename\" when not using regexes" }, { "msg_contents": "On Wed, 10 Apr 2024 at 23:51, Tom Lane <[email protected]> wrote:\n> Is it really necessary for Citus' filter to be a security qual rather\n> than a plain ol' filter condition?\n\nNo, it's not. I think using security quals simply required the least\namount of code (and it worked just fine if you didn't have lots of\ntables). I created a PR for Citus to address this issue[1] by changing\nto a normal filter condition. Thanks a lot for pointing me in the\nright direction to fix this.\n\n> That is, as long as the derived condition is leakproof, there's no\n> reason not to let it go before the security qual. We're probably\n> failing to consider generating derived quals for anything that isn't\n> qualified to become an indexqual, and this example shows that that's\n> leaving money on the table.\n\nI think even though my immediate is fixed, I think this would be a\ngood improvement anyway.\n\n[1]: https://github.com/citusdata/citus/pull/7577\n\n\n", "msg_date": "Thu, 11 Apr 2024 12:30:58 +0200", "msg_from": "Jelte Fennema-Nio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: psql: Greatly speed up \"\\d tablename\" when not using regexes" } ]
[ { "msg_contents": "Hi,\n\nPer Coverity.\n\nThe function ReorderBufferTXNByXid,\ncan return NULL when the parameter *create* is false.\n\nIn the functions ReorderBufferSetBaseSnapshot\nand ReorderBufferXidHasBaseSnapshot,\nthe second call to ReorderBufferTXNByXid,\npass false to *create* argument.\n\nIn the function ReorderBufferSetBaseSnapshot,\nfixed passing true as argument to always return\na valid ReorderBufferTXN pointer.\n\nIn the function ReorderBufferXidHasBaseSnapshot,\nfixed by checking if the pointer is NULL.\n\nbest regards,\nRanier Vilela", "msg_date": "Wed, 10 Apr 2024 15:07:27 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Fix possible dereference null pointer\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "On 10/04/2024 21:07, Ranier Vilela wrote:\n> Hi,\n> \n> Per Coverity.\n> \n> The function ReorderBufferTXNByXid,\n> can return NULL when the parameter *create* is false.\n> \n> In the functions ReorderBufferSetBaseSnapshot\n> and ReorderBufferXidHasBaseSnapshot,\n> the second call to ReorderBufferTXNByXid,\n> pass false to *create* argument.\n> \n> In the function ReorderBufferSetBaseSnapshot,\n> fixed passing true as argument to always return\n> a valid ReorderBufferTXN pointer.\n> \n> In the function ReorderBufferXidHasBaseSnapshot,\n> fixed by checking if the pointer is NULL.\n\nIf it's a \"known subxid\", the top-level XID should already have its \nReorderBufferTXN entry, so ReorderBufferTXN() should never return NULL. \nIt's not surprising if Coverity doesn't understand that, but setting the \n'create' flag doesn't seem like the right fix. If we add \"Assert(txn != \nNULL)\", does that silence it?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 11 Apr 2024 00:28:43 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix possible dereference null pointer\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "Em qua., 10 de abr. de 2024 às 18:28, Heikki Linnakangas <[email protected]>\nescreveu:\n\n> On 10/04/2024 21:07, Ranier Vilela wrote:\n> > Hi,\n> >\n> > Per Coverity.\n> >\n> > The function ReorderBufferTXNByXid,\n> > can return NULL when the parameter *create* is false.\n> >\n> > In the functions ReorderBufferSetBaseSnapshot\n> > and ReorderBufferXidHasBaseSnapshot,\n> > the second call to ReorderBufferTXNByXid,\n> > pass false to *create* argument.\n> >\n> > In the function ReorderBufferSetBaseSnapshot,\n> > fixed passing true as argument to always return\n> > a valid ReorderBufferTXN pointer.\n> >\n> > In the function ReorderBufferXidHasBaseSnapshot,\n> > fixed by checking if the pointer is NULL.\n>\n> If it's a \"known subxid\", the top-level XID should already have its\n> ReorderBufferTXN entry, so ReorderBufferTXN() should never return NULL.\n>\nThere are several conditions to not return NULL,\nI think trusting never is insecure.\n\nIt's not surprising if Coverity doesn't understand that, but setting the\n> 'create' flag doesn't seem like the right fix.\n\nReorderBufferSetBaseSnapshot always expects that *txn* exists.\nIf a second call fails, the only solution is to create a new one, no?\n\n\n> If we add \"Assert(txn !=\n> NULL)\", does that silence it?\n>\nI think no.\nI always compile it as a release to send to Coverity.\n\nbest regards,\nRanier Vilela\n\nEm qua., 10 de abr. 
de 2024 às 18:28, Heikki Linnakangas <[email protected]> escreveu:On 10/04/2024 21:07, Ranier Vilela wrote:\n> Hi,\n> \n> Per Coverity.\n> \n> The function ReorderBufferTXNByXid,\n> can return NULL when the parameter *create* is false.\n> \n> In the functions ReorderBufferSetBaseSnapshot\n> and ReorderBufferXidHasBaseSnapshot,\n> the second call to ReorderBufferTXNByXid,\n> pass false to *create* argument.\n> \n> In the function ReorderBufferSetBaseSnapshot,\n> fixed passing true as argument to always return\n> a valid ReorderBufferTXN pointer.\n> \n> In the function ReorderBufferXidHasBaseSnapshot,\n> fixed by checking if the pointer is NULL.\n\nIf it's a \"known subxid\", the top-level XID should already have its \nReorderBufferTXN entry, so ReorderBufferTXN() should never return NULL. There are several conditions to not return NULL,I think trusting never is insecure.\nIt's not surprising if Coverity doesn't understand that, but setting the \n'create' flag doesn't seem like the right fix.ReorderBufferSetBaseSnapshot always expects that *txn* exists.If a second call fails, the only solution is to create a new one, no?  If we add \"Assert(txn != \nNULL)\", does that silence it?I think no.I always compile it as a release to send to Coverity.best regards,Ranier Vilela", "msg_date": "Thu, 11 Apr 2024 09:03:55 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix possible dereference null pointer\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "On 11/04/2024 15:03, Ranier Vilela wrote:\n> Em qua., 10 de abr. de 2024 às 18:28, Heikki Linnakangas \n> <[email protected] <mailto:[email protected]>> escreveu:\n> \n> On 10/04/2024 21:07, Ranier Vilela wrote:\n> > Hi,\n> >\n> > Per Coverity.\n> >\n> > The function ReorderBufferTXNByXid,\n> > can return NULL when the parameter *create* is false.\n> >\n> > In the functions ReorderBufferSetBaseSnapshot\n> > and ReorderBufferXidHasBaseSnapshot,\n> > the second call to ReorderBufferTXNByXid,\n> > pass false to *create* argument.\n> >\n> > In the function ReorderBufferSetBaseSnapshot,\n> > fixed passing true as argument to always return\n> > a valid ReorderBufferTXN pointer.\n> >\n> > In the function ReorderBufferXidHasBaseSnapshot,\n> > fixed by checking if the pointer is NULL.\n> \n> If it's a \"known subxid\", the top-level XID should already have its\n> ReorderBufferTXN entry, so ReorderBufferTXN() should never return NULL.\n> \n> There are several conditions to not return NULL,\n> I think trusting never is insecure.\n\nWell, you could make it an elog(ERROR, ..) instead. But the point is \nthat it should not happen, and if it does for some reason, that's very \nsuprpising and there is a bug somewhere. In that case, we should *not* \njust blindly create it and proceed as if everything was OK.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 11 Apr 2024 15:54:33 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix possible dereference null pointer\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "Em qui., 11 de abr. de 2024 às 09:54, Heikki Linnakangas <[email protected]>\nescreveu:\n\n> On 11/04/2024 15:03, Ranier Vilela wrote:\n> > Em qua., 10 de abr. 
de 2024 às 18:28, Heikki Linnakangas\n> > <[email protected] <mailto:[email protected]>> escreveu:\n> >\n> > On 10/04/2024 21:07, Ranier Vilela wrote:\n> > > Hi,\n> > >\n> > > Per Coverity.\n> > >\n> > > The function ReorderBufferTXNByXid,\n> > > can return NULL when the parameter *create* is false.\n> > >\n> > > In the functions ReorderBufferSetBaseSnapshot\n> > > and ReorderBufferXidHasBaseSnapshot,\n> > > the second call to ReorderBufferTXNByXid,\n> > > pass false to *create* argument.\n> > >\n> > > In the function ReorderBufferSetBaseSnapshot,\n> > > fixed passing true as argument to always return\n> > > a valid ReorderBufferTXN pointer.\n> > >\n> > > In the function ReorderBufferXidHasBaseSnapshot,\n> > > fixed by checking if the pointer is NULL.\n> >\n> > If it's a \"known subxid\", the top-level XID should already have its\n> > ReorderBufferTXN entry, so ReorderBufferTXN() should never return\n> NULL.\n> >\n> > There are several conditions to not return NULL,\n> > I think trusting never is insecure.\n>\n> Well, you could make it an elog(ERROR, ..) instead. But the point is\n> that it should not happen, and if it does for some reason, that's very\n> suprpising and there is a bug somewhere. In that case, we should *not*\n> just blindly create it and proceed as if everything was OK.\n>\nThanks for the clarification.\nI will then suggest improving robustness,\nbut avoiding hiding any possible errors that may occur.\n\nv1 patch attached.\n\nbest regards,\nRanier Vilela", "msg_date": "Thu, 11 Apr 2024 10:37:20 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix possible dereference null pointer\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "On 11/04/2024 16:37, Ranier Vilela wrote:\n> Em qui., 11 de abr. de 2024 às 09:54, Heikki Linnakangas \n> <[email protected] <mailto:[email protected]>> escreveu:\n> \n> On 11/04/2024 15:03, Ranier Vilela wrote:\n> > Em qua., 10 de abr. de 2024 às 18:28, Heikki Linnakangas\n> > <[email protected] <mailto:[email protected]> <mailto:[email protected]\n> <mailto:[email protected]>>> escreveu:\n> >\n> >     On 10/04/2024 21:07, Ranier Vilela wrote:\n> >      > Hi,\n> >      >\n> >      > Per Coverity.\n> >      >\n> >      > The function ReorderBufferTXNByXid,\n> >      > can return NULL when the parameter *create* is false.\n> >      >\n> >      > In the functions ReorderBufferSetBaseSnapshot\n> >      > and ReorderBufferXidHasBaseSnapshot,\n> >      > the second call to ReorderBufferTXNByXid,\n> >      > pass false to *create* argument.\n> >      >\n> >      > In the function ReorderBufferSetBaseSnapshot,\n> >      > fixed passing true as argument to always return\n> >      > a valid ReorderBufferTXN pointer.\n> >      >\n> >      > In the function ReorderBufferXidHasBaseSnapshot,\n> >      > fixed by checking if the pointer is NULL.\n> >\n> >     If it's a \"known subxid\", the top-level XID should already\n> have its\n> >     ReorderBufferTXN entry, so ReorderBufferTXN() should never\n> return NULL.\n> >\n> > There are several conditions to not return NULL,\n> > I think trusting never is insecure.\n> \n> Well, you could make it an elog(ERROR, ..) instead. But the point is\n> that it should not happen, and if it does for some reason, that's very\n> suprpising and there is a bug somewhere. 
In that case, we should *not*\n> just blindly create it and proceed as if everything was OK.\n> \n> Thanks for the clarification.\n> I will then suggest improving robustness,\n> but avoiding hiding any possible errors that may occur.\n\nI don't much like adding extra runtime checks for \"can't happen\" \nscenarios either. Assertions would be more clear, but in this case the \ncode would just segfault trying to dereference the NULL pointer, which \nis fine for a \"can't happen\" scenario.\n\nLooking closer, when we identify an XID as a subtransaction, we:\n- assign toplevel_xid,\n- set RBTXN_IS_SUBXACT, and\n- assign toptxn pointer.\n\nISTM the 'toplevel_xid' and RBTXN_IS_SUBXACT are redundant. \n'toplevel_xid' is only used in those two calls that do \nReorderBufferTXNByXid(rb, txn->toplevel_xid,...), and you could replace \nthose by following the 'toptxn' pointer directly. And RBTXN_IS_SUBXACT \nis redundant with (toptxn != NULL). So here's a patch to remove \n'toplevel_xid' and RBTXN_IS_SUBXACT altogether.\n\nAmit, you added 'toptxn' in commit c55040ccd017; does this look right to \nyou?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Sat, 13 Apr 2024 10:16:11 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix possible dereference null pointer\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "Em sáb., 13 de abr. de 2024 às 04:16, Heikki Linnakangas <[email protected]>\nescreveu:\n\n> On 11/04/2024 16:37, Ranier Vilela wrote:\n> > Em qui., 11 de abr. de 2024 às 09:54, Heikki Linnakangas\n> > <[email protected] <mailto:[email protected]>> escreveu:\n> >\n> > On 11/04/2024 15:03, Ranier Vilela wrote:\n> > > Em qua., 10 de abr. de 2024 às 18:28, Heikki Linnakangas\n> > > <[email protected] <mailto:[email protected]> <mailto:[email protected]\n> > <mailto:[email protected]>>> escreveu:\n> > >\n> > > On 10/04/2024 21:07, Ranier Vilela wrote:\n> > > > Hi,\n> > > >\n> > > > Per Coverity.\n> > > >\n> > > > The function ReorderBufferTXNByXid,\n> > > > can return NULL when the parameter *create* is false.\n> > > >\n> > > > In the functions ReorderBufferSetBaseSnapshot\n> > > > and ReorderBufferXidHasBaseSnapshot,\n> > > > the second call to ReorderBufferTXNByXid,\n> > > > pass false to *create* argument.\n> > > >\n> > > > In the function ReorderBufferSetBaseSnapshot,\n> > > > fixed passing true as argument to always return\n> > > > a valid ReorderBufferTXN pointer.\n> > > >\n> > > > In the function ReorderBufferXidHasBaseSnapshot,\n> > > > fixed by checking if the pointer is NULL.\n> > >\n> > > If it's a \"known subxid\", the top-level XID should already\n> > have its\n> > > ReorderBufferTXN entry, so ReorderBufferTXN() should never\n> > return NULL.\n> > >\n> > > There are several conditions to not return NULL,\n> > > I think trusting never is insecure.\n> >\n> > Well, you could make it an elog(ERROR, ..) instead. But the point is\n> > that it should not happen, and if it does for some reason, that's\n> very\n> > suprpising and there is a bug somewhere. In that case, we should\n> *not*\n> > just blindly create it and proceed as if everything was OK.\n> >\n> > Thanks for the clarification.\n> > I will then suggest improving robustness,\n> > but avoiding hiding any possible errors that may occur.\n>\n> I don't much like adding extra runtime checks for \"can't happen\"\n> scenarios either. 
Assertions would be more clear, but in this case the\n> code would just segfault trying to dereference the NULL pointer, which\n> is fine for a \"can't happen\" scenario.\n>\nThis sounds a little confusing to me.\nIs the project policy to *tolerate* dereferencing NULL pointers?\nIf this is the case, no problem, using Assert would serve the purpose of\nprotecting against these errors well.\n\n\n> Looking closer, when we identify an XID as a subtransaction, we:\n> - assign toplevel_xid,\n> - set RBTXN_IS_SUBXACT, and\n> - assign toptxn pointer.\n>\n> ISTM the 'toplevel_xid' and RBTXN_IS_SUBXACT are redundant.\n> 'toplevel_xid' is only used in those two calls that do\n> ReorderBufferTXNByXid(rb, txn->toplevel_xid,...), and you could replace\n> those by following the 'toptxn' pointer directly. And RBTXN_IS_SUBXACT\n> is redundant with (toptxn != NULL). So here's a patch to remove\n> 'toplevel_xid' and RBTXN_IS_SUBXACT altogether.\n>\n+ if (rbtxn_is_subtxn(txn))\n+ txn = rbtxn_get_toptxn(txn);\n\nrbtxn_get_toptxn already calls rbtxn_is_subtx,\nwhich adds a little unnecessary overhead.\nI made a v1 of your patch, modifying this, please ignore it if you don't\nagree.\n\nbest regards,\nRanier Vilela", "msg_date": "Sat, 13 Apr 2024 10:40:35 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix possible dereference null pointer\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "On Sat, Apr 13, 2024 at 10:40:35AM -0300, Ranier Vilela wrote:\n> This sounds a little confusing to me.\n> Is the project policy to *tolerate* dereferencing NULL pointers?\n> If this is the case, no problem, using Assert would serve the purpose of\n> protecting against these errors well.\n\nIn most cases, it does not matter to have an assertion for a code path\nthat's going to crash a few lines later. The result is the same: the\ncode will crash and inform about the failure.\n\n> rbtxn_get_toptxn already calls rbtxn_is_subtx,\n> which adds a little unnecessary overhead.\n> I made a v1 of your patch, modifying this, please ignore it if you don't\n> agree.\n\nc55040ccd017 has been added in v14~, so this is material for 18~ at\nthis stage. If you want to move on with these changes, I'd suggest to\nadd a CF entry.\n\nFWIW, I think that I agree with the point made upthread by Heikki\nabout the fact that these extra ReorderBufferTXNByXid() are redundant.\nIn these two code paths, the ReorderBufferTXN entry of top transaction\nID should exist, so removing toplevel_xid makes sense.\n\nIt may be worth exploring if more simplifications around\nReorderBufferTXNByXid() are possible, actually.\n--\nMichael", "msg_date": "Mon, 15 Apr 2024 09:29:05 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix possible dereference null pointer\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "On Sat, Apr 13, 2024 at 12:46 PM Heikki Linnakangas <[email protected]> wrote:\n>\n> I don't much like adding extra runtime checks for \"can't happen\"\n> scenarios either. 
Assertions would be more clear, but in this case the\n> code would just segfault trying to dereference the NULL pointer, which\n> is fine for a \"can't happen\" scenario.\n>\n\nIsn't the existing assertion (Assert(!create || txn != NULL);) in\nReorderBufferTXNByXid() sufficient to handle this case?\n\n> Looking closer, when we identify an XID as a subtransaction, we:\n> - assign toplevel_xid,\n> - set RBTXN_IS_SUBXACT, and\n> - assign toptxn pointer.\n>\n> ISTM the 'toplevel_xid' and RBTXN_IS_SUBXACT are redundant.\n> 'toplevel_xid' is only used in those two calls that do\n> ReorderBufferTXNByXid(rb, txn->toplevel_xid,...), and you could replace\n> those by following the 'toptxn' pointer directly. And RBTXN_IS_SUBXACT\n> is redundant with (toptxn != NULL). So here's a patch to remove\n> 'toplevel_xid' and RBTXN_IS_SUBXACT altogether.\n>\n\nGood idea. I don't see a problem with this idea.\n\n@@ -1135,8 +1133,6 @@ static void\n ReorderBufferTransferSnapToParent(ReorderBufferTXN *txn,\n ReorderBufferTXN *subtxn)\n {\n- Assert(subtxn->toplevel_xid == txn->xid);\n\nIs there a benefit in converting this assertion using toptxn?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 15 Apr 2024 12:00:48 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix possible dereference null pointer\n (src/backend/replication/logical/reorderbuffer.c)" }, { "msg_contents": "Em dom., 14 de abr. de 2024 às 21:29, Michael Paquier <[email protected]>\nescreveu:\n\n> On Sat, Apr 13, 2024 at 10:40:35AM -0300, Ranier Vilela wrote:\n> > This sounds a little confusing to me.\n> > Is the project policy to *tolerate* dereferencing NULL pointers?\n> > If this is the case, no problem, using Assert would serve the purpose of\n> > protecting against these errors well.\n>\n> In most cases, it does not matter to have an assertion for a code path\n> that's going to crash a few lines later. The result is the same: the\n> code will crash and inform about the failure.\n>\n> > rbtxn_get_toptxn already calls rbtxn_is_subtx,\n> > which adds a little unnecessary overhead.\n> > I made a v1 of your patch, modifying this, please ignore it if you don't\n> > agree.\n>\n> c55040ccd017 has been added in v14~, so this is material for 18~ at\n> this stage. If you want to move on with these changes, I'd suggest to\n> add a CF entry.\n>\nI think it's worth it, because it's a case of a possible bug, but very\nunlikely,\nand Heikki's suggestions improve the code.\n\nbest regards,\nRanier Vilela\n\nEm dom., 14 de abr. de 2024 às 21:29, Michael Paquier <[email protected]> escreveu:On Sat, Apr 13, 2024 at 10:40:35AM -0300, Ranier Vilela wrote:\n> This sounds a little confusing to me.\n> Is the project policy to *tolerate* dereferencing NULL pointers?\n> If this is the case, no problem, using Assert would serve the purpose of\n> protecting against these errors well.\n\nIn most cases, it does not matter to have an assertion for a code path\nthat's going to crash a few lines later.  The result is the same: the\ncode will crash and inform about the failure.\n\n> rbtxn_get_toptxn already calls rbtxn_is_subtx,\n> which adds a little unnecessary overhead.\n> I made a v1 of your patch, modifying this, please ignore it if you don't\n> agree.\n\nc55040ccd017 has been added in v14~, so this is material for 18~ at\nthis stage.  
If you want to move on with these changes, I'd suggest to\nadd a CF entry.I think it's worth it, because it's a case of a possible bug, but very unlikely, and Heikki's suggestions improve the code. best regards,Ranier Vilela", "msg_date": "Mon, 15 Apr 2024 08:43:34 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix possible dereference null pointer\n (src/backend/replication/logical/reorderbuffer.c)" } ]
[ { "msg_contents": "I frequently hear about scenarios where users with thousands upon thousands\nof tables realize that autovacuum is struggling to keep up. When they\ninevitably go to bump up autovacuum_max_workers, they discover that it\nrequires a server restart (i.e., downtime) to take effect, causing further\nfrustration. For this reason, I think $SUBJECT is a desirable improvement.\nI spent some time looking for past discussions about this, and I was\nsurprised to not find any, so I thought I'd give it a try.\n\nThe attached proof-of-concept patch demonstrates what I have in mind.\nInstead of trying to dynamically change the global process table, etc., I'm\nproposing that we introduce a new GUC that sets the effective maximum\nnumber of autovacuum workers that can be started at any time. This means\nthere would be two GUCs for the number of autovacuum workers: one for the\nnumber of slots reserved for autovacuum workers, and another that restricts\nthe number of those slots that can be used. The former would continue to\nrequire a restart to change its value, and users would typically want to\nset it relatively high. The latter could be changed at any time and would\nallow for raising or lowering the maximum number of active autovacuum\nworkers, up to the limit set by the other parameter.\n\nThe proof-of-concept patch keeps autovacuum_max_workers as the maximum\nnumber of slots to reserve for workers, but I think we should instead\nrename this parameter to something else and then reintroduce\nautovacuum_max_workers as the new parameter that can be adjusted without\nrestarting. That way, autovacuum_max_workers continues to work much the\nsame way as in previous versions.\n\nThere are a couple of weird cases with this approach. One is when the\nrestart-only limit is set lower than the PGC_SIGHUP limit. In that case, I\nthink we should just use the restart-only limit. The other is when there\nare already N active autovacuum workers and the PGC_SIGHUP parameter is\nchanged to something less than N. For that case, I think we should just\nblock starting additional workers until the number of workers drops below\nthe new parameter's value. I don't think we should kill existing workers,\nor anything else like that.\n\nTBH I've been sitting on this idea for a while now, only because I think it\nhas a slim chance of acceptance, but IMHO this is a simple change that\ncould help many users.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 10 Apr 2024 16:23:44 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "> I frequently hear about scenarios where users with thousands upon thousands\r\n> of tables realize that autovacuum is struggling to keep up. When they\r\n> inevitably go to bump up autovacuum_max_workers, they discover that it\r\n> requires a server restart (i.e., downtime) to take effect, causing further\r\n> frustration. For this reason, I think $SUBJECT is a desirable improvement.\r\n> I spent some time looking for past discussions about this, and I was\r\n> surprised to not find any, so I thought I'd give it a try.\r\n\r\nI did not review the patch in detail yet, but +1 to the idea. \r\nIt's not just thousands of tables that suffer from this.\r\nIf a user has a few large tables hogging the autovac workers, then other\r\ntables don't get the autovac cycles they require. 
Users are then forced\r\nto run manual vacuums, which adds complexity to their operations.\r\n\r\n> The attached proof-of-concept patch demonstrates what I have in mind.\r\n> Instead of trying to dynamically change the global process table, etc., I'm\r\n> proposing that we introduce a new GUC that sets the effective maximum\r\n> number of autovacuum workers that can be started at any time.\r\n\r\nmax_worker_processes defines a pool of max # of background workers allowed.\r\nparallel workers and extensions that spin up background workers all utilize from \r\nthis pool. \r\n\r\nShould autovacuum_max_workers be able to utilize from max_worker_processes also?\r\n\r\nThis will allow autovacuum_max_workers to be dynamic while the user only has\r\nto deal with an already existing GUC. We may want to increase the default value\r\nfor max_worker_processes as part of this.\r\n\r\n\r\nRegards,\r\n\r\nSami\r\nAmazon Web Services (AWS)\r\n\r\n", "msg_date": "Thu, 11 Apr 2024 14:24:18 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Thu, Apr 11, 2024 at 02:24:18PM +0000, Imseih (AWS), Sami wrote:\n> max_worker_processes defines a pool of max # of background workers allowed.\n> parallel workers and extensions that spin up background workers all utilize from \n> this pool. \n> \n> Should autovacuum_max_workers be able to utilize from max_worker_processes also?\n> \n> This will allow autovacuum_max_workers to be dynamic while the user only has\n> to deal with an already existing GUC. We may want to increase the default value\n> for max_worker_processes as part of this.\n\nMy concern with this approach is that other background workers could use up\nall the slots and prevent autovacuum workers from starting, unless of\ncourse we reserve autovacuum_max_workers slots for _only_ autovacuum\nworkers. I'm not sure if we want to get these parameters tangled up like\nthis, though...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 11 Apr 2024 09:42:40 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Thu, Apr 11, 2024 at 09:42:40AM -0500, Nathan Bossart wrote:\n> On Thu, Apr 11, 2024 at 02:24:18PM +0000, Imseih (AWS), Sami wrote:\n>> max_worker_processes defines a pool of max # of background workers allowed.\n>> parallel workers and extensions that spin up background workers all utilize from \n>> this pool. \n>> \n>> Should autovacuum_max_workers be able to utilize from max_worker_processes also?\n>> \n>> This will allow autovacuum_max_workers to be dynamic while the user only has\n>> to deal with an already existing GUC. We may want to increase the default value\n>> for max_worker_processes as part of this.\n> \n> My concern with this approach is that other background workers could use up\n> all the slots and prevent autovacuum workers from starting, unless of\n> course we reserve autovacuum_max_workers slots for _only_ autovacuum\n> workers. I'm not sure if we want to get these parameters tangled up like\n> this, though...\n\nI see that the logical replication launcher process uses this pool, but we\ntake special care to make sure it gets a slot:\n\n\t/*\n\t * Register the apply launcher. 
It's probably a good idea to call this\n\t * before any modules had a chance to take the background worker slots.\n\t */\n\tApplyLauncherRegister();\n\nI'm not sure there's another way to effectively reserve slots that would\nwork for the autovacuum workers (which need to restart to connect to\ndifferent databases), so that would need to be invented. We'd probably\nalso want to fail startup if autovacuum_max_workers < max_worker_processes,\nwhich seems like it has the potential to cause problems when folks first\nupgrade to v18.\n\nFurthermore, we might have to convert autovacuum workers to background\nworker processes for this to work. I've admittedly wondered about whether\nwe should do that eventually, anyway, but it'd expand the scope of this\nwork quite a bit.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 11 Apr 2024 09:58:34 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "> My concern with this approach is that other background workers could use up\r\n> all the slots and prevent autovacuum workers from starting\r\n\r\nThat's a good point, the current settings do not guarantee that you\r\nget a worker for the purpose if none are available, \r\ni.e. max_parallel_workers_per_gather, you may have 2 workers planned \r\nand 0 launched. \r\n\r\n> unless of\r\n> course we reserve autovacuum_max_workers slots for _only_ autovacuum\r\n> workers. I'm not sure if we want to get these parameters tangled up like\r\n> this, though...\r\n\r\nThis will be confusing to describe and we will be reserving autovac workers\r\nimplicitly, rather than explicitly with a new GUC.\r\n\r\nRegards,\r\n\r\nSami \r\n\r\n\r\n\r\n", "msg_date": "Thu, 11 Apr 2024 15:37:23 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Thu, Apr 11, 2024 at 03:37:23PM +0000, Imseih (AWS), Sami wrote:\n>> My concern with this approach is that other background workers could use up\n>> all the slots and prevent autovacuum workers from starting\n> \n> That's a good point, the current settings do not guarantee that you\n> get a worker for the purpose if none are available, \n> i.e. max_parallel_workers_per_gather, you may have 2 workers planned \n> and 0 launched. \n> \n>> unless of\n>> course we reserve autovacuum_max_workers slots for _only_ autovacuum\n>> workers. I'm not sure if we want to get these parameters tangled up like\n>> this, though...\n> \n> This will be confusing to describe and we will be reserving autovac workers\n> implicitly, rather than explicitly with a new GUC.\n\nYeah, that's probably a good reason to give autovacuum its own worker pool.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 11 Apr 2024 14:24:23 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "I spent sometime reviewing/testing the POC. It is relatively simple with a lot\r\nof obvious value. \r\n\r\nI tested with 16 tables that constantly reach the autovac threashold and the\r\npatch did the right thing. 
I observed concurrent autovacuum workers matching\r\nthe setting as I was adjusting it dynamically.\r\n\r\nAs you mention above, If there are more autovacs in progress and a new lower setting \r\nis applied, we should not take any special action on those autovacuums, and eventually \r\nthe max number of autovacuum workers will match the setting.\r\n\r\nI also tested by allowing user connections to reach max_connections, and observed the \r\nexpected number of autovacuums spinning up and correctly adjusted.\r\n\r\nHaving autovacuum tests ( unless I missed something ) like the above is a good \r\ngeneral improvement, but it should not be tied to this. \r\n\r\nA few comments on the POC patch:\r\n\r\n1/ We should emit a log when autovacuum_workers is set higher than the max.\r\n\r\n2/ should the name of the restart limit be \"reserved_autovacuum_workers\"?\r\n\r\nRegards,\r\n\r\nSami Imseih\r\nAWS (Amazon Web Services)\r\n\r\n\r\n\r\n", "msg_date": "Fri, 12 Apr 2024 17:27:40 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Fri, Apr 12, 2024 at 05:27:40PM +0000, Imseih (AWS), Sami wrote:\n> A few comments on the POC patch:\n\nThanks for reviewing.\n\n> 1/ We should emit a log when autovacuum_workers is set higher than the max.\n\nHm. Maybe the autovacuum launcher could do that.\n\n> 2/ should the name of the restart limit be \"reserved_autovacuum_workers\"?\n\nThat's kind-of what I had in mind, although I think we might want to avoid\nthe word \"reserved\" because it sounds a bit like reserved_connections and\nsuperuser_reserved_connections. \"autovacuum_max_slots\" or\n\"autovacuum_max_worker_slots\" might be worth considering, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 12 Apr 2024 13:40:00 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": ">> 1/ We should emit a log when autovacuum_workers is set higher than the max.\r\n\r\n\r\n>> Hm. Maybe the autovacuum launcher could do that.\r\n\r\nWould it be better to use a GUC check_hook that compares the \r\nnew value with the max allowed values and emits a WARNING ?\r\n\r\nautovacuum_max_workers already has a check_autovacuum_max_workers\r\ncheck_hook, which can be repurposed for this.\r\n\r\nIn the POC patch, this check_hook is kept as-is, which will no longer make sense.\r\n\r\n>> 2/ should the name of the restart limit be \"reserved_autovacuum_workers\"?\r\n\r\n\r\n>> That's kind-of what I had in mind, although I think we might want to avoid\r\n>> the word \"reserved\" because it sounds a bit like reserved_connections \r\n>> and superuser_reserved_connections\r\n\r\nYes, I agree. This can be confusing.\r\n\r\n>> \"autovacuum_max_slots\" or\r\n>> \"autovacuum_max_worker_slots\" might be worth considering, too.\r\n\r\n\"autovacuum_max_worker_slots\" is probably the best option because\r\nwe should have \"worker\" in the name of the GUC.\r\n\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n", "msg_date": "Fri, 12 Apr 2024 22:17:44 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Fri, Apr 12, 2024 at 10:17:44PM +0000, Imseih (AWS), Sami wrote:\n>>> Hm. 
Maybe the autovacuum launcher could do that.\n> \n> Would it be better to use a GUC check_hook that compares the \n> new value with the max allowed values and emits a WARNING ?\n> \n> autovacuum_max_workers already has a check_autovacuum_max_workers\n> check_hook, which can be repurposed for this.\n> \n> In the POC patch, this check_hook is kept as-is, which will no longer make sense.\n\nIIRC using GUC hooks to handle dependencies like this is generally frowned\nupon because it tends to not work very well [0]. We could probably get it\nto work for this particular case, but IMHO we should still try to avoid\nthis approach. I didn't find any similar warnings for other GUCs like\nmax_parallel_workers_per_gather, so it might not be crucial to emit a\nWARNING here.\n\n[0] https://postgr.es/m/27574.1581015893%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 13 Apr 2024 14:44:50 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "> IIRC using GUC hooks to handle dependencies like this is generally frowned\r\n> upon because it tends to not work very well [0]. We could probably get it\r\n> to work for this particular case, but IMHO we should still try to avoid\r\n> this approach. \r\n\r\nThanks for pointing this out. I agree, this could lead to false logs being\r\nemitted.\r\n\r\n> so it might not be crucial to emit a\r\n> WARNING here.\r\n\r\nAs mentioned earlier in the thread, we can let the autovacuum launcher emit the \r\nlog, but it will need to be careful not flood the logs when this condition exists ( i.e.\r\nlog only the first time the condition is detected or log every once in a while )\r\n\r\nThe additional complexity is not worth it.\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n\r\n\r\n\r\n", "msg_date": "Sun, 14 Apr 2024 01:56:09 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "Here is a first attempt at a proper patch set based on the discussion thus\nfar. I've split it up into several small patches for ease of review, which\nis probably a bit excessive. If this ever makes it to commit, they could\nlikely be combined.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 14 Apr 2024 09:40:58 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Wed, Apr 10, 2024 at 04:23:44PM -0500, Nathan Bossart wrote:\n> The attached proof-of-concept patch demonstrates what I have in mind.\n> Instead of trying to dynamically change the global process table, etc., I'm\n> proposing that we introduce a new GUC that sets the effective maximum\n> number of autovacuum workers that can be started at any time. This means\n> there would be two GUCs for the number of autovacuum workers: one for the\n> number of slots reserved for autovacuum workers, and another that restricts\n> the number of those slots that can be used. The former would continue to\n> require a restart to change its value, and users would typically want to\n> set it relatively high. 
The latter could be changed at any time and would\n> allow for raising or lowering the maximum number of active autovacuum\n> workers, up to the limit set by the other parameter.\n> \n> The proof-of-concept patch keeps autovacuum_max_workers as the maximum\n> number of slots to reserve for workers, but I think we should instead\n> rename this parameter to something else and then reintroduce\n> autovacuum_max_workers as the new parameter that can be adjusted without\n> restarting. That way, autovacuum_max_workers continues to work much the\n> same way as in previous versions.\n\nWhen I thought about this, I considered proposing to add a new GUC for\n\"autovacuum_policy_workers\".\n\nautovacuum_max_workers would be the same as before, requiring a restart\nto change. The policy GUC would be the soft limit, changable at runtime\nup to the hard limit of autovacuum_max_workers (or maybe any policy\nvalue exceeding autovacuum_max_workers would be ignored).\n\nWe'd probably change autovacuum_max_workers to default to a higher value\n(8, or 32 as in your patch), and have autovacuum_max_workers default to\n3, for consistency with historic behavior. Maybe\nautovacuum_policy_workers=-1 would mean to use all workers.\n\nThere's the existing idea to change autovacuum thresholds during the\nbusy period of the day vs. off hours. This would allow something\nsimilar with nworkers rather than thresholds: if the goal were to reduce\nthe resource use of vacuum, the admin could set max_workers=8, with\npolicy_workers=2 during the busy period.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 15 Apr 2024 08:33:33 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Mon, Apr 15, 2024 at 08:33:33AM -0500, Justin Pryzby wrote:\n> On Wed, Apr 10, 2024 at 04:23:44PM -0500, Nathan Bossart wrote:\n>> The proof-of-concept patch keeps autovacuum_max_workers as the maximum\n>> number of slots to reserve for workers, but I think we should instead\n>> rename this parameter to something else and then reintroduce\n>> autovacuum_max_workers as the new parameter that can be adjusted without\n>> restarting. That way, autovacuum_max_workers continues to work much the\n>> same way as in previous versions.\n> \n> When I thought about this, I considered proposing to add a new GUC for\n> \"autovacuum_policy_workers\".\n> \n> autovacuum_max_workers would be the same as before, requiring a restart\n> to change. The policy GUC would be the soft limit, changable at runtime\n> up to the hard limit of autovacuum_max_workers (or maybe any policy\n> value exceeding autovacuum_max_workers would be ignored).\n> \n> We'd probably change autovacuum_max_workers to default to a higher value\n> (8, or 32 as in your patch), and have autovacuum_max_workers default to\n> 3, for consistency with historic behavior. Maybe\n> autovacuum_policy_workers=-1 would mean to use all workers.\n\nThis sounds like roughly the same idea, although it is backwards from what\nI'm proposing in the v1 patch set. My thinking is that by making a new\nrestart-only GUC that would by default be set higher than the vast majority\nof systems should ever need, we could simplify migrating to these\nparameters. The autovacuum_max_workers parameter would effectively retain\nit's original meaning, and existing settings would continue to work\nnormally on v18, but users could now adjust it without restarting. 
If we\ndid it the other way, users would need to bump up autovacuum_max_workers\nand restart prior to being able to raise autovacuum_policy_workers beyond\nwhat they previously had set for autovacuum_max_workers. That being said,\nI'm open to doing it this way if folks prefer this approach, as I think it\nis still an improvement.\n\n> There's the existing idea to change autovacuum thresholds during the\n> busy period of the day vs. off hours. This would allow something\n> similar with nworkers rather than thresholds: if the goal were to reduce\n> the resource use of vacuum, the admin could set max_workers=8, with\n> policy_workers=2 during the busy period.\n\nPrecisely.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 15 Apr 2024 11:28:33 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Mon, Apr 15, 2024 at 11:28:33AM -0500, Nathan Bossart wrote:\n> On Mon, Apr 15, 2024 at 08:33:33AM -0500, Justin Pryzby wrote:\n>> On Wed, Apr 10, 2024 at 04:23:44PM -0500, Nathan Bossart wrote:\n>>> The proof-of-concept patch keeps autovacuum_max_workers as the maximum\n>>> number of slots to reserve for workers, but I think we should instead\n>>> rename this parameter to something else and then reintroduce\n>>> autovacuum_max_workers as the new parameter that can be adjusted without\n>>> restarting. That way, autovacuum_max_workers continues to work much the\n>>> same way as in previous versions.\n>> \n>> When I thought about this, I considered proposing to add a new GUC for\n>> \"autovacuum_policy_workers\".\n>> \n>> autovacuum_max_workers would be the same as before, requiring a restart\n>> to change. The policy GUC would be the soft limit, changable at runtime\n>> up to the hard limit of autovacuum_max_workers (or maybe any policy\n>> value exceeding autovacuum_max_workers would be ignored).\n>> \n>> We'd probably change autovacuum_max_workers to default to a higher value\n>> (8, or 32 as in your patch), and have autovacuum_max_workers default to\n>> 3, for consistency with historic behavior. Maybe\n>> autovacuum_policy_workers=-1 would mean to use all workers.\n> \n> This sounds like roughly the same idea, although it is backwards from what\n> I'm proposing in the v1 patch set. My thinking is that by making a new\n> restart-only GUC that would by default be set higher than the vast majority\n> of systems should ever need, we could simplify migrating to these\n> parameters. The autovacuum_max_workers parameter would effectively retain\n> it's original meaning, and existing settings would continue to work\n> normally on v18, but users could now adjust it without restarting. If we\n> did it the other way, users would need to bump up autovacuum_max_workers\n> and restart prior to being able to raise autovacuum_policy_workers beyond\n> what they previously had set for autovacuum_max_workers. That being said,\n> I'm open to doing it this way if folks prefer this approach, as I think it\n> is still an improvement.\n\nAnother option could be to just remove the restart-only GUC and hard-code\nthe upper limit of autovacuum_max_workers to 64 or 128 or something. 
While\nthat would simplify matters, I suspect it would be hard to choose an\nappropriate limit that won't quickly become outdated.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 15 Apr 2024 11:37:49 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "> Another option could be to just remove the restart-only GUC and hard-code\r\n> the upper limit of autovacuum_max_workers to 64 or 128 or something. While\r\n> that would simplify matters, I suspect it would be hard to choose an\r\n> appropriate limit that won't quickly become outdated.\r\n\r\nHardcoded values are usually hard to deal with because they are hidden either\r\nIn code or in docs.\r\n\r\n> When I thought about this, I considered proposing to add a new GUC for\r\n> \"autovacuum_policy_workers\".\r\n\r\n> autovacuum_max_workers would be the same as before, requiring a restart\r\n> to change. The policy GUC would be the soft limit, changable at runtime\r\n\r\nI think autovacuum_max_workers should still be the GUC that controls\r\nthe number of concurrent autovacuums. This parameter is already well \r\nestablished and changing the meaning now will be confusing. \r\n\r\nI suspect most users will be glad it's now dynamic, but will probably \r\nbe annoyed if it's no longer doing what it's supposed to.\r\n\r\nRegards,\r\n\r\nSami \r\n\r\n", "msg_date": "Mon, 15 Apr 2024 17:41:04 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "Agree +1,From a dba perspective, I would prefer that this parameter can be\ndynamically modified, rather than adding a new parameter,What is more\ndifficult is how to smoothly reach the target value when the setting is\nconsidered to be too large and needs to be lowered.\n\n\n\nRegards\n\nOn Tue, 16 Apr 2024 at 01:41, Imseih (AWS), Sami <[email protected]> wrote:\n\n> > Another option could be to just remove the restart-only GUC and hard-code\n> > the upper limit of autovacuum_max_workers to 64 or 128 or something.\n> While\n> > that would simplify matters, I suspect it would be hard to choose an\n> > appropriate limit that won't quickly become outdated.\n>\n> Hardcoded values are usually hard to deal with because they are hidden\n> either\n> In code or in docs.\n>\n> > When I thought about this, I considered proposing to add a new GUC for\n> > \"autovacuum_policy_workers\".\n>\n> > autovacuum_max_workers would be the same as before, requiring a restart\n> > to change. The policy GUC would be the soft limit, changable at runtime\n>\n> I think autovacuum_max_workers should still be the GUC that controls\n> the number of concurrent autovacuums. This parameter is already well\n> established and changing the meaning now will be confusing.\n>\n> I suspect most users will be glad it's now dynamic, but will probably\n> be annoyed if it's no longer doing what it's supposed to.\n>\n> Regards,\n>\n> Sami\n>\n>\n\nAgree +1,From a dba perspective, I would prefer that this parameter can be dynamically modified, rather than adding a new parameter,What is more difficult is how to smoothly reach the target value when the setting is considered to be too large and needs to be lowered.    
Regards On Tue, 16 Apr 2024 at 01:41, Imseih (AWS), Sami <[email protected]> wrote:> Another option could be to just remove the restart-only GUC and hard-code\n> the upper limit of autovacuum_max_workers to 64 or 128 or something. While\n> that would simplify matters, I suspect it would be hard to choose an\n> appropriate limit that won't quickly become outdated.\n\nHardcoded values are usually hard to deal with because they are hidden either\nIn code or in docs.\n\n> When I thought about this, I considered proposing to add a new GUC for\n> \"autovacuum_policy_workers\".\n\n> autovacuum_max_workers would be the same as before, requiring a restart\n> to change.  The policy GUC would be the soft limit, changable at runtime\n\nI think autovacuum_max_workers should still be the GUC that controls\nthe number of concurrent autovacuums. This parameter is already well \nestablished and changing the meaning now will be confusing. \n\nI suspect most users will be glad it's now dynamic, but will probably \nbe annoyed if it's no longer doing what it's supposed to.\n\nRegards,\n\nSami", "msg_date": "Wed, 17 Apr 2024 14:52:16 +0800", "msg_from": "wenhui qiu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "> Here is a first attempt at a proper patch set based on the discussion thus\r\n> far. I've split it up into several small patches for ease of review, which\r\n> is probably a bit excessive. If this ever makes it to commit, they could\r\n> likely be combined.\r\n\r\nI looked at the patch set. With the help of DEBUG2 output, I tested to ensure\r\nthat the the autovacuum_cost_limit balance adjusts correctly when the \r\nautovacuum_max_workers value increases/decreases. I did not think the \r\npatch will break this behavior, but it's important to verify this.\r\n\r\nSome comments on the patch:\r\n\r\n1. A nit. There should be a tab here.\r\n\r\n- dlist_head av_freeWorkers;\r\n+ dclist_head av_freeWorkers;\r\n\r\n2. autovacuum_max_worker_slots documentation:\r\n\r\n+ <para>\r\n+ Note that the value of <xref linkend=\"guc-autovacuum-max-workers\"/> is\r\n+ silently capped to this value.\r\n+ </para>\r\n\r\nThis comment looks redundant in the docs, since the entry \r\nfor autovacuum_max_workers that follows mentions the\r\nsame.\r\n\r\n\r\n3. The docs for autovacuum_max_workers should mention that when\r\nthe value changes, consider adjusting the autovacuum_cost_limit/cost_delay. \r\n\r\nThis is not something new. Even in the current state, users should think about \r\nthese settings. However, it seems even important if this value is to be \r\ndynamically adjusted.\r\n\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Thu, 18 Apr 2024 05:05:03 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Thu, Apr 18, 2024 at 05:05:03AM +0000, Imseih (AWS), Sami wrote:\n> I looked at the patch set. With the help of DEBUG2 output, I tested to ensure\n> that the the autovacuum_cost_limit balance adjusts correctly when the \n> autovacuum_max_workers value increases/decreases. I did not think the \n> patch will break this behavior, but it's important to verify this.\n\nGreat.\n\n> 1. A nit. There should be a tab here.\n> \n> - dlist_head av_freeWorkers;\n> + dclist_head av_freeWorkers;\n\nI dare not argue with pgindent.\n\n> 2. 
autovacuum_max_worker_slots documentation:\n> \n> + <para>\n> + Note that the value of <xref linkend=\"guc-autovacuum-max-workers\"/> is\n> + silently capped to this value.\n> + </para>\n> \n> This comment looks redundant in the docs, since the entry \n> for autovacuum_max_workers that follows mentions the\n> same.\n\nRemoved in v2. I also noticed that I forgot to update the part about when\nautovacuum_max_workers can be changed. *facepalm*\n\n> 3. The docs for autovacuum_max_workers should mention that when\n> the value changes, consider adjusting the autovacuum_cost_limit/cost_delay. \n> \n> This is not something new. Even in the current state, users should think about \n> these settings. However, it seems even important if this value is to be \n> dynamically adjusted.\n\nI don't necessarily disagree that it might be worth mentioning these\nparameters, but I would argue that this should be proposed in a separate\nthread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 19 Apr 2024 10:43:22 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Fri, Apr 19, 2024 at 11:43 AM Nathan Bossart\n<[email protected]> wrote:\n> Removed in v2. I also noticed that I forgot to update the part about when\n> autovacuum_max_workers can be changed. *facepalm*\n\nI think this could help a bunch of users, but I'd still like to\ncomplain, not so much with the desire to kill this patch as with the\ndesire to broaden the conversation.\n\nPart of the underlying problem here is that, AFAIK, neither PostgreSQL\nas a piece of software nor we as human beings who operate PostgreSQL\ndatabases have much understanding of how autovacuum_max_workers should\nbe set. It's relatively easy to hose yourself by raising\nautovacuum_max_workers to try to make things go faster, but produce\nthe exact opposite effect due to how the cost balancing stuff works.\nBut, even if you have the correct use case for autovacuum_max_workers,\nsomething like a few large tables that take a long time to vacuum plus\na bunch of smaller ones that can't get starved just because the big\ntables are in the midst of being processed, you might well ask\nyourself why it's your job to figure out the correct number of\nworkers.\n\nNow, before this patch, there is a fairly good reason for that, which\nis that we need to reserve shared memory resources for each autovacuum\nworker that might potentially run, and the system can't know how much\nshared memory you'd like to reserve for that purpose. But if that were\nthe only problem, then this patch would probably just be proposing to\ncrank up the default value of that parameter rather than introducing a\nsecond one. I bet Nathan isn't proposing that because his intuition is\nthat it will work out badly, and I think he's right. I bet that\ncranking up the number of allowed workers will often result in running\nmore workers than we really should. One possible negative consequence\nis that we'll end up with multiple processes fighting over the disk in\na situation where they should just take turns. I suspect there are\nalso ways that we can be harmed - in broadly similar fashion - by cost\nbalancing.\n\nSo I feel like what this proposal reveals is that we know that our\nalgorithm for ramping up the number of running workers doesn't really\nwork. 
And maybe that's just a consequence of the general problem that\nwe have no global information about how much vacuuming work there is\nto be done at any given time, and therefore we cannot take any kind of\nsensible guess about whether 1 more worker will help or hurt. Or,\nmaybe there's some way to do better than what we do today without a\nbig rewrite. I'm not sure. I don't think this patch should be burdened\nwith solving the general problem here. But I do think the general\nproblem is worth some discussion.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 19 Apr 2024 14:42:13 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Fri, Apr 19, 2024 at 02:42:13PM -0400, Robert Haas wrote:\n> I think this could help a bunch of users, but I'd still like to\n> complain, not so much with the desire to kill this patch as with the\n> desire to broaden the conversation.\n\nI think I subconsciously hoped this would spark a bigger discussion...\n\n> Now, before this patch, there is a fairly good reason for that, which\n> is that we need to reserve shared memory resources for each autovacuum\n> worker that might potentially run, and the system can't know how much\n> shared memory you'd like to reserve for that purpose. But if that were\n> the only problem, then this patch would probably just be proposing to\n> crank up the default value of that parameter rather than introducing a\n> second one. I bet Nathan isn't proposing that because his intuition is\n> that it will work out badly, and I think he's right. I bet that\n> cranking up the number of allowed workers will often result in running\n> more workers than we really should. One possible negative consequence\n> is that we'll end up with multiple processes fighting over the disk in\n> a situation where they should just take turns. I suspect there are\n> also ways that we can be harmed - in broadly similar fashion - by cost\n> balancing.\n\nEven if we were content to bump up the default value of\nautovacuum_max_workers and tell folks to just mess with the cost settings,\nthere are still probably many cases where bumping up the number of workers\nfurther would be necessary. If you have a zillion tables, turning\ncost-based vacuuming off completely may be insufficient to keep up, at\nwhich point your options become limited. It can be difficult to tell\nwhether you might end up in this situation over time as your workload\nevolves. In any case, it's not clear to me that bumping up the default\nvalue of autovacuum_max_workers would do more good than harm. I get the\nidea that the default of 3 is sufficient for a lot of clusters, so there'd\nreally be little upside to changing it AFAICT. (I guess this proves your\npoint about my intuition.)\n\n> So I feel like what this proposal reveals is that we know that our\n> algorithm for ramping up the number of running workers doesn't really\n> work. And maybe that's just a consequence of the general problem that\n> we have no global information about how much vacuuming work there is\n> to be done at any given time, and therefore we cannot take any kind of\n> sensible guess about whether 1 more worker will help or hurt. Or,\n> maybe there's some way to do better than what we do today without a\n> big rewrite. I'm not sure. I don't think this patch should be burdened\n> with solving the general problem here. 
But I do think the general\n> problem is worth some discussion.\n\nI certainly don't want to hold up $SUBJECT for a larger rewrite of\nautovacuum scheduling, but I also don't want to shy away from a larger\nrewrite if it's an idea whose time has come. I'm looking forward to\nhearing your ideas in your pgconf.dev talk.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 19 Apr 2024 15:29:31 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "> Part of the underlying problem here is that, AFAIK, neither PostgreSQL\r\n> as a piece of software nor we as human beings who operate PostgreSQL\r\n> databases have much understanding of how autovacuum_max_workers should\r\n> be set. It's relatively easy to hose yourself by raising\r\n> autovacuum_max_workers to try to make things go faster, but produce\r\n> the exact opposite effect due to how the cost balancing stuff works.\r\n\r\nYeah, this patch will not fix this problem. Anyone who raises av_max_workers\r\nshould think about adjusting the av_cost_delay. This was discussed up the\r\nthread [4] and even without this patch, I think it's necessary to add more\r\ndocumentation on the relationship between workers and cost.\r\n\r\n\r\n> So I feel like what this proposal reveals is that we know that our\r\n> algorithm for ramping up the number of running workers doesn't really\r\n> work. And maybe that's just a consequence of the general problem that\r\n> we have no global information about how much vacuuming work there is\r\n> to be done at any given time, and therefore we cannot take any kind of\r\n> sensible guess about whether 1 more worker will help or hurt. Or,\r\n> maybe there's some way to do better than what we do today without a\r\n> big rewrite. I'm not sure. I don't think this patch should be burdened\r\n> with solving the general problem here. But I do think the general\r\n> problem is worth some discussion.\r\n\r\nThis patch is only solving the operational problem of adjusting \r\nautovacuum_max_workers, and it does so without introducing complexity.\r\n\r\nA proposal that will alleviate the users from the burden of having to think about\r\nautovacuum_max_workers, cost_delay and cost_limit settings will be great.\r\nThis patch may be the basis for such dynamic \"auto-tuning\" of autovacuum workers.\r\n\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n[4] https://www.postgresql.org/message-id/20240419154322.GA3988554%40nathanxps13\r\n\r\n", "msg_date": "Sun, 21 Apr 2024 23:26:59 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Fri, Apr 19, 2024 at 4:29 PM Nathan Bossart <[email protected]> wrote:\n> I certainly don't want to hold up $SUBJECT for a larger rewrite of\n> autovacuum scheduling, but I also don't want to shy away from a larger\n> rewrite if it's an idea whose time has come. I'm looking forward to\n> hearing your ideas in your pgconf.dev talk.\n\nYeah, I suppose I was hoping you were going to tell me the all the\nanswers and thus make the talk a lot easier to write, but I guess life\nisn't that simple. 
:-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Apr 2024 10:17:19 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Mon, Apr 15, 2024 at 05:41:04PM +0000, Imseih (AWS), Sami wrote:\n>> Another option could be to just remove the restart-only GUC and hard-code\n>> the upper limit of autovacuum_max_workers to 64 or 128 or something. While\n>> that would simplify matters, I suspect it would be hard to choose an\n>> appropriate limit that won't quickly become outdated.\n> \n> Hardcoded values are usually hard to deal with because they are hidden either\n> In code or in docs.\n\nThat's true, but using a hard-coded limit means we no longer need to add a\nnew GUC. Always allocating, say, 256 slots might require a few additional\nkilobytes of shared memory, most of which will go unused, but that seems\nunlikely to be a problem for the systems that will run Postgres v18.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 2 May 2024 20:04:15 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "> That's true, but using a hard-coded limit means we no longer need to add a\r\n> new GUC. Always allocating, say, 256 slots might require a few additional\r\n> kilobytes of shared memory, most of which will go unused, but that seems\r\n> unlikely to be a problem for the systems that will run Postgres v18.\r\n\r\nI agree with this.\r\n\r\n\r\nRegards,\r\n\r\nSami\r\n\r\n", "msg_date": "Fri, 3 May 2024 12:57:18 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Fri, May 03, 2024 at 12:57:18PM +0000, Imseih (AWS), Sami wrote:\n>> That's true, but using a hard-coded limit means we no longer need to add a\n>> new GUC. Always allocating, say, 256 slots might require a few additional\n>> kilobytes of shared memory, most of which will go unused, but that seems\n>> unlikely to be a problem for the systems that will run Postgres v18.\n> \n> I agree with this.\n\nHere's what this might look like. I chose an upper limit of 1024, which\nseems like it \"ought to be enough for anybody,\" at least for now.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 7 May 2024 11:06:05 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": ">>> That's true, but using a hard-coded limit means we no longer need to add a\r\n>>> new GUC. Always allocating, say, 256 slots might require a few additional\r\n>>> kilobytes of shared memory, most of which will go unused, but that seems\r\n>>> unlikely to be a problem for the systems that will run Postgres v18.\r\n>>\r\n>> I agree with this.\r\n\r\n\r\n> Here's what this might look like. I chose an upper limit of 1024, which\r\n> seems like it \"ought to be enough for anybody,\" at least for now.\r\n\r\nI thought 256 was a good enough limit. In practice, I doubt anyone will \r\nbenefit from more than a few dozen autovacuum workers. 
\r\nI think 1024 is way too high to even allow.\r\n\r\nBesides that the overall patch looks good to me, but I have\r\nsome comments on the documentation.\r\n\r\nI don't think combining 1024 + 5 = 1029 is a good idea in docs.\r\nBreaking down the allotment and using the name of the constant \r\nis much more clear.\r\n\r\nI suggest \r\n\" max_connections + max_wal_senders + max_worker_processes + AUTOVAC_MAX_WORKER_SLOTS + 5\"\r\n\r\nand in other places in the docs, we should mention the actual \r\nvalue of AUTOVAC_MAX_WORKER_SLOTS. Maybe in the \r\nbelow section?\r\n\r\nInstead of:\r\n- (<xref linkend=\"guc-autovacuum-max-workers\"/>) and allowed background\r\n+ (1024) and allowed background\r\n\r\ndo something like:\r\n- (<xref linkend=\"guc-autovacuum-max-workers\"/>) and allowed background\r\n+ AUTOVAC_MAX_WORKER_SLOTS (1024) and allowed background\r\n\r\nAlso, replace the 1024 here with AUTOVAC_MAX_WORKER_SLOTS.\r\n\r\n+ <varname>max_wal_senders</varname>,\r\n+ plus <varname>max_worker_processes</varname>, plus 1024 for autovacuum\r\n+ worker processes, plus one extra for each 16\r\n\r\n\r\nAlso, Not sure if I am mistaken here, but the \"+ 5\" in the existing docs\r\nseems wrong.\r\n \r\nIf it refers to NUM_AUXILIARY_PROCS defined in \r\ninclude/storage/proc.h, it should a \"6\"\r\n\r\n#define NUM_AUXILIARY_PROCS 6\r\n\r\nThis is not a consequence of this patch, and can be dealt with\r\nIn a separate thread if my understanding is correct.\r\n\r\n\r\nRegards,\r\n\r\nSami \r\n\r\n\r\n", "msg_date": "Thu, 16 May 2024 16:37:10 +0000", "msg_from": "\"Imseih (AWS), Sami\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Thu, May 16, 2024 at 04:37:10PM +0000, Imseih (AWS), Sami wrote:\n> I thought 256 was a good enough limit. In practice, I doubt anyone will \n> benefit from more than a few dozen autovacuum workers. \n> I think 1024 is way too high to even allow.\n\nWFM\n\n> I don't think combining 1024 + 5 = 1029 is a good idea in docs.\n> Breaking down the allotment and using the name of the constant \n> is much more clear.\n> \n> I suggest \n> \" max_connections + max_wal_senders + max_worker_processes + AUTOVAC_MAX_WORKER_SLOTS + 5\"\n> \n> and in other places in the docs, we should mention the actual \n> value of AUTOVAC_MAX_WORKER_SLOTS. Maybe in the \n> below section?\n> \n> Instead of:\n> - (<xref linkend=\"guc-autovacuum-max-workers\"/>) and allowed background\n> + (1024) and allowed background\n> \n> do something like:\n> - (<xref linkend=\"guc-autovacuum-max-workers\"/>) and allowed background\n> + AUTOVAC_MAX_WORKER_SLOTS (1024) and allowed background\n> \n> Also, replace the 1024 here with AUTOVAC_MAX_WORKER_SLOTS.\n> \n> + <varname>max_wal_senders</varname>,\n> + plus <varname>max_worker_processes</varname>, plus 1024 for autovacuum\n> + worker processes, plus one extra for each 16\n\nPart of me wonders whether documenting the exact formula is worthwhile.\nThis portion of the docs is rather complicated, and I can't recall ever\nhaving to do the arithmetic is describes. 
Plus, see below...\n\n> Also, Not sure if I am mistaken here, but the \"+ 5\" in the existing docs\n> seems wrong.\n> \n> If it refers to NUM_AUXILIARY_PROCS defined in \n> include/storage/proc.h, it should a \"6\"\n> \n> #define NUM_AUXILIARY_PROCS 6\n> \n> This is not a consequence of this patch, and can be dealt with\n> In a separate thread if my understanding is correct.\n\nHa, I think it should actually be \"+ 7\"! The value is calculated as\n\n\tMaxConnections + autovacuum_max_workers + 1 + max_worker_processes + max_wal_senders + 6\n\nLooking at the history, this documentation tends to be wrong quite often.\nIn v9.2, the checkpointer was introduced, and these formulas were not\nupdated. In v9.3, background worker processes were introduced, and the\nformulas were still not updated. Finally, in v9.6, it was fixed in commit\n597f7e3. Then, in v14, the archiver process was made an auxiliary process\n(commit d75288f), making the formulas out-of-date again. And in v17, the\nWAL summarizer was added.\n\nOn top of this, IIUC you actually need even more semaphores if your system\ndoesn't support atomics, and from a quick skim this doesn't seem to be\ncovered in this documentation.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 16 May 2024 21:16:46 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Thu, May 16, 2024 at 09:16:46PM -0500, Nathan Bossart wrote:\n> On Thu, May 16, 2024 at 04:37:10PM +0000, Imseih (AWS), Sami wrote:\n>> I thought 256 was a good enough limit. In practice, I doubt anyone will \n>> benefit from more than a few dozen autovacuum workers. \n>> I think 1024 is way too high to even allow.\n> \n> WFM\n\nHere is an updated patch that uses 256 as the upper limit.\n\n>> I don't think combining 1024 + 5 = 1029 is a good idea in docs.\n>> Breaking down the allotment and using the name of the constant \n>> is much more clear.\n\nI plan to further improve this section of the documentation in v18, so I've\nleft the constant unexplained for now.\n\n-- \nnathan", "msg_date": "Mon, 3 Jun 2024 13:52:29 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "Hi,\n\nOn 2024-06-03 13:52:29 -0500, Nathan Bossart wrote:\n> Here is an updated patch that uses 256 as the upper limit.\n\nI don't have time to read through the entire thread right now - it'd be good\nfor the commit message of a patch like this to include justification for why\nit's ok to make such a change. Even before actually committing it, so\nreviewers have an easier time catching up.\n\nWhy do we think that increasing the number of PGPROC slots, heavyweight locks\netc by 256 isn't going to cause issues? That's not an insubstantial amount of\nmemory to dedicate to something that will practically never be used.\n\nISTM that at the very least we ought to exclude the reserved slots from the\ncomputation of things like the number of locks resulting from\nmax_locks_per_transaction. It's very common to increase\nmax_locks_per_transaction substantially, adding ~250 to the multiplier can be\na good amount of memory. And AV workers should never need a meaningful number.\n\nIncreasing e.g. 
the size of the heavyweight lock table has consequences\nbesides the increase in memory usage, the size increase can make it less\nlikely for the table to fit largely into L3, thus decreasing performance.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 Jun 2024 12:08:52 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Mon, Jun 03, 2024 at 12:08:52PM -0700, Andres Freund wrote:\n> I don't have time to read through the entire thread right now - it'd be good\n> for the commit message of a patch like this to include justification for why\n> it's ok to make such a change. Even before actually committing it, so\n> reviewers have an easier time catching up.\n\nSorry about that. I think the main question (besides \"should we do this?\")\nis whether we ought to make the upper limit configurable. My initial idea\nwas to split autovacuum_max_workers into two GUCs: one for the upper limit\nthat only be changed at server start and another for the effective limit\nthat can be changed up to the upper limit without restarting the server.\nIf we can just set a sufficiently high upper limit and avoid the extra GUC\nwithout causing problems, that might be preferable, but I sense that you\nare about to tell me that it will indeed cause problems. :)\n\n> Why do we think that increasing the number of PGPROC slots, heavyweight locks\n> etc by 256 isn't going to cause issues? That's not an insubstantial amount of\n> memory to dedicate to something that will practically never be used.\n\nI personally have not observed problems with these kinds of bumps in\nresource usage, although I may be biased towards larger systems where it\ndoesn't matter as much.\n\n> ISTM that at the very least we ought to exclude the reserved slots from the\n> computation of things like the number of locks resulting from\n> max_locks_per_transaction. It's very common to increase\n> max_locks_per_transaction substantially, adding ~250 to the multiplier can be\n> a good amount of memory. And AV workers should never need a meaningful number.\n\nThis is an interesting idea.\n\n> Increasing e.g. the size of the heavyweight lock table has consequences\n> besides the increase in memory usage, the size increase can make it less\n> likely for the table to fit largely into L3, thus decreasing performance.\n\nIMHO this might be a good argument for making the upper limit configurable\nand setting it relatively low by default. That's not quite as nice from a\nuser experience perspective, but weird, hard-to-diagnose performance issues\nare certainly not nice, either.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 3 Jun 2024 14:28:13 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "Hi,\n\nOn 2024-06-03 14:28:13 -0500, Nathan Bossart wrote:\n> On Mon, Jun 03, 2024 at 12:08:52PM -0700, Andres Freund wrote:\n> > Why do we think that increasing the number of PGPROC slots, heavyweight locks\n> > etc by 256 isn't going to cause issues? That's not an insubstantial amount of\n> > memory to dedicate to something that will practically never be used.\n> \n> I personally have not observed problems with these kinds of bumps in\n> resource usage, although I may be biased towards larger systems where it\n> doesn't matter as much.\n\nIME it matters *more* on larger systems. 
Or at least used to, I haven't\nexperimented with this in quite a while.\n\nIt's possible that we improved a bunch of things sufficiently for this to not\nmatter anymore.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 Jun 2024 16:24:27 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Mon, Jun 03, 2024 at 04:24:27PM -0700, Andres Freund wrote:\n> On 2024-06-03 14:28:13 -0500, Nathan Bossart wrote:\n>> On Mon, Jun 03, 2024 at 12:08:52PM -0700, Andres Freund wrote:\n>> > Why do we think that increasing the number of PGPROC slots, heavyweight locks\n>> > etc by 256 isn't going to cause issues? That's not an insubstantial amount of\n>> > memory to dedicate to something that will practically never be used.\n>> \n>> I personally have not observed problems with these kinds of bumps in\n>> resource usage, although I may be biased towards larger systems where it\n>> doesn't matter as much.\n> \n> IME it matters *more* on larger systems. Or at least used to, I haven't\n> experimented with this in quite a while.\n> \n> It's possible that we improved a bunch of things sufficiently for this to not\n> matter anymore.\n\nI'm curious if there is something specific you would look into to verify\nthis. IIUC one concern is the lock table not fitting into L3. Is there\nanything else? Any particular workloads you have in mind?\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 18 Jun 2024 14:00:00 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "Hi,\n\nOn 2024-06-18 14:00:00 -0500, Nathan Bossart wrote:\n> On Mon, Jun 03, 2024 at 04:24:27PM -0700, Andres Freund wrote:\n> > On 2024-06-03 14:28:13 -0500, Nathan Bossart wrote:\n> >> On Mon, Jun 03, 2024 at 12:08:52PM -0700, Andres Freund wrote:\n> >> > Why do we think that increasing the number of PGPROC slots, heavyweight locks\n> >> > etc by 256 isn't going to cause issues? That's not an insubstantial amount of\n> >> > memory to dedicate to something that will practically never be used.\n> >> \n> >> I personally have not observed problems with these kinds of bumps in\n> >> resource usage, although I may be biased towards larger systems where it\n> >> doesn't matter as much.\n> > \n> > IME it matters *more* on larger systems. Or at least used to, I haven't\n> > experimented with this in quite a while.\n> > \n> > It's possible that we improved a bunch of things sufficiently for this to not\n> > matter anymore.\n> \n> I'm curious if there is something specific you would look into to verify\n> this. IIUC one concern is the lock table not fitting into L3. Is there\n> anything else? Any particular workloads you have in mind?\n\nThat was the main thing I was thinking of.\n\n\nBut I think I just thought of one more: It's going to *substantially* increase\nthe resource usage for tap tests. Right now Cluster.pm has\n\t\t# conservative settings to ensure we can run multiple postmasters:\n\t\tprint $conf \"shared_buffers = 1MB\\n\";\n\t\tprint $conf \"max_connections = 10\\n\";\n\nfor nodes that allow streaming.\n\nAdding 256 extra backend slots increases the shared memory usage from ~5MB to\n~18MB.\n\n\nI just don't see much point in reserving 256 worker \"possibilities\", tbh. 
I\ncan't think of any practical system where it makes sense to use this much (nor\ndo I think it's going to be reasonable in the next 10 years) and it's just\ngoing to waste memory and startup time for everyone.\n\nNor does it make sense to me to have the max autovac workers be independent of\nmax_connections.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2024 13:43:34 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Tue, Jun 18, 2024 at 01:43:34PM -0700, Andres Freund wrote:\n> I just don't see much point in reserving 256 worker \"possibilities\", tbh. I\n> can't think of any practical system where it makes sense to use this much (nor\n> do I think it's going to be reasonable in the next 10 years) and it's just\n> going to waste memory and startup time for everyone.\n\nGiven this, here are some options I see for moving this forward:\n\n* lower the cap to, say, 64 or 32\n* exclude autovacuum worker slots from computing number of locks, etc.\n* make the cap configurable and default it to something low (e.g., 8)\n\nMy intent with a reserved set of 256 slots was to prevent users from\nneeding to deal with two GUCs. For all practical purposes, it would be\npossible to change autovacuum_max_workers whenever you want. But if the\nextra resource requirements are too much of a tax, I'm content to change\ncourse.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 18 Jun 2024 16:09:09 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "Hi,\n\nOn 2024-06-18 16:09:09 -0500, Nathan Bossart wrote:\n> On Tue, Jun 18, 2024 at 01:43:34PM -0700, Andres Freund wrote:\n> > I just don't see much point in reserving 256 worker \"possibilities\", tbh. I\n> > can't think of any practical system where it makes sense to use this much (nor\n> > do I think it's going to be reasonable in the next 10 years) and it's just\n> > going to waste memory and startup time for everyone.\n> \n> Given this, here are some options I see for moving this forward:\n> \n> * lower the cap to, say, 64 or 32\n> * exclude autovacuum worker slots from computing number of locks, etc.\n\nThat seems good regardless\n\n> * make the cap configurable and default it to something low (e.g., 8)\n\n\nAnother one:\n\nHave a general cap of 64, but additionally limit it to something like\n max(1, min(WORKER_CAP, max_connections / 4))\n\nso that cases like tap tests don't end up allocating vastly more worker slots\nthan actual connection slots.\n\n\n> My intent with a reserved set of 256 slots was to prevent users from\n> needing to deal with two GUCs. For all practical purposes, it would be\n> possible to change autovacuum_max_workers whenever you want. 
But if the\n> extra resource requirements are too much of a tax, I'm content to change\n> course.\n\nApproximately tripling shared memory usage for tap test instances does seem\ntoo much to me.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 18 Jun 2024 14:33:31 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Tue, Jun 18, 2024 at 02:33:31PM -0700, Andres Freund wrote:\n> Another one:\n> \n> Have a general cap of 64, but additionally limit it to something like\n> max(1, min(WORKER_CAP, max_connections / 4))\n> \n> so that cases like tap tests don't end up allocating vastly more worker slots\n> than actual connection slots.\n\nThat's a clever idea. My only concern would be that we are tethering two\nparameters that aren't super closely related, but I'm unsure whether it\nwould cause any problems in practice.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 18 Jun 2024 19:43:36 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Tue, Jun 18, 2024 at 07:43:36PM -0500, Nathan Bossart wrote:\n> On Tue, Jun 18, 2024 at 02:33:31PM -0700, Andres Freund wrote:\n>> Another one:\n>> \n>> Have a general cap of 64, but additionally limit it to something like\n>> max(1, min(WORKER_CAP, max_connections / 4))\n>> \n>> so that cases like tap tests don't end up allocating vastly more worker slots\n>> than actual connection slots.\n> \n> That's a clever idea. My only concern would be that we are tethering two\n> parameters that aren't super closely related, but I'm unsure whether it\n> would cause any problems in practice.\n\nHere is an attempt at doing this. I've added 0001 [0] and 0002 [1] as\nprerequisite patches, which helps simplify 0003 a bit. It probably doesn't\nwork correctly for EXEC_BACKEND builds yet.\n\nI'm still not sure about this approach. At the moment, I'm leaning towards\nsomething more like v2 [2] where the upper limit is a PGC_POSTMASTER GUC\n(that we would set very low for TAP tests).\n\n[0] https://commitfest.postgresql.org/48/4998/\n[1] https://commitfest.postgresql.org/48/5059/\n[2] https://postgr.es/m/20240419154322.GA3988554%40nathanxps13\n\n-- \nnathan", "msg_date": "Fri, 21 Jun 2024 15:44:07 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Fri, Jun 21, 2024 at 03:44:07PM -0500, Nathan Bossart wrote:\n> I'm still not sure about this approach. At the moment, I'm leaning towards\n> something more like v2 [2] where the upper limit is a PGC_POSTMASTER GUC\n> (that we would set very low for TAP tests).\n\nLike so.\n\n-- \nnathan", "msg_date": "Sat, 22 Jun 2024 15:42:47 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "Here is a rebased patch.\n\nOne thing that still bugs me is that there is no feedback sent to the user\nwhen autovacuum_max_workers is set higher than autovacuum_worker_slots. I\nthink we should at least emit a WARNING, perhaps from the autovacuum\nlauncher, i.e., once when the launcher starts and then again as needed via\nHandleAutoVacLauncherInterrupts(). 
Or we could fail to start in\nPostmasterMain() and then ignore later misconfigurations via a GUC check\nhook. I'm not too thrilled about adding more GUC check hooks that depend\non the value of other GUCs, but I do like the idea of failing instead of\nsilently proceeding with a different value than the user configured. Any\nthoughts?\n\n-- \nnathan", "msg_date": "Mon, 8 Jul 2024 14:29:16 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "On Mon, Jul 08, 2024 at 02:29:16PM -0500, Nathan Bossart wrote:\n> One thing that still bugs me is that there is no feedback sent to the user\n> when autovacuum_max_workers is set higher than autovacuum_worker_slots. I\n> think we should at least emit a WARNING, perhaps from the autovacuum\n> launcher, i.e., once when the launcher starts and then again as needed via\n> HandleAutoVacLauncherInterrupts(). Or we could fail to start in\n> PostmasterMain() and then ignore later misconfigurations via a GUC check\n> hook. I'm not too thrilled about adding more GUC check hooks that depend\n> on the value of other GUCs, but I do like the idea of failing instead of\n> silently proceeding with a different value than the user configured. Any\n> thoughts?\n\nFrom recent discussions, it sounds like there isn't much appetite for GUC\ncheck hooks that depend on the values of other GUCs. Here is a new version\nof the patch that adds the WARNING described above.\n\n-- \nnathan", "msg_date": "Fri, 19 Jul 2024 14:24:39 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "rebased\n\n-- \nnathan", "msg_date": "Sat, 27 Jul 2024 15:32:09 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" }, { "msg_contents": "If there are no remaining concerns, I'd like to move forward with\ncommitting v9 in September's commitfest.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 21 Aug 2024 14:20:08 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: allow changing autovacuum_max_workers without restarting" } ]
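The thread above converges on a reloadable autovacuum_max_workers that is silently capped by a restart-only autovacuum_worker_slots, with a WARNING when the cap kicks in. Below is a minimal, self-contained C sketch of that capping rule, for illustration only; it is not the committed patch. The function name effective_autovacuum_workers, the plain global variables, and the WARNING wording are assumptions made here -- real PostgreSQL GUCs are registered through guc_tables.c and warnings are raised with ereport().

#include <stdio.h>

/* Stand-ins for the two settings discussed above; the values are arbitrary. */
static int autovacuum_worker_slots = 16;   /* restart-only slot reservation (hard cap, assumed name) */
static int autovacuum_max_workers = 32;    /* reloadable soft limit set by the DBA */

/*
 * Number of autovacuum workers actually allowed to run: the reloadable
 * setting, capped by the number of reserved slots. The warning here is
 * printed on every call purely for illustration.
 */
static int
effective_autovacuum_workers(void)
{
    if (autovacuum_max_workers > autovacuum_worker_slots)
    {
        fprintf(stderr,
                "WARNING: autovacuum_max_workers (%d) exceeds autovacuum_worker_slots (%d); capping\n",
                autovacuum_max_workers, autovacuum_worker_slots);
        return autovacuum_worker_slots;
    }
    return autovacuum_max_workers;
}

int
main(void)
{
    printf("effective autovacuum workers: %d\n", effective_autovacuum_workers());
    return 0;
}

The point of the split, as discussed in the thread, is that shared-memory sizing depends only on the slot count, which is fixed at server start, while the launcher consults the capped effective value, so autovacuum_max_workers can be raised or lowered with a plain reload.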
[ { "msg_contents": "Hackers,\n\nI've been playing around with the incremental backup feature trying to \nget a sense of how it can be practically used. One of the first things I \nalways try is to delete random files and see what happens.\n\nYou can delete pretty much anything you want from the most recent \nincremental backup (not the manifest) and it will not be detected.\n\nFor example:\n\n$ pg_basebackup -c fast -D test/backup/full -F plain\n$ pg_basebackup -c fast -D test/backup/incr1 -F plain -i \n/home/dev/test/backup/full/backup_manifest\n\n$ rm test/backup/incr1/base/1/INCREMENTAL.2675\n$ rm test/backup/incr1/base/1/826\n$ /home/dev/test/pg/bin/pg_combinebackup test/backup/full \ntest/backup/incr1 -o test/backup/combine\n\n$ ls test/backup/incr1/base/1/2675\nNo such file or directory\n$ ls test/backup/incr1/base/1/826\nNo such file or directory\n\nI can certainly use verify to check the backups individually:\n\n$ /home/dev/test/pg/bin/pg_verifybackup /home/dev/test/backup/incr1\npg_verifybackup: error: \"base/1/INCREMENTAL.2675\" is present in the \nmanifest but not on disk\npg_verifybackup: error: \"base/1/826\" is present in the manifest but not \non disk\n\nBut I can't detect this issue if I use verify on the combined backup:\n\n$ /home/dev/test/pg/bin/pg_verifybackup /home/dev/test/backup/combine\nbackup successfully verified\n\nMaybe the answer here is to update the docs to specify that \npg_verifybackup should be run on all backup directories before \npg_combinebackup is run. Right now that is not at all clear.\n\nIn fact the docs say, \"pg_combinebackup will attempt to verify that the \nbackups you specify form a legal backup chain\". Which I guess just means \nthe chain is verified and not the files, but it seems easy to misinterpret.\n\nOverall I think it is an issue that the combine is being driven from the \nmost recent incremental directory rather than from the manifest.\n\nRegards,\n-David\n\n\n", "msg_date": "Thu, 11 Apr 2024 11:36:30 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "pg_combinebackup does not detect missing files" }, { "msg_contents": "On Wed, Apr 10, 2024 at 9:36 PM David Steele <[email protected]> wrote:\n> I've been playing around with the incremental backup feature trying to\n> get a sense of how it can be practically used. One of the first things I\n> always try is to delete random files and see what happens.\n>\n> You can delete pretty much anything you want from the most recent\n> incremental backup (not the manifest) and it will not be detected.\n\nSure, but you can also delete anything you want from the most recent\nnon-incremental backup and it will also not be detected. There's no\nreason at all to hold incremental backup to a higher standard than we\ndo in general.\n\n> Maybe the answer here is to update the docs to specify that\n> pg_verifybackup should be run on all backup directories before\n> pg_combinebackup is run. Right now that is not at all clear.\n\nI don't want to make those kinds of prescriptive statements. 
If you\nwant to verify the backups that you use as input to pg_combinebackup,\nyou can use pg_verifybackup to do that, but it's not a requirement.\nI'm not averse to having some kind of statement in the documentation\nalong the lines of \"Note that pg_combinebackup does not attempt to\nverify that the individual backups are intact; for that, use\npg_verifybackup.\" But I think it should be blindingly obvious to\neveryone that you can't go whacking around the inputs to a program and\nexpect to get perfectly good output. I know it isn't blindingly\nobvious to everyone, which is why I'm not averse to adding something\nlike what I just mentioned, and maybe it wouldn't be a bad idea to\ndocument in a few other places that you shouldn't randomly remove\nfiles from the data directory of your live cluster, either, because\npeople seem to keep doing it, but really, the expectation that you\ncan't just blow files away and expect good things to happen afterward\nshould hardly need to be stated.\n\nI think it's very easy to go overboard with warnings of this type.\nWeird stuff comes to me all the time because people call me when the\nweird stuff happens, and I'm guessing that your experience is similar.\nBut my actual personal experience, as opposed to the cases reported to\nme by others, practically never features files evaporating into the\nether. If I read a documentation page for PostgreSQL or any other\npiece of software that made it sound like that was a normal\noccurrence, I'd question the technical acumen of the authors. And if I\nsaw such warnings only for one particular feature of a piece of\nsoftware and not anywhere else, I'd wonder why the authors of the\nsoftware were trying so hard to scare me off the use of that\nparticular feature. I don't trust at all that incremental backup is\nfree of bugs -- but I don't trust that all the code anyone else has\nwritten is free of bugs, either.\n\n> Overall I think it is an issue that the combine is being driven from the\n> most recent incremental directory rather than from the manifest.\n\nI don't. I considered that design and rejected it for various reasons\nthat I still believe to be good. Even if I was wrong, we're not going\nto start rewriting the implementation a week after feature freeze.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Apr 2024 09:50:40 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On 4/16/24 23:50, Robert Haas wrote:\n> On Wed, Apr 10, 2024 at 9:36 PM David Steele <[email protected]> wrote:\n>> I've been playing around with the incremental backup feature trying to\n>> get a sense of how it can be practically used. One of the first things I\n>> always try is to delete random files and see what happens.\n>>\n>> You can delete pretty much anything you want from the most recent\n>> incremental backup (not the manifest) and it will not be detected.\n> \n> Sure, but you can also delete anything you want from the most recent\n> non-incremental backup and it will also not be detected. There's no\n> reason at all to hold incremental backup to a higher standard than we\n> do in general.\n\nExcept that we are running pg_combinebackup on the incremental, which \nthe user might reasonably expect to check backup integrity. 
It actually \ndoes a bunch of integrity checks -- but not this one.\n\n>> Maybe the answer here is to update the docs to specify that\n>> pg_verifybackup should be run on all backup directories before\n>> pg_combinebackup is run. Right now that is not at all clear.\n> \n> I don't want to make those kinds of prescriptive statements. If you\n> want to verify the backups that you use as input to pg_combinebackup,\n> you can use pg_verifybackup to do that, but it's not a requirement.\n> I'm not averse to having some kind of statement in the documentation\n> along the lines of \"Note that pg_combinebackup does not attempt to\n> verify that the individual backups are intact; for that, use\n> pg_verifybackup.\" \n\nI think we should do this at a minimum.\n\n> But I think it should be blindingly obvious to\n> everyone that you can't go whacking around the inputs to a program and\n> expect to get perfectly good output. I know it isn't blindingly\n> obvious to everyone, which is why I'm not averse to adding something\n> like what I just mentioned, and maybe it wouldn't be a bad idea to\n> document in a few other places that you shouldn't randomly remove\n> files from the data directory of your live cluster, either, because\n> people seem to keep doing it, but really, the expectation that you\n> can't just blow files away and expect good things to happen afterward\n> should hardly need to be stated.\n\nAnd yet, we see it all the time.\n\n> I think it's very easy to go overboard with warnings of this type.\n> Weird stuff comes to me all the time because people call me when the\n> weird stuff happens, and I'm guessing that your experience is similar.\n> But my actual personal experience, as opposed to the cases reported to\n> me by others, practically never features files evaporating into the\n> ether. \n\nSame -- if it happens at all it is very rare. Virtually every time I am \nable to track down the cause of missing files it is because the user \ndeleted them, usually to \"save space\" or because they \"did not seem \nimportant\".\n\nBut given that this occurrence is pretty common in my experience, I \nthink it is smart to mitigate against it, rather than just take it on \nfaith that the user hasn't done anything destructive.\n\nEspecially given how pg_combinebackup works, backups are going to \nundergo a lot of user manipulation (pushing to and pull from storage, \ndecompressing, untaring, etc.) and I think that means we should take \nextra care.\n\nRegards,\n-David\n\n\n", "msg_date": "Wed, 17 Apr 2024 09:25:48 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On Tue, Apr 16, 2024 at 7:25 PM David Steele <[email protected]> wrote:\n> Except that we are running pg_combinebackup on the incremental, which\n> the user might reasonably expect to check backup integrity. It actually\n> does a bunch of integrity checks -- but not this one.\n\nI think it's a bad idea to duplicate all of the checks from\npg_verifybackup into pg_combinebackup. I did consider this exact issue\nduring development. 
These are separate tools with separate purposes.\nI think that what a user should expect is that each tool has some job\nand tries to do that job well, while leaving other jobs to other\ntools.\n\nAnd I think if you think about it that way, it explains a lot about\nwhich checks pg_combinebackup does and which checks it does not do.\npg_combinebackup tries to check that it is valid to combine backup A\nwith backup B (and maybe C, D, E ...), and it checks a lot of stuff to\ntry to make sure that such an operation appears to be sensible. Those\nare checks that pg_verifybackup cannot do, because pg_verifybackup\nonly looks at one backup in isolation. If pg_combinebackup does not do\nthose checks, nobody does. Aside from that, it will also report errors\nthat it can't avoid noticing, even if those are things that\npg_verifybackup would also have found, such as, say, the control file\nnot existing.\n\nBut it will not go out of its way to perform checks that are unrelated\nto its documented purpose. And I don't think it should, especially if\nwe have another tool that already does that.\n\n> > I'm not averse to having some kind of statement in the documentation\n> > along the lines of \"Note that pg_combinebackup does not attempt to\n> > verify that the individual backups are intact; for that, use\n> > pg_verifybackup.\"\n>\n> I think we should do this at a minimum.\n\nHere is a patch to do that.\n\n> Especially given how pg_combinebackup works, backups are going to\n> undergo a lot of user manipulation (pushing to and pull from storage,\n> decompressing, untaring, etc.) and I think that means we should take\n> extra care.\n\nWe are in agreement on that point, if nothing else. I am terrified of\nusers having problems with pg_combinebackup and me not being able to\ntell whether those problems are due to user error, Robert error, or\nsomething else. I put a lot of effort into detecting dumb things that\nI thought a user might do, and a lot of effort into debugging output\nso that when things do go wrong anyway, we have a reasonable chance of\nfiguring out exactly where they went wrong. We do seem to have a\nphilosophical difference about what the scope of those checks ought to\nbe, and I don't really expect what I wrote above to convince you that\nmy position is correct, but perhaps it will convince you that I have a\nthoughtful position, as opposed to just having done stuff at random.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 17 Apr 2024 11:03:55 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On 4/18/24 01:03, Robert Haas wrote:\n> On Tue, Apr 16, 2024 at 7:25 PM David Steele <[email protected]> wrote:\n> \n> But it will not go out of its way to perform checks that are unrelated\n> to its documented purpose. 
And I don't think it should, especially if\n> we have another tool that already does that.\n> \n>>> I'm not averse to having some kind of statement in the documentation\n>>> along the lines of \"Note that pg_combinebackup does not attempt to\n>>> verify that the individual backups are intact; for that, use\n>>> pg_verifybackup.\"\n>>\n>> I think we should do this at a minimum.\n> \n> Here is a patch to do that.\n\nI think here:\n\n+ <application>pg_basebackup</application> only attempts to verify\n\nyou mean:\n\n+ <application>pg_combinebackup</application> only attempts to verify\n\nOtherwise this looks good to me.\n\n>> Especially given how pg_combinebackup works, backups are going to\n>> undergo a lot of user manipulation (pushing to and pull from storage,\n>> decompressing, untaring, etc.) and I think that means we should take\n>> extra care.\n> \n> We are in agreement on that point, if nothing else. I am terrified of\n> users having problems with pg_combinebackup and me not being able to\n> tell whether those problems are due to user error, Robert error, or\n> something else. I put a lot of effort into detecting dumb things that\n> I thought a user might do, and a lot of effort into debugging output\n> so that when things do go wrong anyway, we have a reasonable chance of\n> figuring out exactly where they went wrong. We do seem to have a\n> philosophical difference about what the scope of those checks ought to\n> be, and I don't really expect what I wrote above to convince you that\n> my position is correct, but perhaps it will convince you that I have a\n> thoughtful position, as opposed to just having done stuff at random.\n\nFair enough. I accept that your reasoning is not random, but I'm still \nnot very satisfied that the user needs to run a separate and rather \nexpensive process to do the verification when pg_combinebackup already \nhas the necessary information at hand. My guess is that most users will \nelect to skip verification.\n\nAt least now they'll have the information they need to make an informed \nchoice.\n\nRegards,\n-David\n\n\n", "msg_date": "Thu, 18 Apr 2024 09:09:37 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On Wed, Apr 17, 2024 at 7:09 PM David Steele <[email protected]> wrote:\n> I think here:\n>\n> + <application>pg_basebackup</application> only attempts to verify\n>\n> you mean:\n>\n> + <application>pg_combinebackup</application> only attempts to verify\n>\n> Otherwise this looks good to me.\n\nGood catch, thanks. Committed with that change.\n\n> Fair enough. I accept that your reasoning is not random, but I'm still\n> not very satisfied that the user needs to run a separate and rather\n> expensive process to do the verification when pg_combinebackup already\n> has the necessary information at hand. My guess is that most users will\n> elect to skip verification.\n\nI think you're probably right that a lot of people will skip it; I'm\njust less convinced than you are that it's a bad thing. It's not a\n*great* thing if people skip it, but restore time is actually just\nabout the worst time to find out that you have a problem with your\nbackups. I think users would be better served by verifying stored\nbackups periodically when they *don't* need to restore them. 
Also,\nsaying that we have all of the information that we need to do the\nverification is only partially true:\n\n- we do have to parse the manifest anyway, but we don't have to\ncompute checksums anyway, and I think that cost can be significant\neven for CRC-32C and much more significant for any of the SHA variants\n\n- we don't need to read all of the files in all of the backups. if\nthere's a newer full, the corresponding file in older backups, whether\nfull or incremental, need not be read\n\n- incremental files other than the most recent only need to be read to\nthe extent that we need their data; if some of the same blocks have\nbeen changed again, we can economize\n\nHow much you save because of these effects is pretty variable. Best\ncase, you have a 2-backup chain with no manifest checksums, and all\nverification will have to do that you wouldn't otherwise need to do is\nwalk each older directory tree in toto and cross-check which files\nexist against the manifest. That's probably cheap enough that nobody\nwould be too fussed. Worst case, you have a 10-backup (or whatever)\nchain with SHA512 checksums and, say, a 50% turnover rate. In that\ncase, I think having verification happen automatically could be a\npretty major hit, both in terms of I/O and CPU. If your database is\n1TB, it's ~5.5TB of read I/O (because one 1TB full backup and 9 0.5TB\nincrementals) instead of ~1TB of read I/O, plus the checksumming.\n\nNow, obviously you can still feel that it's totally worth it, or that\nsomeone in that situation shouldn't even be using incremental backups,\nand it's a value judgement, so fair enough. But my guess is that the\nefforts that this implementation makes to minimize the amount of I/O\nrequired for a restore are going to be important for a lot of people.\n\n> At least now they'll have the information they need to make an informed\n> choice.\n\nRight.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Apr 2024 10:50:10 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On 4/19/24 00:50, Robert Haas wrote:\n> On Wed, Apr 17, 2024 at 7:09 PM David Steele <[email protected]> wrote:\n> \n>> Fair enough. I accept that your reasoning is not random, but I'm still\n>> not very satisfied that the user needs to run a separate and rather\n>> expensive process to do the verification when pg_combinebackup already\n>> has the necessary information at hand. My guess is that most users will\n>> elect to skip verification.\n> \n> I think you're probably right that a lot of people will skip it; I'm\n> just less convinced than you are that it's a bad thing. It's not a\n> *great* thing if people skip it, but restore time is actually just\n> about the worst time to find out that you have a problem with your\n> backups. I think users would be better served by verifying stored\n> backups periodically when they *don't* need to restore them. \n\nAgreed, running verify regularly is a good idea, but in my experience \nmost users are only willing to run verify once they suspect (or know) \nthere is an issue. 
It's a pretty expensive process depending on how many \nbackups you have and where they are stored.\n\n > Also,\n> saying that we have all of the information that we need to do the\n> verification is only partially true:\n> \n> - we do have to parse the manifest anyway, but we don't have to\n> compute checksums anyway, and I think that cost can be significant\n> even for CRC-32C and much more significant for any of the SHA variants\n> \n> - we don't need to read all of the files in all of the backups. if\n> there's a newer full, the corresponding file in older backups, whether\n> full or incremental, need not be read\n> \n> - incremental files other than the most recent only need to be read to\n> the extent that we need their data; if some of the same blocks have\n> been changed again, we can economize\n> \n> How much you save because of these effects is pretty variable. Best\n> case, you have a 2-backup chain with no manifest checksums, and all\n> verification will have to do that you wouldn't otherwise need to do is\n> walk each older directory tree in toto and cross-check which files\n> exist against the manifest. That's probably cheap enough that nobody\n> would be too fussed. Worst case, you have a 10-backup (or whatever)\n> chain with SHA512 checksums and, say, a 50% turnover rate. In that\n> case, I think having verification happen automatically could be a\n> pretty major hit, both in terms of I/O and CPU. If your database is\n> 1TB, it's ~5.5TB of read I/O (because one 1TB full backup and 9 0.5TB\n> incrementals) instead of ~1TB of read I/O, plus the checksumming.\n> \n> Now, obviously you can still feel that it's totally worth it, or that\n> someone in that situation shouldn't even be using incremental backups,\n> and it's a value judgement, so fair enough. But my guess is that the\n> efforts that this implementation makes to minimize the amount of I/O\n> required for a restore are going to be important for a lot of people.\n\nSure -- pg_combinebackup would only need to verify the data that it \nuses. I'm not suggesting that it should do an exhaustive verify of every \nsingle backup in the chain. Though I can see how it sounded that way \nsince with pg_verifybackup that would pretty much be your only choice.\n\nThe beauty of doing verification in pg_combinebackup is that it can do a \nlot less than running pg_verifybackup against every backup but still get \na valid result. All we care about is that the output is correct -- if \nthere is corruption in an unused part of an earlier backup \npg_combinebackup doesn't need to care about that.\n\nAs far as I can see, pg_combinebackup already checks most of the boxes. \nThe only thing I know that it can't do is detect missing files and that \ndoesn't seem like too big a thing to handle.\n\nRegards,\n-David\n\n\n", "msg_date": "Fri, 19 Apr 2024 09:36:28 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On Thu, Apr 18, 2024 at 7:36 PM David Steele <[email protected]> wrote:\n> Sure -- pg_combinebackup would only need to verify the data that it\n> uses. I'm not suggesting that it should do an exhaustive verify of every\n> single backup in the chain. 
Though I can see how it sounded that way\n> since with pg_verifybackup that would pretty much be your only choice.\n>\n> The beauty of doing verification in pg_combinebackup is that it can do a\n> lot less than running pg_verifybackup against every backup but still get\n> a valid result. All we care about is that the output is correct -- if\n> there is corruption in an unused part of an earlier backup\n> pg_combinebackup doesn't need to care about that.\n>\n> As far as I can see, pg_combinebackup already checks most of the boxes.\n> The only thing I know that it can't do is detect missing files and that\n> doesn't seem like too big a thing to handle.\n\nHmm, that's an interesting perspective. I've always been very\nskeptical of doing verification only around missing files and not\nanything else. I figured that wouldn't be particularly meaningful, and\nthat's pretty much the only kind of validation that's even\ntheoretically possible without a bunch of extra overhead, since we\ncompute checksums on entire files rather than, say, individual blocks.\nAnd you could really only do it for the final backup in the chain,\nbecause you should end up accessing all of those files, but the same\nis not true for the predecessor backups. So it's a very weak form of\nverification.\n\nBut I looked into it and I think you're correct that, if you restrict\nthe scope in the way that you suggest, we can do it without much\nadditional code, or much additional run-time. The cost is basically\nthat, instead of only looking for a backup_manifest entry when we\nthink we can reuse its checksum, we need to do a lookup for every\nsingle file in the final input directory. Then, after processing all\nsuch files, we need to iterate over the hash table one more time and\nsee what files were never touched. That seems like an acceptably low\ncost to me. So, here's a patch.\n\nI do think there's some chance that this will encourage people to\nbelieve that pg_combinebackup is better at finding problems than it\nreally is or ever will be, and I also question whether it's right to\nkeep changing stuff after feature freeze. But I have a feeling most\npeople here are going to think this is worth including in 17. Let's\nsee what others say.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 19 Apr 2024 11:47:07 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On 4/20/24 01:47, Robert Haas wrote:\n> On Thu, Apr 18, 2024 at 7:36 PM David Steele <[email protected]> wrote:\n>> Sure -- pg_combinebackup would only need to verify the data that it\n>> uses. I'm not suggesting that it should do an exhaustive verify of every\n>> single backup in the chain. Though I can see how it sounded that way\n>> since with pg_verifybackup that would pretty much be your only choice.\n>>\n>> The beauty of doing verification in pg_combinebackup is that it can do a\n>> lot less than running pg_verifybackup against every backup but still get\n>> a valid result. All we care about is that the output is correct -- if\n>> there is corruption in an unused part of an earlier backup\n>> pg_combinebackup doesn't need to care about that.\n>>\n>> As far as I can see, pg_combinebackup already checks most of the boxes.\n>> The only thing I know that it can't do is detect missing files and that\n>> doesn't seem like too big a thing to handle.\n> \n> Hmm, that's an interesting perspective. 
I've always been very\n> skeptical of doing verification only around missing files and not\n> anything else. \n\nYeah, me too. There should also be some verification of the file \ncontents themselves but now I can see we don't have that. For instance, \nI can do something like this:\n\ncp test/backup/incr1/base/1/3598 test/backup/incr1/base/1/2336\n\nAnd pg_combinebackup runs without complaint. Maybe missing files are \nmore likely than corrupted files, but it would still be nice to check \nfor both.\n\n> I figured that wouldn't be particularly meaningful, and\n> that's pretty much the only kind of validation that's even\n> theoretically possible without a bunch of extra overhead, since we\n> compute checksums on entire files rather than, say, individual blocks.\n> And you could really only do it for the final backup in the chain,\n> because you should end up accessing all of those files, but the same\n> is not true for the predecessor backups. So it's a very weak form of\n> verification.\n\nI don't think it is weak if you can verify that the output is exactly as \nexpected, i.e. all files are present and have the correct contents.\n\nBut in this case it would be nice to at least know that the source files \non disk are valid (which pg_verifybackup does). Without block checksums \nit is hard to know if the final output is correct or not.\n\n> But I looked into it and I think you're correct that, if you restrict\n> the scope in the way that you suggest, we can do it without much\n> additional code, or much additional run-time. The cost is basically\n> that, instead of only looking for a backup_manifest entry when we\n> think we can reuse its checksum, we need to do a lookup for every\n> single file in the final input directory. Then, after processing all\n> such files, we need to iterate over the hash table one more time and\n> see what files were never touched. That seems like an acceptably low\n> cost to me. So, here's a patch.\n\nI tested the patch and it works, but there is some noise from WAL files \nsince they are not in the manifest:\n\n$ pg_combinebackup test/backup/full test/backup/incr1 -o test/backup/combine\npg_combinebackup: warning: \"pg_wal/000000010000000000000008\" is present \non disk but not in the manifest\npg_combinebackup: error: \"base/1/3596\" is present in the manifest but \nnot on disk\n\nMaybe it would be better to omit this warning since it could get very \nnoisy depending on the number of WAL segments generated during backup.\n\nThough I do find it a bit odd that WAL files are not in the source \nbackup manifests but do end up in the manifest after a pg_combinebackup. \nIt doesn't seem harmful, just odd.\n\n> I do think there's some chance that this will encourage people to\n> believe that pg_combinebackup is better at finding problems than it\n> really is or ever will be, \n\nGiven that pg_combinebackup is not verifying checksums, you are probably \nright.\n\n> and I also question whether it's right to\n> keep changing stuff after feature freeze. But I have a feeling most\n> people here are going to think this is worth including in 17. Let's\n> see what others say.\n\nI think it is a worthwhile change and we are still a month away from \nbeta1. 
We'll see if anyone disagrees.\n\nRegards,\n-David\n\n\n", "msg_date": "Mon, 22 Apr 2024 10:47:10 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On Sun, Apr 21, 2024 at 8:47 PM David Steele <[email protected]> wrote:\n> > I figured that wouldn't be particularly meaningful, and\n> > that's pretty much the only kind of validation that's even\n> > theoretically possible without a bunch of extra overhead, since we\n> > compute checksums on entire files rather than, say, individual blocks.\n> > And you could really only do it for the final backup in the chain,\n> > because you should end up accessing all of those files, but the same\n> > is not true for the predecessor backups. So it's a very weak form of\n> > verification.\n>\n> I don't think it is weak if you can verify that the output is exactly as\n> expected, i.e. all files are present and have the correct contents.\n\nI don't understand what you mean here. I thought we were in agreement\nthat verifying contents would cost a lot more. The verification that\nwe can actually do without much cost can only check for missing files\nin the most recent backup, which is quite weak. pg_verifybackup is\navailable if you want more comprehensive verification and you're\nwilling to pay the cost of it.\n\n> I tested the patch and it works, but there is some noise from WAL files\n> since they are not in the manifest:\n>\n> $ pg_combinebackup test/backup/full test/backup/incr1 -o test/backup/combine\n> pg_combinebackup: warning: \"pg_wal/000000010000000000000008\" is present\n> on disk but not in the manifest\n> pg_combinebackup: error: \"base/1/3596\" is present in the manifest but\n> not on disk\n\nOops, OK, that can be fixed.\n\n> I think it is a worthwhile change and we are still a month away from\n> beta1. We'll see if anyone disagrees.\n\nI don't plan to press forward with this in this release unless we get\na couple of +1s from disinterested parties. We're now two weeks after\nfeature freeze and this is design behavior, not a bug. Perhaps the\ndesign should have been otherwise, but two weeks after feature freeze\nis not the time to debate that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Apr 2024 09:53:41 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On 4/22/24 23:53, Robert Haas wrote:\n> On Sun, Apr 21, 2024 at 8:47 PM David Steele <[email protected]> wrote:\n>>> I figured that wouldn't be particularly meaningful, and\n>>> that's pretty much the only kind of validation that's even\n>>> theoretically possible without a bunch of extra overhead, since we\n>>> compute checksums on entire files rather than, say, individual blocks.\n>>> And you could really only do it for the final backup in the chain,\n>>> because you should end up accessing all of those files, but the same\n>>> is not true for the predecessor backups. So it's a very weak form of\n>>> verification.\n>>\n>> I don't think it is weak if you can verify that the output is exactly as\n>> expected, i.e. all files are present and have the correct contents.\n> \n> I don't understand what you mean here. I thought we were in agreement\n> that verifying contents would cost a lot more. 
The verification that\n> we can actually do without much cost can only check for missing files\n> in the most recent backup, which is quite weak. pg_verifybackup is\n> available if you want more comprehensive verification and you're\n> willing to pay the cost of it.\n\nI simply meant that it is *possible* to verify the output of \npg_combinebackup without explicitly verifying all the backups. There \nwould be overhead, yes, but it would be less than verifying each backup \nindividually. For my 2c that efficiency would make it worth doing \nverification in pg_combinebackup, with perhaps a switch to turn it off \nif the user is confident in their sources.\n\n>> I think it is a worthwhile change and we are still a month away from\n>> beta1. We'll see if anyone disagrees.\n> \n> I don't plan to press forward with this in this release unless we get\n> a couple of +1s from disinterested parties. We're now two weeks after\n> feature freeze and this is design behavior, not a bug. Perhaps the\n> design should have been otherwise, but two weeks after feature freeze\n> is not the time to debate that.\n\nIt doesn't appear that anyone but me is terribly concerned about \nverification, even in this weak form, so probably best to hold this \npatch until the next release. As you say, it is late in the game.\n\nRegards,\n-David\n\n\n", "msg_date": "Wed, 24 Apr 2024 09:22:58 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On Tue, Apr 23, 2024 at 7:23 PM David Steele <[email protected]> wrote:\n> > I don't understand what you mean here. I thought we were in agreement\n> > that verifying contents would cost a lot more. The verification that\n> > we can actually do without much cost can only check for missing files\n> > in the most recent backup, which is quite weak. pg_verifybackup is\n> > available if you want more comprehensive verification and you're\n> > willing to pay the cost of it.\n>\n> I simply meant that it is *possible* to verify the output of\n> pg_combinebackup without explicitly verifying all the backups. There\n> would be overhead, yes, but it would be less than verifying each backup\n> individually. For my 2c that efficiency would make it worth doing\n> verification in pg_combinebackup, with perhaps a switch to turn it off\n> if the user is confident in their sources.\n\nHmm, can you outline the algorithm that you have in mind? I feel we've\nmisunderstood each other a time or two already on this topic, and I'd\nlike to avoid more of that. Unless you just mean what the patch I\nposted does (check if anything from the final manifest is missing from\nthe corresponding directory), but that doesn't seem like verifying the\noutput.\n\n> >> I think it is a worthwhile change and we are still a month away from\n> >> beta1. We'll see if anyone disagrees.\n> >\n> > I don't plan to press forward with this in this release unless we get\n> > a couple of +1s from disinterested parties. We're now two weeks after\n> > feature freeze and this is design behavior, not a bug. Perhaps the\n> > design should have been otherwise, but two weeks after feature freeze\n> > is not the time to debate that.\n>\n> It doesn't appear that anyone but me is terribly concerned about\n> verification, even in this weak form, so probably best to hold this\n> patch until the next release. 
As you say, it is late in the game.\n\nAdded https://commitfest.postgresql.org/48/4951/\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 24 Apr 2024 10:05:31 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On 4/25/24 00:05, Robert Haas wrote:\n> On Tue, Apr 23, 2024 at 7:23 PM David Steele <[email protected]> wrote:\n>>> I don't understand what you mean here. I thought we were in agreement\n>>> that verifying contents would cost a lot more. The verification that\n>>> we can actually do without much cost can only check for missing files\n>>> in the most recent backup, which is quite weak. pg_verifybackup is\n>>> available if you want more comprehensive verification and you're\n>>> willing to pay the cost of it.\n>>\n>> I simply meant that it is *possible* to verify the output of\n>> pg_combinebackup without explicitly verifying all the backups. There\n>> would be overhead, yes, but it would be less than verifying each backup\n>> individually. For my 2c that efficiency would make it worth doing\n>> verification in pg_combinebackup, with perhaps a switch to turn it off\n>> if the user is confident in their sources.\n> \n> Hmm, can you outline the algorithm that you have in mind? I feel we've\n> misunderstood each other a time or two already on this topic, and I'd\n> like to avoid more of that. Unless you just mean what the patch I\n> posted does (check if anything from the final manifest is missing from\n> the corresponding directory), but that doesn't seem like verifying the\n> output.\n\nYeah, it seems you are right that it is not possible to verify the \noutput in all cases.\n\nHowever, I think allowing the user to optionally validate the input \nwould be a good feature. Running pg_verifybackup as a separate step is \ngoing to be a more expensive then verifying/copying at the same time. \nEven with storage tricks to copy ranges of data, pg_combinebackup is \ngoing to aware of files that do not need to be verified for the current \noperation, e.g. old copies of free space maps.\n\nAdditionally, if pg_combinebackup is updated to work against tar.gz, \nwhich I believe will be important going forward, then there would be \nlittle penalty to verification since all the required data would be in \nmemory at some point anyway. Though, if the file is compressed it might \nbe redundant since compression formats generally include checksums.\n\nOne more thing occurs to me -- if data checksums are enabled then a \nrough and ready output verification would be to test the checksums \nduring combine. Data checksums aren't very good but something should be \ntriggered if a bunch of pages go wrong, especially since the block \noffset is part of the checksum. This would be helpful for catching \ncombine bugs.\n\nRegards,\n-David\n\n\n", "msg_date": "Fri, 17 May 2024 15:18:18 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On Fri, May 17, 2024 at 1:18 AM David Steele <[email protected]> wrote:\n> However, I think allowing the user to optionally validate the input\n> would be a good feature. 
Running pg_verifybackup as a separate step is\n> going to be a more expensive then verifying/copying at the same time.\n> Even with storage tricks to copy ranges of data, pg_combinebackup is\n> going to aware of files that do not need to be verified for the current\n> operation, e.g. old copies of free space maps.\n\nIn cases where pg_combinebackup reuses a checksums from the input\nmanifest rather than recomputing it, this could accomplish something.\nHowever, for any file that's actually reconstructed, pg_combinebackup\ncomputes the checksum as it's writing the output file. I don't see how\nit's sensible to then turn around and verify that the checksum that we\njust computed is the same one that we now get. It makes sense to run\npg_verifybackup on the output of pg_combinebackup at a later time,\nbecause that can catch bits that have been flipped on disk in the\nmeanwhile. But running the equivalent of pg_verifybackup during\npg_combinebackup would amount to doing the exact same checksum\ncalculation twice and checking that it gets the same answer both\ntimes.\n\n> One more thing occurs to me -- if data checksums are enabled then a\n> rough and ready output verification would be to test the checksums\n> during combine. Data checksums aren't very good but something should be\n> triggered if a bunch of pages go wrong, especially since the block\n> offset is part of the checksum. This would be helpful for catching\n> combine bugs.\n\nI don't know, I'm not very enthused about this. I bet pg_combinebackup\nhas some bugs, and it's possible that one of them could involve\nputting blocks in the wrong places, but it doesn't seem especially\nlikely. Even if it happens, it's more likely to be that\npg_combinebackup thinks it's putting them in the right places but is\nactually writing them to the wrong offset in the file, in which case a\nblock-checksum calculation inside pg_combinebackup is going to think\neverything's fine, but a standalone tool that isn't confused will be\nable to spot the damage.\n\nIt's frustrating that we can't do better verification of these things,\nbut to fix that I think we need better infrastructure elsewhere. For\ninstance, if we made pg_basebackup copy blocks from shared_buffers\nrather than the filesystem, or at least copy them when they weren't\nbeing concurrently written to the filesystem, then we'd not have the\nrisk of torn pages producing spurious bad checksums. If we could\nsomehow render a backup consistent when taking it instead of when\nrestoring it, we could verify tons of stuff. If we had some useful\nmarkers of how long files were supposed to be and which ones were\nsupposed to be present, we could check a lot of things that are\nuncheckable today. pg_combinebackup does the best it can -- or the\nbest I could make it do -- but there's a disappointing number of\nsituations where it's like \"hmm, in this situation, either something\nbad happened or it's just the somewhat unusual case where this happens\nin the normal course of events, and we have no way to tell which it\nis.\"\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 17 May 2024 08:20:55 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On 5/17/24 22:20, Robert Haas wrote:\n> On Fri, May 17, 2024 at 1:18 AM David Steele <[email protected]> wrote:\n>> However, I think allowing the user to optionally validate the input\n>> would be a good feature. 
Running pg_verifybackup as a separate step is\n>> going to be a more expensive then verifying/copying at the same time.\n>> Even with storage tricks to copy ranges of data, pg_combinebackup is\n>> going to aware of files that do not need to be verified for the current\n>> operation, e.g. old copies of free space maps.\n> \n> In cases where pg_combinebackup reuses a checksums from the input\n> manifest rather than recomputing it, this could accomplish something.\n> However, for any file that's actually reconstructed, pg_combinebackup\n> computes the checksum as it's writing the output file. I don't see how\n> it's sensible to then turn around and verify that the checksum that we\n> just computed is the same one that we now get. \n\nHere's an example. First make a few backups:\n\n$ pg_basebackup -c fast -X none -D test/backup/full -F plain\n$ pg_basebackup -c fast -D test/backup/incr1 -F plain -i \n/home/dev/test/backup/full/backup_manifest\n\nThen intentionally corrupt a file in the incr backup:\n\n$ truncate -s 0 test/backup/incr1/base/5/3764_fsm\n\nIn this case pg_verifybackup will error:\n\n$ pg_verifybackup test/backup/incr1\npg_verifybackup: error: \"base/5/3764_fsm\" has size 0 on disk but size \n24576 in the manifest\n\nBut pg_combinebackup does not complain:\n\n$ pg_combinebackup test/backup/full test/backup/incr1 -o test/backup/combine\n$ ls -lah test/backup/combine/base/5/3764_fsm\n-rw------- 1 dev dialout 0 May 17 22:08 test/backup/combine/base/5/3764_fsm\n\nIt would be nice if pg_combinebackup would (at least optionally but \nprefferrably by default) complain in this case rather than the user \nneeding to separately run pg_verifybackup.\n\nRegards,\n-David\n\n\n", "msg_date": "Sat, 18 May 2024 08:14:27 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "\n\nOn 5/17/24 14:20, Robert Haas wrote:\n> On Fri, May 17, 2024 at 1:18 AM David Steele <[email protected]> wrote:\n>> However, I think allowing the user to optionally validate the input\n>> would be a good feature. Running pg_verifybackup as a separate step is\n>> going to be a more expensive then verifying/copying at the same time.\n>> Even with storage tricks to copy ranges of data, pg_combinebackup is\n>> going to aware of files that do not need to be verified for the current\n>> operation, e.g. old copies of free space maps.\n> \n> In cases where pg_combinebackup reuses a checksums from the input\n> manifest rather than recomputing it, this could accomplish something.\n> However, for any file that's actually reconstructed, pg_combinebackup\n> computes the checksum as it's writing the output file. I don't see how\n> it's sensible to then turn around and verify that the checksum that we\n> just computed is the same one that we now get. It makes sense to run\n> pg_verifybackup on the output of pg_combinebackup at a later time,\n> because that can catch bits that have been flipped on disk in the\n> meanwhile. But running the equivalent of pg_verifybackup during\n> pg_combinebackup would amount to doing the exact same checksum\n> calculation twice and checking that it gets the same answer both\n> times.\n> \n>> One more thing occurs to me -- if data checksums are enabled then a\n>> rough and ready output verification would be to test the checksums\n>> during combine. 
Data checksums aren't very good but something should be\n>> triggered if a bunch of pages go wrong, especially since the block\n>> offset is part of the checksum. This would be helpful for catching\n>> combine bugs.\n> \n> I don't know, I'm not very enthused about this. I bet pg_combinebackup\n> has some bugs, and it's possible that one of them could involve\n> putting blocks in the wrong places, but it doesn't seem especially\n> likely. Even if it happens, it's more likely to be that\n> pg_combinebackup thinks it's putting them in the right places but is\n> actually writing them to the wrong offset in the file, in which case a\n> block-checksum calculation inside pg_combinebackup is going to think\n> everything's fine, but a standalone tool that isn't confused will be\n> able to spot the damage.\n> \n\nPerhaps more importantly, can you even verify data checksums before the\nrecovery is completed? I don't think you can (pg_checksums certainly\ndoes not allow doing that). Because who knows in what shape you copied\nthe block?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 18 May 2024 13:06:11 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On 5/18/24 21:06, Tomas Vondra wrote:\n> \n> On 5/17/24 14:20, Robert Haas wrote:\n>> On Fri, May 17, 2024 at 1:18 AM David Steele <[email protected]> wrote:\n>>> However, I think allowing the user to optionally validate the input\n>>> would be a good feature. Running pg_verifybackup as a separate step is\n>>> going to be a more expensive then verifying/copying at the same time.\n>>> Even with storage tricks to copy ranges of data, pg_combinebackup is\n>>> going to aware of files that do not need to be verified for the current\n>>> operation, e.g. old copies of free space maps.\n>>\n>> In cases where pg_combinebackup reuses a checksums from the input\n>> manifest rather than recomputing it, this could accomplish something.\n>> However, for any file that's actually reconstructed, pg_combinebackup\n>> computes the checksum as it's writing the output file. I don't see how\n>> it's sensible to then turn around and verify that the checksum that we\n>> just computed is the same one that we now get. It makes sense to run\n>> pg_verifybackup on the output of pg_combinebackup at a later time,\n>> because that can catch bits that have been flipped on disk in the\n>> meanwhile. But running the equivalent of pg_verifybackup during\n>> pg_combinebackup would amount to doing the exact same checksum\n>> calculation twice and checking that it gets the same answer both\n>> times.\n>>\n>>> One more thing occurs to me -- if data checksums are enabled then a\n>>> rough and ready output verification would be to test the checksums\n>>> during combine. Data checksums aren't very good but something should be\n>>> triggered if a bunch of pages go wrong, especially since the block\n>>> offset is part of the checksum. This would be helpful for catching\n>>> combine bugs.\n>>\n>> I don't know, I'm not very enthused about this. I bet pg_combinebackup\n>> has some bugs, and it's possible that one of them could involve\n>> putting blocks in the wrong places, but it doesn't seem especially\n>> likely. 
Even if it happens, it's more likely to be that\n>> pg_combinebackup thinks it's putting them in the right places but is\n>> actually writing them to the wrong offset in the file, in which case a\n>> block-checksum calculation inside pg_combinebackup is going to think\n>> everything's fine, but a standalone tool that isn't confused will be\n>> able to spot the damage.\n> \n> Perhaps more importantly, can you even verify data checksums before the\n> recovery is completed? I don't think you can (pg_checksums certainly\n> does not allow doing that). Because who knows in what shape you copied\n> the block?\n\nYeah, you'd definitely need a list of blocks you knew to be valid at \nbackup time, which sounds like a lot more work that just some overall \nchecksumming scheme.\n\nRegards,\n-David\n\n\n", "msg_date": "Sun, 19 May 2024 08:30:17 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On Fri, May 17, 2024 at 6:14 PM David Steele <[email protected]> wrote:\n> Then intentionally corrupt a file in the incr backup:\n>\n> $ truncate -s 0 test/backup/incr1/base/5/3764_fsm\n>\n> In this case pg_verifybackup will error:\n>\n> $ pg_verifybackup test/backup/incr1\n> pg_verifybackup: error: \"base/5/3764_fsm\" has size 0 on disk but size\n> 24576 in the manifest\n>\n> But pg_combinebackup does not complain:\n>\n> $ pg_combinebackup test/backup/full test/backup/incr1 -o test/backup/combine\n> $ ls -lah test/backup/combine/base/5/3764_fsm\n> -rw------- 1 dev dialout 0 May 17 22:08 test/backup/combine/base/5/3764_fsm\n>\n> It would be nice if pg_combinebackup would (at least optionally but\n> prefferrably by default) complain in this case rather than the user\n> needing to separately run pg_verifybackup.\n\nMy first reaction here is that it would be better to have people run\npg_verifybackup for this. If we try to do this in pg_combinebackup,\nwe're either going to be quite limited in the amount of validation we\ncan do (which might lure users into a false sense of security) or\nwe're going to make things quite a bit more complicated and expensive.\n\nPerhaps there's something here that is worth doing; I haven't thought\nabout this deeply and can't really do so at present. 
I do believe in\nreasonable error detection, which I hope goes without saying, but I\nalso believe strongly in orthogonality: a tool should do one job and\ndo it as well as possible.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 May 2024 13:09:46 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On 5/21/24 03:09, Robert Haas wrote:\n> On Fri, May 17, 2024 at 6:14 PM David Steele <[email protected]> wrote:\n>> Then intentionally corrupt a file in the incr backup:\n>>\n>> $ truncate -s 0 test/backup/incr1/base/5/3764_fsm\n>>\n>> In this case pg_verifybackup will error:\n>>\n>> $ pg_verifybackup test/backup/incr1\n>> pg_verifybackup: error: \"base/5/3764_fsm\" has size 0 on disk but size\n>> 24576 in the manifest\n>>\n>> But pg_combinebackup does not complain:\n>>\n>> $ pg_combinebackup test/backup/full test/backup/incr1 -o test/backup/combine\n>> $ ls -lah test/backup/combine/base/5/3764_fsm\n>> -rw------- 1 dev dialout 0 May 17 22:08 test/backup/combine/base/5/3764_fsm\n>>\n>> It would be nice if pg_combinebackup would (at least optionally but\n>> prefferrably by default) complain in this case rather than the user\n>> needing to separately run pg_verifybackup.\n> \n> My first reaction here is that it would be better to have people run\n> pg_verifybackup for this. If we try to do this in pg_combinebackup,\n> we're either going to be quite limited in the amount of validation we\n> can do (which might lure users into a false sense of security) or\n> we're going to make things quite a bit more complicated and expensive.\n> \n> Perhaps there's something here that is worth doing; I haven't thought\n> about this deeply and can't really do so at present. I do believe in\n> reasonable error detection, which I hope goes without saying, but I\n> also believe strongly in orthogonality: a tool should do one job and\n> do it as well as possible.\n\nOK, that seems like a good place to leave this for now.\n\nRegards,\n-David\n\n\n", "msg_date": "Tue, 21 May 2024 07:41:59 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On Fri, Apr 19, 2024 at 11:47 AM Robert Haas <[email protected]> wrote:\n> Hmm, that's an interesting perspective. I've always been very\n> skeptical of doing verification only around missing files and not\n> anything else. I figured that wouldn't be particularly meaningful, and\n> that's pretty much the only kind of validation that's even\n> theoretically possible without a bunch of extra overhead, since we\n> compute checksums on entire files rather than, say, individual blocks.\n> And you could really only do it for the final backup in the chain,\n> because you should end up accessing all of those files, but the same\n> is not true for the predecessor backups. So it's a very weak form of\n> verification.\n>\n> But I looked into it and I think you're correct that, if you restrict\n> the scope in the way that you suggest, we can do it without much\n> additional code, or much additional run-time. The cost is basically\n> that, instead of only looking for a backup_manifest entry when we\n> think we can reuse its checksum, we need to do a lookup for every\n> single file in the final input directory. 
Then, after processing all\n> such files, we need to iterate over the hash table one more time and\n> see what files were never touched. That seems like an acceptably low\n> cost to me. So, here's a patch.\n>\n> I do think there's some chance that this will encourage people to\n> believe that pg_combinebackup is better at finding problems than it\n> really is or ever will be, and I also question whether it's right to\n> keep changing stuff after feature freeze. But I have a feeling most\n> people here are going to think this is worth including in 17. Let's\n> see what others say.\n\nThere was no hue and cry to include this in v17 and I think that ship\nhas sailed at this point, but we could still choose to include this as\nan enhancement for v18 if people want it. I think David's probably in\nfavor of that (but I'm not 100% sure) and I have mixed feelings about\nit (explained above) so what I'd really like is some other opinions on\nwhether this idea is good, bad, or indifferent.\n\nHere is a rebased version of the patch. No other changes since v1.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 2 Aug 2024 09:37:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On 8/2/24 20:37, Robert Haas wrote:\n> On Fri, Apr 19, 2024 at 11:47 AM Robert Haas <[email protected]> wrote:\n>> Hmm, that's an interesting perspective. I've always been very\n>> skeptical of doing verification only around missing files and not\n>> anything else. I figured that wouldn't be particularly meaningful, and\n>> that's pretty much the only kind of validation that's even\n>> theoretically possible without a bunch of extra overhead, since we\n>> compute checksums on entire files rather than, say, individual blocks.\n>> And you could really only do it for the final backup in the chain,\n>> because you should end up accessing all of those files, but the same\n>> is not true for the predecessor backups. So it's a very weak form of\n>> verification.\n>>\n>> But I looked into it and I think you're correct that, if you restrict\n>> the scope in the way that you suggest, we can do it without much\n>> additional code, or much additional run-time. The cost is basically\n>> that, instead of only looking for a backup_manifest entry when we\n>> think we can reuse its checksum, we need to do a lookup for every\n>> single file in the final input directory. Then, after processing all\n>> such files, we need to iterate over the hash table one more time and\n>> see what files were never touched. That seems like an acceptably low\n>> cost to me. So, here's a patch.\n>>\n>> I do think there's some chance that this will encourage people to\n>> believe that pg_combinebackup is better at finding problems than it\n>> really is or ever will be, and I also question whether it's right to\n>> keep changing stuff after feature freeze. But I have a feeling most\n>> people here are going to think this is worth including in 17. Let's\n>> see what others say.\n> \n> There was no hue and cry to include this in v17 and I think that ship\n> has sailed at this point, but we could still choose to include this as\n> an enhancement for v18 if people want it. 
I think David's probably in\n> favor of that (but I'm not 100% sure) and I have mixed feelings about\n> it (explained above) so what I'd really like is some other opinions on\n> whether this idea is good, bad, or indifferent.\n\nI'm still in favor but if nobody else is interested then I'm not going \nto push on it.\n\nRegards,\n-David\n\n\n", "msg_date": "Mon, 5 Aug 2024 10:58:44 +0700", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On Fri, Aug 2, 2024 at 7:07 PM Robert Haas <[email protected]> wrote:\n>\n> On Fri, Apr 19, 2024 at 11:47 AM Robert Haas <[email protected]> wrote:\n> >\n> [...]\n> Here is a rebased version of the patch. No other changes since v1.\n>\n\nHere are two minor comments on this:\n\n$ pg_combinebackup /tmp/backup_full/ /tmp/backup_incr2/\n/tmp/backup_incr3/ -o /tmp/backup_comb\npg_combinebackup: warning: \"pg_wal/000000010000000000000020\" is\npresent on disk but not in the manifest\n\nThis warning shouldn’t be reported, since we don’t include WAL in the\nbackup manifest ? Also, I found that the final resultant manifest\nincludes this WAL entry:\n\n$ head /tmp/backup_comb/backup_manifest | grep pg_wal\n{ \"Path\": \"pg_wal/000000010000000000000020\", \"Size\": 16777216,\n\"Last-Modified\": \"2024-08-06 11:54:16 GMT\" },\n\n---\n\n+# Set up another new database instance. force_initdb is used because\n+# we want it to be a separate cluster with a different system ID.\n+my $node2 = PostgreSQL::Test::Cluster->new('node2');\n+$node2->init(force_initdb => 1, has_archiving => 1, allows_streaming => 1);\n+$node2->append_conf('postgresql.conf', 'summarize_wal = on');\n+$node2->start;\n+\n\nUnused cluster-node in the test.\n\nRegards,\nAmul\n\n\n", "msg_date": "Tue, 6 Aug 2024 18:07:04 +0530", "msg_from": "Amul Sul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup does not detect missing files" }, { "msg_contents": "On Sun, Aug 4, 2024 at 11:58 PM David Steele <[email protected]> wrote:\n> I'm still in favor but if nobody else is interested then I'm not going\n> to push on it.\n\nOK, so since this email was sent, Amul reviewed the patch (thanks,\nAmul!) but didn't take a position on whether it was a good idea.\nNobody else has responded. Hence, I'm withdrawing this patch. If a\nbunch of people show up to say we should really do this, we can\nrevisit the issue when that happens.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 21 Aug 2024 11:03:47 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup does not detect missing files" } ]
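To make the two-pass check described upthread concrete -- look up every file found in the final input directory against the backup_manifest, then sweep the manifest for entries that were never matched -- here is a small self-contained C sketch. It is an illustration only, not the posted patch: the file names are made up, and a linear array stands in for the parsed manifest hash table that the real code would use. The messages mirror the warning/error strings shown earlier in the thread.

/*
 * Toy illustration (not PostgreSQL source): report files that appear in a
 * backup_manifest but are missing from the final input directory, using the
 * "mark while processing, sweep afterwards" approach discussed above.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct manifest_entry
{
	const char *path;		/* file name recorded in the backup_manifest */
	bool		matched;	/* set once the file is seen on disk */
} manifest_entry;

/* made-up manifest contents, standing in for the parsed backup_manifest */
static manifest_entry manifest[] = {
	{"base/1/3596", false},
	{"base/1/3598", false},
	{"global/pg_control", false},
};

/* made-up listing of the final input directory */
static const char *on_disk[] = {
	"base/1/3598",
	"global/pg_control",
	"base/1/9999",			/* on disk but not in the manifest */
};

static manifest_entry *
manifest_lookup(const char *path)
{
	for (size_t i = 0; i < sizeof(manifest) / sizeof(manifest[0]); i++)
		if (strcmp(manifest[i].path, path) == 0)
			return &manifest[i];
	return NULL;
}

int
main(void)
{
	/* pass 1: look up every file found on disk, marking manifest matches */
	for (size_t i = 0; i < sizeof(on_disk) / sizeof(on_disk[0]); i++)
	{
		manifest_entry *m = manifest_lookup(on_disk[i]);

		if (m != NULL)
			m->matched = true;
		else
			printf("warning: \"%s\" is present on disk but not in the manifest\n",
				   on_disk[i]);
	}

	/* pass 2: sweep the manifest for entries that were never touched */
	for (size_t i = 0; i < sizeof(manifest) / sizeof(manifest[0]); i++)
		if (!manifest[i].matched)
			printf("error: \"%s\" is present in the manifest but not on disk\n",
				   manifest[i].path);

	return 0;
}

Compiled and run on its own (for example, cc sketch.c && ./a.out), this prints a warning for base/1/9999 and an error for base/1/3596, which is the shape of output demonstrated when exercising the v1 patch earlier in the thread. The extra cost over the existing code is one lookup per file plus one final sweep, matching the "acceptably low cost" argument made above.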
[ { "msg_contents": "Hi All,\nPer below code and comment in apply_scanjoin_target_to_paths(), the\nfunction zaps all the paths of a partitioned relation.\n/*\n* If the rel is partitioned, we want to drop its existing paths and\n* generate new ones. This function would still be correct if we kept the\n* existing paths: we'd modify them to generate the correct target above\n* the partitioning Append, and then they'd compete on cost with paths\n* generating the target below the Append\n... snip ...\n*/\nif (rel_is_partitioned)\nrel->pathlist = NIL;\n\nLater the function adjusts the targets of paths in child relations and\nconstructs Append paths from them. That works for simple partitioned\nrelations but not for join between partitioned relations. When\nenable_partitionwise_join is true, the joinrel representing a join between\npartitioned relations may have join paths joining append paths and Append\npaths containing child join paths. Once we zap the pathlist, the only paths\nthat can be computed again are the Append paths. If the optimal path,\nbefore applying the new target, was a join of append paths it will be lost\nforever. This will result in choosing a suboptimal Append path.\n\nWe have one such query in our regression set.\n\nSELECT t1.a, t1.c, t2.a, t2.c, t3.a, t3.c FROM (plt1_adv t1 LEFT JOIN\nplt2_adv t2 ON (t1.c = t2.c)) FULL JOIN plt3_adv t3 ON (t1.c = t3.c) WHERE\ncoalesce(t1.a, 0 ) % 5 != 3 AND coalesce(t1.a, 0) % 5 != 4 ORDER BY\nt1.c, t1.a, t2.a, t3.a;\n\nFor this query, the cheapest Append of Joins path has cost 24.97..25.57 and\nthe cheapest Join of Appends path has cost 21.29..21.81. The latter should\nbe chosen even though enable_partitionwise_join is ON. But this function\nchooses the first.\n\nThe solution is to zap the pathlists only for simple partitioned relations\nlike the attached patch.\n\nWith this patch above query does not choose non-partitionwise join path and\npartition_join test fails. That's expected. But we need to replace that\nquery with some query which uses partitionwise join while maintaining the\nconditions of the test as explained in the comment above that query. I have\ntried a few variations but without success. Suggestions welcome.\n\nThe problem is reproducible on PG 15. The patch is based on 15_STABLE\nbranch. But the problem exists in recent branches as well.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 11 Apr 2024 12:07:09 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "apply_scanjoin_target_to_paths and partitionwise join" }, { "msg_contents": "On Thu, Apr 11, 2024 at 12:07 PM Ashutosh Bapat <\[email protected]> wrote:\n\n> Hi All,\n> Per below code and comment in apply_scanjoin_target_to_paths(), the\n> function zaps all the paths of a partitioned relation.\n> /*\n> * If the rel is partitioned, we want to drop its existing paths and\n> * generate new ones. This function would still be correct if we kept the\n> * existing paths: we'd modify them to generate the correct target above\n> * the partitioning Append, and then they'd compete on cost with paths\n> * generating the target below the Append\n> ... snip ...\n> */\n> if (rel_is_partitioned)\n> rel->pathlist = NIL;\n>\n> Later the function adjusts the targets of paths in child relations and\n> constructs Append paths from them. That works for simple partitioned\n> relations but not for join between partitioned relations. 
When\n> enable_partitionwise_join is true, the joinrel representing a join between\n> partitioned relations may have join paths joining append paths and Append\n> paths containing child join paths. Once we zap the pathlist, the only paths\n> that can be computed again are the Append paths. If the optimal path,\n> before applying the new target, was a join of append paths it will be lost\n> forever. This will result in choosing a suboptimal Append path.\n>\n> We have one such query in our regression set.\n>\n> SELECT t1.a, t1.c, t2.a, t2.c, t3.a, t3.c FROM (plt1_adv t1 LEFT JOIN\n> plt2_adv t2 ON (t1.c = t2.c)) FULL JOIN plt3_adv t3 ON (t1.c = t3.c) WHERE\n> coalesce(t1.a, 0 ) % 5 != 3 AND coalesce(t1.a, 0) % 5 != 4 ORDER BY\n> t1.c, t1.a, t2.a, t3.a;\n>\n> For this query, the cheapest Append of Joins path has cost 24.97..25.57\n> and the cheapest Join of Appends path has cost 21.29..21.81. The latter\n> should be chosen even though enable_partitionwise_join is ON. But this\n> function chooses the first.\n>\n> The solution is to zap the pathlists only for simple partitioned relations\n> like the attached patch.\n>\n> With this patch above query does not choose non-partitionwise join path\n> and partition_join test fails. That's expected. But we need to replace that\n> query with some query which uses partitionwise join while maintaining the\n> conditions of the test as explained in the comment above that query. I have\n> tried a few variations but without success. Suggestions welcome.\n>\n> The problem is reproducible on PG 15. The patch is based on 15_STABLE\n> branch. But the problem exists in recent branches as well.\n>\n>\n>\nAfter sending email I found another thread where this problem was discussed\n[1]. Tom has the same explanation as mine however he favoured not doing\nanything about it. Since the rationale was based on the cost and not actual\nperformance, it was fine at that time. At EDB we have a customer case where\npartitionwise join plan is worse than non-partitionwise join plan. The\nsuboptimal plan is chosen because of the above code. I think we should fix\nthe problem.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/786.1565541557%40sss.pgh.pa.us#9d50e1b375201f29bbf17072d75569e3\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Thu, Apr 11, 2024 at 12:07 PM Ashutosh Bapat <[email protected]> wrote:Hi All,Per below code and comment in apply_scanjoin_target_to_paths(), the function zaps all the paths of a partitioned relation.\t/*\t * If the rel is partitioned, we want to drop its existing paths and\t * generate new ones.  This function would still be correct if we kept the\t * existing paths: we'd modify them to generate the correct target above\t * the partitioning Append, and then they'd compete on cost with paths\t * generating the target below the Append... snip ...\t */\tif (rel_is_partitioned)\t\trel->pathlist = NIL;Later the function adjusts the targets of paths in child relations and constructs Append paths from them. That works for simple partitioned relations but not for join between partitioned relations. When enable_partitionwise_join is true, the joinrel representing a join between partitioned relations may have join paths joining append paths and Append paths containing child join paths. Once we zap the pathlist, the only paths that can be computed again are the Append paths. If the optimal path, before applying the new target, was a join of append paths it will be lost forever. 
This will result in choosing a suboptimal Append path.We have one such query in our regression set.SELECT t1.a, t1.c, t2.a, t2.c, t3.a, t3.c FROM (plt1_adv t1 LEFT JOIN plt2_adv t2 ON (t1.c = t2.c)) FULL JOIN plt3_adv t3 ON (t1.c = t3.c) WHERE coalesce(t1.a, 0    ) % 5 != 3 AND coalesce(t1.a, 0) % 5 != 4 ORDER BY t1.c, t1.a, t2.a, t3.a;For this query, the cheapest Append of Joins path has cost 24.97..25.57 and the cheapest Join of Appends path has cost 21.29..21.81. The latter should be chosen even though enable_partitionwise_join is ON. But this function chooses the first.The solution is to zap the pathlists only for simple partitioned relations like the attached patch.With this patch above query does not choose non-partitionwise join path and partition_join test fails. That's expected. But we need to replace that query with some query which uses partitionwise join while maintaining the conditions of the test as explained in the comment above that query. I have tried a few variations but without success. Suggestions welcome.The problem is reproducible on PG 15. The patch is based on 15_STABLE branch. But the problem exists in recent branches as well.\nAfter sending email I found another thread where this problem was discussed [1]. Tom has the same explanation as mine however he favoured not doing anything about it. Since the rationale was based on the cost and not actual performance, it was fine at that time. At EDB we have a customer case where partitionwise join plan is worse than non-partitionwise join plan. The suboptimal plan is chosen because of the above code. I think we should fix the problem.[1] https://www.postgresql.org/message-id/flat/786.1565541557%40sss.pgh.pa.us#9d50e1b375201f29bbf17072d75569e3-- Best Wishes,Ashutosh Bapat", "msg_date": "Thu, 11 Apr 2024 12:24:37 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: apply_scanjoin_target_to_paths and partitionwise join" }, { "msg_contents": "Here's patch with\n\nOn Thu, Apr 11, 2024 at 12:24 PM Ashutosh Bapat <\[email protected]> wrote:\n\n>\n>\n> On Thu, Apr 11, 2024 at 12:07 PM Ashutosh Bapat <\n> [email protected]> wrote:\n>\n>> Hi All,\n>> Per below code and comment in apply_scanjoin_target_to_paths(), the\n>> function zaps all the paths of a partitioned relation.\n>> /*\n>> * If the rel is partitioned, we want to drop its existing paths and\n>> * generate new ones. This function would still be correct if we kept the\n>> * existing paths: we'd modify them to generate the correct target above\n>> * the partitioning Append, and then they'd compete on cost with paths\n>> * generating the target below the Append\n>> ... snip ...\n>> */\n>> if (rel_is_partitioned)\n>> rel->pathlist = NIL;\n>>\n>> Later the function adjusts the targets of paths in child relations and\n>> constructs Append paths from them. That works for simple partitioned\n>> relations but not for join between partitioned relations. When\n>> enable_partitionwise_join is true, the joinrel representing a join between\n>> partitioned relations may have join paths joining append paths and Append\n>> paths containing child join paths. Once we zap the pathlist, the only paths\n>> that can be computed again are the Append paths. If the optimal path,\n>> before applying the new target, was a join of append paths it will be lost\n>> forever. 
This will result in choosing a suboptimal Append path.\n>>\n>> We have one such query in our regression set.\n>>\n>> SELECT t1.a, t1.c, t2.a, t2.c, t3.a, t3.c FROM (plt1_adv t1 LEFT JOIN\n>> plt2_adv t2 ON (t1.c = t2.c)) FULL JOIN plt3_adv t3 ON (t1.c = t3.c) WHERE\n>> coalesce(t1.a, 0 ) % 5 != 3 AND coalesce(t1.a, 0) % 5 != 4 ORDER BY\n>> t1.c, t1.a, t2.a, t3.a;\n>>\n>> For this query, the cheapest Append of Joins path has cost 24.97..25.57\n>> and the cheapest Join of Appends path has cost 21.29..21.81. The latter\n>> should be chosen even though enable_partitionwise_join is ON. But this\n>> function chooses the first.\n>>\n>> The solution is to zap the pathlists only for simple partitioned\n>> relations like the attached patch.\n>>\n>> With this patch above query does not choose non-partitionwise join path\n>> and partition_join test fails. That's expected. But we need to replace that\n>> query with some query which uses partitionwise join while maintaining the\n>> conditions of the test as explained in the comment above that query. I have\n>> tried a few variations but without success. Suggestions welcome.\n>>\n>\nFound a replacement for that query by using a 2-way join instead of 3-way\njoin. The query still executes the referenced code in\nprocess_outer_partition() as mentioned in the comment. I did think about\nremoving the original query. But it is the only example in our regression\ntests where partitionwise join is more costly than non-partitionwise join.\nSo I have left it as is in the test. I am fine if others think that we\nshould remove it.\n\nAdding to the next commitfest but better to consider this for the next set\nof minor releases.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Mon, 15 Apr 2024 12:29:47 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: apply_scanjoin_target_to_paths and partitionwise join" }, { "msg_contents": "Hi Ashutosh & hackers,\n\nOn Mon, Apr 15, 2024 at 9:00 AM Ashutosh Bapat\n<[email protected]> wrote:\n>\n> Here's patch with\n>\n[..]\n> Adding to the next commitfest but better to consider this for the next set of minor releases.\n\n1. The patch does not pass cfbot -\nhttps://cirrus-ci.com/task/5486258451906560 on master due to test\nfailure \"not ok 206 + partition_join\"\n\n2. Without the patch applied, the result of the meson test on master\nwas clean (no failures , so master is fine). After applying patch\nthere were expected some hunk failures (as the patch was created for\n15_STABLE):\n\npatching file src/backend/optimizer/plan/planner.c\nHunk #1 succeeded at 7567 (offset 468 lines).\nHunk #2 succeeded at 7593 (offset 468 lines).\npatching file src/test/regress/expected/partition_join.out\nHunk #1 succeeded at 4777 (offset 56 lines).\nHunk #2 succeeded at 4867 (offset 56 lines).\npatching file src/test/regress/sql/partition_join.sql\nHunk #1 succeeded at 1136 (offset 1 line).\n\n3. 
Without patch there is performance regression/bug on master (cost\nis higher with enable_partitionwise_join=on that without it):\n\ndata preparation:\n-- Test the process_outer_partition() code path\nCREATE TABLE plt1_adv (a int, b int, c text) PARTITION BY LIST (c);\nCREATE TABLE plt1_adv_p1 PARTITION OF plt1_adv FOR VALUES IN ('0000',\n'0001', '0002');\nCREATE TABLE plt1_adv_p2 PARTITION OF plt1_adv FOR VALUES IN ('0003', '0004');\nINSERT INTO plt1_adv SELECT i, i, to_char(i % 5, 'FM0000') FROM\ngenerate_series(0, 24) i;\nANALYZE plt1_adv;\n\nCREATE TABLE plt2_adv (a int, b int, c text) PARTITION BY LIST (c);\nCREATE TABLE plt2_adv_p1 PARTITION OF plt2_adv FOR VALUES IN ('0002');\nCREATE TABLE plt2_adv_p2 PARTITION OF plt2_adv FOR VALUES IN ('0003', '0004');\nINSERT INTO plt2_adv SELECT i, i, to_char(i % 5, 'FM0000') FROM\ngenerate_series(0, 24) i WHERE i % 5 IN (2, 3, 4);\nANALYZE plt2_adv;\n\nCREATE TABLE plt3_adv (a int, b int, c text) PARTITION BY LIST (c);\nCREATE TABLE plt3_adv_p1 PARTITION OF plt3_adv FOR VALUES IN ('0001');\nCREATE TABLE plt3_adv_p2 PARTITION OF plt3_adv FOR VALUES IN ('0003', '0004');\nINSERT INTO plt3_adv SELECT i, i, to_char(i % 5, 'FM0000') FROM\ngenerate_series(0, 24) i WHERE i % 5 IN (1, 3, 4);\nANALYZE plt3_adv;\n\noff:\nEXPLAIN SELECT t1.a, t1.c, t2.a, t2.c, t3.a, t3.c FROM (plt1_adv t1\nLEFT JOIN plt2_adv t2 ON (t1.c = t2.c)) FULL JOIN plt3_adv t3 ON (t1.c\n= t3.c) WHERE coalesce(t1.a, 0) % 5 != 3 AND coalesce(t1.a, 0) % 5 !=\n4 ORDER BY t1.c, t1.a, t2.a, t3.a;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Sort (cost=22.02..22.58 rows=223 width=27)\n Sort Key: t1.c, t1.a, t2.a, t3.a\n -> Hash Full Join (cost=4.83..13.33 rows=223 width=27)\n[..]\n\n\nwith enable_partitionwise_join=ON (see the jump from cost 22.02 -> 27.65):\nEXPLAIN SELECT t1.a, t1.c, t2.a, t2.c, t3.a, t3.c FROM (plt1_adv t1\nLEFT JOIN plt2_adv t2 ON (t1.c = t2.c)) FULL JOIN plt3_adv t3 ON (t1.c\n= t3.c) WHERE coalesce(t1.a, 0) % 5 != 3 AND coalesce(t1.a, 0) % 5 !=\n4 ORDER BY t1.c, t1.a, t2.a, t3.a;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Sort (cost=27.65..28.37 rows=289 width=27)\n Sort Key: t1.c, t1.a, t2.a, t3.a\n -> Append (cost=2.23..15.83 rows=289 width=27)\n -> Hash Full Join (cost=2.23..4.81 rows=41 width=27)\n[..]\n -> Hash Full Join (cost=2.45..9.57 rows=248 width=27)\n[..]\n\n\nHowever with the patch applied the plan with minimal cost is always\nchosen (\"22\"):\n\nexplain SELECT t1.a, t1.c, t2.a, t2.c, t3.a, t3.c FROM (plt1_adv t1 LEFT JOIN\nplt2_adv t2 ON (t1.c = t2.c)) FULL JOIN plt3_adv t3 ON (t1.c = t3.c) WHERE\ncoalesce(t1.a, 0 ) % 5 != 3 AND coalesce(t1.a, 0) % 5 != 4 ORDER BY\nt1.c, t1.a, t2.a, t3.a;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Sort (cost=22.02..22.58 rows=223 width=27)\n Sort Key: t1.c, t1.a, t2.a, t3.a\n -> Hash Full Join (cost=4.83..13.33 rows=223 width=27)\n[..]\n\n\nset enable_partitionwise_join to on;\nexplain SELECT t1.a, t1.c, t2.a, t2.c, t3.a, t3.c FROM (plt1_adv t1 LEFT JOIN\nplt2_adv t2 ON (t1.c = t2.c)) FULL JOIN plt3_adv t3 ON (t1.c = t3.c) WHERE\ncoalesce(t1.a, 0 ) % 5 != 3 AND coalesce(t1.a, 0) % 5 != 4 ORDER BY\nt1.c, t1.a, t2.a, t3.a;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Sort (cost=22.02..22.58 rows=223 width=27)\n Sort Key: t1.c, t1.a, t2.a, t3.a\n -> Hash 
Full Join (cost=4.83..13.33 rows=223 width=27)\n[..]\n\nwith the patch applied, the minimal cost (with toggle on or off) the\ncost always stays the minimal from the available ones. We cannot\nprovide a reproducer for real performance regression, but for the\naffected customer it took 530+s (with enable_partitionwise_join=on)\nand without that GUC it it was ~23s.\n\n4. meson test ends up with failures like below:\n\n 4/290 postgresql:regress / regress/regress\n ERROR 32.67s\n 6/290 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade\n ERROR 56.96s\n 35/290 postgresql:recovery / recovery/027_stream_regress\n ERROR 40.20s\n\n(all due to \"regression tests pass\" failures)\n\nthe partition_join.sql is failing for test 206, so for this:\n\n-- partitionwise join with fractional paths\nCREATE TABLE fract_t (id BIGINT, PRIMARY KEY (id)) PARTITION BY RANGE (id);\nCREATE TABLE fract_t0 PARTITION OF fract_t FOR VALUES FROM ('0') TO ('1000');\nCREATE TABLE fract_t1 PARTITION OF fract_t FOR VALUES FROM ('1000') TO ('2000');\n\n-- insert data\nINSERT INTO fract_t (id) (SELECT generate_series(0, 1999));\nANALYZE fract_t;\n\n-- verify plan; nested index only scans\nSET max_parallel_workers_per_gather = 0;\nSET enable_partitionwise_join = on;\n\nthe testsuite was expecting the below with enable_partitionwise_join = on;\n\nEXPLAIN (COSTS OFF)\nSELECT x.id, y.id FROM fract_t x LEFT JOIN fract_t y USING (id) ORDER\nBY x.id ASC LIMIT 10;\n QUERY PLAN\n-----------------------------------------------------------------------\n Limit\n -> Merge Append\n Sort Key: x.id\n -> Merge Left Join\n Merge Cond: (x_1.id = y_1.id)\n -> Index Only Scan using fract_t0_pkey on fract_t0 x_1\n -> Index Only Scan using fract_t0_pkey on fract_t0 y_1\n -> Merge Left Join\n Merge Cond: (x_2.id = y_2.id)\n -> Index Only Scan using fract_t1_pkey on fract_t1 x_2\n -> Index Only Scan using fract_t1_pkey on fract_t1 y_2\n\nbut actually with patch it gets this (here with costs):\n\nEXPLAIN (COSTS) SELECT x.id, y.id FROM fract_t x LEFT JOIN fract_t y\nUSING (id) ORDER BY x.id ASC LIMIT 10;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------\n Limit (cost=1.10..2.21 rows=10 width=16)\n -> Merge Left Join (cost=1.10..223.10 rows=2000 width=16)\n Merge Cond: (x.id = y.id)\n -> Append (cost=0.55..96.55 rows=2000 width=8)\n[..]\n -> Append (cost=0.55..96.55 rows=2000 width=8)\n[..]\n\n\nif you run it without patch and again with enable_partitionwise_join=on:\n\nEXPLAIN SELECT x.id, y.id FROM fract_t x LEFT JOIN fract_t y USING\n(id) ORDER BY x.id ASC LIMIT 10;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------\n Limit (cost=1.11..2.22 rows=10 width=16)\n -> Merge Append (cost=1.11..223.11 rows=2000 width=16)\n Sort Key: x.id\n -> Merge Left Join (cost=0.55..101.55 rows=1000 width=16)\n[..]\n -> Merge Left Join (cost=0.55..101.55 rows=1000 width=16)\n[..]\n\nSo with the patch that SQL does not use partitionwise join as it finds\nit more optimal to stick to a plan with cost of \"1.10..2.21\" instead\nof \"1.11..2.22\" (w/ partition_join), nitpicking but still a failure\ntechnically. Perhaps it could be even removed? 
(it's pretty close to\nnoise?).\n\n-J.\n\n\n", "msg_date": "Mon, 6 May 2024 12:55:27 +0200", "msg_from": "Jakub Wartak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: apply_scanjoin_target_to_paths and partitionwise join" }, { "msg_contents": "On Mon, May 6, 2024 at 4:26 PM Jakub Wartak <[email protected]>\nwrote:\n\n> Hi Ashutosh & hackers,\n>\n> On Mon, Apr 15, 2024 at 9:00 AM Ashutosh Bapat\n> <[email protected]> wrote:\n> >\n> > Here's patch with\n> >\n> [..]\n> > Adding to the next commitfest but better to consider this for the next\n> set of minor releases.\n>\n> 1. The patch does not pass cfbot -\n> https://cirrus-ci.com/task/5486258451906560 on master due to test\n> failure \"not ok 206 + partition_join\"\n>\n\nSo I need to create a patch for master first. I thought CFBot somehow knew\nthat the patch was created for PG 15. :)\n\n\n>\n> 2. Without the patch applied, the result of the meson test on master\n> was clean (no failures , so master is fine). After applying patch\n> there were expected some hunk failures (as the patch was created for\n> 15_STABLE):\n>\n> patching file src/backend/optimizer/plan/planner.c\n> Hunk #1 succeeded at 7567 (offset 468 lines).\n> Hunk #2 succeeded at 7593 (offset 468 lines).\n> patching file src/test/regress/expected/partition_join.out\n> Hunk #1 succeeded at 4777 (offset 56 lines).\n> Hunk #2 succeeded at 4867 (offset 56 lines).\n> patching file src/test/regress/sql/partition_join.sql\n> Hunk #1 succeeded at 1136 (offset 1 line).\n>\n> 3. Without patch there is performance regression/bug on master (cost\n> is higher with enable_partitionwise_join=on that without it):\n>\n> data preparation:\n> -- Test the process_outer_partition() code path\n> CREATE TABLE plt1_adv (a int, b int, c text) PARTITION BY LIST (c);\n> CREATE TABLE plt1_adv_p1 PARTITION OF plt1_adv FOR VALUES IN ('0000',\n> '0001', '0002');\n> CREATE TABLE plt1_adv_p2 PARTITION OF plt1_adv FOR VALUES IN ('0003',\n> '0004');\n> INSERT INTO plt1_adv SELECT i, i, to_char(i % 5, 'FM0000') FROM\n> generate_series(0, 24) i;\n> ANALYZE plt1_adv;\n>\n> CREATE TABLE plt2_adv (a int, b int, c text) PARTITION BY LIST (c);\n> CREATE TABLE plt2_adv_p1 PARTITION OF plt2_adv FOR VALUES IN ('0002');\n> CREATE TABLE plt2_adv_p2 PARTITION OF plt2_adv FOR VALUES IN ('0003',\n> '0004');\n> INSERT INTO plt2_adv SELECT i, i, to_char(i % 5, 'FM0000') FROM\n> generate_series(0, 24) i WHERE i % 5 IN (2, 3, 4);\n> ANALYZE plt2_adv;\n>\n> CREATE TABLE plt3_adv (a int, b int, c text) PARTITION BY LIST (c);\n> CREATE TABLE plt3_adv_p1 PARTITION OF plt3_adv FOR VALUES IN ('0001');\n> CREATE TABLE plt3_adv_p2 PARTITION OF plt3_adv FOR VALUES IN ('0003',\n> '0004');\n> INSERT INTO plt3_adv SELECT i, i, to_char(i % 5, 'FM0000') FROM\n> generate_series(0, 24) i WHERE i % 5 IN (1, 3, 4);\n> ANALYZE plt3_adv;\n>\n> off:\n> EXPLAIN SELECT t1.a, t1.c, t2.a, t2.c, t3.a, t3.c FROM (plt1_adv t1\n> LEFT JOIN plt2_adv t2 ON (t1.c = t2.c)) FULL JOIN plt3_adv t3 ON (t1.c\n> = t3.c) WHERE coalesce(t1.a, 0) % 5 != 3 AND coalesce(t1.a, 0) % 5 !=\n> 4 ORDER BY t1.c, t1.a, t2.a, t3.a;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------\n> Sort (cost=22.02..22.58 rows=223 width=27)\n> Sort Key: t1.c, t1.a, t2.a, t3.a\n> -> Hash Full Join (cost=4.83..13.33 rows=223 width=27)\n> [..]\n>\n>\n> with enable_partitionwise_join=ON (see the jump from cost 22.02 -> 27.65):\n> EXPLAIN SELECT t1.a, t1.c, t2.a, t2.c, t3.a, t3.c FROM (plt1_adv t1\n> LEFT JOIN plt2_adv t2 
ON (t1.c = t2.c)) FULL JOIN plt3_adv t3 ON (t1.c\n> = t3.c) WHERE coalesce(t1.a, 0) % 5 != 3 AND coalesce(t1.a, 0) % 5 !=\n> 4 ORDER BY t1.c, t1.a, t2.a, t3.a;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------\n> Sort (cost=27.65..28.37 rows=289 width=27)\n> Sort Key: t1.c, t1.a, t2.a, t3.a\n> -> Append (cost=2.23..15.83 rows=289 width=27)\n> -> Hash Full Join (cost=2.23..4.81 rows=41 width=27)\n> [..]\n> -> Hash Full Join (cost=2.45..9.57 rows=248 width=27)\n> [..]\n>\n>\n> However with the patch applied the plan with minimal cost is always\n> chosen (\"22\"):\n>\n> explain SELECT t1.a, t1.c, t2.a, t2.c, t3.a, t3.c FROM (plt1_adv t1 LEFT\n> JOIN\n> plt2_adv t2 ON (t1.c = t2.c)) FULL JOIN plt3_adv t3 ON (t1.c = t3.c) WHERE\n> coalesce(t1.a, 0 ) % 5 != 3 AND coalesce(t1.a, 0) % 5 != 4 ORDER BY\n> t1.c, t1.a, t2.a, t3.a;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------\n> Sort (cost=22.02..22.58 rows=223 width=27)\n> Sort Key: t1.c, t1.a, t2.a, t3.a\n> -> Hash Full Join (cost=4.83..13.33 rows=223 width=27)\n> [..]\n>\n>\n> set enable_partitionwise_join to on;\n> explain SELECT t1.a, t1.c, t2.a, t2.c, t3.a, t3.c FROM (plt1_adv t1 LEFT\n> JOIN\n> plt2_adv t2 ON (t1.c = t2.c)) FULL JOIN plt3_adv t3 ON (t1.c = t3.c) WHERE\n> coalesce(t1.a, 0 ) % 5 != 3 AND coalesce(t1.a, 0) % 5 != 4 ORDER BY\n> t1.c, t1.a, t2.a, t3.a;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------\n> Sort (cost=22.02..22.58 rows=223 width=27)\n> Sort Key: t1.c, t1.a, t2.a, t3.a\n> -> Hash Full Join (cost=4.83..13.33 rows=223 width=27)\n> [..]\n>\n> with the patch applied, the minimal cost (with toggle on or off) the\n> cost always stays the minimal from the available ones. We cannot\n> provide a reproducer for real performance regression, but for the\n> affected customer it took 530+s (with enable_partitionwise_join=on)\n\nand without that GUC it it was ~23s.\n>\n\nThanks for providing actual timing. That's a huge difference.\n\n\n> 4. 
meson test ends up with failures like below:\n>\n> 4/290 postgresql:regress / regress/regress\n> ERROR 32.67s\n> 6/290 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade\n> ERROR 56.96s\n> 35/290 postgresql:recovery / recovery/027_stream_regress\n> ERROR 40.20s\n>\n> (all due to \"regression tests pass\" failures)\n>\n> the partition_join.sql is failing for test 206, so for this:\n>\n> -- partitionwise join with fractional paths\n> CREATE TABLE fract_t (id BIGINT, PRIMARY KEY (id)) PARTITION BY RANGE (id);\n> CREATE TABLE fract_t0 PARTITION OF fract_t FOR VALUES FROM ('0') TO\n> ('1000');\n> CREATE TABLE fract_t1 PARTITION OF fract_t FOR VALUES FROM ('1000') TO\n> ('2000');\n>\n> -- insert data\n> INSERT INTO fract_t (id) (SELECT generate_series(0, 1999));\n> ANALYZE fract_t;\n>\n> -- verify plan; nested index only scans\n> SET max_parallel_workers_per_gather = 0;\n> SET enable_partitionwise_join = on;\n>\n> the testsuite was expecting the below with enable_partitionwise_join = on;\n>\n> EXPLAIN (COSTS OFF)\n> SELECT x.id, y.id FROM fract_t x LEFT JOIN fract_t y USING (id) ORDER\n> BY x.id ASC LIMIT 10;\n> QUERY PLAN\n> -----------------------------------------------------------------------\n> Limit\n> -> Merge Append\n> Sort Key: x.id\n> -> Merge Left Join\n> Merge Cond: (x_1.id = y_1.id)\n> -> Index Only Scan using fract_t0_pkey on fract_t0 x_1\n> -> Index Only Scan using fract_t0_pkey on fract_t0 y_1\n> -> Merge Left Join\n> Merge Cond: (x_2.id = y_2.id)\n> -> Index Only Scan using fract_t1_pkey on fract_t1 x_2\n> -> Index Only Scan using fract_t1_pkey on fract_t1 y_2\n>\n> but actually with patch it gets this (here with costs):\n>\n> EXPLAIN (COSTS) SELECT x.id, y.id FROM fract_t x LEFT JOIN fract_t y\n> USING (id) ORDER BY x.id ASC LIMIT 10;\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------------------------------------------\n> Limit (cost=1.10..2.21 rows=10 width=16)\n> -> Merge Left Join (cost=1.10..223.10 rows=2000 width=16)\n> Merge Cond: (x.id = y.id)\n> -> Append (cost=0.55..96.55 rows=2000 width=8)\n> [..]\n> -> Append (cost=0.55..96.55 rows=2000 width=8)\n> [..]\n>\n>\n> if you run it without patch and again with enable_partitionwise_join=on:\n>\n> EXPLAIN SELECT x.id, y.id FROM fract_t x LEFT JOIN fract_t y USING\n> (id) ORDER BY x.id ASC LIMIT 10;\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------------------------------------------\n> Limit (cost=1.11..2.22 rows=10 width=16)\n> -> Merge Append (cost=1.11..223.11 rows=2000 width=16)\n> Sort Key: x.id\n> -> Merge Left Join (cost=0.55..101.55 rows=1000 width=16)\n> [..]\n> -> Merge Left Join (cost=0.55..101.55 rows=1000 width=16)\n> [..]\n>\n> So with the patch that SQL does not use partitionwise join as it finds\n> it more optimal to stick to a plan with cost of \"1.10..2.21\" instead\n> of \"1.11..2.22\" (w/ partition_join), nitpicking but still a failure\n> technically. Perhaps it could be even removed? 
(it's pretty close to\n> noise?).\n>\n\nI think we need to replace the failing query with something which uses\npartitionwise join even with the patch.\n\nI will take a look at this after returning from a two week long vacation,\nunless someone else is interested in fixing this before that.\n\n
-- Best Wishes,Ashutosh Bapat", "msg_date": "Mon, 6 May 2024 18:28:14 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: apply_scanjoin_target_to_paths and partitionwise join" }, { "msg_contents": "On Mon, May 6, 2024 at 6:28 PM Ashutosh Bapat <[email protected]>\nwrote:\n\n>\n>\n> On Mon, May 6, 2024 at 4:26 PM Jakub Wartak <[email protected]>\n> wrote:\n>\n>> Hi Ashutosh & hackers,\n>>\n>> On Mon, Apr 15, 2024 at 9:00 AM Ashutosh Bapat\n>> <[email protected]> wrote:\n>> >\n>> > Here's patch with\n>> >\n>> [..]\n>> > Adding to the next commitfest but better to consider this for the next\n>> set of minor releases.\n>>\n>> 1. The patch does not pass cfbot -\n>> https://cirrus-ci.com/task/5486258451906560 on master due to test\n>> failure \"not ok 206 + partition_join\"\n>>\n>\n> So I need to create a patch for master first. I thought CFBot somehow knew\n> that the patch was created for PG 15. :)\n>\n\nPFA patch for master. That should fix CfBot.\n\n\n>\n>> 4. meson test ends up with failures like below:\n>>\n>> 4/290 postgresql:regress / regress/regress\n>> ERROR 32.67s\n>> 6/290 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade\n>> ERROR 56.96s\n>> 35/290 postgresql:recovery / recovery/027_stream_regress\n>> ERROR 40.20s\n>>\n>> (all due to \"regression tests pass\" failures)\n>> [...]\n>\n>\n>> So with the patch that SQL does not use partitionwise join as it finds\n>> it more optimal to stick to a plan with cost of \"1.10..2.21\" instead\n>> of \"1.11..2.22\" (w/ partition_join), nitpicking but still a failure\n>> technically. Perhaps it could be even removed? (it's pretty close to\n>> noise?).\n>>\n>\n>\nThe test was added by 6b94e7a6da2f1c6df1a42efe64251f32a444d174 and later\nmodified by 3c569049b7b502bb4952483d19ce622ff0af5fd6. The modification just\navoided eliminating the join, so that change can be ignored.\n6b94e7a6da2f1c6df1a42efe64251f32a444d174 added the tests to test fractional\npaths being considered when creating ordered append paths. Reading the\ncommit message, I was expecting a test which did not use a join as well and\nalso which used inheritance. But it seems that the queries added by that\ncommit, test all the required scenarios and hence we see two queries\ninvolving join between partitioned tables. As the comment there says the\nintention is to verify index only scans and not exactly partitionwise join.\nSo just fixing the expected output of one query looks fine. The other query\nwill test partitionwise join and fractional paths anyway. I am including\nTomas, Arne and Zhihong, who worked on the first commit, to comment on\nexpected output changes.\n\nI will create patches for the back-branches once the patch for master is in\na committable state.\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Wed, 22 May 2024 13:27:07 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: apply_scanjoin_target_to_paths and partitionwise join" }, { "msg_contents": "Hi Ashutosh,\n\nthanks for bringing this to my attention. I'll first share a few \nthoughts about the change and respond regarding the test below.\n\nI clearly understand your intention with this patch. It's an issue I run \ninto from time to time.\n\nI did some testing with some benchmark sets back with pg 14. 
I did the \nfollowing: I planned with and without the partitionwise join GUC \n(explain) and took the one with the lower cost to execute the query.\n\nInterestingly, even discounting the overhead and additional planning \ntime, the option with the lower cost turned out to be slower on our \nbenchmark set back then. The median query with disabled GUC was quicker, \nbut on average that was not the case. The observation is one, I'd \ngenerally describe as \"The more options you consider, the more ways we \nhave to be horribly wrong. More options for the planner are a great way \nto uncover the various shortcomings of it.\"\n\nThat might be specific to the benchmark I was working with at the time. \nBut that made me drop the issue back then. That is ofc no valid reason \nnot to go in the direction of making the planner to consider more \noptions. :)\n\nMaybe we can discuss that in person next week?\n\nOn 2024-05-22 07:57, Ashutosh Bapat wrote:\n> On Mon, May 6, 2024 at 6:28 PM Ashutosh Bapat\n> <[email protected]> wrote:\n>>> 4. meson test ends up with failures like below:\n>>> \n>>> 4/290 postgresql:regress / regress/regress\n>>> ERROR 32.67s\n>>> 6/290 postgresql:pg_upgrade / pg_upgrade/002_pg_upgrade\n>>> ERROR 56.96s\n>>> 35/290 postgresql:recovery / recovery/027_stream_regress\n>>> ERROR 40.20s\n>>> \n>>> (all due to \"regression tests pass\" failures)\n>>> [...]\n> \n>>> So with the patch that SQL does not use partitionwise join as it\n>>> finds\n>>> it more optimal to stick to a plan with cost of \"1.10..2.21\"\n>>> instead\n>>> of \"1.11..2.22\" (w/ partition_join), nitpicking but still a\n>>> failure\n>>> technically. Perhaps it could be even removed? (it's pretty close\n>>> to\n>>> noise?).\n> \n> The test was added by 6b94e7a6da2f1c6df1a42efe64251f32a444d174 and\n> later modified by 3c569049b7b502bb4952483d19ce622ff0af5fd6. The\n> modification just avoided eliminating the join, so that change can be\n> ignored. 6b94e7a6da2f1c6df1a42efe64251f32a444d174 added the tests to\n> test fractional paths being considered when creating ordered append\n> paths. Reading the commit message, I was expecting a test which did\n> not use a join as well and also which used inheritance. But it seems\n> that the queries added by that commit, test all the required scenarios\n> and hence we see two queries involving join between partitioned\n> tables. As the comment there says the intention is to verify index\n> only scans and not exactly partitionwise join. So just fixing the\n> expected output of one query looks fine. The other query will test\n> partitionwise join and fractional paths anyway. I am including Tomas,\n> Arne and Zhihong, who worked on the first commit, to comment on\n> expected output changes.\n\nThe test was put there to make sure a fractional join is considered in \nthe case that a partitionwise join is considered. Because that wasn't \nthe case before.\n\nThe important part for my use case back then was that we do Merge \nJoin(s) at all. 
The test result after your patch still confirms that.\n\nIf we simply modify the test as such, we no longer confirm, whether the \ncode path introduced in 6b94e7a6da2f1c6df1a42efe64251f32a444d174 is \nstill working.\n\nMaybe it's worthwhile to add something like\n\ncreate index on fract_t0 ((id*id));\n\nEXPLAIN (COSTS OFF)\nSELECT * FROM fract_t x JOIN fract_t y USING (id) ORDER BY id * id DESC \nLIMIT 10;\n QUERY PLAN\n-------------------------------------------------------------------------------\n Limit\n -> Merge Append\n Sort Key: ((x.id * x.id)) DESC\n -> Nested Loop\n -> Index Scan Backward using fract_t0_expr_idx on \nfract_t0 x_1\n -> Index Only Scan using fract_t0_pkey on fract_t0 y_1\n Index Cond: (id = x_1.id)\n -> Sort\n Sort Key: ((x_2.id * x_2.id)) DESC\n -> Hash Join\n Hash Cond: (x_2.id = y_2.id)\n -> Seq Scan on fract_t1 x_2\n -> Hash\n -> Seq Scan on fract_t1 y_2\n\n\nI am not sure, whether it's worth the extra test cycles on every animal, \nbut since we are not creating an extra table it might be ok.\nI don't have a very strong feeling about the above test case.\n\n> I will create patches for the back-branches once the patch for master\n> is in a committable state.\n\nI am not sure, whether it's really a bug. I personally wouldn't be brave \nenough to back patch this. I don't want to deal with complaining end \nusers. Suddenly their optimizer, which always had horrible estimates, \nwas actually able to do harmful stuff with them. Only due to a minor \nversion upgrade. I think that's a bad idea to backpatch something with \ncomplex performance implications. Especially since they might even be \nbased on potentially inaccurate data...\n\n> \n> --\n> \n> Best Wishes,\n> Ashutosh Bapat\n\nAll the best\nArne\n\n\n", "msg_date": "Fri, 24 May 2024 18:02:25 +0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: apply_scanjoin_target_to_paths and partitionwise join" }, { "msg_contents": "On Fri, May 24, 2024 at 2:02 PM <[email protected]> wrote:\n> I am not sure, whether it's really a bug. I personally wouldn't be brave\n> enough to back patch this. I don't want to deal with complaining end\n> users. Suddenly their optimizer, which always had horrible estimates,\n> was actually able to do harmful stuff with them. Only due to a minor\n> version upgrade. I think that's a bad idea to backpatch something with\n> complex performance implications. Especially since they might even be\n> based on potentially inaccurate data...\n\n+1.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 May 2024 14:47:00 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: apply_scanjoin_target_to_paths and partitionwise join" }, { "msg_contents": "On Fri, May 24, 2024 at 11:02 AM <[email protected]> wrote:\n\n> Hi Ashutosh,\n>\n> thanks for bringing this to my attention. I'll first share a few\n> thoughts about the change and respond regarding the test below.\n>\n> I clearly understand your intention with this patch. It's an issue I run\n> into from time to time.\n>\n> I did some testing with some benchmark sets back with pg 14. I did the\n> following: I planned with and without the partitionwise join GUC\n> (explain) and took the one with the lower cost to execute the query.\n>\n> Interestingly, even discounting the overhead and additional planning\n> time, the option with the lower cost turned out to be slower on our\n> benchmark set back then. 
The median query with disabled GUC was quicker,\n> but on average that was not the case. The observation is one, I'd\n> generally describe as \"The more options you consider, the more ways we\n> have to be horribly wrong. More options for the planner are a great way\n> to uncover the various shortcomings of it.\"\n>\n> That might be specific to the benchmark I was working with at the time.\n> But that made me drop the issue back then. That is ofc no valid reason\n> not to go in the direction of making the planner to consider more\n> options. :)\n>\n\nIn summary, you are suggesting that partitionwise join performs better than\nplain join even if the latter one has lower cost. Hence fixing this issue\nhas never become a priority for you. Am I right?\n\nPlans with lower costs being slower is not new for optimizer.\nPartitionwise join just adds another case.\n\n\n>\n> Maybe we can discuss that in person next week?\n>\n\nSure.\n\n\n>\n> On 2024-05-22 07:57, Ashutosh Bapat wrote:\n> >\n> > The test was added by 6b94e7a6da2f1c6df1a42efe64251f32a444d174 and\n> > later modified by 3c569049b7b502bb4952483d19ce622ff0af5fd6. The\n> > modification just avoided eliminating the join, so that change can be\n> > ignored. 6b94e7a6da2f1c6df1a42efe64251f32a444d174 added the tests to\n> > test fractional paths being considered when creating ordered append\n> > paths. Reading the commit message, I was expecting a test which did\n> > not use a join as well and also which used inheritance. But it seems\n> > that the queries added by that commit, test all the required scenarios\n> > and hence we see two queries involving join between partitioned\n> > tables. As the comment there says the intention is to verify index\n> > only scans and not exactly partitionwise join. So just fixing the\n> > expected output of one query looks fine. The other query will test\n> > partitionwise join and fractional paths anyway. I am including Tomas,\n> > Arne and Zhihong, who worked on the first commit, to comment on\n> > expected output changes.\n>\n> The test was put there to make sure a fractional join is considered in\n> the case that a partitionwise join is considered. Because that wasn't\n> the case before.\n>\n> The important part for my use case back then was that we do Merge\n> Join(s) at all. The test result after your patch still confirms that.\n>\n> If we simply modify the test as such, we no longer confirm, whether the\n> code path introduced in 6b94e7a6da2f1c6df1a42efe64251f32a444d174 is\n> still working.\n>\n> Maybe it's worthwhile to add something like\n>\n> create index on fract_t0 ((id*id));\n>\n> EXPLAIN (COSTS OFF)\n> SELECT * FROM fract_t x JOIN fract_t y USING (id) ORDER BY id * id DESC\n> LIMIT 10;\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------------\n> Limit\n> -> Merge Append\n> Sort Key: ((x.id * x.id)) DESC\n> -> Nested Loop\n> -> Index Scan Backward using fract_t0_expr_idx on\n> fract_t0 x_1\n> -> Index Only Scan using fract_t0_pkey on fract_t0 y_1\n> Index Cond: (id = x_1.id)\n> -> Sort\n> Sort Key: ((x_2.id * x_2.id)) DESC\n> -> Hash Join\n> Hash Cond: (x_2.id = y_2.id)\n> -> Seq Scan on fract_t1 x_2\n> -> Hash\n> -> Seq Scan on fract_t1 y_2\n>\n>\n> I am not sure, whether it's worth the extra test cycles on every animal,\n> but since we are not creating an extra table it might be ok.\n> I don't have a very strong feeling about the above test case.\n>\n\nMy patch removes redundant enable_partitionwise_join = on since that's done\nvery early in the test. 
Apart from that it does not change the test. So if\nthe expected output change is fine with you, I think we should leave the\ntest as is. Plan outputs are sometimes fragile and thus make expected\noutputs flaky.\n\n\n> > I will create patches for the back-branches once the patch for master\n> > is in a committable state.\n>\n> I am not sure, whether it's really a bug. I personally wouldn't be brave\n> enough to back patch this. I don't want to deal with complaining end\n> users. Suddenly their optimizer, which always had horrible estimates,\n> was actually able to do harmful stuff with them. Only due to a minor\n> version upgrade. I think that's a bad idea to backpatch something with\n> complex performance implications. Especially since they might even be\n> based on potentially inaccurate data...\n>\n\nSince it's a thinko I considered it as a bug. But I agree that it has the\npotential to disturb plans after upgrade and thus upset users. So I am fine\nif we don't backpatch.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n", "msg_date": "Mon, 27 May 2024 14:17:25 -0700", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: apply_scanjoin_target_to_paths and partitionwise join" }, 
{ "msg_contents": "Hi Ashutosh!\n\nOn 2024-05-27 14:17, Ashutosh Bapat wrote:\n> On Fri, May 24, 2024 at 11:02 AM <[email protected]> wrote:\n> \n>> Hi Ashutosh,\n>> \n>> thanks for bringing this to my attention. I'll first share a few\n>> thoughts about the change and respond regarding the test below.\n>> \n>> I clearly understand your intention with this patch. 
It's an issue I\n>> run\n>> into from time to time.\n>> \n>> I did some testing with some benchmark sets back with pg 14. I did\n>> the\n>> following: I planned with and without the partitionwise join GUC\n>> (explain) and took the one with the lower cost to execute the query.\n>> \n>> Interestingly, even discounting the overhead and additional planning\n>> \n>> time, the option with the lower cost turned out to be slower on our\n>> benchmark set back then. The median query with disabled GUC was\n>> quicker,\n>> but on average that was not the case. The observation is one, I'd\n>> generally describe as \"The more options you consider, the more ways\n>> we\n>> have to be horribly wrong. More options for the planner are a great\n>> way\n>> to uncover the various shortcomings of it.\"\n>> \n>> That might be specific to the benchmark I was working with at the\n>> time.\n>> But that made me drop the issue back then. That is ofc no valid\n>> reason\n>> not to go in the direction of making the planner to consider more\n>> options. :)\n> \n> In summary, you are suggesting that partitionwise join performs better\n> than plain join even if the latter one has lower cost. Hence fixing\n> this issue has never become a priority for you. Am I right?\n> \n> Plans with lower costs being slower is not new for optimizer.\n> Partitionwise join just adds another case.\n\nSorry for my confusing long text. I will try to recap my points \nconcisely.\n\n1. I think the order by pk frac limit plans had just to similar \nperformance behaviour for me to bother.\nBut afaics the main point of your proposal is not related to frac plans \nat all.\n2. We can't expect the optimizers to simply yield better results by \nbeing given more options to be wrong. (Let me give a simple example: \nThis patch makes our lack of cross table cross column statistics worse. \nWe give it more opportunity to pick something horrible.\n3. I dislike, that this patch makes much harder to debug, why no \npartitionwise join is chosen.\n\n> \n>> Maybe we can discuss that in person next week?\n> \n> Sure.\n> \n>> On 2024-05-22 07:57, Ashutosh Bapat wrote:\n>>> \n>>> The test was added by 6b94e7a6da2f1c6df1a42efe64251f32a444d174 and\n>>> later modified by 3c569049b7b502bb4952483d19ce622ff0af5fd6. The\n>>> modification just avoided eliminating the join, so that change can\n>> be\n>>> ignored. 6b94e7a6da2f1c6df1a42efe64251f32a444d174 added the tests\n>> to\n>>> test fractional paths being considered when creating ordered\n>> append\n>>> paths. Reading the commit message, I was expecting a test which\n>> did\n>>> not use a join as well and also which used inheritance. But it\n>> seems\n>>> that the queries added by that commit, test all the required\n>> scenarios\n>>> and hence we see two queries involving join between partitioned\n>>> tables. As the comment there says the intention is to verify index\n>>> only scans and not exactly partitionwise join. So just fixing the\n>>> expected output of one query looks fine. The other query will test\n>>> partitionwise join and fractional paths anyway. I am including\n>> Tomas,\n>>> Arne and Zhihong, who worked on the first commit, to comment on\n>>> expected output changes.\n>> \n>> The test was put there to make sure a fractional join is considered\n>> in\n>> the case that a partitionwise join is considered. Because that\n>> wasn't\n>> the case before.\n>> \n>> The important part for my use case back then was that we do Merge\n>> Join(s) at all. 
The test result after your patch still confirms\n>> that.\n>> \n>> If we simply modify the test as such, we no longer confirm, whether\n>> the\n>> code path introduced in 6b94e7a6da2f1c6df1a42efe64251f32a444d174 is\n>> still working.\n>> \n>> Maybe it's worthwhile to add something like\n>> \n>> create index on fract_t0 ((id*id));\n>> \n>> EXPLAIN (COSTS OFF)\n>> SELECT * FROM fract_t x JOIN fract_t y USING (id) ORDER BY id * id\n>> DESC\n>> LIMIT 10;\n>> QUERY PLAN\n>> \n> -------------------------------------------------------------------------------\n>> Limit\n>> -> Merge Append\n>> Sort Key: ((x.id [1] * x.id [1])) DESC\n>> -> Nested Loop\n>> -> Index Scan Backward using fract_t0_expr_idx on\n>> fract_t0 x_1\n>> -> Index Only Scan using fract_t0_pkey on fract_t0\n>> y_1\n>> Index Cond: (id = x_1.id [2])\n>> -> Sort\n>> Sort Key: ((x_2.id [3] * x_2.id [3])) DESC\n>> -> Hash Join\n>> Hash Cond: (x_2.id [3] = y_2.id [4])\n>> -> Seq Scan on fract_t1 x_2\n>> -> Hash\n>> -> Seq Scan on fract_t1 y_2\n>> \n>> I am not sure, whether it's worth the extra test cycles on every\n>> animal,\n>> but since we are not creating an extra table it might be ok.\n>> I don't have a very strong feeling about the above test case.\n> \n> My patch removes redundant enable_partitionwise_join = on since that's\n> done very early in the test. Apart from that it does not change the\n> test. So if the expected output change is fine with you, I think we\n> should leave the test as is. Plan outputs are sometimes fragile and\n> thus make expected outputs flaky.\n\nIf at all, we can add to that. That would indeed give us more code test \ncoverage. I will refrain from commenting further, since that discussion \nwould get completely disconnected from the patch at hand.\n\n> \n>>> I will create patches for the back-branches once the patch for\n>> master\n>>> is in a committable state.\n>> \n>> I am not sure, whether it's really a bug. I personally wouldn't be\n>> brave\n>> enough to back patch this. I don't want to deal with complaining end\n>> \n>> users. Suddenly their optimizer, which always had horrible\n>> estimates,\n>> was actually able to do harmful stuff with them. Only due to a minor\n>> \n>> version upgrade. I think that's a bad idea to backpatch something\n>> with\n>> complex performance implications. Especially since they might even\n>> be\n>> based on potentially inaccurate data...\n> \n> Since it's a thinko I considered it as a bug. But I agree that it has\n> the potential to disturb plans after upgrade and thus upset users. So\n> I am fine if we don't backpatch.\n> \n> --\n> \n> Best Wishes,\n> Ashutosh Bapat\n> \n> \n> Links:\n> ------\n> [1] http://x.id\n> [2] http://x_1.id\n> [3] http://x_2.id\n> [4] http://y_2.id\n\nAll the best\nArne\n\n\n", "msg_date": "Mon, 27 May 2024 18:43:27 -0700", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: apply_scanjoin_target_to_paths and partitionwise join" }, { "msg_contents": "On Tue, May 28, 2024 at 7:13 AM <[email protected]> wrote:\n\n> 1. I think the order by pk frac limit plans had just to similar\n> performance behaviour for me to bother.\n> But afaics the main point of your proposal is not related to frac plans\n> at all.\n>\n\nRight.\n\n\n> 2. We can't expect the optimizers to simply yield better results by\n> being given more options to be wrong. 
(Let me give a simple example:\n\nThis patch makes our lack of cross table cross column statistics worse.\n> We give it more opportunity to pick something horrible.\n>\n\nI don't see the connection between cross column statistics and this bug I\nam fixing. Can you please elaborate?\n\n\n> 3. I dislike, that this patch makes much harder to debug, why no\n> partitionwise join is chosen.\n>\nCan you please elaborate more? How does my change make debugging harder?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Tue, May 28, 2024 at 7:13 AM <[email protected]> wrote:\n1. I think the order by pk frac limit plans had just to similar \nperformance behaviour for me to bother.\nBut afaics the main point of your proposal is not related to frac plans \nat all.Right. \n2. We can't expect the optimizers to simply yield better results by \nbeing given more options to be wrong. (Let me give a simple example:\nThis patch makes our lack of cross table cross column statistics worse. \nWe give it more opportunity to pick something horrible.I don't see the connection between cross column statistics and this bug I am fixing. Can  you please elaborate? \n3. I dislike, that this patch makes much harder to debug, why no \npartitionwise join is chosen.Can you please elaborate more? How does my change make debugging harder? -- Best Wishes,Ashutosh Bapat", "msg_date": "Wed, 5 Jun 2024 16:27:13 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: apply_scanjoin_target_to_paths and partitionwise join" }, { "msg_contents": "On Wed, May 22, 2024 at 3:57 PM Ashutosh Bapat\n<[email protected]> wrote:\n> I will create patches for the back-branches once the patch for master is in a committable state.\n\nAFAIU, this patch prevents apply_scanjoin_target_to_paths() from\ndiscarding old paths of partitioned joinrels. Therefore, we can\nretain non-partitionwise join paths if the cheapest path happens to be\namong them.\n\nOne concern from me is that if the cheapest path of a joinrel is a\npartitionwise join path, following this approach could lead to\nundesirable cross-platform plan variations, as detailed in the\noriginal comment.\n\nIs there a specific query that demonstrates benefits from this change?\nI'm curious about scenarios where a partitionwise join runs slower\nthan a non-partitionwise join.\n\nThanks\nRichard\n\n\n", "msg_date": "Wed, 24 Jul 2024 12:12:13 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: apply_scanjoin_target_to_paths and partitionwise join" }, { "msg_contents": "On Wed, Jul 24, 2024 at 9:42 AM Richard Guo <[email protected]> wrote:\n>\n> On Wed, May 22, 2024 at 3:57 PM Ashutosh Bapat\n> <[email protected]> wrote:\n> > I will create patches for the back-branches once the patch for master is in a committable state.\n>\n> AFAIU, this patch prevents apply_scanjoin_target_to_paths() from\n> discarding old paths of partitioned joinrels. Therefore, we can\n> retain non-partitionwise join paths if the cheapest path happens to be\n> among them.\n\nRight. Thanks for the summary.\n\n>\n> One concern from me is that if the cheapest path of a joinrel is a\n> partitionwise join path, following this approach could lead to\n> undesirable cross-platform plan variations, as detailed in the\n> original comment.\n\nI read through the email thread [3] referenced in the commit\n(1d338584062b3e53b738f987ecb0d2b67745232a) which added that comment.\nThe change is mentioned in [4] first. 
Please notice that this change\nis unrelated to the bug that started the thread. [5], [6] talk about\nthe costs of projection path above Append vs project path below\nAppend. But I don't see any example of any cross-platform plan\nvariations. I also do not see an example in that thread where such a\nplan variation results in bad performance. If the costs of\npartitionwise and non-partitionwise join paths are so close to each\nother that platform specific arithmetic can swing it one way or the\nother, possibly their performance is going to be comparable. Without\nan example query it's hard to assess this possibility or address the\nconcern, especially when we have examples of the behaviour otherwise.\n\n>\n> Is there a specific query that demonstrates benefits from this change?\n> I'm curious about scenarios where a partitionwise join runs slower\n> than a non-partitionwise join.\n\n[1] provides a testcase where a nonpartitionwise join is better than\npartitionwise join. This testcase is derived from a bug reported by an\nEDB customer. [2] is another bug report on psql-bugs.\n\n\n[1] https://www.postgresql.org/message-id/CAKZiRmyaFFvxyEYGG_hu0F-EVEcqcnveH23MULhW6UY_jwykGw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/786.1565541557%40sss.pgh.pa.us#9d50e1b375201f29bbf17072d75569e3\n[3] https://www.postgresql.org/message-id/flat/15669-02fb3296cca26203%40postgresql.org\n[4] https://www.postgresql.org/message-id/20477.1551819776%40sss.pgh.pa.us\n[5] https://www.postgresql.org/message-id/15350.1551973953%40sss.pgh.pa.us\n[6] https://www.postgresql.org/message-id/24357.1551984010%40sss.pgh.pa.us\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 24 Jul 2024 18:52:29 +0530", "msg_from": "Ashutosh Bapat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: apply_scanjoin_target_to_paths and partitionwise join" }, { "msg_contents": "On 24/7/2024 15:22, Ashutosh Bapat wrote:\n> On Wed, Jul 24, 2024 at 9:42 AM Richard Guo <[email protected]> wrote:\n>> Is there a specific query that demonstrates benefits from this change?\n>> I'm curious about scenarios where a partitionwise join runs slower\n>> than a non-partitionwise join.\n> \n> [1] provides a testcase where a nonpartitionwise join is better than\n> partitionwise join. This testcase is derived from a bug reported by an\n> EDB customer. [2] is another bug report on psql-bugs.\nI haven't passed through the patch yet, but can this issue affect the \ndecision on what to push down to foreign servers: a whole join or just a \nscan of two partitions?\nIf the patch is related to the pushdown decision, I'd say it is quite an \nannoying problem for me. From time to time, I see cases where JOIN \nproduces more tuples than both partitions have in total - in this case, \nit would be better to transfer tables' tuples to the main instance \nbefore joining them.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n", "msg_date": "Tue, 1 Oct 2024 04:52:11 +0700", "msg_from": "Andrei Lepikhov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: apply_scanjoin_target_to_paths and partitionwise join" } ]
[ { "msg_contents": "Hi,\n\n/*\n * we expect the find the last lines of the manifest, including the checksum,\n * in the last MIN_CHUNK bytes of the manifest. We trigger an incremental\n * parse step if we are about to overflow MAX_CHUNK bytes.\n */\n\nShouldn't this be:\n/*\n * we expect to find the last lines of the manifest,...\n */\n\n\nRegards\nDaniel\n\n\n\n\n\n\n\n\nHi,\n\n\n\n\n/*\n\n * we expect the find the last lines of the manifest, including the checksum,\n\n * in the last MIN_CHUNK bytes of the manifest. We trigger an incremental\n\n * parse step if we are about to overflow MAX_CHUNK bytes.\n\n */\n\n\n\n\nShouldn't this be:\n\n/*\n\n * we expect to find the last lines of the manifest,...\n\n */\n\n\n\n\n\n\n\nRegards\n\nDaniel", "msg_date": "Thu, 11 Apr 2024 09:49:54 +0000", "msg_from": "\"Daniel Westermann (DWE)\" <[email protected]>", "msg_from_op": true, "msg_subject": "type in basebackup_incremental.c ?" }, { "msg_contents": "> On 11 Apr 2024, at 11:49, Daniel Westermann (DWE) <[email protected]> wrote:\n> \n> Hi,\n> \n> /*\n> * we expect the find the last lines of the manifest, including the checksum,\n> * in the last MIN_CHUNK bytes of the manifest. We trigger an incremental\n> * parse step if we are about to overflow MAX_CHUNK bytes.\n> */\n> \n> Shouldn't this be:\n> /*\n> * we expect to find the last lines of the manifest,...\n> */\n\nThat sounds about right, and since it's a full sentence it should also start\nwith a capital 'W': \"We expect to find the..\".\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 11 Apr 2024 12:15:26 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: type in basebackup_incremental.c ?" }, { "msg_contents": ">Sent: Thursday, April 11, 2024 12:15\n>To: Daniel Westermann (DWE) <[email protected]>\n>Cc: PostgreSQL Hackers <[email protected]>\n>Subject: Re: type in basebackup_incremental.c ?\n>\n>> On 11 Apr 2024, at 11:49, Daniel Westermann (DWE) <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> /*\n>> * we expect the find the last lines of the manifest, including the checksum,\n>> * in the last MIN_CHUNK bytes of the manifest. We trigger an incremental\n>> * parse step if we are about to overflow MAX_CHUNK bytes.\n>> */\n>>\n>> Shouldn't this be:\n>> /*\n>> * we expect to find the last lines of the manifest,...\n>> */\n\n>That sounds about right, and since it's a full sentence it should also start\n>with a capital 'W': \"We expect to find the..\".\n\n... and a bit further down:\n\n * We don't really need this information, because we use WAL summaries to\n * figure what's changed.\n\nShould probably be: ...because we use WAL summaries to figure out ...\n\nRegards\nDaniel\n\n\n\n\n\n\n\n\n>Sent: Thursday, April 11, 2024 12:15\n\n\n\n>To: Daniel Westermann (DWE) <[email protected]>\n\n>Cc: PostgreSQL Hackers <[email protected]>\n\n>Subject: Re: type in basebackup_incremental.c ?\n>\n>> On 11 Apr 2024, at 11:49, Daniel Westermann (DWE) <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> /*\n>>  * we expect the find the last lines of the manifest, including the checksum,\n>>  * in the last MIN_CHUNK bytes of the manifest. We trigger an incremental\n>>  * parse step if we are about to overflow MAX_CHUNK bytes.\n>>  */\n>>\n>> Shouldn't this be:\n>> /*\n>>  * we expect to find the last lines of the manifest,...\n>>  */\n\n>That sounds about right, and since it's a full sentence it should also start\n>with a capital 'W': \"We expect to find the..\".\n\n\n... 
and a bit further down:\n\n\n         * We don't really need this information, because we use WAL summaries to\n         * figure what's changed.\n\n\nShould probably be: ...because we use WAL summaries to figure out ...\n\n\nRegards\nDaniel", "msg_date": "Thu, 11 Apr 2024 11:00:31 +0000", "msg_from": "\"Daniel Westermann (DWE)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: type in basebackup_incremental.c ?" }, { "msg_contents": "\nOn 2024-04-11 Th 06:15, Daniel Gustafsson wrote:\n>> On 11 Apr 2024, at 11:49, Daniel Westermann (DWE) <[email protected]> wrote:\n>>\n>> Hi,\n>>\n>> /*\n>> * we expect the find the last lines of the manifest, including the checksum,\n>> * in the last MIN_CHUNK bytes of the manifest. We trigger an incremental\n>> * parse step if we are about to overflow MAX_CHUNK bytes.\n>> */\n>>\n>> Shouldn't this be:\n>> /*\n>> * we expect to find the last lines of the manifest,...\n>> */\n> That sounds about right, and since it's a full sentence it should also start\n> with a capital 'W': \"We expect to find the..\".\n>\n\nThanks, I will include that in the cleanups I'm intending to push today.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 11 Apr 2024 07:16:02 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: type in basebackup_incremental.c ?" }, { "msg_contents": "> On 11 Apr 2024, at 13:16, Andrew Dunstan <[email protected]> wrote:\n\n> Thanks, I will include that in the cleanups I'm intending to push today.\n\n+1, thanks!\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 11 Apr 2024 13:16:40 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: type in basebackup_incremental.c ?" } ]
[ { "msg_contents": "Now that the tree has settled down a bit post-freeze I ran some tooling to\ncheck spelling. I was primarily interested in docs and README* which were\nmostly free from simply typos, while the code had some in various comments and\none in code. The attached fixes all that I came across (not cross-referenced\nagainst ongoing reverts or any other fixup threads but will be before pushing\nof course).\n\n--\nDaniel Gustafsson", "msg_date": "Thu, 11 Apr 2024 15:05:33 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Typos in the code and README" }, { "msg_contents": "\nOn 2024-04-11 Th 09:05, Daniel Gustafsson wrote:\n> Now that the tree has settled down a bit post-freeze I ran some tooling to\n> check spelling. I was primarily interested in docs and README* which were\n> mostly free from simply typos, while the code had some in various comments and\n> one in code. The attached fixes all that I came across (not cross-referenced\n> against ongoing reverts or any other fixup threads but will be before pushing\n> of course).\n\n\n\nI have these covered:\n\n\nsrc/test/modules/test_json_parser/README                  | 2 +-\n.../test_json_parser/test_json_parser_incremental.c       | 4 ++--\nsrc/test/modules/test_json_parser/test_json_parser_perf.c | 2 +-\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 11 Apr 2024 09:21:06 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "On Fri, 12 Apr 2024, 1:05 am Daniel Gustafsson, <[email protected]> wrote:\n\n> Now that the tree has settled down a bit post-freeze I ran some tooling to\n> check spelling. I was primarily interested in docs and README* which were\n> mostly free from simply typos, while the code had some in various comments\n> and\n> one in code. The attached fixes all that I came across (not\n> cross-referenced\n> against ongoing reverts or any other fixup threads but will be before\n> pushing\n> of course).\n>\n\nI see you've corrected \"iif\" to \"if\". It should be \"iff\".\n\nDavid\n\n>\n\nOn Fri, 12 Apr 2024, 1:05 am Daniel Gustafsson, <[email protected]> wrote:Now that the tree has settled down a bit post-freeze I ran some tooling to\ncheck spelling.  I was primarily interested in docs and README* which were\nmostly free from simply typos, while the code had some in various comments and\none in code.  The attached fixes all that I came across (not cross-referenced\nagainst ongoing reverts or any other fixup threads but will be before pushing\nof course).I see you've corrected \"iif\" to \"if\". It should be \"iff\".David", "msg_date": "Fri, 12 Apr 2024 01:29:58 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "> On 11 Apr 2024, at 15:29, David Rowley <[email protected]> wrote:\n> \n> On Fri, 12 Apr 2024, 1:05 am Daniel Gustafsson, <[email protected] <mailto:[email protected]>> wrote:\n>> Now that the tree has settled down a bit post-freeze I ran some tooling to\n>> check spelling. I was primarily interested in docs and README* which were\n>> mostly free from simply typos, while the code had some in various comments and\n>> one in code. 
The attached fixes all that I came across (not cross-referenced\n>> against ongoing reverts or any other fixup threads but will be before pushing\n>> of course).\n> \n> \n> I see you've corrected \"iif\" to \"if\". It should be \"iff\".\n\nGotcha, will fix. I opted for \"if\" due to recent threads where the use of\n\"iff\" was discouraged due to not being commonly known, but that was in doc/ and\nnot code comments.\n\n--\nDaniel Gustafsson\n\n\nOn 11 Apr 2024, at 15:29, David Rowley <[email protected]> wrote:On Fri, 12 Apr 2024, 1:05 am Daniel Gustafsson, <[email protected]> wrote:Now that the tree has settled down a bit post-freeze I ran some tooling to\ncheck spelling.  I was primarily interested in docs and README* which were\nmostly free from simply typos, while the code had some in various comments and\none in code.  The attached fixes all that I came across (not cross-referenced\nagainst ongoing reverts or any other fixup threads but will be before pushing\nof course).I see you've corrected \"iif\" to \"if\". It should be \"iff\".Gotcha, will fix.  I opted for \"if\" due to recent threads where the use of\"iff\" was discouraged due to not being commonly known, but that was in doc/ andnot code comments.\n--Daniel Gustafsson", "msg_date": "Thu, 11 Apr 2024 15:37:00 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "On Thu, Apr 11, 2024 at 03:37:00PM +0200, Daniel Gustafsson wrote:\n> On 11 Apr 2024, at 15:29, David Rowley <[email protected]> wrote:\n> \n> On Fri, 12 Apr 2024, 1:05 am Daniel Gustafsson, <[email protected]> wrote:\n> \n> Now that the tree has settled down a bit post-freeze I ran some tooling\n> to\n> check spelling. I was primarily interested in docs and README* which\n> were\n> mostly free from simply typos, while the code had some in various\n> comments and\n> one in code. The attached fixes all that I came across (not\n> cross-referenced\n> against ongoing reverts or any other fixup threads but will be before\n> pushing\n> of course).\n> \n> \n> I see you've corrected \"iif\" to \"if\". It should be \"iff\".\n> \n> \n> Gotcha, will fix. I opted for \"if\" due to recent threads where the use of\n> \"iff\" was discouraged due to not being commonly known, but that was in doc/ and\n> not code comments.\n\nI actually agree \"iff\" is just not clear enough. \"Iff\" stands for \"if\nand only if\" and maybe should be spelled out that way.\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 12 Apr 2024 16:55:16 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "On Fri, Apr 12, 2024 at 04:55:16PM -0400, Bruce Momjian wrote:\n> On Thu, Apr 11, 2024 at 03:37:00PM +0200, Daniel Gustafsson wrote:\n> > On 11 Apr 2024, at 15:29, David Rowley <[email protected]> wrote:\n> > \n> > On Fri, 12 Apr 2024, 1:05 am Daniel Gustafsson, <[email protected]> wrote:\n> > \n> > Now that the tree has settled down a bit post-freeze I ran some tooling\n> > to\n> > check spelling. I was primarily interested in docs and README* which\n> > were\n> > mostly free from simply typos, while the code had some in various\n> > comments and\n> > one in code. 
The attached fixes all that I came across (not\n> > cross-referenced\n> > against ongoing reverts or any other fixup threads but will be before\n> > pushing\n> > of course).\n> > \n> > \n> > I see you've corrected \"iif\" to \"if\". It should be \"iff\".\n> > \n> > \n> > Gotcha, will fix. I opted for \"if\" due to recent threads where the use of\n> > \"iff\" was discouraged due to not being commonly known, but that was in doc/ and\n> > not code comments.\n> \n> I actually agree \"iff\" is just not clear enough. \"Iff\" stands for \"if\n> and only if\" and maybe should be spelled out that way.\n\nJust to clarify, I think \"if and only if\" means \"if A then B\" and B can\nonly happen if A happens, meaning there are not other cases where B can\nhappen. This latter part is what disinguishes \"iff\" from \"if\".\n\n-- \n Bruce Momjian <[email protected]> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 12 Apr 2024 16:58:22 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "On 11/04/2024 16:05, Daniel Gustafsson wrote:\n> Now that the tree has settled down a bit post-freeze I ran some tooling to\n> check spelling. I was primarily interested in docs and README* which were\n> mostly free from simply typos, while the code had some in various comments and\n> one in code. The attached fixes all that I came across (not cross-referenced\n> against ongoing reverts or any other fixup threads but will be before pushing\n> of course).\n\nHere's a few more. I've accumulate these over the past couple of months, \nkeeping them stashed in a branch, adding to it whenever I've spotted a \nminor typo while reading the code.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Sat, 13 Apr 2024 00:15:09 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "> On 12 Apr 2024, at 23:15, Heikki Linnakangas <[email protected]> wrote:\n> \n> On 11/04/2024 16:05, Daniel Gustafsson wrote:\n>> Now that the tree has settled down a bit post-freeze I ran some tooling to\n>> check spelling. I was primarily interested in docs and README* which were\n>> mostly free from simply typos, while the code had some in various comments and\n>> one in code. The attached fixes all that I came across (not cross-referenced\n>> against ongoing reverts or any other fixup threads but will be before pushing\n>> of course).\n> \n> Here's a few more. I've accumulate these over the past couple of months, keeping them stashed in a branch, adding to it whenever I've spotted a minor typo while reading the code.\n\nNice, let's lot all these together.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 12 Apr 2024 23:17:11 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "On Sat, 13 Apr 2024 at 09:17, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 12 Apr 2024, at 23:15, Heikki Linnakangas <[email protected]> wrote:\n> > Here's a few more. 
I've accumulate these over the past couple of months, keeping them stashed in a branch, adding to it whenever I've spotted a minor typo while reading the code.\n>\n> Nice, let's lot all these together.\n\nHere are a few additional ones to add to that.\n\nFound with a manual trawl through git grep -E\n'\\b([a-zA-Z]{2,}[^long|^that])\\s+\\1\\b' -- ':!*.po' ':!*.dat'\n\nDavid", "msg_date": "Sun, 14 Apr 2024 23:19:56 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "> On 14 Apr 2024, at 13:19, David Rowley <[email protected]> wrote:\n> \n> On Sat, 13 Apr 2024 at 09:17, Daniel Gustafsson <[email protected]> wrote:\n>> \n>>> On 12 Apr 2024, at 23:15, Heikki Linnakangas <[email protected]> wrote:\n>>> Here's a few more. I've accumulate these over the past couple of months, keeping them stashed in a branch, adding to it whenever I've spotted a minor typo while reading the code.\n>> \n>> Nice, let's lot all these together.\n> \n> Here are a few additional ones to add to that.\n\nThanks. Collecting all the ones submitted here, as well as a few submitted\noff-list by Alexander, the patch is now a 3-part patchset of cleanups:\n\n0001 contains the typos and duplicate words fixups, 0002 fixes a parameter with\nthe wrong name in the prototype and 0003 removes a leftover prototype which was\naccidentally left in a refactoring.\n\n--\nDaniel Gustafsson", "msg_date": "Mon, 15 Apr 2024 14:25:45 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "On Mon, Apr 15, 2024 at 8:26 PM Daniel Gustafsson <[email protected]> wrote:\n\n> Thanks. Collecting all the ones submitted here, as well as a few submitted\n> off-list by Alexander, the patch is now a 3-part patchset of cleanups:\n>\n> 0001 contains the typos and duplicate words fixups, 0002 fixes a parameter\n> with\n> the wrong name in the prototype and 0003 removes a leftover prototype\n> which was\n> accidentally left in a refactoring.\n\n\nBTW, it seems that 0001 needs a rebase over 9dfcac8e15.\n\nThanks\nRichard\n\nOn Mon, Apr 15, 2024 at 8:26 PM Daniel Gustafsson <[email protected]> wrote:\nThanks.  Collecting all the ones submitted here, as well as a few submitted\noff-list by Alexander, the patch is now a 3-part patchset of cleanups:\n\n0001 contains the typos and duplicate words fixups, 0002 fixes a parameter with\nthe wrong name in the prototype and 0003 removes a leftover prototype which was\naccidentally left in a refactoring.BTW, it seems that 0001 needs a rebase over 9dfcac8e15.ThanksRichard", "msg_date": "Tue, 16 Apr 2024 16:28:24 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "Hi,\n\nThanks for working on this!\n\nOn Mon, 15 Apr 2024 at 15:26, Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 14 Apr 2024, at 13:19, David Rowley <[email protected]> wrote:\n> >\n> > On Sat, 13 Apr 2024 at 09:17, Daniel Gustafsson <[email protected]> wrote:\n> >>\n> >>> On 12 Apr 2024, at 23:15, Heikki Linnakangas <[email protected]> wrote:\n> >>> Here's a few more. I've accumulate these over the past couple of months, keeping them stashed in a branch, adding to it whenever I've spotted a minor typo while reading the code.\n> >>\n> >> Nice, let's lot all these together.\n> >\n> > Here are a few additional ones to add to that.\n>\n> Thanks. 
Collecting all the ones submitted here, as well as a few submitted\n> off-list by Alexander, the patch is now a 3-part patchset of cleanups:\n>\n> 0001 contains the typos and duplicate words fixups, 0002 fixes a parameter with\n> the wrong name in the prototype and 0003 removes a leftover prototype which was\n> accidentally left in a refactoring.\n\nI realized two small typos: 'sgmr' -> 'smgr'. You may want to include\nthem in 0001.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Tue, 16 Apr 2024 16:37:50 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "> On 16 Apr 2024, at 15:37, Nazir Bilal Yavuz <[email protected]> wrote:\n\n> I realized two small typos: 'sgmr' -> 'smgr'. You may want to include\n> them in 0001.\n\nThanks, I incorporated these into 0001 before pushing. All the commits in this\npatchset are now applied.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 19 Apr 2024 10:13:13 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "On Fri, 19 Apr 2024 at 20:13, Daniel Gustafsson <[email protected]> wrote:\n> Thanks, I incorporated these into 0001 before pushing. All the commits in this\n> patchset are now applied.\n\nHere are a few more to see if it motivates anyone else to do a more\nthorough search for another batch.\n\nFixes duplicate words spanning multiple lines plus an outdated\nreference to \"streaming read\" which was renamed to \"read stream\" late\nin that patch's development.\n\nduplicate words found using:\nag \"\\s([a-zA-Z]{2,})[\\s*]*\\n\\1\\b\"\nag \"\\s([a-zA-Z]{2,})\\n(\\s*\\*\\s*)\\1\\b\"\n\nDavid", "msg_date": "Sat, 20 Apr 2024 16:09:13 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "On Sat, 20 Apr 2024 at 16:09, David Rowley <[email protected]> wrote:\n> Here are a few more to see if it motivates anyone else to do a more\n> thorough search for another batch.\n\nI've pushed these now.\n\nDavid\n\n\n", "msg_date": "Sun, 28 Apr 2024 20:05:06 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "Hello,\n\n28.04.2024 11:05, David Rowley wrote:\n\n> On Sat, 20 Apr 2024 at 16:09, David Rowley <[email protected]> wrote:\n>> Here are a few more to see if it motivates anyone else to do a more\n>> thorough search for another batch.\n> I've pushed these now.\n\nPlease look also at the list of other typos and inconsistencies I've found\non the master branch:\nadditional_notnulls -> old_notnulls (cf. b0e96f311)\nATAddCheckConstraint ->? ATAddCheckNNConstraint (cf. b0e96f311)\nbt_page_check -> bt_target_page_check (cf. 5ae208720)\ncalblack_arg -> callback_arg\ncombinig -> combining\ncompaining -> complaining\nctllock - remove (cf. 53c2a97a9)\ndabatase -> database\neval_const_exprs_mutator -> eval_const_expressions_mutator\nExecEndValuesScan - remove (cf. d060e921e)\nget_json_nested_columns -> get_json_table_nested_columns\nio_direct -> debug_io_direct\niS ->? IS (a neatnik-ism)\njoing ->? join (cf. 20f90a0e4)\n_jumbleRangeTblEntry - remove (cf. 367c989cd)\nMallroy -> Mallory\nonstead -> instead\nprocedual -> procedural\npsql_safe -> safe_psql\nread_quoted_pattern -> read_quoted_string\nrepertiore -> repertoire\nrightfirstdataoffset ->? 
rightfirstoffset (cf. 5ae208720)\nSincle ->? Since (perhaps the whole sentence could be rephrased)\nsslnegotition=direct/requiredirectwould -> sslnegotiation=direct/requiredirect would\nsslnegotitation -> sslnegotiation\nwalreciever -> walreceiver\nxid_wrapround -> xid_wraparound\n\n(some of them are located in doc/, so it's not a code-only change)\nI've attached the patch for your convenience, though maybe some\nof the suggestions are to be discarded.\n\nBest regards,\nAlexander", "msg_date": "Thu, 2 May 2024 15:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "On Fri, 3 May 2024 at 00:00, Alexander Lakhin <[email protected]> wrote:\n> (some of them are located in doc/, so it's not a code-only change)\n> I've attached the patch for your convenience, though maybe some\n> of the suggestions are to be discarded.\n\nThanks. I was hoping you'd do that.\n\nI pushed the patch after only adjusting the path in the docs which had\n\"module\" rather than \"modules\".\n\nDavid\n\n\n", "msg_date": "Sat, 4 May 2024 02:36:35 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "Hello hackers,\n\n03.05.2024 17:36, David Rowley wrote:\n> I pushed the patch after only adjusting the path in the docs which had\n> \"module\" rather than \"modules\".\n\nPlease look at another bunch of inconsistencies/orphaned entities I found\nin the tree, with the possible substitutions:\nerrmsg_buf -> errormsg_buf\n(coined by 6b18b3fe2)\n\nNoMovementScanDirectionScans -> NoMovementScanDirection\n(introduced with e9aaf0632, discussed in [1], but still seems inaccurate)\n\nXLogReadRecordInternal -> XLogReadRecord\n(from 3f1ce9734, align with a comment above: \"Start and end point of last\nrecord returned by XLogReadRecord().\")\n\nBYPASS_ALLOWCONN -> BGWORKER_BYPASS_ROLELOGINCHECK (see 492217301)\n\nxs_ctup.t_self -> xs_heaptid (see c2fe139c2 and 304532421)\n\npgStatShmLookupCache -> pgStatLocal.shmem (coined by 5891c7a8e)\n\nsmgr_fsm_nblocks and smgr_vm_nblocks -> smgr_cached_nblocks\n(see the same comment updated by c5315f4f4)\n\nXID becomes older than GlobalXmin -> XID becomes visible to everyone\n(in accordance with dc7420c2c9 src/backend/access/gist/gistutil.c)\n\ngen-rtab - remove (non-existing since db7d1a7b0)\n\nBARRIER_SHOULD_CHECK - remove (unused since a3ed4d1ef)\n\nEXE_EXT - remove (unused since f06b1c598)\n\nendterm - remove\n(see 60c90c16c -- Use xreflabel attributes instead of endterm ...)\n\nxl_commit_ts_set, SizeOfCommitTsSet - remove (unused since 08aa89b32)\n\nThe corresponding patch is attached for your convenience.\n\n[1] https://www.postgresql.org/message-id/20230131140224.7j6gbcsfwmad2a4b%40liskov\n\nBest regards,\nAlexander", "msg_date": "Thu, 4 Jul 2024 20:00:01 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "> On 4 Jul 2024, at 19:00, Alexander Lakhin <[email protected]> wrote:\n\n> Please look at another bunch of inconsistencies/orphaned entities I found\n> in the tree, with the possible substitutions:\n\nThanks for these, and sorry for the delay in processing them (summer happened\netc). 
I've started to go through them and have applied some low-hanging fruit\nalready, will go over all them.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 25 Jul 2024 15:45:32 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "(I know Daniel mentioned he'd get to these, but the ScanDirection one\nwas my fault and I needed to clear that off my mind. I did a few\nothers while on this topic.)\n\nOn Fri, 5 Jul 2024 at 05:00, Alexander Lakhin <[email protected]> wrote:\n> Please look at another bunch of inconsistencies/orphaned entities I found\n> in the tree, with the possible substitutions:\n> errmsg_buf -> errormsg_buf\n> (coined by 6b18b3fe2)\n\nFixed.\n\n> NoMovementScanDirectionScans -> NoMovementScanDirection\n> (introduced with e9aaf0632, discussed in [1], but still seems inaccurate)\n\nOops. Fixed.\n\n> XLogReadRecordInternal -> XLogReadRecord\n> (from 3f1ce9734, align with a comment above: \"Start and end point of last\n> record returned by XLogReadRecord().\")\n\nFixed.\n\n> BYPASS_ALLOWCONN -> BGWORKER_BYPASS_ROLELOGINCHECK (see 492217301)\n\nFixed\n\n> xs_ctup.t_self -> xs_heaptid (see c2fe139c2 and 304532421)\n\nFixed.\n\n> pgStatShmLookupCache -> pgStatLocal.shmem (coined by 5891c7a8e)\n\nFixed.\n\n> smgr_fsm_nblocks and smgr_vm_nblocks -> smgr_cached_nblocks\n> (see the same comment updated by c5315f4f4)\n\nHeikki fixed in 19de089cd.\n\n> XID becomes older than GlobalXmin -> XID becomes visible to everyone\n> (in accordance with dc7420c2c9 src/backend/access/gist/gistutil.c)\n\nI'd need to spend more time to understand this.\n\n> gen-rtab - remove (non-existing since db7d1a7b0)\n\nDaniel fixed in cc59f9d0f.\n\n> BARRIER_SHOULD_CHECK - remove (unused since a3ed4d1ef)\n\nI wasn't sure if nothing else external could be using this. The macro\ndoesn't use any fields that are now gone, so I'm not confident enough\nto remove it. Can Robert confirm?\n\n> EXE_EXT - remove (unused since f06b1c598)\n\nDaniel fixed in 88e3da565.\n\n> endterm - remove\n> (see 60c90c16c -- Use xreflabel attributes instead of endterm ...)\n\nI read that commit message and I agree it's now unused. I just didn't\nget any vibes from the commit message that it shouldn't ever be used\nagain. Can Tom confirm?\n\n> xl_commit_ts_set, SizeOfCommitTsSet - remove (unused since 08aa89b32)\n\nI would say it should be removed. I just didn't because the commit I\nhad pending fitted into the \"typos\" category and this didn't quite\nfit.\n\nI've attached a patch with the remainder.\n\nDavid", "msg_date": "Mon, 12 Aug 2024 23:59:31 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "Hello,\n\n12.08.2024 14:59, David Rowley wrote:\n> (I know Daniel mentioned he'd get to these, but the ScanDirection one\n> was my fault and I needed to clear that off my mind. 
I did a few\n> others while on this topic.)\n\nThank you, David, for working on that!\n\nI've gathered another bunch of defects with the possible substitutions.\nPlease take a look:\nadapated -> adapted\n\nbecasue -> because\n\ncancelled -> canceled (introduced by 90f517821, but see 8c9da1441)\n\ncange -> change\n\ncomand -> command\n\nCommitTSSLRU -> CommitTsSLRU (introduced by 53c2a97a9; maybe the fix\n  should be back-patched...)\n\nconnectOptions2 -> pqConnectOptions2 (see 774bcffe4)\n\nInjections points -> Injection points\n\njsetate -> jsestate\n\nLockShmemSize -> remove the sentence? (added by ec0baf949, outdated with\n  a794fb068)\n\nMaybeStartSlotSyncWorker -> LaunchMissingBackgroundProcesses (the logic to\n  start B_SLOTSYNC_WORKER moved from the former to the latter function with\n  3354f8528)\n\nmultixact_member_buffer -> multixact_member_buffers\n\nper_data_data -> per_buffer_data (see code below the comment; introduced by\n  b5a9b18cd)\n\nper_buffer_private -> remove the function declaration? (the duplicate\n  declaration was added by a858be17c)\n\nperformancewise -> performance-wise? (coined by a7f107df2)\n\npgstat_add_kind -> pgstat_register_kind (see 7949d9594)\n\npg_signal_autovacuum -> pg_signal_autovacuum_worker (see d2b74882c)\n\nrecoveery -> recovery\n\nRegisteredWorker -> RegisteredBgWorker\n\nRUNNING_XACT -> RUNNING_XACTS\n\nsanpshot -> snapshot\n\nTypeEntry -> TypeCacheEntry (align with AttoptCacheEntry, from the same\n  commit 40064a8ee)\n\nThe corresponding patch is attached for your convenience.\n\nBest regards,\nAlexander", "msg_date": "Mon, 2 Sep 2024 21:00:00 +0300", "msg_from": "Alexander Lakhin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "On Mon, Sep 02, 2024 at 09:00:00PM +0300, Alexander Lakhin wrote:\n> I've gathered another bunch of defects with the possible substitutions.\n> Please take a look:\n> pgstat_add_kind -> pgstat_register_kind (see 7949d9594)\n\nAnd here I thought I took care of these inconsistencies. This one is\non me so I'll go fix that in a bit, with the rest while on it.\n--\nMichael", "msg_date": "Tue, 3 Sep 2024 14:24:32 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "On Tue, Sep 03, 2024 at 02:24:32PM +0900, Michael Paquier wrote:\n> On Mon, Sep 02, 2024 at 09:00:00PM +0300, Alexander Lakhin wrote:\n> > I've gathered another bunch of defects with the possible substitutions.\n> > Please take a look:\n> > pgstat_add_kind -> pgstat_register_kind (see 7949d9594)\n> \n> And here I thought I took care of these inconsistencies. This one is\n> on me so I'll go fix that in a bit, with the rest while on it.\n\nAnd done that.\n\nThe bit about CommitTSSLRU -> CommitTsSLRU in lwlock.c should be\nbackpatched down to 17, indeed, but the branch is frozen until the RC\ntag lands in the tree, so I have left it out for now. The tag should\nshow up tomorrow or so. 
Good thing that you have noticed this issue\nbefore the release.\n--\nMichael", "msg_date": "Tue, 3 Sep 2024 14:51:40 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "> On 3 Sep 2024, at 07:51, Michael Paquier <[email protected]> wrote:\n> \n> On Tue, Sep 03, 2024 at 02:24:32PM +0900, Michael Paquier wrote:\n>> On Mon, Sep 02, 2024 at 09:00:00PM +0300, Alexander Lakhin wrote:\n>>> I've gathered another bunch of defects with the possible substitutions.\n>>> Please take a look:\n>>> pgstat_add_kind -> pgstat_register_kind (see 7949d9594)\n>> \n>> And here I thought I took care of these inconsistencies. This one is\n>> on me so I'll go fix that in a bit, with the rest while on it.\n> \n> And done that.\n> \n> The bit about CommitTSSLRU -> CommitTsSLRU in lwlock.c should be\n> backpatched down to 17, indeed, but the branch is frozen until the RC\n> tag lands in the tree, so I have left it out for now. The tag should\n> show up tomorrow or so. Good thing that you have noticed this issue\n> before the release.\n\nI see your v17 typo fixes, and raise you a few more. Commit 31a98934d169 from\njust now contains 2 (out of 3) sets of typos introduced in v17 so they should\nfollow along when you push the ones mentioned here.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 3 Sep 2024 12:00:13 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "On Tue, Sep 03, 2024 at 12:00:13PM +0200, Daniel Gustafsson wrote:\n> I see your v17 typo fixes, and raise you a few more. Commit 31a98934d169 from\n> just now contains 2 (out of 3) sets of typos introduced in v17 so they should\n> follow along when you push the ones mentioned here.\n\nIs that really mandatory? I tend to worry about back branches only\nwhen this stuff is user-visible, like in the docs or error messages.\nThis opinion varies for each individual, of course. That's just my\nlazy opinion.\n\nCommitTSSLRU -> CommitTsSLRU is user-visible, showing up in\npg_stat_activity. Fixed this one with 08b9b9e043bb, as the tag for\n17rc1 has been pushed.\n\nPicking f747bc18f7f2 and 75c5231a00f3 on REL_17_STABLE leads to the\nattached, I think, without the conflicts.\n--\nMichael", "msg_date": "Wed, 4 Sep 2024 10:25:17 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "> On 4 Sep 2024, at 03:25, Michael Paquier <[email protected]> wrote:\n> \n> On Tue, Sep 03, 2024 at 12:00:13PM +0200, Daniel Gustafsson wrote:\n>> I see your v17 typo fixes, and raise you a few more. Commit 31a98934d169 from\n>> just now contains 2 (out of 3) sets of typos introduced in v17 so they should\n>> follow along when you push the ones mentioned here.\n> \n> Is that really mandatory? I tend to worry about back branches only\n> when this stuff is user-visible, like in the docs or error messages.\n> This opinion varies for each individual, of course. 
That's just my\n> lazy opinion.\n\nNot mandatory at all, but since you were prepping a typo backpatch anyways I\nfigured these could join to put a small dent in reducing risks for future\nbackports.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 4 Sep 2024 10:23:44 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "On Wed, 4 Sept 2024 at 20:24, Daniel Gustafsson <[email protected]> wrote:\n> Not mandatory at all, but since you were prepping a typo backpatch anyways I\n> figured these could join to put a small dent in reducing risks for future\n> backports.\n\nI think this is pretty good logic. I think fixing comment typos in\nancient code and backpatching to all supported versions isn't good use\nof time, but fixing a typo in \"recent\" code and backpatching to where\nthat code was added seems useful. Newer code is more likely to need\nbug fixes in the future, so going to a bit more effort to make\nbackpatching those bug fixes easier seems worth the effort. I just\ndon't know what \"recent\" should be defined as. I'd say if it's in a\nversion we've not released yet, that's probably recent. By the time .1\nis out, there's less chance of bugs in new code. Anyway, I doubt hard\nguidelines are warranted here, but maybe some hints about best\npractices in https://wiki.postgresql.org/wiki/Committing_checklist ?\n\nDavid\n\n\n", "msg_date": "Thu, 5 Sep 2024 03:34:31 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "On Thu, Sep 05, 2024 at 03:34:31AM +1200, David Rowley wrote:\n> Anyway, I doubt hard\n> guidelines are warranted here, but maybe some hints about best\n> practices in https://wiki.postgresql.org/wiki/Committing_checklist ?\n\nYep, that may be useful. I just tend to be cautious because it can be\nvery easy to mess up things depending on the code path you're\nmanipulating, speaking with some.. Experience on the matter. And an\nRC1 is kind of something to be cautious with.\n--\nMichael", "msg_date": "Thu, 5 Sep 2024 09:03:26 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typos in the code and README" }, { "msg_contents": "> On 4 Sep 2024, at 17:34, David Rowley <[email protected]> wrote:\n> \n> On Wed, 4 Sept 2024 at 20:24, Daniel Gustafsson <[email protected]> wrote:\n>> Not mandatory at all, but since you were prepping a typo backpatch anyways I\n>> figured these could join to put a small dent in reducing risks for future\n>> backports.\n> \n> I think this is pretty good logic. I think fixing comment typos in\n> ancient code and backpatching to all supported versions isn't good use\n> of time, but fixing a typo in \"recent\" code and backpatching to where\n> that code was added seems useful. Newer code is more likely to need\n> bug fixes in the future, so going to a bit more effort to make\n> backpatching those bug fixes easier seems worth the effort.\n\nAbsolutely agree.\n\n> I just don't know what \"recent\" should be defined as. I'd say if it's in a\n> version we've not released yet, that's probably recent. By the time .1\n> is out, there's less chance of bugs in new code. Anyway, I doubt hard\n> guidelines are warranted here, but maybe some hints about best\n> practices in https://wiki.postgresql.org/wiki/Committing_checklist ?\n\nThat sounds like a good idea. 
Off the cuff I would agree that unreleased\nversions and .0 versions are strong candidates (but not mandatory) for trivial\nbackpatches like typos, beyond that the value is likely to be lower.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Fri, 6 Sep 2024 11:42:20 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Typos in the code and README" } ]
[ { "msg_contents": "Hi hackers!\n\nWhile working on [0] i have noticed this comment in\nTerminateOtherDBBackends function:\n\n/*\n* Check whether we have the necessary rights to terminate other\n* sessions. We don't terminate any session until we ensure that we\n* have rights on all the sessions to be terminated. These checks are\n* the same as we do in pg_terminate_backend.\n*\n* In this case we don't raise some warnings - like \"PID %d is not a\n* PostgreSQL server process\", because for us already finished session\n* is not a problem.\n*/\n\nThis statement is not true after 3a9b18b.\n\"These checks are the same as we do in pg_terminate_backend.\"\n\nBut the code is still correct, I assume... or not? In fact, we are\nkilling autovacuum workers which are working with a given database\n(proc->roleId == 0), which is OK in that case. Are there any other\ncases when proc->roleId == 0 but we should not be able to kill such a\nprocess?\n\n\n[0] https://www.postgresql.org/message-id/flat/CALdSSPiOLADNe1ZS-1dnQXWVXgGAC6%3D3TBKABu9C3_97YGOoMg%40mail.gmail.com#4e869bc4b6ad88afe70e4484ffd256e2\n\n\n", "msg_date": "Thu, 11 Apr 2024 18:27:47 +0500", "msg_from": "Kirill Reshke <[email protected]>", "msg_from_op": true, "msg_subject": "TerminateOtherDBBackends code comments inconsistency." }, { "msg_contents": "On Thu, Apr 11, 2024 at 6:58 PM Kirill Reshke <[email protected]> wrote:\n>\n> While working on [0] i have noticed this comment in\n> TerminateOtherDBBackends function:\n>\n> /*\n> * Check whether we have the necessary rights to terminate other\n> * sessions. We don't terminate any session until we ensure that we\n> * have rights on all the sessions to be terminated. These checks are\n> * the same as we do in pg_terminate_backend.\n> *\n> * In this case we don't raise some warnings - like \"PID %d is not a\n> * PostgreSQL server process\", because for us already finished session\n> * is not a problem.\n> */\n>\n> This statement is not true after 3a9b18b.\n> \"These checks are the same as we do in pg_terminate_backend.\"\n>\n> But the code is still correct, I assume... or not? In fact, we are\n> killing autovacuum workers which are working with a given database\n> (proc->roleId == 0), which is OK in that case. Are there any other\n> cases when proc->roleId == 0 but we should not be able to kill such a\n> process?\n>\n\nGood question. I am not aware of such cases but I wonder if we should\nadd a check similar to 3a9b18b [1] for the reason given in the commit\nmessage. I have added Noah to see if he has any suggestions on this\nmatter.\n\n[1] -\ncommit 3a9b18b3095366cd0c4305441d426d04572d88c1\nAuthor: Noah Misch <[email protected]>\nDate: Mon Nov 6 06:14:13 2023 -0800\n\n Ban role pg_signal_backend from more superuser backend types.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 15 Apr 2024 11:17:54 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TerminateOtherDBBackends code comments inconsistency." }, { "msg_contents": "On Mon, 15 Apr 2024 at 11:18, Amit Kapila <[email protected]> wrote:\n>\n> On Thu, Apr 11, 2024 at 6:58 PM Kirill Reshke <[email protected]> wrote:\n> >\n> > While working on [0] i have noticed this comment in\n> > TerminateOtherDBBackends function:\n> >\n> > /*\n> > * Check whether we have the necessary rights to terminate other\n> > * sessions. We don't terminate any session until we ensure that we\n> > * have rights on all the sessions to be terminated. 
These checks are\n> > * the same as we do in pg_terminate_backend.\n> > *\n> > * In this case we don't raise some warnings - like \"PID %d is not a\n> > * PostgreSQL server process\", because for us already finished session\n> > * is not a problem.\n> > */\n> >\n> > This statement is not true after 3a9b18b.\n> > \"These checks are the same as we do in pg_terminate_backend.\"\n> >\n> > But the code is still correct, I assume... or not? In fact, we are\n> > killing autovacuum workers which are working with a given database\n> > (proc->roleId == 0), which is OK in that case. Are there any other\n> > cases when proc->roleId == 0 but we should not be able to kill such a\n> > process?\n> >\n>\n> Good question. I am not aware of such cases but I wonder if we should\n> add a check similar to 3a9b18b [1] for the reason given in the commit\n> message. I have added Noah to see if he has any suggestions on this\n> matter.\n\nThis function is only for drop database with force option, in case of\ndrop database with force option there will be a valid database id and\nwe will not include \"logical replication launcher\" and \"autovacuum\nlauncher\" processes as they will not have any database id. For\n\"autovacuum workers\" that are associated with this database, we will\nsend the termination signal and then drop the database. As we are not\ncausing any denial of service attack in this case I felt that could be\nthe reason the check was not added in this case.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 15 Apr 2024 15:10:12 +0530", "msg_from": "vignesh C <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TerminateOtherDBBackends code comments inconsistency." }, { "msg_contents": "On Mon, Apr 15, 2024 at 11:17:54AM +0530, Amit Kapila wrote:\n> On Thu, Apr 11, 2024 at 6:58 PM Kirill Reshke <[email protected]> wrote:\n> >\n> > While working on [0] i have noticed this comment in\n> > TerminateOtherDBBackends function:\n> >\n> > /*\n> > * Check whether we have the necessary rights to terminate other\n> > * sessions. We don't terminate any session until we ensure that we\n> > * have rights on all the sessions to be terminated. These checks are\n> > * the same as we do in pg_terminate_backend.\n> > *\n> > * In this case we don't raise some warnings - like \"PID %d is not a\n> > * PostgreSQL server process\", because for us already finished session\n> > * is not a problem.\n> > */\n> >\n> > This statement is not true after 3a9b18b.\n> > \"These checks are the same as we do in pg_terminate_backend.\"\n\nThe comment mismatch is a problem. Thanks for reporting it. The DROP\nDATABASE doc mimics the comment, using, \"permissions are the same as with\npg_terminate_backend\".\n\n> > But the code is still correct, I assume... or not? In fact, we are\n> > killing autovacuum workers which are working with a given database\n> > (proc->roleId == 0), which is OK in that case. Are there any other\n> > cases when proc->roleId == 0 but we should not be able to kill such a\n> > process?\n> >\n> \n> Good question. I am not aware of such cases but I wonder if we should\n> add a check similar to 3a9b18b [1] for the reason given in the commit\n> message. I have added Noah to see if he has any suggestions on this\n> matter.\n> \n> [1] -\n> commit 3a9b18b3095366cd0c4305441d426d04572d88c1\n> Author: Noah Misch <[email protected]>\n> Date: Mon Nov 6 06:14:13 2023 -0800\n> \n> Ban role pg_signal_backend from more superuser backend types.\n\nThat commit distinguished several scenarios. 
Here's how they apply in\nTerminateOtherDBBackends():\n\n- logical replication launcher, autovacuum launcher: the proc->databaseId test\n already skips them, since they don't connect to a database. Vignesh said\n this.\n\n- autovacuum worker: should terminate, since CountOtherDBBackends() would\n terminate them in DROP DATABASE without FORCE.\n\n- background workers that connect to a database: the right thing is less clear\n on these, but I lean toward allowing termination and changing the DROP\n DATABASE doc. As a bgworker author, I would value being able to recommend\n DROP DATABASE FORCE if a worker is sticking around unexpectedly. There's\n relatively little chance of a bgworker actually wanting to block DROP\n DATABASE FORCE or having an exploitable termination-time bug.\n\nThoughts?\n\n\n", "msg_date": "Mon, 22 Apr 2024 09:26:39 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TerminateOtherDBBackends code comments inconsistency." }, { "msg_contents": "On Mon, Apr 22, 2024 at 9:56 PM Noah Misch <[email protected]> wrote:\n>\n> On Mon, Apr 15, 2024 at 11:17:54AM +0530, Amit Kapila wrote:\n> > On Thu, Apr 11, 2024 at 6:58 PM Kirill Reshke <[email protected]> wrote:\n> > >\n> > > While working on [0] i have noticed this comment in\n> > > TerminateOtherDBBackends function:\n> > >\n> > > /*\n> > > * Check whether we have the necessary rights to terminate other\n> > > * sessions. We don't terminate any session until we ensure that we\n> > > * have rights on all the sessions to be terminated. These checks are\n> > > * the same as we do in pg_terminate_backend.\n> > > *\n> > > * In this case we don't raise some warnings - like \"PID %d is not a\n> > > * PostgreSQL server process\", because for us already finished session\n> > > * is not a problem.\n> > > */\n> > >\n> > > This statement is not true after 3a9b18b.\n> > > \"These checks are the same as we do in pg_terminate_backend.\"\n>\n> The comment mismatch is a problem. Thanks for reporting it. The DROP\n> DATABASE doc mimics the comment, using, \"permissions are the same as with\n> pg_terminate_backend\".\n>\n\nHow about updating the comments as in the attached? I see that commit\n3a9b18b309 didn't change the docs of pg_terminate_backend and whatever\nis mentioned w.r.t permissions in the doc of that function sounds\nvalid for drop database force to me. Do you have any specific proposal\nin your mind?\n\n> > > But the code is still correct, I assume... or not? In fact, we are\n> > > killing autovacuum workers which are working with a given database\n> > > (proc->roleId == 0), which is OK in that case. Are there any other\n> > > cases when proc->roleId == 0 but we should not be able to kill such a\n> > > process?\n> > >\n> >\n> > Good question. I am not aware of such cases but I wonder if we should\n> > add a check similar to 3a9b18b [1] for the reason given in the commit\n> > message. I have added Noah to see if he has any suggestions on this\n> > matter.\n> >\n> > [1] -\n> > commit 3a9b18b3095366cd0c4305441d426d04572d88c1\n> > Author: Noah Misch <[email protected]>\n> > Date: Mon Nov 6 06:14:13 2023 -0800\n> >\n> > Ban role pg_signal_backend from more superuser backend types.\n>\n> That commit distinguished several scenarios. Here's how they apply in\n> TerminateOtherDBBackends():\n>\n> - logical replication launcher, autovacuum launcher: the proc->databaseId test\n> already skips them, since they don't connect to a database. 
Vignesh said\n> this.\n>\n> - autovacuum worker: should terminate, since CountOtherDBBackends() would\n> terminate them in DROP DATABASE without FORCE.\n>\n> - background workers that connect to a database: the right thing is less clear\n> on these, but I lean toward allowing termination and changing the DROP\n> DATABASE doc. As a bgworker author, I would value being able to recommend\n> DROP DATABASE FORCE if a worker is sticking around unexpectedly. There's\n> relatively little chance of a bgworker actually wanting to block DROP\n> DATABASE FORCE or having an exploitable termination-time bug.\n>\n\nAgreed with this analysis and I don't see the need for the code change.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 29 Apr 2024 10:18:35 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TerminateOtherDBBackends code comments inconsistency." }, { "msg_contents": "On Mon, Apr 29, 2024 at 10:18:35AM +0530, Amit Kapila wrote:\n> On Mon, Apr 22, 2024 at 9:56 PM Noah Misch <[email protected]> wrote:\n> >\n> > On Mon, Apr 15, 2024 at 11:17:54AM +0530, Amit Kapila wrote:\n> > > On Thu, Apr 11, 2024 at 6:58 PM Kirill Reshke <[email protected]> wrote:\n> > > >\n> > > > While working on [0] i have noticed this comment in\n> > > > TerminateOtherDBBackends function:\n> > > >\n> > > > /*\n> > > > * Check whether we have the necessary rights to terminate other\n> > > > * sessions. We don't terminate any session until we ensure that we\n> > > > * have rights on all the sessions to be terminated. These checks are\n> > > > * the same as we do in pg_terminate_backend.\n> > > > *\n> > > > * In this case we don't raise some warnings - like \"PID %d is not a\n> > > > * PostgreSQL server process\", because for us already finished session\n> > > > * is not a problem.\n> > > > */\n> > > >\n> > > > This statement is not true after 3a9b18b.\n> > > > \"These checks are the same as we do in pg_terminate_backend.\"\n> >\n> > The comment mismatch is a problem. Thanks for reporting it. The DROP\n> > DATABASE doc mimics the comment, using, \"permissions are the same as with\n> > pg_terminate_backend\".\n> >\n> \n> How about updating the comments as in the attached? I see that commit\n\nI think the rationale for the difference should appear, being non-obvious and\ndebatable in this case.\n\n> 3a9b18b309 didn't change the docs of pg_terminate_backend and whatever\n> is mentioned w.r.t permissions in the doc of that function sounds\n> valid for drop database force to me. Do you have any specific proposal\n> in your mind?\n\nSomething like the attached. One could argue the function should also check\nisBackgroundWorker and ignore even bgworkers that set proc->roleId, but I've\nnot done that.", "msg_date": "Mon, 29 Apr 2024 14:27:56 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TerminateOtherDBBackends code comments inconsistency." }, { "msg_contents": "On Tue, Apr 30, 2024 at 2:58 AM Noah Misch <[email protected]> wrote:\n>\n> On Mon, Apr 29, 2024 at 10:18:35AM +0530, Amit Kapila wrote:\n> > On Mon, Apr 22, 2024 at 9:56 PM Noah Misch <[email protected]> wrote:\n>\n> > 3a9b18b309 didn't change the docs of pg_terminate_backend and whatever\n> > is mentioned w.r.t permissions in the doc of that function sounds\n> > valid for drop database force to me. 
Do you have any specific proposal\n> > in your mind?\n>\n> Something like the attached.\n>\n\nLGTM.\n\n> One could argue the function should also check\n> isBackgroundWorker and ignore even bgworkers that set proc->roleId, but I've\n> not done that.\n\nWhat is the argument for ignoring such workers?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 30 Apr 2024 09:10:52 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TerminateOtherDBBackends code comments inconsistency." }, { "msg_contents": "On Tue, Apr 30, 2024 at 09:10:52AM +0530, Amit Kapila wrote:\n> On Tue, Apr 30, 2024 at 2:58 AM Noah Misch <[email protected]> wrote:\n> > On Mon, Apr 29, 2024 at 10:18:35AM +0530, Amit Kapila wrote:\n> > > On Mon, Apr 22, 2024 at 9:56 PM Noah Misch <[email protected]> wrote:\n> >\n> > > 3a9b18b309 didn't change the docs of pg_terminate_backend and whatever\n> > > is mentioned w.r.t permissions in the doc of that function sounds\n> > > valid for drop database force to me. Do you have any specific proposal\n> > > in your mind?\n> >\n> > Something like the attached.\n> \n> LGTM.\n> \n> > One could argue the function should also check\n> > isBackgroundWorker and ignore even bgworkers that set proc->roleId, but I've\n> > not done that.\n> \n> What is the argument for ignoring such workers?\n\nOne of the proposed code comments says, \"For bgworker authors, it's convenient\nto be able to recommend FORCE if a worker is blocking DROP DATABASE\nunexpectedly.\" That argument is debatable, but I do think it applies equally\nto bgworkers whether or not they set proc->roleId.\n\n\n", "msg_date": "Tue, 30 Apr 2024 10:06:18 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TerminateOtherDBBackends code comments inconsistency." }, { "msg_contents": "On Tue, Apr 30, 2024 at 10:36 PM Noah Misch <[email protected]> wrote:\n>\n> >\n> > > One could argue the function should also check\n> > > isBackgroundWorker and ignore even bgworkers that set proc->roleId, but I've\n> > > not done that.\n> >\n> > What is the argument for ignoring such workers?\n>\n> One of the proposed code comments says, \"For bgworker authors, it's convenient\n> to be able to recommend FORCE if a worker is blocking DROP DATABASE\n> unexpectedly.\" That argument is debatable, but I do think it applies equally\n> to bgworkers whether or not they set proc->roleId.\n>\n\nAgreed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 6 May 2024 14:53:57 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TerminateOtherDBBackends code comments inconsistency." }, { "msg_contents": "On Tue, Apr 30, 2024 at 09:10:52AM +0530, Amit Kapila wrote:\n> On Tue, Apr 30, 2024 at 2:58 AM Noah Misch <[email protected]> wrote:\n> > On Mon, Apr 29, 2024 at 10:18:35AM +0530, Amit Kapila wrote:\n> > > 3a9b18b309 didn't change the docs of pg_terminate_backend and whatever\n> > > is mentioned w.r.t permissions in the doc of that function sounds\n> > > valid for drop database force to me. Do you have any specific proposal\n> > > in your mind?\n> >\n> > Something like the attached.\n> \n> LGTM.\n\nPushed as commit 372700c. Thanks for the report and the review.\n\n\n", "msg_date": "Thu, 16 May 2024 14:18:01 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TerminateOtherDBBackends code comments inconsistency." } ]
[ { "msg_contents": "I have been checking the pg_config.h generated by configure and meson to \nsee if there is anything materially different. I found that\n\nHAVE_DECL_LLVMCREATEGDBREGISTRATIONLISTENER and\nHAVE_DECL_LLVMCREATEPERFJITEVENTLISTENER\n\nare missing on the meson side.\n\nSomething like the below would appear to fix that:\n\ndiff --git a/meson.build b/meson.build\nindex 43fad5323c0..cdfd31377d1 100644\n--- a/meson.build\n+++ b/meson.build\n@@ -2301,6 +2301,14 @@ decl_checks += [\n ['pwritev', 'sys/uio.h'],\n ]\n\n+# Check presence of some optional LLVM functions.\n+if llvm.found()\n+ decl_checks += [\n+ ['LLVMCreateGDBRegistrationListener', 'llvm-c/ExecutionEngine.h'],\n+ ['LLVMCreatePerfJITEventListener', 'llvm-c/ExecutionEngine.h'],\n+ ]\n+endif\n+\n foreach c : decl_checks\n func = c.get(0)\n header = c.get(1)\n\nI don't know what these functions do, but the symbols are used in the \nsource code. Thoughts?\n\n\n", "msg_date": "Thu, 11 Apr 2024 17:26:33 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "some LLVM function checks missing in meson" }, { "msg_contents": "On 11/04/2024 18:26, Peter Eisentraut wrote:\n> I have been checking the pg_config.h generated by configure and meson to\n> see if there is anything materially different. I found that\n> \n> HAVE_DECL_LLVMCREATEGDBREGISTRATIONLISTENER and\n> HAVE_DECL_LLVMCREATEPERFJITEVENTLISTENER\n> \n> are missing on the meson side.\n> \n> Something like the below would appear to fix that:\n> \n> diff --git a/meson.build b/meson.build\n> index 43fad5323c0..cdfd31377d1 100644\n> --- a/meson.build\n> +++ b/meson.build\n> @@ -2301,6 +2301,14 @@ decl_checks += [\n> ['pwritev', 'sys/uio.h'],\n> ]\n> \n> +# Check presence of some optional LLVM functions.\n> +if llvm.found()\n> + decl_checks += [\n> + ['LLVMCreateGDBRegistrationListener', 'llvm-c/ExecutionEngine.h'],\n> + ['LLVMCreatePerfJITEventListener', 'llvm-c/ExecutionEngine.h'],\n> + ]\n> +endif\n> +\n> foreach c : decl_checks\n> func = c.get(0)\n> header = c.get(1)\n> \n> I don't know what these functions do, but the symbols are used in the\n> source code. Thoughts?\n\n+1. I also don't know what they do, but clearly the configure and meson \nchecks should be in sync.\n\nThere's also this in llvmjit.c:\n\n> \t\tif (llvm_opt3_orc)\n> \t\t{\n> #if defined(HAVE_DECL_LLVMORCREGISTERPERF) && HAVE_DECL_LLVMORCREGISTERPERF\n> \t\t\tif (jit_profiling_support)\n> \t\t\t\tLLVMOrcUnregisterPerf(llvm_opt3_orc);\n> #endif\n> \t\t\tLLVMOrcDisposeInstance(llvm_opt3_orc);\n> \t\t\tllvm_opt3_orc = NULL;\n> \t\t}\n> \n> \t\tif (llvm_opt0_orc)\n> \t\t{\n> #if defined(HAVE_DECL_LLVMORCREGISTERPERF) && HAVE_DECL_LLVMORCREGISTERPERF\n> \t\t\tif (jit_profiling_support)\n> \t\t\t\tLLVMOrcUnregisterPerf(llvm_opt0_orc);\n> #endif\n> \t\t\tLLVMOrcDisposeInstance(llvm_opt0_orc);\n> \t\t\tllvm_opt0_orc = NULL;\n> \t\t}\n> \t}\n\nThe autoconf test that set HAVE_DECL_LLVMORCREGISTERPERF was removed in \ncommit e9a9843e13. 
I believe that's a leftover that should also have \nbeen removed.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Sat, 13 Apr 2024 11:25:15 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: some LLVM function checks missing in meson" }, { "msg_contents": "On 13.04.24 10:25, Heikki Linnakangas wrote:\n> There's also this in llvmjit.c:\n> \n>>         if (llvm_opt3_orc)\n>>         {\n>> #if defined(HAVE_DECL_LLVMORCREGISTERPERF) && \n>> HAVE_DECL_LLVMORCREGISTERPERF\n>>             if (jit_profiling_support)\n>>                 LLVMOrcUnregisterPerf(llvm_opt3_orc);\n>> #endif\n>>             LLVMOrcDisposeInstance(llvm_opt3_orc);\n>>             llvm_opt3_orc = NULL;\n>>         }\n>>\n>>         if (llvm_opt0_orc)\n>>         {\n>> #if defined(HAVE_DECL_LLVMORCREGISTERPERF) && \n>> HAVE_DECL_LLVMORCREGISTERPERF\n>>             if (jit_profiling_support)\n>>                 LLVMOrcUnregisterPerf(llvm_opt0_orc);\n>> #endif\n>>             LLVMOrcDisposeInstance(llvm_opt0_orc);\n>>             llvm_opt0_orc = NULL;\n>>         }\n>>     }\n> \n> The autoconf test that set HAVE_DECL_LLVMORCREGISTERPERF was removed in \n> commit e9a9843e13. I believe that's a leftover that should also have \n> been removed.\n\nRight, that was clearly forgotten. I have removed the dead code.\n\n\n", "msg_date": "Wed, 17 Apr 2024 10:53:49 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: some LLVM function checks missing in meson" }, { "msg_contents": "On 13.04.24 10:25, Heikki Linnakangas wrote:\n>> Something like the below would appear to fix that:\n>>\n>> diff --git a/meson.build b/meson.build\n>> index 43fad5323c0..cdfd31377d1 100644\n>> --- a/meson.build\n>> +++ b/meson.build\n>> @@ -2301,6 +2301,14 @@ decl_checks += [\n>>      ['pwritev', 'sys/uio.h'],\n>>    ]\n>>\n>> +# Check presence of some optional LLVM functions.\n>> +if llvm.found()\n>> +  decl_checks += [\n>> +    ['LLVMCreateGDBRegistrationListener', 'llvm-c/ExecutionEngine.h'],\n>> +    ['LLVMCreatePerfJITEventListener', 'llvm-c/ExecutionEngine.h'],\n>> +  ]\n>> +endif\n>> +\n>>    foreach c : decl_checks\n>>      func = c.get(0)\n>>      header = c.get(1)\n>>\n>> I don't know what these functions do, but the symbols are used in the\n>> source code.  Thoughts?\n> \n> +1. I also don't know what they do, but clearly the configure and meson \n> checks should be in sync.\n\nCommitted that, too.\n\n\n", "msg_date": "Wed, 17 Apr 2024 15:29:29 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": true, "msg_subject": "Re: some LLVM function checks missing in meson" } ]
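For readers wondering what the two declaration checks feed: AC_CHECK_DECLS defines each HAVE_DECL_* macro to 1 or 0 (the meson decl_checks loop is expected to produce the same), and C code then tests the value to decide whether the optional LLVM listeners can be created. The sketch below only illustrates that pattern and is not the actual llvmjit.c code; register_optional_listeners and the idea of merely creating the listeners here are assumptions.

#include <llvm-c/ExecutionEngine.h>
/* pg_config.h (or the meson-generated equivalent) supplies HAVE_DECL_* */

static void
register_optional_listeners(void)
{
#if defined(HAVE_DECL_LLVMCREATEGDBREGISTRATIONLISTENER) && HAVE_DECL_LLVMCREATEGDBREGISTRATIONLISTENER
    /* lets gdb resolve JIT-compiled frames in backtraces */
    LLVMJITEventListenerRef gdb_listener = LLVMCreateGDBRegistrationListener();

    (void) gdb_listener;        /* would be attached to the JIT instance here */
#endif

#if defined(HAVE_DECL_LLVMCREATEPERFJITEVENTLISTENER) && HAVE_DECL_LLVMCREATEPERFJITEVENTLISTENER
    /* emits perf jitdump records so profiles can attribute JIT-compiled code */
    LLVMJITEventListenerRef perf_listener = LLVMCreatePerfJITEventListener();

    (void) perf_listener;       /* likewise would be attached to the JIT instance */
#endif
}

Testing the value rather than bare definedness matters because the macro is defined either way; the quoted HAVE_DECL_LLVMORCREGISTERPERF guard followed the same style.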
[ { "msg_contents": "Hi,\n\nd8f5acbdb9b made me wonder if we should add a compiler option to warn when\nstack frames are large. gcc compatible compilers have -Wstack-usage=limit, so\nthat's not hard.\n\nHuge stack frames are somewhat dangerous, as they can defeat our stack-depth\nchecking logic. There are also some cases where large stack frames defeat\nstack-protector logic by compilers/libc/os.\n\nIt's not always obvious how large the stack will be. Even if you look at all\nthe sizes of the variables defined in a function, inlining can increase that\nsubstantially.\n\nHere are all the cases a limit of 64k finds.\n\nUnoptimized build:\n[138/1940 42 7%] Compiling C object src/common/libpgcommon_shlib.a.p/blkreftable.c.o\n../../../../../home/andres/src/postgresql/src/common/blkreftable.c: In function 'WriteBlockRefTable':\n../../../../../home/andres/src/postgresql/src/common/blkreftable.c:474:1: warning: stack usage is 65696 bytes [-Wstack-usage=]\n 474 | WriteBlockRefTable(BlockRefTable *brtab,\n | ^~~~~~~~~~~~~~~~~~\n[173/1940 42 8%] Compiling C object src/common/libpgcommon.a.p/blkreftable.c.o\n../../../../../home/andres/src/postgresql/src/common/blkreftable.c: In function 'WriteBlockRefTable':\n../../../../../home/andres/src/postgresql/src/common/blkreftable.c:474:1: warning: stack usage is 65696 bytes [-Wstack-usage=]\n 474 | WriteBlockRefTable(BlockRefTable *brtab,\n | ^~~~~~~~~~~~~~~~~~\n[281/1940 42 14%] Compiling C object src/common/libpgcommon_srv.a.p/blkreftable.c.o\n../../../../../home/andres/src/postgresql/src/common/blkreftable.c: In function 'WriteBlockRefTable':\n../../../../../home/andres/src/postgresql/src/common/blkreftable.c:474:1: warning: stack usage is 65696 bytes [-Wstack-usage=]\n 474 | WriteBlockRefTable(BlockRefTable *brtab,\n | ^~~~~~~~~~~~~~~~~~\n[1311/1940 42 67%] Compiling C object src/bin/pg_basebackup/pg_basebackup.p/pg_basebackup.c.o\n../../../../../home/andres/src/postgresql/src/bin/pg_basebackup/pg_basebackup.c: In function 'BaseBackup':\n../../../../../home/andres/src/postgresql/src/bin/pg_basebackup/pg_basebackup.c:1753:1: warning: stack usage might be 66976 bytes [-Wstack-usage=]\n 1753 | BaseBackup(char *compression_algorithm, char *compression_detail,\n | ^~~~~~~~~~\n[1345/1940 42 69%] Compiling C object src/bin/pg_verifybackup/pg_verifybackup.p/pg_verifybackup.c.o\n../../../../../home/andres/src/postgresql/src/bin/pg_verifybackup/pg_verifybackup.c: In function 'verify_file_checksum':\n../../../../../home/andres/src/postgresql/src/bin/pg_verifybackup/pg_verifybackup.c:842:1: warning: stack usage is 131232 bytes [-Wstack-usage=]\n 842 | verify_file_checksum(verifier_context *context, manifest_file *m,\n | ^~~~~~~~~~~~~~~~~~~~\n[1349/1940 42 69%] Compiling C object src/bin/pg_waldump/pg_waldump.p/pg_waldump.c.o\n../../../../../home/andres/src/postgresql/src/bin/pg_waldump/pg_waldump.c: In function 'main':\n../../../../../home/andres/src/postgresql/src/bin/pg_waldump/pg_waldump.c:792:1: warning: stack usage might be 105104 bytes [-Wstack-usage=]\n 792 | main(int argc, char **argv)\n | ^~~~\n[1563/1940 42 80%] Compiling C object contrib/pg_walinspect/pg_walinspect.so.p/pg_walinspect.c.o\n../../../../../home/andres/src/postgresql/contrib/pg_walinspect/pg_walinspect.c: In function 'GetWalStats':\n../../../../../home/andres/src/postgresql/contrib/pg_walinspect/pg_walinspect.c:762:1: warning: stack usage is 104624 bytes [-Wstack-usage=]\n 762 | GetWalStats(FunctionCallInfo fcinfo, XLogRecPtr start_lsn, XLogRecPtr end_lsn,\n | ^~~~~~~~~~~\n[1581/1940 42 81%] 
Compiling C object src/test/modules/test_dsa/test_dsa.so.p/test_dsa.c.o\n../../../../../home/andres/src/postgresql/src/test/modules/test_dsa/test_dsa.c: In function 'test_dsa_resowners':\n../../../../../home/andres/src/postgresql/src/test/modules/test_dsa/test_dsa.c:64:1: warning: stack usage is 80080 bytes [-Wstack-usage=]\n 64 | test_dsa_resowners(PG_FUNCTION_ARGS)\n | ^~~~~~~~~~~~~~~~~~\n\n\nThere is one warning that is just visible in an optimized build,\notherwise the precise amount of stack usage just differs some:\n[1165/2017 42 57%] Compiling C object src/backend/postgres_lib.a.p/tsearch_spell.c.o\n../../../../../home/andres/src/postgresql/src/backend/tsearch/spell.c: In function 'NIImportAffixes':\n../../../../../home/andres/src/postgresql/src/backend/tsearch/spell.c:1425:1: warning: stack usage might be 74080 bytes [-Wstack-usage=]\n 1425 | NIImportAffixes(IspellDict *Conf, const char *filename)\n | ^~~~~~~~~~~~~~~\n\n\nWarnings in src/bin aren't as interesting as warnings in backend code, as the\nlatter is much more likely to be \"exposed\" to deep stacks and could be\nvulnerable due to stack overflows.\n\nI did verify this would have warned about d8f5acbdb9b^:\n\n[1/2 1 50%] Compiling C object src/backend/postgres_lib.a.p/backup_basebackup_incremental.c.o\n../../../../../home/andres/src/postgresql/src/backend/backup/basebackup_incremental.c: In function 'GetFileBackupMethod':\n../../../../../home/andres/src/postgresql/src/backend/backup/basebackup_incremental.c:742:1: warning: stack usage is 524400 bytes [-Wstack-usage=]\n 742 | GetFileBackupMethod(IncrementalBackupInfo *ib, const char *path,\n | ^~~~~~~~~~~~~~~~~~~\n\n\nI don't really have an opinion about the concrete warning limit to use.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 11 Apr 2024 12:01:47 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Should we add a compiler warning for large stack frames?" }, { "msg_contents": "On 2024-04-11 Th 15:01, Andres Freund wrote:\n> Hi,\n>\n> d8f5acbdb9b made me wonder if we should add a compiler option to warn when\n> stack frames are large. gcc compatible compilers have -Wstack-usage=limit, so\n> that's not hard.\n>\n> Huge stack frames are somewhat dangerous, as they can defeat our stack-depth\n> checking logic. There are also some cases where large stack frames defeat\n> stack-protector logic by compilers/libc/os.\n>\n> It's not always obvious how large the stack will be. Even if you look at all\n> the sizes of the variables defined in a function, inlining can increase that\n> substantially.\n>\n> Here are all the cases a limit of 64k finds.\n>\n>\n> [1345/1940 42 69%] Compiling C object src/bin/pg_verifybackup/pg_verifybackup.p/pg_verifybackup.c.o\n> ../../../../../home/andres/src/postgresql/src/bin/pg_verifybackup/pg_verifybackup.c: In function 'verify_file_checksum':\n> ../../../../../home/andres/src/postgresql/src/bin/pg_verifybackup/pg_verifybackup.c:842:1: warning: stack usage is 131232 bytes [-Wstack-usage=]\n> 842 | verify_file_checksum(verifier_context *context, manifest_file *m,\n> | ^~~~~~~~~~~~~~~~~~~~\n>\nThis one's down to me. I asked Robert some time back why we were using a \nvery conservative buffer size, and he agreed we could probably make it \nlarger, but the number chosen is mine, not his. It was a completely \narbitrary choice.\n\nI'm happy to reduce it, but it's not clear to me why we care that much \nfor a client binary. 
There's no stack depth checking going on here.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-04-11 Th 15:01, Andres Freund\n wrote:\n\n\nHi,\n\nd8f5acbdb9b made me wonder if we should add a compiler option to warn when\nstack frames are large. gcc compatible compilers have -Wstack-usage=limit, so\nthat's not hard.\n\nHuge stack frames are somewhat dangerous, as they can defeat our stack-depth\nchecking logic. There are also some cases where large stack frames defeat\nstack-protector logic by compilers/libc/os.\n\nIt's not always obvious how large the stack will be. Even if you look at all\nthe sizes of the variables defined in a function, inlining can increase that\nsubstantially.\n\nHere are all the cases a limit of 64k finds.\n\n\n[1345/1940 42 69%] Compiling C object src/bin/pg_verifybackup/pg_verifybackup.p/pg_verifybackup.c.o\n../../../../../home/andres/src/postgresql/src/bin/pg_verifybackup/pg_verifybackup.c: In function 'verify_file_checksum':\n../../../../../home/andres/src/postgresql/src/bin/pg_verifybackup/pg_verifybackup.c:842:1: warning: stack usage is 131232 bytes [-Wstack-usage=]\n 842 | verify_file_checksum(verifier_context *context, manifest_file *m,\n | ^~~~~~~~~~~~~~~~~~~~\n\n\n\n\n\nThis one's down to me. I asked Robert some time back why we were using a very conservative buffer size, and he agreed we could probably make it larger, but the number chosen is mine, not his. It was a completely arbitrary choice.\nI'm happy to reduce it, but it's not clear to me why we care that much for a client binary. There's no stack depth checking going on here.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 11 Apr 2024 15:16:57 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we add a compiler warning for large stack frames?" }, { "msg_contents": "Hi,\n\nOn 2024-04-11 15:16:57 -0400, Andrew Dunstan wrote:\n> On 2024-04-11 Th 15:01, Andres Freund wrote:\n> > [1345/1940 42 69%] Compiling C object src/bin/pg_verifybackup/pg_verifybackup.p/pg_verifybackup.c.o\n> > ../../../../../home/andres/src/postgresql/src/bin/pg_verifybackup/pg_verifybackup.c: In function 'verify_file_checksum':\n> > ../../../../../home/andres/src/postgresql/src/bin/pg_verifybackup/pg_verifybackup.c:842:1: warning: stack usage is 131232 bytes [-Wstack-usage=]\n> > 842 | verify_file_checksum(verifier_context *context, manifest_file *m,\n> > | ^~~~~~~~~~~~~~~~~~~~\n> > \n> This one's down to me. I asked Robert some time back why we were using a\n> very conservative buffer size, and he agreed we could probably make it\n> larger, but the number chosen is mine, not his. It was a completely\n> arbitrary choice.\n> \n> I'm happy to reduce it, but it's not clear to me why we care that much for a\n> client binary. There's no stack depth checking going on here.\n\nThere isn't - but it that doesn't necessarily mean it's great to just use a\nlot of stack. It can even mean that you ought to be more careful, because\nyou'll just crash, rather than get an error about having exceeded the stack\nsize. When the stack size is increased by more than a few pages, the OS /\ncompiler defenses for detecting that don't work as well, if you're unlucky.\n\n\n128k is probably not going to be an issue in practice. However, it also seems\nnot great from a performance POV to use this much stack in a function that's\ncalled fairly often. 
I'd allocate the buffer in verify_backup_checksums() and\nreuse it across all to-be-checked files.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 11 Apr 2024 13:17:13 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should we add a compiler warning for large stack frames?" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> d8f5acbdb9b made me wonder if we should add a compiler option to warn when\n> stack frames are large. gcc compatible compilers have -Wstack-usage=limit, so\n> that's not hard.\n\n> Huge stack frames are somewhat dangerous, as they can defeat our stack-depth\n> checking logic. There are also some cases where large stack frames defeat\n> stack-protector logic by compilers/libc/os.\n\nIndeed. I recall reading, not long ago, some Linux kernel docs to the\neffect that automatic stack growth is triggered by a reference into\nthe page just below what is currently mapped as your stack, and\ntherefore allocating a stack frame greater than one page has the\npotential to cause SIGSEGV rather than the desired stack extension.\n(If you feel like digging in the archives, I think this was brought\nup in the last round of lets-add-some-more-check_stack_depth-calls.)\n\n> Warnings in src/bin aren't as interesting as warnings in backend code, as the\n> latter is much more likely to be \"exposed\" to deep stacks and could be\n> vulnerable due to stack overflows.\n\nProbably, but it's still the case that such code is relying on the\noriginal stack allocation being large enough already.\n\n> I don't really have an opinion about the concrete warning limit to use.\n\nGiven the above, I'm tempted to say we should make it 8K. But I know\nwe have a bunch of places that allocate page images as stack space,\nso maybe that'd be too painful.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Apr 2024 16:35:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we add a compiler warning for large stack frames?" }, { "msg_contents": "Hi,\n\nOn 2024-04-11 16:35:58 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > d8f5acbdb9b made me wonder if we should add a compiler option to warn when\n> > stack frames are large. gcc compatible compilers have -Wstack-usage=limit, so\n> > that's not hard.\n>\n> > Huge stack frames are somewhat dangerous, as they can defeat our stack-depth\n> > checking logic. There are also some cases where large stack frames defeat\n> > stack-protector logic by compilers/libc/os.\n>\n> Indeed. I recall reading, not long ago, some Linux kernel docs to the\n> effect that automatic stack growth is triggered by a reference into\n> the page just below what is currently mapped as your stack, and\n> therefore allocating a stack frame greater than one page has the\n> potential to cause SIGSEGV rather than the desired stack extension.\n> (If you feel like digging in the archives, I think this was brought\n> up in the last round of lets-add-some-more-check_stack_depth-calls.)\n\nI think it's more than a single page, but I'm not entirely sure either. 
I\nthink some compilers inject artificial stack accesses when extending the stack\nby a lot, but I don't remember the details.\n\nThere certainly was the issue that strict memory overcommit does not reliably\nwork with larger stack extensions.\n\nCould be worth writing a test program for...\n\n\n> > I don't really have an opinion about the concrete warning limit to use.\n>\n> Given the above, I'm tempted to say we should make it 8K.\n\nHm, why 8k? That's 2x the typical page size, isn't it?\n\n\n> But I know we have a bunch of places that allocate page images as stack\n> space, so maybe that'd be too painful.\n\n8k does generate a fair number of warnings, 111 here. I think it might also\nbe hard to ensure that inlining doesn't end up creating bigger stack frames.\n\nframe size warnings\n4096 155\n8192 111\n16384 36\n32768 14\n65536 8\n\nSuggests that starting somewhere around 16-32k might be reasonable?\n\nOne issue is of course that configuring a larger blocksize or wal_blocksize\nwill trigger more warnings. I guess we'd need to set the limit based on\nMax(blocksize, wal_blocksize) * 2 or such.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 11 Apr 2024 15:07:11 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should we add a compiler warning for large stack frames?" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-04-11 16:35:58 -0400, Tom Lane wrote:\n>> Indeed. I recall reading, not long ago, some Linux kernel docs to the\n>> effect that automatic stack growth is triggered by a reference into\n>> the page just below what is currently mapped as your stack, and\n>> therefore allocating a stack frame greater than one page has the\n>> potential to cause SIGSEGV rather than the desired stack extension.\n\n> I think it's more than a single page, but I'm not entirely sure either. I\n> think some compilers inject artificial stack accesses when extending the stack\n> by a lot, but I don't remember the details.\n\nHmm. You're right that I was misremembering the typical RAM page\nsize. The kernel must be allowing stack frames bigger than 4K,\nor we'd see problems everywhere. I wonder how much bigger ...\n\n> frame size warnings\n> 4096 155\n> 8192 111\n> 16384 36\n> 32768 14\n> 65536 8\n\n> Suggests that starting somewhere around 16-32k might be reasonable?\n\nI'm hesitant to touch more than a handful of places on the strength\nof the info we've got; either it's wasted work or it's not enough\nwork, and we don't know which. Might be time for some\nexperimentation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 11 Apr 2024 20:56:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we add a compiler warning for large stack frames?" }, { "msg_contents": "Hi,\n\nOn 2024-04-11 15:07:11 -0700, Andres Freund wrote:\n> On 2024-04-11 16:35:58 -0400, Tom Lane wrote:\n> > Indeed. I recall reading, not long ago, some Linux kernel docs to the\n> > effect that automatic stack growth is triggered by a reference into\n> > the page just below what is currently mapped as your stack, and\n> > therefore allocating a stack frame greater than one page has the\n> > potential to cause SIGSEGV rather than the desired stack extension.\n> > (If you feel like digging in the archives, I think this was brought\n> > up in the last round of lets-add-some-more-check_stack_depth-calls.)\n>\n> I think it's more than a single page, but I'm not entirely sure either. 
I\n> think some compilers inject artificial stack accesses when extending the stack\n> by a lot, but I don't remember the details.\n>\n> There certainly was the issue that strict memory overcommit does not reliably\n> work with larger stack extensions.\n>\n> Could be worth writing a test program for...\n\nIt looks like it's a mess.\n\nIn the good cases the kernel doesn't map anything within ulimit -R of the\nstack, and the stack is extended whenever memory in that range is accessed.\nNothing is mapped into that region unless MAP_FIXED is used.\n\nHowever, in some cases linux maps the heap and the stack fairly close to each\nother at program startup. I've observed this with an executable compiled with\n-static-pie and executed with randomization disabled (via setarch -R). In\nthat case the the layout at program start is\n\n...\n7ffff7fff000-7ffff8021000 rw-p 00000000 00:00 0 [heap]\n7ffffffdd000-7ffffffff000 rw-p 00000000 00:00 0 [stack]\n\nHere the start of the heap and the end of the stack are only 128MB appart. The\nheap grows upwards, the stack downwards.\n\nWhich means that if glibc allocates a bunch of memory via sbrk() and the stack\ngrows, they clash into each other.\n\n\nI think this may be a glibc bug. If I compile with musl instead, this doesn't\nhappen, because musl stops using sbrk() style allocations before stack and\nprogram break get too close to each other.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 11 Apr 2024 19:15:22 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should we add a compiler warning for large stack frames?" }, { "msg_contents": "On 2024-04-11 Th 16:17, Andres Freund wrote:\n> 128k is probably not going to be an issue in practice. However, it also seems\n> not great from a performance POV to use this much stack in a function that's\n> called fairly often. I'd allocate the buffer in verify_backup_checksums() and\n> reuse it across all to-be-checked files.\n>\n>\n\nYes, I agree. I'll make that happen in the next day or two.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-04-11 Th 16:17, Andres Freund\n wrote:\n\n\n\n\n128k is probably not going to be an issue in practice. However, it also seems\nnot great from a performance POV to use this much stack in a function that's\ncalled fairly often. I'd allocate the buffer in verify_backup_checksums() and\nreuse it across all to-be-checked files.\n\n\n\n\n\n\nYes, I agree. I'll make that happen in the next day or two.\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 12 Apr 2024 08:29:29 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should we add a compiler warning for large stack frames?" } ]
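As a concrete illustration of the fix discussed for verify_file_checksum(), namely hoisting a large buffer out of the per-file function so the frame stays small and the allocation is reused, here is a schematic C sketch. It is not the actual pg_verifybackup code; READ_CHUNK_SIZE and all function names are placeholders.

#include <stddef.h>
#include <stdlib.h>

#define READ_CHUNK_SIZE (128 * 1024)    /* large enough to trip -Wstack-usage=65536 */

/* Before: each call carves a ~128kB frame off the stack. */
static void
checksum_one_file_big_frame(const char *path)
{
    char        buf[READ_CHUNK_SIZE];   /* large automatic array */

    /* ... read "path" chunk by chunk into buf and feed a checksum ... */
    (void) buf;
    (void) path;
}

/* After: the buffer is passed in, so the frame stays tiny. */
static void
checksum_one_file(const char *path, char *buf, size_t buflen)
{
    /* ... same work as above, reading into the caller's buffer ... */
    (void) path;
    (void) buf;
    (void) buflen;
}

/* The caller allocates once and reuses the buffer across all files. */
static void
checksum_all_files(const char *const *paths, int npaths)
{
    char       *buf = malloc(READ_CHUNK_SIZE);

    if (buf == NULL)
        return;                 /* error handling elided */
    for (int i = 0; i < npaths; i++)
        checksum_one_file(paths[i], buf, READ_CHUNK_SIZE);
    free(buf);
}

With the second shape, a -Wstack-usage=65536 build stays quiet and the allocation cost is paid once per backup rather than once per file.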
[ { "msg_contents": "Hi Community\n\nI am trying to upgrade PostgreSQL (RHEL 7) from version 13.7 to 15.6 using\npglogical.\nMy Standby(destination) machine has following rpms,\n\n\n*postgresql13-pglogical-3.7.16-1.el7.x86_64pglogical_15-2.4.3-1.rhel7.x86_64*\n\nAnd Primary(Source) has ,\n\n\n*postgresql13-pglogical-3.7.16-1.el7.x86_64*\npg_upgrade check mode went fine , but it failed while running real mode.\n\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 13027; 1255 3375648004 FUNCTION\nalter_subscription_add_log(\"text\", \"text\", boolean, \"regclass\", \"text\"[],\n\"text\"[]) postgres\npg_restore: error: could not execute query: ERROR: could not find function\n\"pglogical_alter_subscription_add_log\" in file\n\"/usr/pgsql-15/lib/pglogical.so\"\nCommand was: CREATE FUNCTION\n\"pglogical\".\"alter_subscription_add_log\"(\"sub_name\" \"text\", \"log_name\"\n\"text\", \"log_to_file\" boolean DEFAULT true, \"log_to_table\" \"regclass\"\nDEFAULT NULL::\"regclass\", \"conflict_type\" \"text\"[] DEFAULT NULL::\"text\"[],\n\"conflict_resolution\" \"text\"[] DEFAULT NULL::\"text\"[]) RETURNS boolean\n LANGUAGE \"c\"\n AS '$libdir/pglogical', 'pglogical_alter_subscription_add_log';\n\n-- For binary upgrade, handle extension membership the hard way\nALTER EXTENSION \"pglogical\" ADD FUNCTION\n\"pglogical\".\"alter_subscription_add_log\"(\"sub_name\" \"text\", \"log_name\"\n\"text\", \"log_to_file\" boolean, \"log_to_table\" \"regclass\", \"conflict_type\"\n\"text\"[], \"conflict_resolution\" \"text\"[]);\n\nAm I missing any packages?\n\nThanks,\n\nHi CommunityI am trying to upgrade PostgreSQL (RHEL 7) from version 13.7 to 15.6 using pglogical.My Standby(destination) machine has following rpms,postgresql13-pglogical-3.7.16-1.el7.x86_64pglogical_15-2.4.3-1.rhel7.x86_64And Primary(Source) has ,postgresql13-pglogical-3.7.16-1.el7.x86_64pg_upgrade check mode went fine , but it failed while running real mode.pg_restore: while PROCESSING TOC:pg_restore: from TOC entry 13027; 1255 3375648004 FUNCTION alter_subscription_add_log(\"text\", \"text\", boolean, \"regclass\", \"text\"[], \"text\"[]) postgrespg_restore: error: could not execute query: ERROR:  could not find function \"pglogical_alter_subscription_add_log\" in file \"/usr/pgsql-15/lib/pglogical.so\"Command was: CREATE FUNCTION \"pglogical\".\"alter_subscription_add_log\"(\"sub_name\" \"text\", \"log_name\" \"text\", \"log_to_file\" boolean DEFAULT true, \"log_to_table\" \"regclass\" DEFAULT NULL::\"regclass\", \"conflict_type\" \"text\"[] DEFAULT NULL::\"text\"[], \"conflict_resolution\" \"text\"[] DEFAULT NULL::\"text\"[]) RETURNS boolean    LANGUAGE \"c\"    AS '$libdir/pglogical', 'pglogical_alter_subscription_add_log';-- For binary upgrade, handle extension membership the hard wayALTER EXTENSION \"pglogical\" ADD FUNCTION \"pglogical\".\"alter_subscription_add_log\"(\"sub_name\" \"text\", \"log_name\" \"text\", \"log_to_file\" boolean, \"log_to_table\" \"regclass\", \"conflict_type\" \"text\"[], \"conflict_resolution\" \"text\"[]);Am I missing any packages?Thanks,", "msg_date": "Thu, 11 Apr 2024 17:47:57 -0700", "msg_from": "Perumal Raj <[email protected]>", "msg_from_op": true, "msg_subject": "pg_upgrde failed : logical replication : alter_subscription_add_log" }, { "msg_contents": "On Fri, Apr 12, 2024 at 6:18 AM Perumal Raj <[email protected]> wrote:\n>\n> I am trying to upgrade PostgreSQL (RHEL 7) from version 13.7 to 15.6 using pglogical.\n> My Standby(destination) machine has following rpms,\n>\n> 
postgresql13-pglogical-3.7.16-1.el7.x86_64\n> pglogical_15-2.4.3-1.rhel7.x86_64\n>\n> And Primary(Source) has ,\n>\n> postgresql13-pglogical-3.7.16-1.el7.x86_64\n>\n> pg_upgrade check mode went fine , but it failed while running real mode.\n>\n> pg_restore: while PROCESSING TOC:\n> pg_restore: from TOC entry 13027; 1255 3375648004 FUNCTION alter_subscription_add_log(\"text\", \"text\", boolean, \"regclass\", \"text\"[], \"text\"[]) postgres\n> pg_restore: error: could not execute query: ERROR: could not find function \"pglogical_alter_subscription_add_log\" in file \"/usr/pgsql-15/lib/pglogical.so\"\n> Command was: CREATE FUNCTION \"pglogical\".\"alter_subscription_add_log\"(\"sub_name\" \"text\", \"log_name\" \"text\", \"log_to_file\" boolean DEFAULT true, \"log_to_table\" \"regclass\" DEFAULT NULL::\"regclass\", \"conflict_type\" \"text\"[] DEFAULT NULL::\"text\"[], \"conflict_resolution\" \"text\"[] DEFAULT NULL::\"text\"[]) RETURNS boolean\n> LANGUAGE \"c\"\n> AS '$libdir/pglogical', 'pglogical_alter_subscription_add_log';\n>\n> -- For binary upgrade, handle extension membership the hard way\n> ALTER EXTENSION \"pglogical\" ADD FUNCTION \"pglogical\".\"alter_subscription_add_log\"(\"sub_name\" \"text\", \"log_name\" \"text\", \"log_to_file\" boolean, \"log_to_table\" \"regclass\", \"conflict_type\" \"text\"[], \"conflict_resolution\" \"text\"[]);\n>\n> Am I missing any packages?\n>\n\nWe don't maintain pglogical so difficult to answer but looking at the\nerror (ERROR: could not find function\n\"pglogical_alter_subscription_add_log\" in file\n\"/usr/pgsql-15/lib/pglogical.so\"), it seems that the required function\nis not present in pglogical.so. It is possible that the arguments\nwould have changed in newer version of pglogical or something like\nthat. You need to check with the maintainers of pglogical.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 12 Apr 2024 08:38:38 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrde failed : logical replication :\n alter_subscription_add_log" }, { "msg_contents": "Note - Please keep pgsql-hackers in CC while responding.\n\nOn Fri, Apr 12, 2024 at 10:44 AM Perumal Raj <[email protected]> wrote:\n\n> Thanks Amit for the update,\n>\n> Documentation says : https://www.postgresql.org/docs/15/upgrading.html\n>\n> 19.6.3. Upgrading Data via Replication\n>\n> It is also possible to use logical replication methods to create a standby\n> server with the updated version of PostgreSQL. This is possible because\n> logical replication supports replication between different major versions\n> of PostgreSQL. The standby can be on the same computer or a different\n> computer. Once it has synced up with the primary server (running the older\n> version of PostgreSQL), you can switch primaries and make the standby the\n> primary and shut down the older database instance. 
Such a switch-over\n> results in only several seconds of downtime for an upgrade.\n>\n> This method of upgrading can be performed using the built-in logical\n> replication facilities as well as using external logical replication\n> systems such as pglogical, Slony, Londiste, and Bucardo.\n>\n> What is \"built-in logical replication\" ?\n>\n\nSee docs at [1].\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication.html\n--\nWith Regards,\nAmit Kapila\n\nNote - Please keep pgsql-hackers in CC while responding.On Fri, Apr 12, 2024 at 10:44 AM Perumal Raj <[email protected]> wrote:Thanks Amit for the update,Documentation says : https://www.postgresql.org/docs/15/upgrading.html19.6.3. Upgrading Data via ReplicationIt is also possible to use logical replication methods to create a standby server with the updated version of PostgreSQL. This is possible because logical replication supports replication between different major versions of PostgreSQL. The standby can be on the same computer or a different computer. Once it has synced up with the primary server (running the older version of PostgreSQL), you can switch primaries and make the standby the primary and shut down the older database instance. Such a switch-over results in only several seconds of downtime for an upgrade.This method of upgrading can be performed using the built-in logical replication facilities as well as using external logical replication systems such as pglogical, Slony, Londiste, and Bucardo.What is \"built-in logical replication\" ? See docs at [1].[1] - https://www.postgresql.org/docs/devel/logical-replication.html--With Regards,Amit Kapila", "msg_date": "Fri, 12 Apr 2024 15:55:57 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_upgrde failed : logical replication :\n alter_subscription_add_log" } ]
[ { "msg_contents": "(moved to a new thread)\n\nOn Thu, Mar 21, 2024 at 04:31:45PM -0400, Tom Lane wrote:\n> I wrote:\n>> ... I still see the problematic GRANT taking ~250ms, compared\n>> to 5ms in v15. roles_is_member_of is clearly on the hook for that.\n> \n> Ah: looks like that is mainly the fault of the list_append_unique_oid\n> calls in roles_is_member_of. That's also an O(N^2) cost of course,\n> though with a much smaller constant factor.\n> \n> I don't think we have any really cheap way to de-duplicate the role\n> OIDs, especially seeing that it has to be done on-the-fly within the\n> collection loop, and the order of roles_list is at least potentially\n> interesting. Not sure how to make further progress without a lot of\n> work.\n\nI looked at this one again because I suspect these \"thousands of roles\"\ncases are going to continue to appear. Specifically, I tried to convert\nthe cached roles lists to hash tables to avoid the list_member_oid linear\nsearches. That actually was pretty easy to do. The most complicated part\nis juggling a couple of lists to keep track of the roles we need to iterate\nover in roles_is_member_of().\n\nAFAICT the only catch is that select_best_grantor() seems to depend on the\nordering of roles_list so that it prefers a \"closer\" role. To deal with\nthat, I added a \"depth\" field to the entry struct that can be used as a\ntiebreaker. I'm not certain that this is good enough, though.\n\nAs shown in the attached work-in-progress patch, this actually ends up\nremoving more code than it adds, and it seems to provide similar results to\nHEAD (using the benchmark from the previous thread [0]). I intend to test\nit with many more roles to see if it's better in more extreme cases.\n\n[0] https://postgr.es/m/341186.1711037256%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 11 Apr 2024 23:16:33 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "further improving roles_is_member_of()" }, { "msg_contents": "On Thu, Apr 11, 2024 at 11:16:33PM -0500, Nathan Bossart wrote:\n> As shown in the attached work-in-progress patch, this actually ends up\n> removing more code than it adds, and it seems to provide similar results to\n> HEAD (using the benchmark from the previous thread [0]). I intend to test\n> it with many more roles to see if it's better in more extreme cases.\n\nEven with 100K roles, the Bloom filter added in commit d365ae7 seems to do\na pretty good job at keeping up with the hash table approach. The callers\nof roles_is_member_of() that do list_member_oid() on the returned list\nmight benefit a little from a hash table, but I'm skeptical it makes much\ndifference in practice. This was an interesting test, but I'll likely\nwithdraw this patch shortly.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 12 Apr 2024 14:19:28 -0500", "msg_from": "Nathan Bossart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: further improving roles_is_member_of()" } ]
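Since the experiment above is only described in prose, here is a rough sketch of what converting the cached roles list to a hash table with a depth tiebreaker could look like, using the server's dynahash API. It is an illustration under assumptions, not the withdrawn patch itself; RoleMemberEntry and both function names are invented.

#include "postgres.h"

#include "utils/hsearch.h"

typedef struct RoleMemberEntry
{
    Oid         roleid;         /* hash key: a role the target belongs to */
    int         depth;          /* GRANT hops from the starting role; a
                                 * tiebreaker so select_best_grantor() can
                                 * still prefer "closer" roles without
                                 * relying on list order */
} RoleMemberEntry;

static HTAB *
create_member_table(void)
{
    HASHCTL     ctl;

    ctl.keysize = sizeof(Oid);
    ctl.entrysize = sizeof(RoleMemberEntry);
    ctl.hcxt = CurrentMemoryContext;

    return hash_create("roles_is_member_of cache", 64, &ctl,
                       HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
}

/* O(1) stand-in for the list_append_unique_oid()/list_member_oid() pattern */
static void
remember_member(HTAB *members, Oid roleid, int depth)
{
    bool        found;
    RoleMemberEntry *entry;

    entry = (RoleMemberEntry *) hash_search(members, &roleid,
                                            HASH_ENTER, &found);
    /* dynahash copies the key into the entry for us */
    if (!found)
        entry->depth = depth;   /* first sighting is the shallowest depth */
}

The depth field is the part that tries to preserve the one ordering-sensitive consumer mentioned above, select_best_grantor()'s preference for closer roles.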
[ { "msg_contents": "Hello, hackers!\n\nWhen running cpluspluscheck/headerscheck on REL_16_STABLE [1] I found \nthat everything was ok only if it was preceded by a build and make \nmaintainer-clean was not used:\n\n$ ./configure --without-icu --with-perl --with-python > configure.txt &&\nmake -j16 > make.txt &&\nmake -j16 clean > clean.txt &&\nmake -j16 cpluspluscheck > cpluspluscheck.txt\n\n$ ./configure --without-icu --with-perl --with-python > configure_1.txt \n&&\nmake -j16 > make.txt &&\nmake -j16 distclean > clean.txt &&\n./configure --without-icu --with-perl --with-python > configure_2.txt &&\nmake -j16 cpluspluscheck > cpluspluscheck.txt\n\nOtherwise cpluspluscheck/headerscheck will fail:\n\n$ ./configure --without-icu --with-perl --with-python > configure_1.txt \n&&\nmake -j16 > make.txt &&\nmake -j16 maintainer-clean > clean.txt &&\n./configure --without-icu --with-perl --with-python > configure_2.txt &&\nmake -j16 cpluspluscheck > cpluspluscheck.txt\n\n$ ./configure --without-icu --with-perl --with-python > configure.txt &&\nmake -j16 cpluspluscheck > cpluspluscheck.txt\n\nIn file included from /tmp/cpluspluscheck.Zy4645/test.cpp:3:\n/home/marina/postgrespro/postgrespro/src/backend/parser/gramparse.h:29:10: \nfatal error: gram.h: No such file or directory\n 29 | #include \"gram.h\"\n | ^~~~~~~~\ncompilation terminated.\nIn file included from /tmp/cpluspluscheck.Zy4645/test.cpp:3:\n/home/marina/postgrespro/postgrespro/src/backend/utils/adt/jsonpath_internal.h:26:10: \nfatal error: jsonpath_gram.h: No such file or directory\n 26 | #include \"jsonpath_gram.h\"\n | ^~~~~~~~~~~~~~~~~\ncompilation terminated.\nmake: *** [GNUmakefile:141: cpluspluscheck] Error 1\n\nIn the other branches everything is fine: these problems begin with \ncommits [2] (jsonpath_gram.h) and [3] (gram.h) and in the master branch \nthere're no such problems after commit [4]. The attached diff.patch \nfixes this issue for me. (IIUC internal headers generated by Bison are \nusually excluded from such checks so I also excluded gramparse.h and \njsonpath_internal.h...)\n\n[1] \nhttps://github.com/postgres/postgres/commit/e177da5c87a10abac97c028bfb427bafb7353aa2\n[2] \nhttps://github.com/postgres/postgres/commit/dac048f71ebbcf2f980d280711f8ff8001331c5d\n[3] \nhttps://github.com/postgres/postgres/commit/ecaf7c5df54f7fa9df2fdc7225d2bb4e283f0081\n[4] \nhttps://github.com/postgres/postgres/commit/721856ff24b3722ce8e894e5a32c9c063cd48455\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 12 Apr 2024 19:51:23 +0300", "msg_from": "Marina Polyakova <[email protected]>", "msg_from_op": true, "msg_subject": "cpluspluscheck/headerscheck require build in REL_16_STABLE" }, { "msg_contents": "On Fri, Apr 12, 2024 at 11:51 PM Marina Polyakova\n<[email protected]> wrote:\n>\n> Hello, hackers!\n>\n> When running cpluspluscheck/headerscheck on REL_16_STABLE [1] I found\n> that everything was ok only if it was preceded by a build and make\n> maintainer-clean was not used:\n\nI can reproduce this.\n\n> In the other branches everything is fine: these problems begin with\n> commits [2] (jsonpath_gram.h) and [3] (gram.h) and in the master branch\n> there're no such problems after commit [4]. The attached diff.patch\n> fixes this issue for me. 
(IIUC internal headers generated by Bison are\n> usually excluded from such checks so I also excluded gramparse.h and\n> jsonpath_internal.h...)\n\nI'm not in favor of this patch because these files build fine on\nmaster, so there is no good reason to exclude them. We should arrange\nso that they build fine on PG16 as well. The problem is, not all the\nrequired headers are generated when invoking `make headerscheck`. The\nattached patch brings in some Makefile rules from master to make this\nwork. Does this fix it for you?", "msg_date": "Sat, 13 Apr 2024 12:40:55 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpluspluscheck/headerscheck require build in REL_16_STABLE" }, { "msg_contents": "On 2024-04-13 08:40, John Naylor wrote:\n> On Fri, Apr 12, 2024 at 11:51 PM Marina Polyakova\n> <[email protected]> wrote:\n...\n>> In the other branches everything is fine: these problems begin with\n>> commits [2] (jsonpath_gram.h) and [3] (gram.h) and in the master \n>> branch\n>> there're no such problems after commit [4]. The attached diff.patch\n>> fixes this issue for me. (IIUC internal headers generated by Bison are\n>> usually excluded from such checks so I also excluded gramparse.h and\n>> jsonpath_internal.h...)\n> \n> I'm not in favor of this patch because these files build fine on\n> master, so there is no good reason to exclude them. We should arrange\n> so that they build fine on PG16 as well. The problem is, not all the\n> required headers are generated when invoking `make headerscheck`. The\n> attached patch brings in some Makefile rules from master to make this\n> work. Does this fix it for you?\n\nEverything seems to work with this patch, thank you!\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 15 Apr 2024 17:20:00 +0300", "msg_from": "Marina Polyakova <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cpluspluscheck/headerscheck require build in REL_16_STABLE" }, { "msg_contents": "On Mon, Apr 15, 2024 at 9:20 PM Marina Polyakova\n<[email protected]> wrote:\n>\n> On 2024-04-13 08:40, John Naylor wrote:\n> > so that they build fine on PG16 as well. The problem is, not all the\n> > required headers are generated when invoking `make headerscheck`. The\n> > attached patch brings in some Makefile rules from master to make this\n> > work. 
Does this fix it for you?\n>\n> Everything seems to work with this patch, thank you!\n\nGlad to hear it -- I'll push next week when I get back from vacation,\nunless there are objections.\n\n\n", "msg_date": "Wed, 17 Apr 2024 07:21:58 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpluspluscheck/headerscheck require build in REL_16_STABLE" }, { "msg_contents": "On Wed, Apr 17, 2024 at 7:21 AM John Naylor <[email protected]> wrote:\n>\n> On Mon, Apr 15, 2024 at 9:20 PM Marina Polyakova\n> <[email protected]> wrote:\n> > Everything seems to work with this patch, thank you!\n>\n> Glad to hear it -- I'll push next week when I get back from vacation,\n> unless there are objections.\n\nPushed, thanks for the report!\n\n\n", "msg_date": "Sat, 27 Apr 2024 13:14:59 +0700", "msg_from": "John Naylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cpluspluscheck/headerscheck require build in REL_16_STABLE" }, { "msg_contents": "On 2024-04-27 09:14, John Naylor wrote:\n> On Wed, Apr 17, 2024 at 7:21 AM John Naylor <[email protected]> \n> wrote:\n>> \n>> On Mon, Apr 15, 2024 at 9:20 PM Marina Polyakova\n>> <[email protected]> wrote:\n>> > Everything seems to work with this patch, thank you!\n>> \n>> Glad to hear it -- I'll push next week when I get back from vacation,\n>> unless there are objections.\n> \n> Pushed, thanks for the report!\n\nThank you again!\n\n-- \nMarina Polyakova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sat, 27 Apr 2024 09:59:22 +0300", "msg_from": "Marina Polyakova <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cpluspluscheck/headerscheck require build in REL_16_STABLE" } ]
[ { "msg_contents": "I'm creating this new thread separate from the existing Statistics\nExport/Import thread to keep the original thread focused on that patch.\n\nAssuming that the function signature for pg_set_attribute_stats() remains\nthe same (regclass, attname, inherited, version, ...stats...), how would we\ndesign the function signature for pg_set_extended_stats()?\n\nCurrently, the variant arguments in pg_set_attribute_stats are mapping\nattribute names from pg_stats onto parameter names in the function, so a\ncall looks like this:\n\nSELECT pg_set_attribute_stats('public.foo'::regclass, 'foo_id', false,\n17000,\n'null_frac', 0.4::real,\n'avg_width', 20::integer,\n'n_distinct, ', 100::real,\n...);\n\n\nAnd that works, because there's a 1:1 mapping of attribute names to param\nnames in the pairs of variants.\n\nHowever, that won't work for extended stats, as it has 2 mappings:\n\n* 1 set of stats from pg_stats_ext\n* N sets of stats from pg_stats_ext_exprs, one per expression contained in\nthe statistics object definition.\n\nMy first attempt at making this possible is to have section markers. The\nvariant arguments would be matched against column names in pg_stats_ext\nuntil the parameter 'expression' is encountered, at which point it would\nbegin matching parameters from pg_stats_ext_exprs to parameters for the\nfirst expression. Any subsequent use of 'expression' would move the\nmatching to the next expression.\n\nThis method places a burden on the caller to get all of the pg_stats_ext\nvalues specified before any expression values. It also places a burden on\nthe reader that they have to count the number of times they see\n'expression' used as a parameter, but it otherwise allows 1 variant list to\nserve two purposes.\n\nThoughts?\n\nI'm creating this new thread separate from the existing Statistics Export/Import thread to keep the original thread focused on that patch.Assuming that the function signature for pg_set_attribute_stats() remains the same (regclass, attname, inherited, version, ...stats...), how would we design the function signature for pg_set_extended_stats()?Currently, the variant arguments in pg_set_attribute_stats are mapping attribute names from pg_stats onto parameter names in the function, so a call looks like this:SELECT pg_set_attribute_stats('public.foo'::regclass, 'foo_id', false, 17000, 'null_frac', 0.4::real,'avg_width', 20::integer,'n_distinct, ', 100::real,...);And that works, because there's a 1:1 mapping of attribute names to param names in the pairs of variants.However, that won't work for extended stats, as it has 2 mappings:* 1 set of stats from pg_stats_ext* N sets of stats from pg_stats_ext_exprs, one per expression contained in the statistics object definition.My first attempt at making this possible is to have section markers. The variant arguments would be matched against column names in pg_stats_ext until the parameter 'expression' is encountered, at which point it would begin matching parameters from pg_stats_ext_exprs to parameters for the first expression. Any subsequent use of 'expression' would move the matching to the next expression.This method places a burden on the caller to get all of the pg_stats_ext values specified before any expression values. 
It also places a burden on the reader that they have to count the number of times they see 'expression' used as a parameter, but it otherwise allows 1 variant list to serve two purposes.Thoughts?", "msg_date": "Fri, 12 Apr 2024 13:36:28 -0400", "msg_from": "Corey Huinker <[email protected]>", "msg_from_op": true, "msg_subject": "Importing Extended Statistics" } ]
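To show how the proposed 'expression' section markers would be consumed, here is a small self-contained C sketch that walks a flat list of (name, value) pairs and switches targets each time the marker appears. It deliberately skips the real VARIADIC "any" plumbing; StatArg, apply_stat_pairs and the placeholder values are all hypothetical.

#include <stdio.h>
#include <string.h>

typedef struct StatArg
{
    const char *name;           /* pg_stats_ext / pg_stats_ext_exprs column */
    const char *value;          /* stringified stat value, for illustration */
} StatArg;

static void
apply_stat_pairs(const StatArg *args, int nargs)
{
    int         expr_index = -1;    /* -1: still in the pg_stats_ext section */

    for (int i = 0; i < nargs; i++)
    {
        if (strcmp(args[i].name, "expression") == 0)
        {
            expr_index++;           /* start of the next expression's stats */
            continue;
        }

        if (expr_index < 0)
            printf("extended stat %s = %s\n", args[i].name, args[i].value);
        else
            printf("expression %d stat %s = %s\n",
                   expr_index, args[i].name, args[i].value);
    }
}

int
main(void)
{
    StatArg     args[] = {
        {"n_distinct", "..."},
        {"expression", "..."},
        {"null_frac", "0.4"},
        {"expression", "..."},
        {"null_frac", "0.0"},
    };

    apply_stat_pairs(args, (int) (sizeof(args) / sizeof(args[0])));
    return 0;
}

Once expr_index moves past -1, every later pair is matched against pg_stats_ext_exprs columns for that expression, which is exactly the counting burden on the reader described above.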
[ { "msg_contents": "Hi,\n\nWe have CI support for mingw, but don't run the task by default, as it eats up\nprecious CI credits. However, for cfbot we are using custom compute resources\n(currently donated by google), so we can afford to run the mingw tests. Right\nnow that'd require cfbot to patch .cirrus.tasks.yml.\n\nWhile one can manually trigger manual task in one one's own repo, most won't\nhave the permissions to do so for cfbot.\n\n\nI propose that we instead run the task automatically if\n$REPO_MINGW_TRIGGER_BY_DEFAULT is set, typically in cirrus' per-repository\nconfiguration.\n\nUnfortunately that's somewhat awkward to do in the cirrus-ci yaml\nconfiguration, the set of conditional expressions supported is very\nsimplistic.\n\nTo deal with that, I extended .cirrus.star to compute the required environment\nvariable. If $REPO_MINGW_TRIGGER_BY_DEFAULT is set, CI_MINGW_TRIGGER_TYPE is\nset to 'automatic', if not it's 'manual'.\n\n\nWe've also talked in other threads about adding CI support for\n1) windows, building with visual studio\n2) linux, with musl libc\n3) free/netbsd\n\nThat becomes more enticing, if we can enable them by default on cfbot but not\nelsewhere. With this change, it'd be easy to add further variables to control\nsuch future tasks.\n\n\n\nI also attached a second commit, that makes the \"code\" dealing with ci-os-only\nin .cirrus.tasks.yml simpler. While I think it does nicely simplify\n.cirrus.tasks.yml, overall it adds lines, possibly making this not worth it.\nI'm somewhat on the fence.\n\n\nThoughts?\n\n\nOn the code level, I thought if it'd be good to have a common prefix for all\nthe automatically set variables. Right now that's CI_, but I'm not at all\nwedded to that.\n\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 12 Apr 2024 19:12:21 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "ci: Allow running mingw tests by default via environment variable" }, { "msg_contents": "Hi,\n\nThank you for working on this!\n\nOn Sat, 13 Apr 2024 at 05:12, Andres Freund <[email protected]> wrote:\n>\n> Hi,\n>\n> We have CI support for mingw, but don't run the task by default, as it eats up\n> precious CI credits. However, for cfbot we are using custom compute resources\n> (currently donated by google), so we can afford to run the mingw tests. Right\n> now that'd require cfbot to patch .cirrus.tasks.yml.\n\nAnd I think mingw ends up not running most of the time. +1 to running\nit as default at least on cfbot. Also, this gives people a chance to\nrun mingw as default on their personal repositories (which I would\nlike to run).\n\n> While one can manually trigger manual task in one one's own repo, most won't\n> have the permissions to do so for cfbot.\n>\n>\n> I propose that we instead run the task automatically if\n> $REPO_MINGW_TRIGGER_BY_DEFAULT is set, typically in cirrus' per-repository\n> configuration.\n>\n> Unfortunately that's somewhat awkward to do in the cirrus-ci yaml\n> configuration, the set of conditional expressions supported is very\n> simplistic.\n>\n> To deal with that, I extended .cirrus.star to compute the required environment\n> variable. If $REPO_MINGW_TRIGGER_BY_DEFAULT is set, CI_MINGW_TRIGGER_TYPE is\n> set to 'automatic', if not it's 'manual'.\n>\n\nChanges look good to me. 
My only complaint could be using only 'true'\nfor the REPO_MINGW_TRIGGER_BY_DEFAULT, not a possible list of values\nbut this is not important.\n\n> We've also talked in other threads about adding CI support for\n> 1) windows, building with visual studio\n> 2) linux, with musl libc\n> 3) free/netbsd\n>\n> That becomes more enticing, if we can enable them by default on cfbot but not\n> elsewhere. With this change, it'd be easy to add further variables to control\n> such future tasks.\n\nI agree.\n\n> I also attached a second commit, that makes the \"code\" dealing with ci-os-only\n> in .cirrus.tasks.yml simpler. While I think it does nicely simplify\n> .cirrus.tasks.yml, overall it adds lines, possibly making this not worth it.\n> I'm somewhat on the fence.\n>\n>\n> Thoughts?\n\nAlthough it adds more lines, this makes the .cirrus.tasks.yml file\nmore readable and understandable which is more important in my\nopinion.\n\n> On the code level, I thought if it'd be good to have a common prefix for all\n> the automatically set variables. Right now that's CI_, but I'm not at all\n> wedded to that.\n\nI agree with your thoughts and CI_ prefix.\n\nI tested both patches and they work as expected.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Thu, 25 Apr 2024 15:02:41 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ci: Allow running mingw tests by default via environment variable" } ]
[ { "msg_contents": "Hi.\n\nPreamble: this happens in MacOS only (in built-in Terminal and in iTerm2 at\nleast). In Linux, everything is as expected.\n\nI almost lost my mind today trying to figure out why sending a SIGINT\nprecisely to a psql interactive process delivers this SIGINT not only to\nthat psql, but also to its parents.\n\nNotice that it is NOT about Ctrl-C at all (which would've been delivered to\nthe whole process group; but again, it's not about it, there is no Ctrl-C;\nI even used pliers to remove \"Ctrl\" and \"C\" keys from my keyboard). It is\nabout a plain old \"kill -SIGINT psql_pid\" executed manually in a separate\nshell.\n\nI tried to find a place in psql source code where it fans out SIGINT to its\nparent pids upon receiving. But I did not find any. There is also no shell\ninvolved in all these (see logs below).\n\nAny ideas, who is doing this on MacOS? Who is sending 2 extra SIGINTs to 2\nparents when a SIGINT is delivered to psql?\n\nI also tried to use any other signal-trapping program instead of psql (like\nvim or an artificial Perl script which ignores signals) and did not observe\nthe same behavior. It looks like something very specific to psql.\n\nSteps to reproduce:\n\n===========================\n\n*Terminal 1:*\n\n% psql --version\npsql (PostgreSQL) 16.0\n\n% cat /tmp/my1.pl\n#!/usr/bin/perl -w\nif (fork()) {\n $SIG{INT} = sub { print(\"int in my1\\n\") };\n while (1) { wait() and exit(1); }\n} else {\n exec(\"/tmp/my2.pl\");\n}\n\n% cat /tmp/my2.pl\n#!/usr/bin/perl -w\nif (fork()) {\n $SIG{INT} = sub { print(\"int in my2\\n\") };\n while (1) { wait() and exit(1); }\n} else {\n exec(\"psql\");\n}\n\n# PostgreSQL server is running in Docker, localhost remapped port 15432\n% PGHOST=127.0.0.1 PGPORT=15432 PGUSER=postgres PGPASSWORD=postgres\nPGDATABASE=postgres /tmp/my1.pl\npsql (16.0, server 13.5 (Debian 13.5-1.pgdg110+1))\nType \"help\" for help.\npostgres=#\n\n\n*Terminal 2:*\n\n% pstree | grep -E \"bash |yarn|psql|vim|sleep|lldb|my.pl|perl\" | grep\ndmitry | grep -v grep\n\n | | \\-+= 24550 dmitry /usr/bin/perl -w /tmp/my1.pl\n | | \\-+- 24551 dmitry /usr/bin/perl -w /tmp/my2.pl\n | | \\--- 24552 dmitry psql\n\n% pgrep psql\n24552\n% kill -SIGINT 24552\n\n\n*And I see this in Terminal 1:*\n\npostgres=#\nint in my2\nint in my1\npostgres=#\n\n===========================\n\nI.e. we can see that not only psql got the SIGINT received, but also its\nparents (two artificial Perl processes).\n\nHi.Preamble: this happens in MacOS only (in built-in Terminal and in iTerm2 at least). In Linux, everything is as expected.I almost lost my mind today trying to figure out why sending a SIGINT precisely to a psql interactive process delivers this SIGINT not only to that psql, but also to its parents. Notice that it is NOT about Ctrl-C at all (which would've been delivered to the whole process group; but again, it's not about it, there is no Ctrl-C; I even used pliers to remove \"Ctrl\" and \"C\" keys from my keyboard). It is about a plain old \"kill -SIGINT psql_pid\" executed manually in a separate shell.I tried to find a place in psql source code where it fans out SIGINT to its parent pids upon receiving. But I did not find any. There is also no shell involved in all these (see logs below).Any ideas, who is doing this on MacOS? 
Who is sending 2 extra SIGINTs to 2 parents when a SIGINT is delivered to psql?I also tried to use any other signal-trapping program instead of psql (like vim or an artificial Perl script which ignores signals) and did not observe the same behavior. It looks like something very specific to psql.Steps to reproduce:===========================Terminal 1:% psql --versionpsql (PostgreSQL) 16.0% cat /tmp/my1.pl#!/usr/bin/perl -wif (fork()) {  $SIG{INT} = sub { print(\"int in my1\\n\") };  while (1) { wait() and exit(1); }} else {  exec(\"/tmp/my2.pl\");}% cat /tmp/my2.pl#!/usr/bin/perl -wif (fork()) {  $SIG{INT} = sub { print(\"int in my2\\n\") };  while (1) { wait() and exit(1); }} else {  exec(\"psql\");}# PostgreSQL server is running in Docker, localhost remapped port 15432% PGHOST=127.0.0.1 PGPORT=15432 PGUSER=postgres PGPASSWORD=postgres PGDATABASE=postgres /tmp/my1.plpsql (16.0, server 13.5 (Debian 13.5-1.pgdg110+1))Type \"help\" for help.postgres=#Terminal 2:% pstree | grep -E \"bash |yarn|psql|vim|sleep|lldb|my.pl|perl\" | grep dmitry | grep -v grep   |   |   \\-+= 24550 dmitry /usr/bin/perl -w /tmp/my1.pl |   |     \\-+- 24551 dmitry /usr/bin/perl -w /tmp/my2.pl |   |       \\--- 24552 dmitry psql% pgrep psql24552% kill -SIGINT 24552And I see this in Terminal 1:postgres=#int in my2int in my1postgres=#===========================I.e. we can see that not only psql got the SIGINT received, but also its parents (two artificial Perl processes).", "msg_date": "Sat, 13 Apr 2024 01:21:21 -0700", "msg_from": "Dmitry Koterov <[email protected]>", "msg_from_op": true, "msg_subject": "In MacOS, psql reacts on SIGINT in a strange fashion (Linux is fine)" }, { "msg_contents": "Dmitry Koterov <[email protected]> writes:\n> I almost lost my mind today trying to figure out why sending a SIGINT\n> precisely to a psql interactive process delivers this SIGINT not only to\n> that psql, but also to its parents.\n\nLet me guess ... you're using zsh not bash?\n\nI do not use zsh myself, but what I read in its man page suggests\nthat this is its designed behavior. The kill command says its\narguments are \"jobs\", and elsewhere it says\n\n There are several ways to refer to jobs in the shell. A job can be\n referred to by the process ID of any process of the job or by one of\n the following: ...\n\nso I suspect zsh is treating that stack of processes as a \"job\" and\nzapping all of it. There is certainly nothing in psql that would\nattempt to signal its parent process (let alone grandparent).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Apr 2024 10:53:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: In MacOS,\n psql reacts on SIGINT in a strange fashion (Linux is fine)" }, { "msg_contents": "On Sat, Apr 13, 2024 at 7:53 AM Tom Lane <[email protected]> wrote:\n\n> Let me guess ... you're using zsh not bash?\n>\n\nI wish it was zsh... I tested it with zsh, but with bash (and with\nlow-level kill syscall), I observed the same effect unfortunately.\n\nSo it's still a puzzle.\n\n1. Even more, when I send a kill() low-level syscall using e.g. Perl - perl\n-e 'kill(\"INT\", 16107)' - it is the same.\n2. If any other program but psql is used (e.g. vim or my custom Perl script\nwhich ignores SIGINT), the effect is not reproducible.\n\nCan it be e.g. readline? 
Or something related to tty or session settings\nwhich psql could modify (I did not find any in the source code though).\n\n=========================\n\nchsh -s /bin/bash\n# then kill all terminals\n\n$ PGHOST=127.0.0.1 PGPORT=15432 PGUSER=postgres PGPASSWORD=postgres\nPGDATABASE=postgres /tmp/my1.pl\n...\npostgres=#\n\n$ ps | grep zsh | grep -v grep\n<empty>\n\n$ watch -n0.5 'pstree | grep -E\n\"yarn|psql|vim|sleep|lldb|my.pl|perl|debugserver|-bash|\nlogin\" | grep -v grep'\n | |-+= 13721 root login -fp dmitry\n | | \\-+= 13722 dmitry -bash\n | |-+= 13700 root login -fp dmitry\n | | \\-+= 13701 dmitry -bash\n | | \\-+= 16105 dmitry /usr/bin/perl -w /tmp/my1.pl\n | | \\-+- 16106 dmitry /usr/bin/perl -w /tmp/my2.pl\n | | \\--- 16107 dmitry psql\n | \\-+= 13796 root login -fp dmitry\n | \\--= 13797 dmitry -bash\n\n$ perl -e 'kill(\"INT\", 16107)'\n\nI observe as previously:\n\nint in my2\nint in my1\npostgres=#\n\n=========================\n\n\nWhen I use a custom SIGINT-trapping script instead of psql, it all works as\nexpected in MacOS, only that script receives the SIGINT and not its parents:\n\n=========================\n\n$ cat /tmp/ignore.pl\n#!/usr/bin/perl -w\n$SIG{INT} = sub { print(\"int ignored\\n\") };\nwhile (1) { sleep(1); }\n\n$ cat /tmp/my2.pl\n#!/usr/bin/perl -w\nif (fork()) {\n $SIG{INT} = sub { print(\"int in my2\\n\") };\n while (1) { wait() and exit(1); }\n} else {\n exec(\"/tmp/ignore.pl\");\n}\n\n=========================\n\n\n[image: CleanShot 2024-04-13 at [email protected]]", "msg_date": "Sat, 13 Apr 2024 16:17:42 -0700", "msg_from": "Dmitry Koterov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: In MacOS,\n psql reacts on SIGINT in a strange fashion (Linux is fine)" }, { "msg_contents": "On Saturday, April 13, 2024, Dmitry Koterov <[email protected]>\nwrote:\n\n>\n>\n> % psql --version\n> psql (PostgreSQL) 16.0\n>\n\nHow did you install this and can you install other, supported, versions?\n\nDavid J.\n\nOn Saturday, April 13, 2024, Dmitry Koterov <[email protected]> wrote:% psql --versionpsql (PostgreSQL) 16.0How did you install this and can you install other, supported, versions?David J.", "msg_date": "Sat, 13 Apr 2024 16:27:41 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: In MacOS,\n psql reacts on SIGINT in a strange fashion (Linux is fine)" }, { "msg_contents": "On Sun, Apr 14, 2024 at 11:18 AM Dmitry Koterov\n<[email protected]> wrote:\n> Can it be e.g. readline? Or something related to tty or session settings which psql could modify (I did not find any in the source code though).\n\nI was wondering about that. Are you using libedit or libreadline?\nWhat happens if you build without readline/edit support? From a quick\nglance at libedit, it does a bunch of signal interception, but I\ndidn't check the details. It is interested in stuff like SIGWINCH,\nthe window-resized-by-user signal.\n\n\n", "msg_date": "Sun, 14 Apr 2024 11:39:16 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: In MacOS,\n psql reacts on SIGINT in a strange fashion (Linux is fine)" }, { "msg_contents": "Dmitry Koterov <[email protected]> writes:\n> I wish it was zsh... I tested it with zsh, but with bash (and with\n> low-level kill syscall), I observed the same effect unfortunately.\n\n> So it's still a puzzle.\n\n> 1. Even more, when I send a kill() low-level syscall using e.g. Perl - perl\n> -e 'kill(\"INT\", 16107)' - it is the same.\n> 2. 
If any other program but psql is used (e.g. vim or my custom Perl script\n> which ignores SIGINT), the effect is not reproducible.\n\n> Can it be e.g. readline? Or something related to tty or session settings\n> which psql could modify (I did not find any in the source code though).\n\nOK, I tried dtruss'ing psql on macOS. What I see is that with\nApple's libedit, the response to SIGINT includes this:\n\nkill(0, 2) = 0 0\n\nthat is, \"SIGINT my whole process group\". If I build with libreadline\nfrom MacPorts, I just see\n\nkill(30902, 2) = 0 0\n\n(30902 being the process's own PID). I'm not real sure why either\nlibrary finds it necessary to re-signal the process --- maybe they\ntrap SIGINT and then try to hide that by reinstalling the app's\nnormal SIGINT handler and re-delivering the signal. But anyway,\nlibedit seems to be vastly exceeding its authority here. If\nsignaling the whole process group were wanted, it would have been\nthe responsibility of the original signaller to do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Apr 2024 19:49:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: In MacOS,\n psql reacts on SIGINT in a strange fashion (Linux is fine)" }, { "msg_contents": "On Sun, Apr 14, 2024 at 11:49 AM Tom Lane <[email protected]> wrote:\n> Dmitry Koterov <[email protected]> writes:\n> > I wish it was zsh... I tested it with zsh, but with bash (and with\n> > low-level kill syscall), I observed the same effect unfortunately.\n>\n> > So it's still a puzzle.\n>\n> > 1. Even more, when I send a kill() low-level syscall using e.g. Perl - perl\n> > -e 'kill(\"INT\", 16107)' - it is the same.\n> > 2. If any other program but psql is used (e.g. vim or my custom Perl script\n> > which ignores SIGINT), the effect is not reproducible.\n>\n> > Can it be e.g. readline? Or something related to tty or session settings\n> > which psql could modify (I did not find any in the source code though).\n>\n> OK, I tried dtruss'ing psql on macOS. What I see is that with\n> Apple's libedit, the response to SIGINT includes this:\n>\n> kill(0, 2) = 0 0\n>\n> that is, \"SIGINT my whole process group\". If I build with libreadline\n> from MacPorts, I just see\n>\n> kill(30902, 2) = 0 0\n>\n> (30902 being the process's own PID). I'm not real sure why either\n> library finds it necessary to re-signal the process --- maybe they\n> trap SIGINT and then try to hide that by reinstalling the app's\n> normal SIGINT handler and re-delivering the signal. But anyway,\n> libedit seems to be vastly exceeding its authority here. If\n> signaling the whole process group were wanted, it would have been\n> the responsibility of the original signaller to do that.\n\nProbably to intercept SIGWINCH -- I assume it's trying to propagate\nthe signal to the \"real\" signal handler, but it's propagating it a\nlittle too far.\n\nhttps://github.com/NetBSD/src/blob/1de18f216411bce77e26740327b0782976a89965/lib/libedit/sig.c#L110\n\n\n", "msg_date": "Sun, 14 Apr 2024 12:01:38 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: In MacOS,\n psql reacts on SIGINT in a strange fashion (Linux is fine)" }, { "msg_contents": "> OK, I tried dtruss'ing psql on macOS. What I see is that with\n> Apple's libedit, the response to SIGINT includes this:\n> kill(0, 2) = 0 0\n\nOK, so it's libedit who does this. I should've tried drtuss instead of not\nquite working lldb. 
I'll try to dig further.\n\nThank you, Tom!\n\n\n\n> How did you install this and can you install other, supported, versions?\n...\n\n> I was wondering about that. Are you using libedit or libreadline?\n>\n\nI did not build psql, I use the version delivered by brew on MacOS.\n\nI think Tom already found out that it's libedit who is guilty. But for the\nsake of history, posting the following below.\n\nI just wrote a small C wrapper which prints, WHO is sending that SIGINT to\nthe parent processes, and it was indeed the psql process:\n\n================\n\nbash-3.2$ ./0-build.sh && ./1-run.sh\n+ gcc my0.c -o my0\n+ ./my0\nmy0 pid is 45710\npsql (16.0, server 13.5 (Debian 13.5-1.pgdg110+1))\nType \"help\" for help.\npostgres=#\nint in my2\nint in my1\nmy0 got signal 2 from process *45723* running as user 501\npostgres=#\n\nbash-3.2$ ./2-watch.sh\nEvery 0.5s: pstree | grep -E \"bash\n|yarn|psql|vim|sleep|lldb|my.pl|perl|debugserver|my0|my1|my2\"\n| grep dmitry | grep -v grep\n | | \\-+= 36468 dmitry bash -rcfile .bashrc\n | | \\-+= 45709 dmitry /bin/bash ./1-run.sh\n | | \\-+- 45710 dmitry ./my0\n | | \\-+- 45711 dmitry /usr/bin/perl -w ./my1.pl\n | | \\-+- 45712 dmitry /usr/bin/perl -w /tmp/my2.pl\n | | \\--- *45723* dmitry psql\n | | \\-+= 44170 dmitry /bin/bash ./2-watch.sh\n\nbash-3.2$ ./3-kill.sh\n+ perl -e 'kill(\"INT\", `pgrep psql`)'\n\nbash-3.2$ cat ./my0.c\n#include <stdio.h>\n#include <unistd.h>\n#include <signal.h>\n#include <sys/wait.h>\n\nvoid handler(int signal, siginfo_t *info, void *data) {\n printf(\"my0 got signal %d from process %d running as user %d\\n\",\n signal, info->si_pid, info->si_uid);\n}\n\nint main(void) {\n printf(\"my0 pid is %d\\n\", getpid());\n if (fork()) {\n struct sigaction sa;\n sigset_t mask;\n sigemptyset(&mask);\n sa.sa_sigaction = &handler;\n sa.sa_mask = mask;\n sa.sa_flags = SA_SIGINFO;\n sigaction(SIGINT, &sa, NULL);\n while (wait(NULL) == -1);\n } else {\n if (execl(\"./my1.pl\", \"my1\", NULL) == -1) {\n perror(\"execl\");\n }\n }\n return 0;\n}\n\n================\n\n\n\nOn Sat, Apr 13, 2024 at 4:49 PM Tom Lane <[email protected]> wrote:\n\n> Dmitry Koterov <[email protected]> writes:\n> > I wish it was zsh... I tested it with zsh, but with bash (and with\n> > low-level kill syscall), I observed the same effect unfortunately.\n>\n> > So it's still a puzzle.\n>\n> > 1. Even more, when I send a kill() low-level syscall using e.g. Perl -\n> perl\n> > -e 'kill(\"INT\", 16107)' - it is the same.\n> > 2. If any other program but psql is used (e.g. vim or my custom Perl\n> script\n> > which ignores SIGINT), the effect is not reproducible.\n>\n> > Can it be e.g. readline? Or something related to tty or session settings\n> > which psql could modify (I did not find any in the source code though).\n>\n> OK, I tried dtruss'ing psql on macOS. What I see is that with\n> Apple's libedit, the response to SIGINT includes this:\n>\n> kill(0, 2) = 0 0\n>\n> that is, \"SIGINT my whole process group\". If I build with libreadline\n> from MacPorts, I just see\n>\n> kill(30902, 2) = 0 0\n>\n> (30902 being the process's own PID). I'm not real sure why either\n> library finds it necessary to re-signal the process --- maybe they\n> trap SIGINT and then try to hide that by reinstalling the app's\n> normal SIGINT handler and re-delivering the signal. But anyway,\n> libedit seems to be vastly exceeding its authority here. 
If\n> signaling the whole process group were wanted, it would have been\n> the responsibility of the original signaller to do that.\n>\n> regards, tom lane\n>\n\n> OK, I tried dtruss'ing psql on macOS.  What I see is that with> Apple's libedit, the response to SIGINT includes this:> kill(0, 2)               = 0 0OK, so it's libedit who does this. I should've tried drtuss instead of not quite working lldb. I'll try to dig further. Thank you, Tom!> How did you install this and can you install other, supported, versions?...I was wondering about that.  Are you using libedit or libreadline?I did not build psql, I use the version delivered by brew on MacOS. I think Tom already found out that it's libedit who is guilty. But for the sake of history, posting the following below.I just wrote a small C wrapper which prints, WHO is sending that SIGINT to the parent processes, and it was indeed the psql process:================bash-3.2$ ./0-build.sh && ./1-run.sh+ gcc my0.c -o my0+ ./my0my0 pid is 45710psql (16.0, server 13.5 (Debian 13.5-1.pgdg110+1))Type \"help\" for help.postgres=#int in my2int in my1my0 got signal 2 from process 45723 running as user 501postgres=#bash-3.2$ ./2-watch.shEvery 0.5s: pstree | grep -E \"bash |yarn|psql|vim|sleep|lldb|my.pl|perl|debugserver|my0|my1|my2\" | grep dmitry | grep -v grep |   |     \\-+= 36468 dmitry bash -rcfile .bashrc |   |       \\-+= 45709 dmitry /bin/bash ./1-run.sh |   |         \\-+- 45710 dmitry ./my0 |   |           \\-+- 45711 dmitry /usr/bin/perl -w ./my1.pl |   |             \\-+- 45712 dmitry /usr/bin/perl -w /tmp/my2.pl |   |               \\--- 45723 dmitry psql |   |   \\-+= 44170 dmitry /bin/bash ./2-watch.shbash-3.2$ ./3-kill.sh+ perl -e 'kill(\"INT\", `pgrep psql`)'bash-3.2$ cat ./my0.c#include <stdio.h>#include <unistd.h>#include <signal.h>#include <sys/wait.h>void handler(int signal, siginfo_t *info, void *data) {  printf(\"my0 got signal %d from process %d running as user %d\\n\",         signal, info->si_pid, info->si_uid);}int main(void) {  printf(\"my0 pid is %d\\n\", getpid());  if (fork()) {    struct sigaction sa;    sigset_t mask;    sigemptyset(&mask);    sa.sa_sigaction = &handler;    sa.sa_mask = mask;    sa.sa_flags = SA_SIGINFO;    sigaction(SIGINT, &sa, NULL);    while (wait(NULL) == -1);  } else {    if (execl(\"./my1.pl\", \"my1\", NULL) == -1) {      perror(\"execl\");    }  }  return 0;}================On Sat, Apr 13, 2024 at 4:49 PM Tom Lane <[email protected]> wrote:Dmitry Koterov <[email protected]> writes:\n> I wish it was zsh... I tested it with zsh, but with bash (and with\n> low-level kill syscall), I observed the same effect unfortunately.\n\n> So it's still a puzzle.\n\n> 1. Even more, when I send a kill() low-level syscall using e.g. Perl - perl\n> -e 'kill(\"INT\", 16107)' - it is the same.\n> 2. If any other program but psql is used (e.g. vim or my custom Perl script\n> which ignores SIGINT), the effect is not reproducible.\n\n> Can it be e.g. readline? Or something related to tty or session settings\n> which psql could modify (I did not find any in the source code though).\n\nOK, I tried dtruss'ing psql on macOS.  What I see is that with\nApple's libedit, the response to SIGINT includes this:\n\nkill(0, 2)               = 0 0\n\nthat is, \"SIGINT my whole process group\".  If I build with libreadline\nfrom MacPorts, I just see\n\nkill(30902, 2)           = 0 0\n\n(30902 being the process's own PID).  
I'm not real sure why either\nlibrary finds it necessary to re-signal the process --- maybe they\ntrap SIGINT and then try to hide that by reinstalling the app's\nnormal SIGINT handler and re-delivering the signal.  But anyway,\nlibedit seems to be vastly exceeding its authority here.  If\nsignaling the whole process group were wanted, it would have been\nthe responsibility of the original signaller to do that.\n\n                        regards, tom lane", "msg_date": "Sat, 13 Apr 2024 17:01:57 -0700", "msg_from": "Dmitry Koterov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: In MacOS,\n psql reacts on SIGINT in a strange fashion (Linux is fine)" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> On Sun, Apr 14, 2024 at 11:49 AM Tom Lane <[email protected]> wrote:\n>> OK, I tried dtruss'ing psql on macOS. What I see is that with\n>> Apple's libedit, the response to SIGINT includes this:\n>> kill(0, 2) = 0 0\n\n> https://github.com/NetBSD/src/blob/1de18f216411bce77e26740327b0782976a89965/lib/libedit/sig.c#L110\n\nAh, I was wondering if that was from upstream libedit or was an\nApple-ism. Somebody should file a bug.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 13 Apr 2024 20:09:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: In MacOS,\n psql reacts on SIGINT in a strange fashion (Linux is fine)" }, { "msg_contents": "Thanks to everyone!\n\nI will file a bug.\n\nAnyways, I just built a tool to work-around it all. It allows to run psql\nfrom processes which don't handle SIGINT in a proper shell-like manner\n(like yarn for instance): https://github.com/dimikot/run-in-separate-pgrp\n\nBasically, without this, an attempt to run `yarn psql` and then pressing ^C\nkills yarn (it kills it in Linux too, since ^C propagates SIGINT to all\nprocesses in the foreground group), because yarn doesn't behave nicely with\nexit codes (and exit signals) of its child processes. With\nrun-in-separate-pgrp wrapping, ^C is only delivered to psql.\n\n\nOn Sat, Apr 13, 2024 at 5:09 PM Tom Lane <[email protected]> wrote:\n\n> Thomas Munro <[email protected]> writes:\n> > On Sun, Apr 14, 2024 at 11:49 AM Tom Lane <[email protected]> wrote:\n> >> OK, I tried dtruss'ing psql on macOS. What I see is that with\n> >> Apple's libedit, the response to SIGINT includes this:\n> >> kill(0, 2) = 0 0\n>\n> >\n> https://github.com/NetBSD/src/blob/1de18f216411bce77e26740327b0782976a89965/lib/libedit/sig.c#L110\n>\n> Ah, I was wondering if that was from upstream libedit or was an\n> Apple-ism. Somebody should file a bug.\n>\n> regards, tom lane\n>\n\nThanks to everyone!I will file a bug.Anyways, I just built a tool to work-around it all. It allows to run psql from processes which don't handle SIGINT in a proper shell-like manner (like yarn for instance): https://github.com/dimikot/run-in-separate-pgrpBasically, without this, an attempt to run `yarn psql` and then pressing ^C kills yarn (it kills it in Linux too, since ^C propagates SIGINT to all processes in the foreground group), because yarn doesn't behave nicely with exit codes (and exit signals) of its child processes. With run-in-separate-pgrp wrapping, ^C is only delivered to psql.On Sat, Apr 13, 2024 at 5:09 PM Tom Lane <[email protected]> wrote:Thomas Munro <[email protected]> writes:\n> On Sun, Apr 14, 2024 at 11:49 AM Tom Lane <[email protected]> wrote:\n>> OK, I tried dtruss'ing psql on macOS.  
What I see is that with\n>> Apple's libedit, the response to SIGINT includes this:\n>> kill(0, 2)               = 0 0\n\n> https://github.com/NetBSD/src/blob/1de18f216411bce77e26740327b0782976a89965/lib/libedit/sig.c#L110\n\nAh, I was wondering if that was from upstream libedit or was an\nApple-ism.  Somebody should file a bug.\n\n                        regards, tom lane", "msg_date": "Sat, 13 Apr 2024 19:05:59 -0700", "msg_from": "Dmitry Koterov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: In MacOS,\n psql reacts on SIGINT in a strange fashion (Linux is fine)" } ]
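A minimal sketch of the separate-process-group idea behind the workaround mentioned above, assuming only POSIX fork()/setpgid()/execvp(); it is not the code of the run-in-separate-pgrp tool, and every name in it is illustrative. The real tool also has to decide which process group owns the terminal (e.g. with tcsetpgrp()) so that ^C typed at the terminal still reaches psql; that part is omitted here.

/*
 * Run a command (e.g. psql) in its own process group, so that a
 * kill(0, SIGINT) issued from inside it -- as Apple's libedit does --
 * cannot reach the wrapper's ancestors.  Sketch only: minimal error
 * handling, illustrative names.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int
main(int argc, char **argv)
{
	pid_t		child;
	int			status;

	if (argc < 2)
	{
		fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
		return 1;
	}

	child = fork();
	if (child < 0)
	{
		perror("fork");
		return 1;
	}
	if (child == 0)
	{
		/* Child: move into a fresh process group before exec'ing. */
		if (setpgid(0, 0) == -1)
			perror("setpgid");
		execvp(argv[1], &argv[1]);
		perror("execvp");
		_exit(127);
	}

	/* Parent: set the child's pgid as well, to close the race with exec(). */
	(void) setpgid(child, child);

	if (waitpid(child, &status, 0) < 0)
	{
		perror("waitpid");
		return 1;
	}
	return WIFEXITED(status) ? WEXITSTATUS(status) : 128 + WTERMSIG(status);
}

With this arrangement a process-group-wide SIGINT raised from inside psql stops at psql's own group, so parents such as yarn or the Perl wrappers shown earlier in the thread never see it.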
[ { "msg_contents": "Hi,\n\nIt looks like there's missing ConditionVariableCancelSleep() in\nInvalidatePossiblyObsoleteSlot() after waiting for the replication\nslot to be released by another process. Although prepare to sleep\ncancels the sleep if the previous one wasn't canceled, the calling\nprocess still remains in the CV's wait queue for the last cycle after\nthe replication slot holding process releases the slot. I'm wondering\nwhy there's no issue detected if the calling process isn't removed\nfrom the CV's wait queue. It may be due to the cancel sleep call in\nthe sigsetjmp() path for almost every process. IMO, it's a clean\npractice to cancel the sleep immediately after the sleep ends like\nelsewhere.\n\nPlease find the attached patch to fix the issue. Alternatively, we can\njust add ConditionVariableCancelSleep() outside of the for loop to\ncancel the sleep of the last cycle if any. This also looks correct\nbecause every prepare to sleep does ensure the previous sleep is\ncanceled, and if no sleep at all, the cacnel sleep call exits.\n\nPS: I've encountered an assertion failure [1] some time back\nsporadically in t/001_rep_changes.pl which I've reported in\nhttps://www.postgresql.org/message-id/CALj2ACXO6TJ4rhc3Uz3MWJGob9e4H1C71qufH-DGKJh8k4QGZA%40mail.gmail.com.\nI'm not so sure if this patch fixes the assertion failure. I ran the\ntests for 100 times [2] on my dev system, I didn't see any issue with\nthe patch.\n\n[1]\nt/001_rep_changes.pl\n\n2024-01-31 12:24:38.474 UTC [840166]\npg_16435_sync_16393_7330237333761601891 STATEMENT:\nDROP_REPLICATION_SLOT pg_16435_sync_16393_7330237333761601891 WAIT\nTRAP: failed Assert(\"list->head != INVALID_PGPROCNO\"), File:\n\"../../../../src/include/storage/proclist.h\", Line: 101, PID: 840166\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(ExceptionalCondition+0xbb)[0x55c8edf6b8f9]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(+0x6637de)[0x55c8edd517de]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(ConditionVariablePrepareToSleep+0x85)[0x55c8edd51b91]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(ReplicationSlotAcquire+0x142)[0x55c8edcead6b]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(ReplicationSlotDrop+0x51)[0x55c8edceb47f]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(+0x60da71)[0x55c8edcfba71]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(exec_replication_command+0x47e)[0x55c8edcfc96a]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(PostgresMain+0x7df)[0x55c8edd7d644]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(+0x5ab50c)[0x55c8edc9950c]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(+0x5aab21)[0x55c8edc98b21]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(+0x5a70de)[0x55c8edc950de]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(PostmasterMain+0x1534)[0x55c8edc949db]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(+0x459c47)[0x55c8edb47c47]\n/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f19fe629d90]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f19fe629e40]\npostgres: publisher: walsender ubuntu postgres [local]\nDROP_REPLICATION_SLOT(_start+0x25)[0x55c8ed7c4565]\n2024-01-31 12:24:38.476 UTC 
[840168]\npg_16435_sync_16390_7330237333761601891 LOG: statement: SELECT\na.attnum, a.attname, a.atttypid, a.attnum =\nANY(i.indkey) FROM pg_catalog.pg_attribute a LEFT JOIN\npg_catalog.pg_index i ON (i.indexrelid =\npg_get_replica_identity_index(16391)) WHERE a.attnum >\n0::pg_catalog.int2 AND NOT a.attisdropped AND a.attgenerated = ''\nAND a.attrelid = 16391 ORDER BY a.attnum\n\n[2] for i in {1..100}; do make check\nPROVE_TESTS=\"t/001_rep_changes.pl\"; if [ $? -ne 0 ]; then echo \"The\ncommand failed on iteration $i\"; break; fi; done\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 13 Apr 2024 14:34:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Add missing ConditionVariableCancelSleep() in slot.c" }, { "msg_contents": "On 2024-Apr-13, Bharath Rupireddy wrote:\n\n> It looks like there's missing ConditionVariableCancelSleep() in\n> InvalidatePossiblyObsoleteSlot() after waiting for the replication\n> slot to be released by another process. Although prepare to sleep\n> cancels the sleep if the previous one wasn't canceled, the calling\n> process still remains in the CV's wait queue for the last cycle after\n> the replication slot holding process releases the slot. I'm wondering\n> why there's no issue detected if the calling process isn't removed\n> from the CV's wait queue. It may be due to the cancel sleep call in\n> the sigsetjmp() path for almost every process. IMO, it's a clean\n> practice to cancel the sleep immediately after the sleep ends like\n> elsewhere.\n\nHmm, but shouldn't we cancel the sleep after we have completed sleeping\naltogether, that is, until we've determined that we're no longer to\nsleep waiting for this slot? That would suggest to put the call to\ncancel sleep after the for(;;) loop is complete, rather than immediately\nafter sleeping. No?\n\ndiff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c\nindex cebf44bb0f..59e9bef642 100644\n--- a/src/backend/replication/slot.c\n+++ b/src/backend/replication/slot.c\n@@ -1756,6 +1756,8 @@ InvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,\n \t\t}\n \t}\n \n+\tConditionVariableCancelSleep();\n+\n \tAssert(released_lock == !LWLockHeldByMe(ReplicationSlotControlLock));\n \n \treturn released_lock;\n\n\nHowever, I noticed that ConditionVariablePrepareToSleep() does a\nCancelSleep upon being called ... so what I suggest would not have any\neffect whatsoever, because the sleep would be cancelled next time\nthrough the loop anyway. But shouldn't we also modify PrepareToSleep to\nexit without doing anything if our current sleep CV is the same one\nbeing newly installed?\n\ndiff --git a/src/backend/storage/lmgr/condition_variable.c b/src/backend/storage/lmgr/condition_variable.c\nindex 112a518bae..65811ff989 100644\n--- a/src/backend/storage/lmgr/condition_variable.c\n+++ b/src/backend/storage/lmgr/condition_variable.c\n@@ -57,6 +57,14 @@ ConditionVariablePrepareToSleep(ConditionVariable *cv)\n {\n \tint\t\t\tpgprocno = MyProcNumber;\n \n+\t/*\n+\t * If we're preparing to sleep on the same CV we were already going to\n+\t * sleep on, we don't actually need to do anything. 
This may seem like\n+\t * supporting sloppy coding, which is what it actually does, so ¯\\_(ツ)_/¯\n+\t */\n+\tif (cv_sleep_target == cv)\n+\t\treturn;\n+\n \t/*\n \t * If some other sleep is already prepared, cancel it; this is necessary\n \t * because we have just one static variable tracking the prepared sleep,\n\nAlternatively, maybe we need to not prepare-to-sleep in\nInvalidatePossiblyObsoleteSlot() if we have already prepared to sleep in\na previous iteration through the loop (and of course, don't cancel the\nsleep until we're out of the loop).\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nTom: There seems to be something broken here.\nTeodor: I'm in sackcloth and ashes... Fixed.\n http://postgr.es/m/[email protected]\n\n\n", "msg_date": "Sat, 13 Apr 2024 13:07:14 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add missing ConditionVariableCancelSleep() in slot.c" }, { "msg_contents": "On Sat, Apr 13, 2024 at 4:37 PM Alvaro Herrera <[email protected]> wrote:\n>\n> Hmm, but shouldn't we cancel the sleep after we have completed sleeping\n> altogether, that is, until we've determined that we're no longer to\n> sleep waiting for this slot? That would suggest to put the call to\n> cancel sleep after the for(;;) loop is complete, rather than immediately\n> after sleeping. No?\n>\n> diff --git a/src/backend/replication/slot.c b/src/backend/replication/slot.c\n> index cebf44bb0f..59e9bef642 100644\n> --- a/src/backend/replication/slot.c\n> +++ b/src/backend/replication/slot.c\n> @@ -1756,6 +1756,8 @@ InvalidatePossiblyObsoleteSlot(ReplicationSlotInvalidationCause cause,\n> }\n> }\n>\n> + ConditionVariableCancelSleep();\n> +\n> Assert(released_lock == !LWLockHeldByMe(ReplicationSlotControlLock));\n>\n> return released_lock;\n\nWe can do that and +1 for it since the prepare to sleep cancel the\nprevious one anyway. I mentioned that approach in the original email:\n\n>> Alternatively, we can\n>> just add ConditionVariableCancelSleep() outside of the for loop to\n>> cancel the sleep of the last cycle if any. This also looks correct\n>> because every prepare to sleep does ensure the previous sleep is\n>> canceled, and if no sleep at all, the cacnel sleep call exits.\n\n> However, I noticed that ConditionVariablePrepareToSleep() does a\n> CancelSleep upon being called ... so what I suggest would not have any\n> effect whatsoever, because the sleep would be cancelled next time\n> through the loop anyway.\n\nBut what if we break from the loop and never come back? We have to\nwait until the sigsetjmp/exit path of the backend to hit and cancel\nthe sleep.\n\n> But shouldn't we also modify PrepareToSleep to\n> exit without doing anything if our current sleep CV is the same one\n> being newly installed?\n>\n> diff --git a/src/backend/storage/lmgr/condition_variable.c b/src/backend/storage/lmgr/condition_variable.c\n> index 112a518bae..65811ff989 100644\n> --- a/src/backend/storage/lmgr/condition_variable.c\n> +++ b/src/backend/storage/lmgr/condition_variable.c\n> @@ -57,6 +57,14 @@ ConditionVariablePrepareToSleep(ConditionVariable *cv)\n> {\n> int pgprocno = MyProcNumber;\n>\n> + /*\n> + * If we're preparing to sleep on the same CV we were already going to\n> + * sleep on, we don't actually need to do anything. 
This may seem like\n> + * supporting sloppy coding, which is what it actually does, so ¯\\_(ツ)_/¯\n> + */\n> + if (cv_sleep_target == cv)\n> + return;\n> +\n> /*\n> * If some other sleep is already prepared, cancel it; this is necessary\n> * because we have just one static variable tracking the prepared sleep,\n\nThat seems to work as a quick exit path avoiding spin lock acquisition\nand release if the CV is already prepared to sleep. Specifically in\nthe InvalidatePossiblyObsoleteSlot for loop, it can avoid a bunch of\nspin lock acquisitions and releases if we ever sleep on the same\nslot's CV. However, I'm not sure if it will have any other issues.\n\nBTW, I like the emoji \"¯\\_(ツ)_/¯\" in the code comments :).\n\n> Alternatively, maybe we need to not prepare-to-sleep in\n> InvalidatePossiblyObsoleteSlot() if we have already prepared to sleep in\n> a previous iteration through the loop (and of course, don't cancel the\n> sleep until we're out of the loop).\n\nI think this looks complicated. To keep things simple, I prefer to add\nthe ConditionVariableCancelSleep() out of the for loop in\nInvalidatePossiblyObsoleteSlot().\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 25 Apr 2024 12:29:18 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add missing ConditionVariableCancelSleep() in slot.c" } ]
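A hedged sketch of the wait-loop shape under discussion, with the proposed ConditionVariableCancelSleep() placed after the loop. This is simplified and is not the real InvalidatePossiblyObsoleteSlot() code; the active_pid/active_cv field names, the spinlock usage and the wait-event constant are recalled from slot.h and may not match the tree exactly.

/*
 * Wait until nobody holds the slot, then stop.  PrepareToSleep is called
 * before each recheck so a wakeup cannot be missed; it also cancels any
 * previously prepared sleep, so repeating it per iteration is safe (if a
 * bit wasteful, which is what the cv_sleep_target == cv early-exit idea
 * above would address).
 */
static void
wait_for_slot_release_sketch(ReplicationSlot *slot)
{
	for (;;)
	{
		bool		released;

		ConditionVariablePrepareToSleep(&slot->active_cv);

		SpinLockAcquire(&slot->mutex);
		released = (slot->active_pid == 0);
		SpinLockRelease(&slot->mutex);

		if (released)
			break;

		ConditionVariableSleep(&slot->active_cv,
							   WAIT_EVENT_REPLICATION_SLOT_DROP);
	}

	/*
	 * Proposed fix: without this, the backend leaves the loop while still
	 * enqueued on the CV's wait list and stays there until the next
	 * PrepareToSleep or the sigsetjmp()/exit cleanup path runs.
	 */
	ConditionVariableCancelSleep();
}

The prepare/recheck/sleep ordering is what makes the pattern race-free; the open questions in the thread are only where the final CancelSleep belongs and whether PrepareToSleep could return early when re-invoked for the same CV.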
[ { "msg_contents": "Hi,\n\nIt looks like there's an unnecessary segment number calculation after\nInvalidateObsoleteReplicationSlots in CreateCheckPoint and\nCreateRestartPoint. Since none of RedoRecPtr, _logSegNo and\nwal_segment_size are changed by the slot invalidation code [1], the\nrecalculation of\n_logSegNo with XLByteToSeg seems unnecessary.\n\nI've attached a patch to fix this.\n\n[1] Assertions like the following won't fail with make check-world\nproving InvalidateObsoleteReplicationSlots doesn't change them at all.\n\n XLByteToSeg(RedoRecPtr, _logSegNo, wal_segment_size);\n KeepLogSeg(recptr, &_logSegNo);\n+ _logSegNo_saved = _logSegNo;\n+ RedoRecPtr_saved = RedoRecPtr;\n if (InvalidateObsoleteReplicationSlots(RS_INVAL_WAL_REMOVED,\n\n _logSegNo, InvalidOid,\n\n InvalidTransactionId))\n {\n+ Assert(_logSegNo_saved == _logSegNo);\n+ Assert(RedoRecPtr_saved == RedoRecPtr);\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 13 Apr 2024 14:40:00 +0530", "msg_from": "Bharath Rupireddy <[email protected]>", "msg_from_op": true, "msg_subject": "Remove unnecessary segment number calculation after wal_removed\n invalidation of replication slots" } ]
[ { "msg_contents": "Hi,\n\nI would like to propose a small patch to address an annoying issue with\nthe way how PostgreSQL does fallback in case if \"huge_pages = try\" is\nset. Here is how the problem looks like:\n\n* PostgreSQL is starting on a machine with some huge pages available\n\n* It tries to identify that fact and does mmap with MAP_HUGETLB, which\n succeeds\n\n* But it has a pleasure to run inside a cgroup with a hugetlb\n controller and limits set to 0 (or anything less than PostgreSQL\n needs)\n\n* Under this circumstances PostgreSQL will proceed allocating huge\n pages, but the first page fault will trigger SIGBUS\n\nI've sketched out how to reproduce it with cgroup v1 and v2 in the\nattached scripts.\n\nThis sounds like quite a rare combination of factors, but apparently\nit's fairly easy to face this on K8s/OpenShift. There was a bug reported\nsome time ago [1] about this behaviour, and back then I was under the\nimpression it's a solved matter with nothing to do. Yet I still observe\nthis type of issues, the latest one not longer than a week ago.\n\nAfter some research I found what looks to me like a relatively simple\nway to address the problem. In Linux kernel 5.14 a new flag to madvise\nwas introduced that might be just what we need here. It's called\nMADV_POPULATE_READ [2] and it tells kernel to populate page tables by\ntriggering read faults if required. One by-design feature of this flag\nis to fail the madvise call in the situations like one above, giving an\nopportunity to avoid SIGBUS.\n\nI've outlined a patch to implement this approach and tested it on a\nnewish Linux kernel I've got lying around (6.9.0-rc1) -- no SIGBUS,\nPostgreSQL does fallback to not use huge pages. The resulting change\nseems to be small enough to justify addressing this small but annoying\nissue. Any thoughts or commentaries about the proposal?\n\n[1]: https://www.postgresql.org/message-id/flat/HE1PR0701MB256920EEAA3B2A9C06249F339E110%40HE1PR0701MB2569.eurprd07.prod.outlook.com\n[2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4ca9b3859dac14bbef0c27d00667bb5b10917adb", "msg_date": "Sat, 13 Apr 2024 18:22:55 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": true, "msg_subject": "Identify huge pages accessibility using madvise" }, { "msg_contents": "Hi Dmitry,\n\nI've been attempting to replicate this issue directly in Kubernetes, but I\nhaven't been successful so far. I've been using EKS nodes, and it seems\nthat they all run cgroup v2 now. Do you have anything that could help me\nget started on this more quickly?\n\nThanks,\nGabriele\n\nOn Sat, 13 Apr 2024 at 18:24, Dmitry Dolgov <[email protected]> wrote:\n\n> Hi,\n>\n> I would like to propose a small patch to address an annoying issue with\n> the way how PostgreSQL does fallback in case if \"huge_pages = try\" is\n> set. 
Here is how the problem looks like:\n>\n> * PostgreSQL is starting on a machine with some huge pages available\n>\n> * It tries to identify that fact and does mmap with MAP_HUGETLB, which\n> succeeds\n>\n> * But it has a pleasure to run inside a cgroup with a hugetlb\n> controller and limits set to 0 (or anything less than PostgreSQL\n> needs)\n>\n> * Under this circumstances PostgreSQL will proceed allocating huge\n> pages, but the first page fault will trigger SIGBUS\n>\n> I've sketched out how to reproduce it with cgroup v1 and v2 in the\n> attached scripts.\n>\n> This sounds like quite a rare combination of factors, but apparently\n> it's fairly easy to face this on K8s/OpenShift. There was a bug reported\n> some time ago [1] about this behaviour, and back then I was under the\n> impression it's a solved matter with nothing to do. Yet I still observe\n> this type of issues, the latest one not longer than a week ago.\n>\n> After some research I found what looks to me like a relatively simple\n> way to address the problem. In Linux kernel 5.14 a new flag to madvise\n> was introduced that might be just what we need here. It's called\n> MADV_POPULATE_READ [2] and it tells kernel to populate page tables by\n> triggering read faults if required. One by-design feature of this flag\n> is to fail the madvise call in the situations like one above, giving an\n> opportunity to avoid SIGBUS.\n>\n> I've outlined a patch to implement this approach and tested it on a\n> newish Linux kernel I've got lying around (6.9.0-rc1) -- no SIGBUS,\n> PostgreSQL does fallback to not use huge pages. The resulting change\n> seems to be small enough to justify addressing this small but annoying\n> issue. Any thoughts or commentaries about the proposal?\n>\n> [1]:\n> https://www.postgresql.org/message-id/flat/HE1PR0701MB256920EEAA3B2A9C06249F339E110%40HE1PR0701MB2569.eurprd07.prod.outlook.com\n> [2]:\n> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4ca9b3859dac14bbef0c27d00667bb5b10917adb\n>\n\n\n-- \nGabriele Bartolini\nVP, Chief Architect, Kubernetes\nenterprisedb.com\n\nHi Dmitry,I've been attempting to replicate this issue directly in Kubernetes, but I haven't been successful so far. I've been using EKS nodes, and it seems that they all run cgroup v2 now. Do you have anything that could help me get started on this more quickly?Thanks,GabrieleOn Sat, 13 Apr 2024 at 18:24, Dmitry Dolgov <[email protected]> wrote:Hi,\n\nI would like to propose a small patch to address an annoying issue with\nthe way how PostgreSQL does fallback in case if \"huge_pages = try\" is\nset. Here is how the problem looks like:\n\n* PostgreSQL is starting on a machine with some huge pages available\n\n* It tries to identify that fact and does mmap with MAP_HUGETLB, which\n  succeeds\n\n* But it has a pleasure to run inside a cgroup with a hugetlb\n  controller and limits set to 0 (or anything less than PostgreSQL\n  needs)\n\n* Under this circumstances PostgreSQL will proceed allocating huge\n  pages, but the first page fault will trigger SIGBUS\n\nI've sketched out how to reproduce it with cgroup v1 and v2 in the\nattached scripts.\n\nThis sounds like quite a rare combination of factors, but apparently\nit's fairly easy to face this on K8s/OpenShift. There was a bug reported\nsome time ago [1] about this behaviour, and back then I was under the\nimpression it's a solved matter with nothing to do. 
Yet I still observe\nthis type of issues, the latest one not longer than a week ago.\n\nAfter some research I found what looks to me like a relatively simple\nway to address the problem. In Linux kernel 5.14 a new flag to madvise\nwas introduced that might be just what we need here. It's called\nMADV_POPULATE_READ [2] and it tells kernel to populate page tables by\ntriggering read faults if required. One by-design feature of this flag\nis to fail the madvise call in the situations like one above, giving an\nopportunity to avoid SIGBUS.\n\nI've outlined a patch to implement this approach and tested it on a\nnewish Linux kernel I've got lying around (6.9.0-rc1) -- no SIGBUS,\nPostgreSQL does fallback to not use huge pages. The resulting change\nseems to be small enough to justify addressing this small but annoying\nissue. Any thoughts or commentaries about the proposal?\n\n[1]: https://www.postgresql.org/message-id/flat/HE1PR0701MB256920EEAA3B2A9C06249F339E110%40HE1PR0701MB2569.eurprd07.prod.outlook.com\n[2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4ca9b3859dac14bbef0c27d00667bb5b10917adb\n-- Gabriele BartoliniVP, Chief Architect, Kubernetesenterprisedb.com", "msg_date": "Thu, 26 Sep 2024 07:57:12 +0200", "msg_from": "Gabriele Bartolini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identify huge pages accessibility using madvise" }, { "msg_contents": "> On Thu, Sep 26, 2024 at 07:57:12AM GMT, Gabriele Bartolini wrote:\n> Hi Dmitry,\n>\n> I've been attempting to replicate this issue directly in Kubernetes, but I\n> haven't been successful so far. I've been using EKS nodes, and it seems\n> that they all run cgroup v2 now. Do you have anything that could help me\n> get started on this more quickly?\n>\n> Thanks,\n> Gabriele\n\nHi Gabriele,\n\nThanks for testing. I can check if I can get some EKS clusters to\nexperiment with. In the meantime, what about the reproducing script for\ncgroup v2 (the plain one that I've attached with the patch, that doesn't\nrequire any k8s cluster), doesn't it work for you?\n\n\n", "msg_date": "Thu, 26 Sep 2024 08:46:17 +0200", "msg_from": "Dmitry Dolgov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Identify huge pages accessibility using madvise" } ]
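A minimal sketch of the probing approach described in this thread, not the actual patch: map the segment with MAP_HUGETLB, ask the kernel to populate it with MADV_POPULATE_READ (Linux 5.14+), and fall back to regular pages if that fails -- e.g. because a hugetlb cgroup limit of 0 would otherwise turn the first page fault into SIGBUS. The helper name and the MAP_SHARED | MAP_ANONYMOUS flag choice are illustrative, and the caller is assumed to have rounded size up to the huge page size already.

#include <stddef.h>
#include <sys/mman.h>

static void *
map_shmem_with_fallback(size_t size)
{
	void	   *ptr;

	ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
			   MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

#ifdef MADV_POPULATE_READ
	if (ptr != MAP_FAILED && madvise(ptr, size, MADV_POPULATE_READ) != 0)
	{
		/*
		 * The mapping exists but its huge pages cannot actually be faulted
		 * in (hugetlb cgroup limit, reservation exhausted, ...); without
		 * this check the first touch would raise SIGBUS.  Undo and fall
		 * back instead.
		 */
		munmap(ptr, size);
		ptr = MAP_FAILED;
	}
#endif

	if (ptr == MAP_FAILED)
	{
		/* Fall back to regular pages, mirroring huge_pages = try. */
		ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
				   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	}

	return (ptr == MAP_FAILED) ? NULL : ptr;
}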
[ { "msg_contents": "Dear pgsql hackers,\n\nI found a mixture of fileno(stderr) and STDERR_FILENO in PostgreSQL source\ncode which in most cases mean the same. But I have found recently that\nMicrosoft's fileno() _fileno() can return -2 sometimes. After some\nexperiments I found that applications running as windows service have\nproblems with stderr. I.e. fileno(stderr) returns -2 (negative two) in\nwindows service mode. That causes some issues with the logging collector.\nMeanwhile the value of STDERR_FILENO always equals 2 and does not depend on\napplication mode because it is a macro.\n\nI wonder if there are hidden advantages of using fileno(stderr) ?? Should I\nuse only \"fileno(stderr)\" or using STDERR_FILENO is acceptable too ?? Are\nthere cases when I should not use STDERR_FILENO ??\n\nSincerely,\nDmitry\n\n\nAdditional references\n1. BUG #18400\n2.\nhttps://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/fileno?view=msvc-170\nQuote: \"If stdout or stderr is not associated with an output stream (for\nexample, in a Windows application without a console window), the file\ndescriptor returned is -2. ...\"\n\nDear pgsql hackers,I found a mixture of fileno(stderr) and STDERR_FILENO in PostgreSQL source code which in most cases mean the same. But I have found recently that Microsoft's fileno() _fileno() can return -2 sometimes. After some experiments I found that applications running as windows service have problems with stderr. I.e. fileno(stderr) returns -2 (negative two) in windows service mode. That causes some issues with the logging collector. Meanwhile the value of STDERR_FILENO always equals 2 and does not depend on application mode because it is a macro.I wonder if there are hidden advantages of using fileno(stderr) ?? Should I use  only \"fileno(stderr)\" or using STDERR_FILENO is acceptable too ?? Are there cases when I should not use STDERR_FILENO  ??Sincerely,DmitryAdditional references1. BUG #184002. https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/fileno?view=msvc-170Quote: \"If stdout or stderr is not associated with an output stream (for example, in a Windows application without a console window), the file descriptor returned is -2. ...\"", "msg_date": "Sun, 14 Apr 2024 02:58:21 +0400", "msg_from": "\"Dima Rybakov (Tlt)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Why should I use fileno(stderr) instead of STDERR_FILENO" } ]
[ { "msg_contents": "When looking at [1], I noticed that we don't have a prosupport\nfunction for the timestamp version of generate_series.\n\nWe have this for the integer versions of generate_series(), per:\n\npostgres=# explain analyze select * from generate_series(1, 256, 2);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n Function Scan on generate_series (cost=0.00..1.28 rows=128 width=4)\n(actual time=0.142..0.183 rows=128 loops=1)\n\nThe timestamp version just gives the default 1000 row estimate:\n\npostgres=# explain analyze select * from generate_series('2024-01-01',\n'2025-01-01', interval '1 day');\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n Function Scan on generate_series (cost=0.00..10.00 rows=1000\nwidth=8) (actual time=0.604..0.718 rows=367 loops=1)\n\nI had some spare time today, so wrote a patch, which gives you:\n\npostgres=# explain analyze select * from generate_series('2024-01-01',\n'2025-01-01', interval '1 day');\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n Function Scan on generate_series (cost=0.00..3.67 rows=367 width=8)\n(actual time=0.258..0.291 rows=367 loops=1)\n\nThis required a bit of hackery to not have timestamp_mi() error out in\nthe planner when the timestamp difference calculation overflows. I\nconsidered adding ereturn support to fix that, but that felt like\nopening Pandora's box. Instead, I added some pre-checks similar to\nwhat's in timestamp_mi() to have the support function fall back on the\n1000 row estimate when there will be an overflow.\n\nAlso, there's no interval_div, so the patch has a macro that converts\ninterval to microseconds and does floating point division. I think\nthat's good enough for row estimations.\n\nI'll park this here until July CF.\n\n(I understand this doesn't help the case in [1] as the generate_series\ninputs are not const there)\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAMPYKo0FouB-HZ1k-_Ur2v%2BkK71q0T5icQGrp%2BSPbQJGq0H2Rw%40mail.gmail.com", "msg_date": "Sun, 14 Apr 2024 15:14:46 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "SupportRequestRows support function for generate_series_timestamptz" }, { "msg_contents": "On Sun, 14 Apr 2024 at 15:14, David Rowley <[email protected]> wrote:\n> I had some spare time today, so wrote a patch, which gives you:\n>\n> postgres=# explain analyze select * from generate_series('2024-01-01',\n> '2025-01-01', interval '1 day');\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------\n> Function Scan on generate_series (cost=0.00..3.67 rows=367 width=8)\n> (actual time=0.258..0.291 rows=367 loops=1)\n\nHere's v2 of the patch with some added regression tests.\n\nI did this by writing a plpgsql function named explain_mask_costs()\nwhich has various boolean parameters to mask out the various portions\nof the costs. I wondered if this function should live somewhere else\nas it seems applicable to more than just misc_functions.sql. Maybe\ntest_setup.sql. I'll leave where it is for now unless anyone thinks\ndifferently.\n\nThis is a fairly simple and seemingly non-controversial patch. 
I plan\nto push it in the next few days unless there's some feedback before\nthen.\n\nDavid", "msg_date": "Mon, 8 Jul 2024 13:28:10 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SupportRequestRows support function for\n generate_series_timestamptz" }, { "msg_contents": "looks good to me.\n\nsome minor questions:\n/*\n* Protect against overflows in timestamp_mi. XXX convert to\n* ereturn one day?\n*/\nif (!TIMESTAMP_NOT_FINITE(start) && !TIMESTAMP_NOT_FINITE(finish) &&\n!pg_sub_s64_overflow(finish, start, &dummy))\n\ni don't understand the comment \"XXX convert to ereturn one day?\".\n\ndo we need to add unlikely for \"pg_sub_s64_overflow\", i saw most of\npg_sub_s64_overflow have unlikely.\n\n\n", "msg_date": "Mon, 8 Jul 2024 10:50:02 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SupportRequestRows support function for\n generate_series_timestamptz" }, { "msg_contents": "On Mon, 8 Jul 2024 at 14:50, jian he <[email protected]> wrote:\n>\n> looks good to me.\n\nThanks for looking.\n\n> /*\n> * Protect against overflows in timestamp_mi. XXX convert to\n> * ereturn one day?\n> */\n> if (!TIMESTAMP_NOT_FINITE(start) && !TIMESTAMP_NOT_FINITE(finish) &&\n> !pg_sub_s64_overflow(finish, start, &dummy))\n>\n> i don't understand the comment \"XXX convert to ereturn one day?\".\n\nThe problem I'm trying to work around there is that timestamp_mi\nraises an ERROR if there's an overflow. I don't want the support\nfunction to cause an ERROR so I'm trying to only call timestamp_mi in\ncases where it won't error. The ereturn mention is a reference to\nERROR raising infrastructure added by d9f7f5d32 and so far only used\nby input functions. It would be possible to use that to save from\nhaving to do the pg_sub_s64_overflow(). Instead, we could check if\nany errors were found and only proceed with the remaining part of the\ncalculation if none were found.\n\nI've tried to improve the comment in the attached version. I removed\nthe reference to ereturn.\n\n> do we need to add unlikely for \"pg_sub_s64_overflow\", i saw most of\n> pg_sub_s64_overflow have unlikely.\n\nI was hoping the condition would be likely() rather than unlikely().\nHowever, I didn't consider that the code path was hot enough for it to\nmatter. It's just a function we call once during planning if we find a\ncall to generate_series_timestamp(). It's not like it's called a\nmillion or a billion times during execution like a function such as\nint4pl() could be.\n\nDavid", "msg_date": "Mon, 8 Jul 2024 16:02:09 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SupportRequestRows support function for\n generate_series_timestamptz" }, { "msg_contents": "On Mon, Jul 8, 2024 at 12:02 PM David Rowley <[email protected]> wrote:\n>\n> > /*\n> > * Protect against overflows in timestamp_mi. XXX convert to\n> > * ereturn one day?\n> > */\n> > if (!TIMESTAMP_NOT_FINITE(start) && !TIMESTAMP_NOT_FINITE(finish) &&\n> > !pg_sub_s64_overflow(finish, start, &dummy))\n> >\n> > i don't understand the comment \"XXX convert to ereturn one day?\".\n>\n> The problem I'm trying to work around there is that timestamp_mi\n> raises an ERROR if there's an overflow. I don't want the support\n> function to cause an ERROR so I'm trying to only call timestamp_mi in\n> cases where it won't error. The ereturn mention is a reference to\n> ERROR raising infrastructure added by d9f7f5d32 and so far only used\n> by input functions. 
It would be possible to use that to save from\n> having to do the pg_sub_s64_overflow(). Instead, we could check if\n> any errors were found and only proceed with the remaining part of the\n> calculation if none were found.\n>\n> I've tried to improve the comment in the attached version. I removed\n> the reference to ereturn.\n>\ngot it.\n\n{ oid => '2031',\n proname => 'timestamp_mi', prorettype => 'interval',\n proargtypes => 'timestamp timestamp', prosrc => 'timestamp_mi' },\n{ oid => '1188',\n proname => 'timestamptz_mi', prorettype => 'interval',\n proargtypes => 'timestamptz timestamptz', prosrc => 'timestamp_mi' },\n\nso this also apply to\n\n{ oid => '938', descr => 'non-persistent series generator',\n proname => 'generate_series', prorows => '1000', proretset => 't',\n prorettype => 'timestamp', proargtypes => 'timestamp timestamp interval',\n prosrc => 'generate_series_timestamp' },\n\nIf so, then we need to update src/include/catalog/pg_proc.dat also?\n\n\n", "msg_date": "Mon, 8 Jul 2024 12:43:45 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SupportRequestRows support function for\n generate_series_timestamptz" }, { "msg_contents": "On Mon, 8 Jul 2024 at 16:43, jian he <[email protected]> wrote:\n> { oid => '2031',\n> proname => 'timestamp_mi', prorettype => 'interval',\n> proargtypes => 'timestamp timestamp', prosrc => 'timestamp_mi' },\n> { oid => '1188',\n> proname => 'timestamptz_mi', prorettype => 'interval',\n> proargtypes => 'timestamptz timestamptz', prosrc => 'timestamp_mi' },\n\nI'm not quite sure what you mean that needs to be adjusted with this.\n\n> so this also apply to\n>\n> { oid => '938', descr => 'non-persistent series generator',\n> proname => 'generate_series', prorows => '1000', proretset => 't',\n> prorettype => 'timestamp', proargtypes => 'timestamp timestamp interval',\n> prosrc => 'generate_series_timestamp' },\n>\n> If so, then we need to update src/include/catalog/pg_proc.dat also?\n\nOh, yeah. I missed setting the prosupport function for that one. Thanks.\n\nI'm not sure when I realised there were 3 of these functions, but it\nseems I didn't when I adjusted pg_proc.dat.\n\nUpdated patch attached.\n\nDavid", "msg_date": "Mon, 8 Jul 2024 17:16:00 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SupportRequestRows support function for\n generate_series_timestamptz" }, { "msg_contents": "On Mon, Jul 8, 2024 at 1:16 PM David Rowley <[email protected]> wrote:\n>\n>\n> Updated patch attached.\n>\n\nlooking good to me.\n\n\n", "msg_date": "Mon, 8 Jul 2024 13:52:09 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SupportRequestRows support function for\n generate_series_timestamptz" }, { "msg_contents": "On Mon, 8 Jul 2024 at 17:52, jian he <[email protected]> wrote:\n>\n> On Mon, Jul 8, 2024 at 1:16 PM David Rowley <[email protected]> wrote:\n> > Updated patch attached.\n>\n> looking good to me.\n\nThanks for reviewing. I've pushed the patch now.\n\nDavid\n\n\n", "msg_date": "Tue, 9 Jul 2024 09:59:47 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SupportRequestRows support function for\n generate_series_timestamptz" } ]
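A simplified illustration of the planner-support API the patch above relies on (see nodes/supportnodes.h); this is not the committed support function, the function name and includes are approximate, and the actual row computation is reduced to a placeholder. The handler fills in a row estimate only when all three arguments are constants, and returns NULL otherwise so the planner keeps its default estimate.

#include "postgres.h"

#include "fmgr.h"
#include "nodes/primnodes.h"
#include "nodes/supportnodes.h"

Datum
generate_series_timestamp_support_sketch(PG_FUNCTION_ARGS)
{
	Node	   *rawreq = (Node *) PG_GETARG_POINTER(0);
	Node	   *ret = NULL;

	if (IsA(rawreq, SupportRequestRows))
	{
		SupportRequestRows *req = (SupportRequestRows *) rawreq;
		Node	   *call = req->node;

		if (call && IsA(call, FuncExpr))
		{
			FuncExpr   *fexpr = (FuncExpr *) call;

			if (list_length(fexpr->args) == 3 &&
				IsA(linitial(fexpr->args), Const) &&
				IsA(lsecond(fexpr->args), Const) &&
				IsA(lthird(fexpr->args), Const))
			{
				/*
				 * Here the real code extracts start, finish and step from
				 * the Consts, bails out on NULLs, infinite timestamps, a
				 * zero step, or overflow of finish - start (the
				 * pg_sub_s64_overflow() check discussed above), and
				 * otherwise sets req->rows to roughly
				 * (finish - start) / step + 1.  Placeholder below.
				 */
				req->rows = 1.0;
				ret = (Node *) req;
			}
		}
	}

	PG_RETURN_POINTER(ret);
}

The handler is wired up through the prosupport column of the pg_proc.dat entries for the generate_series timestamp/timestamptz variants, which is why the later messages in the thread also touch pg_proc.dat.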
[ { "msg_contents": "While working on the \"Combine Prune and Freeze records emitted by \nvacuum\" patch [1], I wished we would have an easier way to test pruning. \nThere's a lot of logic with following HOT chains etc., and it's very \nhard to construct all those scenarios just by INSERT/UPDATE/DELETE \ncommands. In principle though, pruning should be very amenable for good \ntest coverage. The input is one heap page and some parameters, the \noutput is one heap page and a few other fields that are already packaged \nneatly in the PruneFreezeResult struct.\n\nBack then, I started to work on a little tool for that to verify the \ncorrectness of pruning refactoring, but I never got around to polish it \nor write proper repeatable tests with it. I did use it for some ad hoc \ntesting, though.\n\nI don't know when I'll find the time to polish it, so here is the very \nrough work-in-progress version I've got now.\n\nOne thing I used this for was to test that we still handle \nHEAP_MOVED_IN/OFF correctly. Yes, it still works. But what surprised me \nis that when a HEAP_MOVED_IN tuple is frozen, we replace xvac with \nFrozenTransactondId, and leave the HEAP_MOVED_IN flag in place. I \nassumed that we would clear the HEAP_MOVED_IN flag instead.\n\n[1] \nhttps://www.postgresql.org/message-id/CAAKRu_azf-zH%3DDgVbquZ3tFWjMY1w5pO8m-TXJaMdri8z3933g%40mail.gmail.com\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)", "msg_date": "Sun, 14 Apr 2024 23:34:55 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": true, "msg_subject": "White-box testing heap pruning" } ]
[ { "msg_contents": "Hi,\n\nTo see how well we're doing testing newly introduced code, I computed the\ndifferential code coverage between REL_16_STABLE and HEAD.\n\nWhile arguably comparing HEAD to the the merge-base between REL_16_STABLE and\nHEAD would be more accurate, I chose REL_16_STABLE because we've backpatched\nbugfixes with tests etc.\n\n\nI first got some nonsensical differences. That turns out to be due to\nimmediate shutdowns in the tests, which\n\na) can loose coverage, e.g. there were no hits for most of walsummarizer.c,\n because the test shuts always shuts it down immediately\nb) can cause corrupted coverage files if a process is shut down while writing\n out coverage files\n\nI partially worked around a) by writing out coverage files during abnormal\nshutdowns. That requires some care, I'll send a separate email about that. I\nworked around b) by rerunning tests until that didn't occur.\n\n\nThe differential code coverage view in lcov is still somewhat raw. I had to\nweaken two error checks to get it to succeed in postgres. You can hover over\nthe code coverage columns to get more details about what the various three\nletter acronyms mean. The most important ones are:\n- UNC - uncovered new code, i.e. we added code that's not tested\n- LBC - lost baseline coverage, i.e previously tested code isn't anymore\n- UBC - untested baseline, i.e. code that's still not tested\n- GBC - gained baseline coverage - unchanged code that's now tested\n- GNC - gained new coverage - new code that's tested\n\nhttps://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/\n\n\nThis includes \"branch coverage\" - I'm not sure that's worth the additional\nclutter it generates.\n\nLooking at the differential coverage results, at least the following seem\nnotable:\n\n- We added a bit less uncovered code than last year, but it's not quite a fair\n comparison, because I ran the numbers for 16 2023-04-08. 
Since the feature\n freeze, 17's coverage has improved by a few hundred lines (8225c2fd40c).\n\n- A good bit of the newly uncovered code is in branches that are legitimately\n hard to reach (unlikely errors etc).\n\n- Some of the new walsummary code could use more tests.\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/backup/walsummaryfuncs.c.gcov.html#L69\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/bin/pg_combinebackup/pg_combinebackup.c.gcov.html#L424\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/common/blkreftable.c.gcov.html#L790\n\n- the new buffer eviction paths aren't tested at all:\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/storage/buffer/bufmgr.c.gcov.html#L6023\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/contrib/pg_buffercache/pg_buffercache_pages.c.gcov.html#L356\n It looks like it should be fairly trivial to test at least the basics?\n\n- Coverage for some of the new unicode code is pretty poor:\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/common/unicode_category.c.gcov.html#L122\n\n- Some of the new nbtree code could use a bit more tests:\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/access/nbtree/nbtutils.c.gcov.html#L1468\n\n- Our coverage of the non-default copy modes of pg_upgrade, pg_combinebackup\n is nonexistent, and that got worse with the introduction of a new method\n this release:\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/bin/pg_upgrade/file.c.gcov.html#L360\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/bin/pg_upgrade/file.c.gcov.html#L400\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/bin/pg_combinebackup/copy_file.c.gcov.html#L209\n\n- Code coverage of acl.c is atrocious and got worse.\n\n- The new bump allocator has a fair amount of uncovered functionality:\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/utils/mmgr/bump.c.gcov.html#L293\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/utils/mmgr/bump.c.gcov.html#L613\n\n- A lot of the new resowner functions aren't covered, but I guess the\n equivalent functionality wasn't covered before, either:\n\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/utils/cache/catcache.c.gcov.html#L2317\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/utils/cache/relcache.c.gcov.html#L6868\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/storage/buffer/bufmgr.c.gcov.html#L3608\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/storage/buffer/bufmgr.c.gcov.html#L5978\n ...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 14 Apr 2024 15:33:05 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Differential code coverage between 16 and HEAD" }, { "msg_contents": "On Sun, Apr 14, 2024 at 6:33 PM Andres Freund <[email protected]> wrote:\n> - Some of the new nbtree code could use a bit more tests:\n> https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/access/nbtree/nbtutils.c.gcov.html#L1468\n\nI made a conscious decision to not add coverage for the function that\nyou've highlighted here (_bt_rewind_nonrequired_arrays) back when I\nreviewed the coverage situation for the patch, which was about a month\nago now. 
(FWIW I also decided against adding coverage for the\nrecursive call to _bt_advance_array_keys, for similar reasons.)\n\nI don't mind adding _bt_rewind_nonrequired_arrays coverage now. I\nalready wrote 4 tests that show wrong answers (and assertion failures)\nwhen the call to _bt_rewind_nonrequired_arrays is temporarily\ncommented out. The problem with committing such a test, if any, is\nthat it'll necessitate creating an index with at least 3 columns,\ncrafted to trip up this exact issue with non-required arrays -- and it\nhas to be bigger than one page (probably several pages, at a minimum).\nThat's a relatively large number of cycles to spend on this fairly\nnarrow issue -- it's at least a lot relative to the prevailing\nstandard for these things. Plus I'd be relying on implementation\ndetails that might change, as well as relying on things like BLCKSZ\n(not that it'd be the first time that I committed a test like that).\n\nNote also that there is a general rule (explained above\n_bt_rewind_nonrequired_arrays) requiring that all non-required arrays\nbe reset to their initial positions/element (first in the current scan\ndirection) once _bt_first is reached. If _bt_advance_array_keys\nsomehow failed to follow that rule (not necessarily due to an issue\nwith missing the call to _bt_rewind_nonrequired_arrays), then we'd get\nan assertion failure within _bt_first -- the\n_bt_verify_arrays_bt_first assertion would catch the violation of the\ninvariant (the_bt_advance_array_keys postcondition invariant/_bt_first\nprecondition invariant). So we kinda do have some test coverage for\nthis function already.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 15 Apr 2024 14:43:31 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Differential code coverage between 16 and HEAD" }, { "msg_contents": "On Sun, Apr 14, 2024 at 6:33 PM Andres Freund <[email protected]> wrote:\n> - Some of the new walsummary code could use more tests.\n> https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/backup/walsummaryfuncs.c.gcov.html#L69\n\nSo this is pg_wal_summary_contents() and\npg_get_wal_summarizer_state(). I was reluctant to try to cover these\nbecause I thought it would be hard to get the tests to be stable. The\ndifficulties in stabilizing src/bin/pg_walsummary/t/002_blocks.pl seem\nto demonstrate that this concern wasn't entire unfounded, but as far\nas I know that test is now stable, so we could probably use the same\ntechnique to test pg_wal_summary_contents(), maybe even as part of the\nsame test case. I don't really know what a good test for\npg_get_wal_summarizer_state() would look like, though.\n\n> https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/bin/pg_combinebackup/pg_combinebackup.c.gcov.html#L424\n\nI guess we could test this by adding a tablespace, and a tablespace\nmapping, to one of the pg_combinebackup tests.\n\n> https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/common/blkreftable.c.gcov.html#L790\n\nThis is dead code. I thought we might need to use this as a way of\nmanaging memory pressure, but it didn't end up being needed. 
We could\nremove it, or mark it #if NOT_USED, or whatever.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Apr 2024 15:36:04 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Differential code coverage between 16 and HEAD" }, { "msg_contents": "Hi,\n\nOn 2024-04-15 15:36:04 -0400, Robert Haas wrote:\n> On Sun, Apr 14, 2024 at 6:33 PM Andres Freund <[email protected]> wrote:\n> > - Some of the new walsummary code could use more tests.\n> > https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/backup/walsummaryfuncs.c.gcov.html#L69\n> \n> So this is pg_wal_summary_contents() and\n> pg_get_wal_summarizer_state(). I was reluctant to try to cover these\n> because I thought it would be hard to get the tests to be stable. The\n> difficulties in stabilizing src/bin/pg_walsummary/t/002_blocks.pl seem\n> to demonstrate that this concern wasn't entire unfounded, but as far\n> as I know that test is now stable, so we could probably use the same\n> technique to test pg_wal_summary_contents(), maybe even as part of the\n> same test case. I don't really know what a good test for\n> pg_get_wal_summarizer_state() would look like, though.\n\nI think even just reaching the code, without a lot of of verification of the\nreturned data, is better than not reaching the code at all. I.e. the test\ncould just check that the pid is set, the tli is right.\n\nThat'd also add at least some coverage of\nhttps://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/postmaster/walsummarizer.c.gcov.html#L433\n\n\n> > https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/bin/pg_combinebackup/pg_combinebackup.c.gcov.html#L424\n> \n> I guess we could test this by adding a tablespace, and a tablespace\n> mapping, to one of the pg_combinebackup tests.\n\nSeems worthwhile to me.\n\n\n> > https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/common/blkreftable.c.gcov.html#L790\n> \n> This is dead code. I thought we might need to use this as a way of\n> managing memory pressure, but it didn't end up being needed. We could\n> remove it, or mark it #if NOT_USED, or whatever.\n\nDon't really have an opinion on that. How likely do you think we'll need it\ngoing forward?\n\n\nNote that I didn't look exhaustively through the coverage of the walsummarizer\ncode - I just looked at a few things that stood out. I looked for a few\nminutes more:\n\n- It seems worth explicitly covering the various\n record types that walsummarizer needs to understand:\n https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/postmaster/walsummarizer.c.gcov.html#L1184\n i.e. 
XLOG_SMGR_TRUNCATE, XLOG_XACT_COMMIT_PREPARED, XLOG_XACT_ABORT, XLOG_XACT_ABORT_PREPARED.\n\n- Another thing that looks to be not covered is dealing with\n enabling/disabling summarize_wal, that also seems worth testing?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Apr 2024 13:43:55 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Differential code coverage between 16 and HEAD" }, { "msg_contents": "On Mon, 15 Apr 2024 at 10:33, Andres Freund <[email protected]> wrote:\n> - The new bump allocator has a fair amount of uncovered functionality:\n> https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/utils/mmgr/bump.c.gcov.html#L293\n\nThe attached adds a test to tuplesort to exercise BumpAllocLarge()\n\n> https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/utils/mmgr/bump.c.gcov.html#L613\n\nI don't see a way to exercise those. They're meant to be \"can't\nhappen\" ERRORs. I could delete them and use BogusFree, BogusRealloc,\nBogusGetChunkContext, BogusGetChunkSpace instead, but the ERROR\nmessage would be misleading. I think it's best just to leave this.\n\nDavid", "msg_date": "Tue, 16 Apr 2024 10:26:57 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Differential code coverage between 16 and HEAD" }, { "msg_contents": "Hi,\n\nOn 2024-04-16 10:26:57 +1200, David Rowley wrote:\n> On Mon, 15 Apr 2024 at 10:33, Andres Freund <[email protected]> wrote:\n> > - The new bump allocator has a fair amount of uncovered functionality:\n> > https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/utils/mmgr/bump.c.gcov.html#L293\n> \n> The attached adds a test to tuplesort to exercise BumpAllocLarge()\n\nCool.\n\n> > https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/utils/mmgr/bump.c.gcov.html#L613\n> \n> I don't see a way to exercise those. They're meant to be \"can't\n> happen\" ERRORs. I could delete them and use BogusFree, BogusRealloc,\n> BogusGetChunkContext, BogusGetChunkSpace instead, but the ERROR\n> message would be misleading. I think it's best just to leave this.\n\nI guess was thinking more about BumpIsEmpty() and BumpStats() then the \"bogus\"\ncases. But BumpIsEmpty() likely is unreachable as well. BumpStats() is\nreachable, but perhaps it's not worth it?\n\nBEGIN;\nDECLARE foo CURSOR FOR SELECT LEFT(a,10),b FROM (VALUES(REPEAT('a', 512 * 1024),1),(REPEAT('b', 512 * 1024),2)) v(a,b) ORDER BY v.a DESC;\nFETCH 1 FROM foo;\nSELECT * FROM pg_backend_memory_contexts WHERE name = 'Caller tuples';\n\n\nHm, independent of this, seems a bit odd that we don't include the memory\ncontext type in pg_backend_memory_contexts?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Apr 2024 15:57:49 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Differential code coverage between 16 and HEAD" }, { "msg_contents": "On Sun, 2024-04-14 at 15:33 -0700, Andres Freund wrote:\n> - Coverage for some of the new unicode code is pretty poor:\n>  \n> https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/common/unicode_category.c.gcov.html#L122\n\nThank you for looking. Those functions are tested by category_test.c\nwhich is run with the 'update-unicode' target.\n\nBetter testing in the SQL tests might be good, but the existing tests\nare near-exhaustive, so I'm not terribly worried. 
Also, it's possible\nnot all of them are reachable by SQL, yet, because some of the later\npatches in the series didn't land in 17.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 15 Apr 2024 16:53:48 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Differential code coverage between 16 and HEAD" }, { "msg_contents": "Hi,\n\nOn 2024-04-15 16:53:48 -0700, Jeff Davis wrote:\n> On Sun, 2024-04-14 at 15:33 -0700, Andres Freund wrote:\n> > - Coverage for some of the new unicode code is pretty poor:\n> > �\n> > https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/common/unicode_category.c.gcov.html#L122\n> \n> Thank you for looking. Those functions are tested by category_test.c\n> which is run with the 'update-unicode' target.\n\nTesting just during update-unicode doesn't strike me as a great - that way\nportability issues wouldn't be found. And if it were tested that way, coverage\nwould understand it too. I can just include update-unicode when running\ncoverage, but that doesn't seem great.\n\nCan't we test this as part of the normal testsuite?\n\nI don't at all like that the tests depend on downloading new unicode\ndata. What if there was an update but I just want to test the current state?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Apr 2024 17:05:43 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Differential code coverage between 16 and HEAD" }, { "msg_contents": "On Mon, 2024-04-15 at 17:05 -0700, Andres Freund wrote:\n> Can't we test this as part of the normal testsuite?\n\nOne thing that complicates things a bit is that the test compares the\nresults against ICU, so a mismatch in Unicode version between ICU and\nPostgres can cause test failures. The test ignores unassigned code\npoints, so normally it just results in less-exhaustive test coverage.\nBut sometimes things really do change, and that would cause a failure.\n\nI'm not quite sure how we should handle that -- maybe only run the test\nwhen the ICU version is known to be in a range where that's not a\nproblem?\n\nAnother option is to look for another way to test this code without\nICU. We could generate a list of known mappings and compare to that,\nbut we'd have to do it some way other than what the code is doing now,\notherwise we'd just be testing the code against itself. Maybe we can\nload the Unicode data into a Postgres table and then test with a SELECT\nstatement or something?\n\nI am worried that it will end looking like an over-engineered way to\ncompare a text file to itself.\n\nStepping back a moment, my top worry is really not to test those C\nfunctions, but to test the perl code that parses the text files and\ngenerates those arrays. Imagine a future Unicode version does something\nthat the perl scripts didn't anticipate, and they fail to add array\nentries for half the code points, or something like that. By testing\nthe arrays generated from freshly-parsed files exhaustively against\nICU, then we have a good defense against that. That situation really\nonly comes up when updating Unicode.\n\nThat's not to say that the C code shouldn't be tested, of course. Maybe\nwe can just do some spot checks for the functions that are reachable\nvia SQL and get rid of the functions that aren't yet reachable (and re-\nadd them when they are)?\n\n> I don't at all like that the tests depend on downloading new unicode\n> data. 
What if there was an update but I just want to test the current\n> state?\n\nI was mostly following the precedent for normalization. Should we\nchange that, also?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 15 Apr 2024 18:23:21 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Differential code coverage between 16 and HEAD" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> On Mon, 2024-04-15 at 17:05 -0700, Andres Freund wrote:\n>> I don't at all like that the tests depend on downloading new unicode\n>> data. What if there was an update but I just want to test the current\n>> state?\n\n> I was mostly following the precedent for normalization. Should we\n> change that, also?\n\nIt's definitely not OK for the standard test suite to include\ninternet access. Seems like we need to separate \"download new\nsource files\" from \"generate the derived files\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Apr 2024 21:35:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Differential code coverage between 16 and HEAD" }, { "msg_contents": "On Tue, 16 Apr 2024 at 10:57, Andres Freund <[email protected]> wrote:\n> I guess was thinking more about BumpIsEmpty() and BumpStats() then the \"bogus\"\n> cases. But BumpIsEmpty() likely is unreachable as well.\n\nThe only call to MemoryContextIsEmpty() I see is AtSubCommit_Memory()\nand it's on an aset.c context type. I see generation.c has the same\nissue per [1].\n\n> BumpStats() is\n> reachable, but perhaps it's not worth it?\n>\n> BEGIN;\n> DECLARE foo CURSOR FOR SELECT LEFT(a,10),b FROM (VALUES(REPEAT('a', 512 * 1024),1),(REPEAT('b', 512 * 1024),2)) v(a,b) ORDER BY v.a DESC;\n> FETCH 1 FROM foo;\n> SELECT * FROM pg_backend_memory_contexts WHERE name = 'Caller tuples';\n\nI think primarily it's good to exercise that code just to make sure it\ndoes not crash. Looking at the output of the above on my machine:\n\n name | ident | parent | level | total_bytes |\ntotal_nblocks | free_bytes | free_chunks | used_bytes\n---------------+-------+----------------+-------+-------------+---------------+------------+-------------+------------\n Caller tuples | | TupleSort sort | 6 | 1056848 |\n 3 | 8040 | 0 | 1048808\n(1 row)\n\nI'd say:\n\nName: stable\nident: stable\nparent: stable\nlevel: could change from a refactor of code\ntotal_bytes: could be different on other platforms or dependent on\nMEMORY_CONTEXT_CHECKING\ntotal_nblocks: stable enough\nfree_bytes: could be different on other platforms or dependent on\nMEMORY_CONTEXT_CHECKING\nfree_chunks: always 0\nused_bytes: could be different on other platforms or dependent on\nMEMORY_CONTEXT_CHECKING\n\nI've attached a patch which includes your test with unstable columns\nstripped out.\n\nI cut the 2nd row down to just 512 bytes as I didn't see the need to\nadd two large datums. Annoyingly it still uses 3 blocks as I've opted\nto do dlist_push_head(&set->blocks, &block->node); in BumpAllocLarge()\nwhich is the block that's picked up again in BumpAlloc() per block =\ndlist_container(BumpBlock, node, dlist_head_node(&set->blocks));\nwonder if the large blocks should push tail instead.\n\n> Hm, independent of this, seems a bit odd that we don't include the memory\n> context type in pg_backend_memory_contexts?\n\nThat seems like useful information to include. It sure would be\nuseful to have in there to verify that I'm testing BumpStats(). 
I've\nwritten a patch [2].\n\nDavid\n\n[1] https://coverage.postgresql.org/src/backend/utils/mmgr/generation.c.gcov.html#997\n[2] https://postgr.es/m/CAApHDvrXX1OR09Zjb5TnB0AwCKze9exZN=9Nxxg1ZCVV8W-3BA@mail.gmail.com", "msg_date": "Tue, 16 Apr 2024 13:50:14 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Differential code coverage between 16 and HEAD" }, { "msg_contents": "Hi,\n\nOn 2024-04-16 13:50:14 +1200, David Rowley wrote:\n> I think primarily it's good to exercise that code just to make sure it\n> does not crash. Looking at the output of the above on my machine:\n\nAgreed.\n\n\n> name | ident | parent | level | total_bytes |\n> total_nblocks | free_bytes | free_chunks | used_bytes\n> ---------------+-------+----------------+-------+-------------+---------------+------------+-------------+------------\n> Caller tuples | | TupleSort sort | 6 | 1056848 |\n> 3 | 8040 | 0 | 1048808\n> (1 row)\n> \n> I'd say:\n> \n> Name: stable\n> ident: stable\n> parent: stable\n> level: could change from a refactor of code\n> total_bytes: could be different on other platforms or dependent on\n> MEMORY_CONTEXT_CHECKING\n> total_nblocks: stable enough\n> free_bytes: could be different on other platforms or dependent on\n> MEMORY_CONTEXT_CHECKING\n> free_chunks: always 0\n> used_bytes: could be different on other platforms or dependent on\n> MEMORY_CONTEXT_CHECKING\n\nI think total_nblocks might also not be entirely stable? How about just\nchecking if total_bytes, total_nblocks, free_bytes and used_bytes are bigger\nthan 0?\n\n> I cut the 2nd row down to just 512 bytes as I didn't see the need to\n> add two large datums.\n\nAgreed, I just quickly hacked the statement up based on your earlier one.\n\n\nLooks good to me, either testing the other columns with > 0 or as you have it.\n\n\n> > Hm, independent of this, seems a bit odd that we don't include the memory\n> > context type in pg_backend_memory_contexts?\n> \n> That seems like useful information to include. It sure would be\n> useful to have in there to verify that I'm testing BumpStats(). I've\n> written a patch [2].\n\nNice!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Apr 2024 19:29:05 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Differential code coverage between 16 and HEAD" }, { "msg_contents": "On Tue, 16 Apr 2024 at 14:29, Andres Freund <[email protected]> wrote:\n> I think total_nblocks might also not be entirely stable?\n\nI think it is stable for this test. However, I'll let the buildfarm\nmake the final call on that.\n\nThe reason I want to include it is that I'd like to push the large\nallocations to the tail of the block list and make this workload use 2\nblocks rather than 3. If I fix that and update the test then it's a\nbit of coverage to help ensure that doesn't get broken again.\n\n> How about just\n> checking if total_bytes, total_nblocks, free_bytes and used_bytes are bigger\n> than 0?\n\nSeems like a good idea. 
I've done it that way and pushed.\n\nThanks\n\nDavid\n\n\n", "msg_date": "Tue, 16 Apr 2024 16:24:33 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Differential code coverage between 16 and HEAD" }, { "msg_contents": "Hi,\n\nOn 2024-04-15 18:23:21 -0700, Jeff Davis wrote:\n> On Mon, 2024-04-15 at 17:05 -0700, Andres Freund wrote:\n> > Can't we test this as part of the normal testsuite?\n> \n> One thing that complicates things a bit is that the test compares the\n> results against ICU, so a mismatch in Unicode version between ICU and\n> Postgres can cause test failures. The test ignores unassigned code\n> points, so normally it just results in less-exhaustive test coverage.\n> But sometimes things really do change, and that would cause a failure.\n\nHm, that seems annoying, even for update-unicode :/. But I guess it won't be\nvery common to have such failures?\n\n\n> Stepping back a moment, my top worry is really not to test those C\n> functions, but to test the perl code that parses the text files and\n> generates those arrays. Imagine a future Unicode version does something\n> that the perl scripts didn't anticipate, and they fail to add array\n> entries for half the code points, or something like that. By testing\n> the arrays generated from freshly-parsed files exhaustively against\n> ICU, then we have a good defense against that. That situation really\n> only comes up when updating Unicode.\n\nThat's a good point.\n\n\n> That's not to say that the C code shouldn't be tested, of course. Maybe\n> we can just do some spot checks for the functions that are reachable\n> via SQL and get rid of the functions that aren't yet reachable (and re-\n> add them when they are)?\n\nYes, I think that'd be a good start. I don't think we necessarily need\nexhaustive coverage, just a bit more coverage than we have.\n\n\n> > I don't at all like that the tests depend on downloading new unicode\n> > data. What if there was an update but I just want to test the current\n> > state?\n> \n> I was mostly following the precedent for normalization. Should we\n> change that, also?\n\nYea, I think we should. But I think it's less urgent if we end up testing more\nof the code without those test binaries. I don't immediately know what\ndependencies would be best, tbh.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 16 Apr 2024 11:58:39 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Differential code coverage between 16 and HEAD" }, { "msg_contents": "On Mon, 2024-04-15 at 21:35 -0400, Tom Lane wrote:\n> It's definitely not OK for the standard test suite to include\n> internet access.\n\nThe update-unicode target is not run as part of the standard test\nsuite.\n\n>   Seems like we need to separate \"download new\n> source files\" from \"generate the derived files\".\n\nI'm not sure that's the right dividing line. There are three-ish steps:\n\n1. Download the Unicode files\n2. Generate the derived .h files\n3. Run tests\n\nIf we stop after 1, then do we check in the Unicode files? If so, then\nthere's inconsistency between the Unicode files and the .h files, which\ndoesn't seem like a good idea. 
If we don't check in the files, then\nnobody can skip to step 2, so I don't see the point in separating the\nsteps.\n\nIf we separate out step 3 that makes more sense: we check in the result\nafter step 2, and anyone can run step 3 without downloading anything.\nThe only problem with that is the tests I added depend on a recent-\nenough version of ICU, so I'm not sure how many people will run it,\nanyway.\n\nAndres's complaints seem mainly about code coverage in the standard\ntest suite for the thin layer of C code above the generated arrays. I\nagree: code coverage is a good goal by itself, and having a few more\nplatforms exercising that C code can't hurt. I think we should just\naddress that concern directly by spot-checking the results for a few\ncode points rather than trying to make the exhaustive ICU tests run on\nmore hosts.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 16 Apr 2024 13:10:22 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Differential code coverage between 16 and HEAD" }, { "msg_contents": "On Tue, 2024-04-16 at 11:58 -0700, Andres Freund wrote:\n> \n> Hm, that seems annoying, even for update-unicode :/. But I guess it\n> won't be\n> very common to have such failures?\n\nThings don't change a lot between Unicode versions (and are subject to\nthe stability policy), but the tests are exhaustive, so even a single\ncharacter's property being changed will cause a failure when compared\nagainst an older version of ICU. The case mapping test succeeds back to\nICU 64 (based on Unicode 12.1), but the category/properties test\nsucceeds only back to ICU 72 (based on Unicode 15.0).\n\nI agree this is annoying, and I briefly documented it in\nsrc/common/unicode/README. It means whoever updates Unicode for a\nPostgres version should probably know how to build ICU from source and\npoint the Postgres build process at it. Maybe I should add more details\nin the README to make that easier for others.\n\nBut it's also a really good test. The ICU parsing, interpretation of\ndata files, and lookup code is entirely independent of ours. Therefore,\nif the results agree for all codepoints, we have a high degree of\nconfidence that the results are correct. That level of confidence seems\nworth a bit of annoyance.\n\nThis kind of test is possible because the category/property and case\nmapping functions accept a single code point, and there are only\n0x10FFFF code points.\n\n> > That's not to say that the C code shouldn't be tested, of course.\n> > Maybe\n> > we can just do some spot checks for the functions that are\n> > reachable\n> > via SQL and get rid of the functions that aren't yet reachable (and\n> > re-\n> > add them when they are)?\n> \n> Yes, I think that'd be a good start. I don't think we necessarily\n> need\n> exhaustive coverage, just a bit more coverage than we have.\n\nOK, I'll submit a test module or something.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 16 Apr 2024 13:29:30 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Differential code coverage between 16 and HEAD" } ]
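A sketch of the BumpStats() test shape discussed in the coverage thread above, assuming only the pg_backend_memory_contexts columns and the 'Caller tuples' context name shown there; the regression test that was actually committed may differ in details such as the cursor name, row sizes, and exactly which columns it checks.

BEGIN;
DECLARE bump_cur CURSOR FOR
  SELECT LEFT(a, 10), b
  FROM (VALUES (REPEAT('a', 512 * 1024), 1), (REPEAT('b', 512), 2)) v(a, b)
  ORDER BY v.a DESC;
-- Pull one row so the tuplesort's "Caller tuples" memory context stays
-- live while we inspect pg_backend_memory_contexts.
FETCH 1 FROM bump_cur;
-- Report only facts expected to be stable across platforms and
-- MEMORY_CONTEXT_CHECKING builds: the byte counts are reduced to
-- "greater than zero" checks, as agreed in the thread.
SELECT name, parent,
       total_bytes > 0 AS has_total_bytes,
       total_nblocks > 0 AS has_blocks,
       free_bytes > 0 AS has_free_bytes,
       used_bytes > 0 AS has_used_bytes
  FROM pg_backend_memory_contexts
 WHERE name = 'Caller tuples';
ROLLBACK;

Reducing the unstable columns to boolean checks is the design choice that lets the test exercise BumpStats() without tying the expected output to a particular architecture or build option.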
[ { "msg_contents": "Hi,\n\nPer Coverity.\n\nCoverity has reported some out-of-bounds bugs\nrelated to the GetCommandTagName function.\n\nCID 1542964: (#1 of 1): Out-of-bounds access (OVERRUN)\n7. overrun-call: Overrunning callee's array of size 193 by passing argument\ncommandtag (which evaluates to 193) in call to GetCommandTagName.[\n\nIt turns out that the root of the problem is found in the declaration of\nthe tag_behavior array, which is found in src/backend/tcop/cmdtag.c.\n\nThe size of the array is defined by COMMAND_TAG_NEXTTAG enum,\nwhose value currently corresponds to 193.\nSince enum items are evaluated starting at zero, by default.\n\nIt turns out that the final size of the array, 193, limits the number of\nitems to 192, which excludes the last TAG\nPG_CMDTAG(CMDTAG_VACUUM, \"VACUUM\", false, false, false)\n\nFixed leaving it up to the compiler to determine the final size of the\narray.\n\nPatch attached.\n\nbest regards,\nRanier Vilela\n\nHi,Per Coverity.Coverity has reported some out-of-bounds bugsrelated to the GetCommandTagName function.\n\nCID 1542964: (#1 of 1): Out-of-bounds access (OVERRUN)\n7.\noverrun-call:\nOverrunning callee's array of size 193 by passing argument commandtag (which evaluates to 193) in call to GetCommandTagName.[ \nIt turns out that the root of the problem is found in the declaration of the tag_behavior array, which is found in src/backend/tcop/cmdtag.c.The size of the array is defined by COMMAND_TAG_NEXTTAG enum,whose value currently corresponds to 193.Since enum items are evaluated starting at zero, by default.It turns out that the final size of the array, 193, limits the number of items to 192, which excludes the last TAGPG_CMDTAG(CMDTAG_VACUUM, \"VACUUM\", false, false, false)Fixed leaving it up to the compiler to determine the final size of the array.Patch attached.best regards,Ranier Vilela", "msg_date": "Sun, 14 Apr 2024 20:17:35 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Fix out-of-bounds in the function GetCommandTagName" }, { "msg_contents": "On Mon, 15 Apr 2024 at 11:17, Ranier Vilela <[email protected]> wrote:\n> Coverity has reported some out-of-bounds bugs\n> related to the GetCommandTagName function.\n>\n> The size of the array is defined by COMMAND_TAG_NEXTTAG enum,\n> whose value currently corresponds to 193.\n> Since enum items are evaluated starting at zero, by default.\n\nI think the change makes sense. I don't see any good reason to define\nCOMMAND_TAG_NEXTTAG or force the compiler's hand when it comes to\nsizing that array.\n\nClearly, Coverity does not understand that we'll never call any of\nthose GetCommandTag* functions with COMMAND_TAG_NEXTTAG.\n\n> Patch attached.\n\nYou seem to have forgotten to attach it, but my comments above were\nwritten with the assumption that the patch is what I've attached here.\n\nDavid", "msg_date": "Mon, 15 Apr 2024 11:38:09 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix out-of-bounds in the function GetCommandTagName" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> I think the change makes sense. 
I don't see any good reason to define\n> COMMAND_TAG_NEXTTAG or force the compiler's hand when it comes to\n> sizing that array.\n> Clearly, Coverity does not understand that we'll never call any of\n> those GetCommandTag* functions with COMMAND_TAG_NEXTTAG.\n\n+1, but would this also allow us to get rid of any default:\ncases in switches on command tags?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 14 Apr 2024 19:54:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix out-of-bounds in the function GetCommandTagName" }, { "msg_contents": "Em dom., 14 de abr. de 2024 às 20:38, David Rowley <[email protected]>\nescreveu:\n\n> On Mon, 15 Apr 2024 at 11:17, Ranier Vilela <[email protected]> wrote:\n> > Coverity has reported some out-of-bounds bugs\n> > related to the GetCommandTagName function.\n> >\n> > The size of the array is defined by COMMAND_TAG_NEXTTAG enum,\n> > whose value currently corresponds to 193.\n> > Since enum items are evaluated starting at zero, by default.\n>\n> I think the change makes sense. I don't see any good reason to define\n> COMMAND_TAG_NEXTTAG or force the compiler's hand when it comes to\n> sizing that array.\n>\n> Clearly, Coverity does not understand that we'll never call any of\n> those GetCommandTag* functions with COMMAND_TAG_NEXTTAG.\n>\nI think that Coverity understood it this way because when\nincluding COMMAND_TAG_NEXTTAG, in the enum definition,\nled to 193 items, and the last item in the array is currently 192.\n\n\n> > Patch attached.\n>\n> You seem to have forgotten to attach it, but my comments above were\n> written with the assumption that the patch is what I've attached here.\n>\nYes, I actually forgot.\n\n+1 for your patch.\n\nbest regards,\nRanier Vilela\n\nEm dom., 14 de abr. de 2024 às 20:38, David Rowley <[email protected]> escreveu:On Mon, 15 Apr 2024 at 11:17, Ranier Vilela <[email protected]> wrote:\n> Coverity has reported some out-of-bounds bugs\n> related to the GetCommandTagName function.\n>\n> The size of the array is defined by COMMAND_TAG_NEXTTAG enum,\n> whose value currently corresponds to 193.\n> Since enum items are evaluated starting at zero, by default.\n\nI think the change makes sense. I don't see any good reason to define\nCOMMAND_TAG_NEXTTAG or force the compiler's hand when it comes to\nsizing that array.\n\nClearly, Coverity does not understand that we'll never call any of\nthose GetCommandTag* functions with COMMAND_TAG_NEXTTAG.I think that Coverity understood it this way because when including COMMAND_TAG_NEXTTAG, in the enum definition,led to 193 items, and the last item in the array is currently 192.\n\n> Patch attached.\n\nYou seem to have forgotten to attach it, but my comments above were\nwritten with the assumption that the patch is what I've attached here.Yes, I actually forgot.+1 for your patch. best regards,Ranier Vilela", "msg_date": "Sun, 14 Apr 2024 21:12:31 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix out-of-bounds in the function GetCommandTagName" }, { "msg_contents": "On Mon, 15 Apr 2024 at 11:54, Tom Lane <[email protected]> wrote:\n> would this also allow us to get rid of any default:\n> cases in switches on command tags?\n\ngit grep \"case CMDTAG_\" does not yield any results.\n\nAs far as I understand, we'd only be able to get rid of a default case\nif we had a switch that included all CMDTAG* values apart from\nCOMMAND_TAG_NEXTTAG. 
If we don't ever switch on CMDTAG values then I\nthink the answer to your question is \"no\".\n\nDavid\n\n\n", "msg_date": "Mon, 15 Apr 2024 13:09:02 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix out-of-bounds in the function GetCommandTagName" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> On Mon, 15 Apr 2024 at 11:54, Tom Lane <[email protected]> wrote:\n>> would this also allow us to get rid of any default:\n>> cases in switches on command tags?\n\n> git grep \"case CMDTAG_\" does not yield any results.\n\nOK. It was worth checking.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 14 Apr 2024 21:15:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix out-of-bounds in the function GetCommandTagName" }, { "msg_contents": "On Mon, 15 Apr 2024 at 12:12, Ranier Vilela <[email protected]> wrote:\n>\n> Em dom., 14 de abr. de 2024 às 20:38, David Rowley <[email protected]> escreveu:\n>> You seem to have forgotten to attach it, but my comments above were\n>> written with the assumption that the patch is what I've attached here.\n>\n> Yes, I actually forgot.\n>\n> +1 for your patch.\n\nI've added a CF entry under your name for this:\nhttps://commitfest.postgresql.org/48/4927/\n\nIf it was code new to PG17 I'd be inclined to go ahead with it now,\nbut it does not seem to align with making the release mode stable.t\nI'd bet others will feel differently about that. Delaying seems a\nbetter default choice at least.\n\nDavid\n\n\n", "msg_date": "Mon, 15 Apr 2024 14:12:34 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix out-of-bounds in the function GetCommandTagName" }, { "msg_contents": "Em dom., 14 de abr. de 2024 às 23:12, David Rowley <[email protected]>\nescreveu:\n\n> On Mon, 15 Apr 2024 at 12:12, Ranier Vilela <[email protected]> wrote:\n> >\n> > Em dom., 14 de abr. de 2024 às 20:38, David Rowley <[email protected]>\n> escreveu:\n> >> You seem to have forgotten to attach it, but my comments above were\n> >> written with the assumption that the patch is what I've attached here.\n> >\n> > Yes, I actually forgot.\n> >\n> > +1 for your patch.\n>\n> I've added a CF entry under your name for this:\n> https://commitfest.postgresql.org/48/4927/\n\nThank you.\n\n\n>\n>\n> If it was code new to PG17 I'd be inclined to go ahead with it now,\n> but it does not seem to align with making the release mode stable.t\n> I'd bet others will feel differently about that. Delaying seems a\n> better default choice at least.\n>\nI agree. Although I initially thought it was a live bug, that's actually\nnot the case.\nIn fact, this is a refactoring.\n\nbest regards,\nRanier Vilela\n\nEm dom., 14 de abr. de 2024 às 23:12, David Rowley <[email protected]> escreveu:On Mon, 15 Apr 2024 at 12:12, Ranier Vilela <[email protected]> wrote:\n>\n> Em dom., 14 de abr. de 2024 às 20:38, David Rowley <[email protected]> escreveu:\n>> You seem to have forgotten to attach it, but my comments above were\n>> written with the assumption that the patch is what I've attached here.\n>\n> Yes, I actually forgot.\n>\n> +1 for your patch.\n\nI've added a CF entry under your name for this:\nhttps://commitfest.postgresql.org/48/4927/Thank you. \n\nIf it was code new to PG17 I'd be inclined to go ahead with it now,\nbut it does not seem to align with making the release mode stable.t\nI'd bet others will feel differently about that.  
Delaying seems a\nbetter default choice at least.I agree. Although I initially thought it was a live bug, that's actually not the case.In fact, this is a refactoring.best regards,Ranier Vilela", "msg_date": "Mon, 15 Apr 2024 08:35:55 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix out-of-bounds in the function GetCommandTagName" }, { "msg_contents": "David Rowley <[email protected]> writes:\n> I've added a CF entry under your name for this:\n> https://commitfest.postgresql.org/48/4927/\n\n> If it was code new to PG17 I'd be inclined to go ahead with it now,\n> but it does not seem to align with making the release mode stable.\n> I'd bet others will feel differently about that. Delaying seems a\n> better default choice at least.\n\nThe security team's Coverity instance has started to show this\ncomplaint now too. So I'm going to go ahead and push this change\nin HEAD. It's probably unwise to change it in stable branches,\nsince there's at least a small chance some external code is using\nCOMMAND_TAG_NEXTTAG for the same purpose tag_behavior[] does.\nBut we aren't anywhere near declaring v17's API stable, so\nI'd rather fix the issue than dismiss it in HEAD.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 May 2024 13:38:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fix out-of-bounds in the function GetCommandTagName" }, { "msg_contents": "Em seg., 13 de mai. de 2024 às 14:38, Tom Lane <[email protected]> escreveu:\n\n> David Rowley <[email protected]> writes:\n> > I've added a CF entry under your name for this:\n> > https://commitfest.postgresql.org/48/4927/\n>\n> > If it was code new to PG17 I'd be inclined to go ahead with it now,\n> > but it does not seem to align with making the release mode stable.\n> > I'd bet others will feel differently about that. Delaying seems a\n> > better default choice at least.\n>\n> The security team's Coverity instance has started to show this\n> complaint now too. So I'm going to go ahead and push this change\n> in HEAD. It's probably unwise to change it in stable branches,\n> since there's at least a small chance some external code is using\n> COMMAND_TAG_NEXTTAG for the same purpose tag_behavior[] does.\n> But we aren't anywhere near declaring v17's API stable, so\n> I'd rather fix the issue than dismiss it in HEAD.\n>\nThanks for the commit, Tom.\n\nbest regards,\nRanier Vilela\n\nEm seg., 13 de mai. de 2024 às 14:38, Tom Lane <[email protected]> escreveu:David Rowley <[email protected]> writes:\n> I've added a CF entry under your name for this:\n> https://commitfest.postgresql.org/48/4927/\n\n> If it was code new to PG17 I'd be inclined to go ahead with it now,\n> but it does not seem to align with making the release mode stable.\n> I'd bet others will feel differently about that.  Delaying seems a\n> better default choice at least.\n\nThe security team's Coverity instance has started to show this\ncomplaint now too.  So I'm going to go ahead and push this change\nin HEAD.  
It's probably unwise to change it in stable branches,\nsince there's at least a small chance some external code is using\nCOMMAND_TAG_NEXTTAG for the same purpose tag_behavior[] does.\nBut we aren't anywhere near declaring v17's API stable, so\nI'd rather fix the issue than dismiss it in HEAD.Thanks for the commit, Tom.best regards,Ranier Vilela", "msg_date": "Mon, 13 May 2024 15:02:05 -0300", "msg_from": "Ranier Vilela <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fix out-of-bounds in the function GetCommandTagName" } ]
[ { "msg_contents": "I was recently asked internally about the stability guarantees we\noffer for queryid. My answer consisted of:\n\n1. We cannot change Node enums in minor versions\n2. We're *unlikely* to add fields to Node types in minor versions, and\nif we did we'd likely be leaving them out of the jumble calc, plus it\nseems highly unlikely any new field we wedged into the padding would\nrelate at all to the parsed query.\n\nWhile answering, I checked what our documentation says. It does not\nseem to offer much in the way of what is guaranteed between minor\nversions.\n\nIn [1] I see:\n\n\"As a rule of thumb, queryid values can be assumed to be stable and\ncomparable only so long as the underlying server version\"\n\nIt's the \"underlying server version\" that I think needs some\nclarification. It's unclear if the minor version must match or just\nthe major version number. The preceding paragraph does mention:\n\n\"Furthermore, it is not safe to assume that queryid will be stable\nacross major versions of PostgreSQL.\"\n\nbut not stable across *major* versions does *not* mean stable across\n*minor* versions. The reader is just left guessing if that's true.\n\nMaybe the paragraph starting with \"Consumers of\" can detail the\nreasons queryid might be unstable and the following paragraph can\ndescribe the scenario for when the queryid can generally assumed to be\nstable.\n\nI've drafted a patch which I think improves things, but it probably\nneeds more work and opinions.\n\nDavid\n\n[1] https://www.postgresql.org/docs/current/pgstatstatements.html", "msg_date": "Mon, 15 Apr 2024 11:20:16 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Stability of queryid in minor versions" }, { "msg_contents": "On Sun, Apr 14, 2024 at 7:20 PM David Rowley <[email protected]> wrote:\n> It's the \"underlying server version\" that I think needs some\n> clarification. It's unclear if the minor version must match or just\n> the major version number. The preceding paragraph does mention:\n>\n> \"Furthermore, it is not safe to assume that queryid will be stable\n> across major versions of PostgreSQL.\"\n>\n> but not stable across *major* versions does *not* mean stable across\n> *minor* versions. The reader is just left guessing if that's true.\n\nTechnically we don't promise that WAL records won't change in minor\nversions. In fact, the docs specifically state that the format of any\nWAL record might change, and that users should upgrade standbys first\non general principle (though I imagine few do). We try hard to avoid\nchanging the format of WAL records in point releases, of course, but\nstrictly speaking there is no guarantee. This situation seems similar\n(though much lower stakes) to me. Query normalization isn't perfect --\nthere's a trade-off.\n\n> Maybe the paragraph starting with \"Consumers of\" can detail the\n> reasons queryid might be unstable and the following paragraph can\n> describe the scenario for when the queryid can generally assumed to be\n> stable.\n\nI think that it would be reasonable to say that we strive to not break\nthe format in point releases. Fundamentally, if pg_stat_statements\nsees a hard queryid format change (e.g. 
due to a major version\nupgrade), then pg_stat_statements throws away the accumulated query\nstats without being asked to.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 14 Apr 2024 19:47:11 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Mon, Apr 15, 2024 at 11:20:16AM +1200, David Rowley wrote:\n> I was recently asked internally about the stability guarantees we\n> offer for queryid. My answer consisted of:\n> \n> 1. We cannot change Node enums in minor versions\n> 2. We're *unlikely* to add fields to Node types in minor versions, and\n> if we did we'd likely be leaving them out of the jumble calc, plus it\n> seems highly unlikely any new field we wedged into the padding would\n> relate at all to the parsed query.\n\nSince 16 these new fields would be added by default unless the node\nattribute query_jumble_ignore is appended to it. I agree that this\nmay not be entirely intuitive when it comes to force compatibility\nacross the same major version. Could there be cases where it is worth \nbreaking compatibility and include something more in the jumbling,\nthough? I've not seen the case in recent years even in stable\nbranches.\n\n> Maybe the paragraph starting with \"Consumers of\" can detail the\n> reasons queryid might be unstable and the following paragraph can\n> describe the scenario for when the queryid can generally assumed to be\n> stable.\n>\n> <para>\n> As a rule of thumb, <structfield>queryid</structfield> values can be assumed to be\n> - stable and comparable only so long as the underlying server version and\n> - catalog metadata details stay exactly the same. Two servers\n> + stable and comparable only between <productname>PostgreSQL</productname> instances\n> + which are running the same major version of <productname>PostgreSQL</productname>\n> + and are running on the same machine architecture and catalog metadata details match. Two servers\n> participating in replication based on physical WAL replay can be expected\n> to have identical <structfield>queryid</structfield> values for the same query.\n> However, logical replication schemes do not promise to keep replicas\n\nAssuming that a query ID will be always stable across major versions\nis overconfident, I think. As Peter said, like for WAL, we may face\ncases where a slight breakage for a subset of queries could be\njustified, and pg_stat_statement would be able to cope with that by\ndiscarding the oldest entries in its hash tables.\n--\nMichael", "msg_date": "Mon, 15 Apr 2024 09:04:07 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Sun, Apr 14, 2024 at 8:04 PM Michael Paquier <[email protected]> wrote:\n> Assuming that a query ID will be always stable across major versions\n> is overconfident, I think. 
As Peter said, like for WAL, we may face\n> cases where a slight breakage for a subset of queries could be\n> justified, and pg_stat_statement would be able to cope with that by\n> discarding the oldest entries in its hash tables.\n\nIf there was a minor break in compatibility, that either went\nunnoticed, or was considered too minor to matter, then\npg_stat_statements would be in exactly the same position as any\nexternal tool that uses its queryid values to accumulate query costs.\nWhile external tools can't understand the provenance of old queryid\nvalues, pg_stat_statements can't either.\n\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 14 Apr 2024 20:22:03 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Mon, 15 Apr 2024 at 11:47, Peter Geoghegan <[email protected]> wrote:\n>\n> On Sun, Apr 14, 2024 at 7:20 PM David Rowley <[email protected]> wrote:\n> > It's the \"underlying server version\" that I think needs some\n> > clarification. It's unclear if the minor version must match or just\n> > the major version number. The preceding paragraph does mention:\n> >\n> > \"Furthermore, it is not safe to assume that queryid will be stable\n> > across major versions of PostgreSQL.\"\n> >\n> > but not stable across *major* versions does *not* mean stable across\n> > *minor* versions. The reader is just left guessing if that's true.\n>\n> Technically we don't promise that WAL records won't change in minor\n> versions. In fact, the docs specifically state that the format of any\n> WAL record might change, and that users should upgrade standbys first\n> on general principle (though I imagine few do). We try hard to avoid\n> changing the format of WAL records in point releases, of course, but\n> strictly speaking there is no guarantee. This situation seems similar\n> (though much lower stakes) to me. Query normalization isn't perfect --\n> there's a trade-off.\n\nset compute_query_id = 'on';\nexplain (costs off, verbose) select oid from pg_class;\n QUERY PLAN\n-----------------------------------------------------------------\n Index Only Scan using pg_class_oid_index on pg_catalog.pg_class\n Output: oid\n Query Identifier: -8748805461085747951\n(3 rows)\n\nAs far as I understand query ID; it's based on the parse nodes and\nvalues in the system catalogue tables and is calculated on the local\nserver. Computed values are susceptible to variations in hash values\ncalculated by different CPU architectures.\n\nWhere does WAL fit into this? And why would a WAL format change the\ncomputed value?\n\nDavid\n\n\n", "msg_date": "Mon, 15 Apr 2024 13:00:56 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Sun, Apr 14, 2024 at 9:01 PM David Rowley <[email protected]> wrote:\n> On Mon, 15 Apr 2024 at 11:47, Peter Geoghegan <[email protected]> wrote:\n> > Technically we don't promise that WAL records won't change in minor\n> > versions. In fact, the docs specifically state that the format of any\n> > WAL record might change, and that users should upgrade standbys first\n> > on general principle (though I imagine few do). We try hard to avoid\n> > changing the format of WAL records in point releases, of course, but\n> > strictly speaking there is no guarantee. This situation seems similar\n> > (though much lower stakes) to me. 
Query normalization isn't perfect --\n> > there's a trade-off.\n\n> Where does WAL fit into this? And why would a WAL format change the\n> computed value?\n\nIt doesn't. I just compared the two situations, which seem analogous.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 14 Apr 2024 21:11:44 -0400", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Mon, Apr 15, 2024 at 11:20:16AM +1200, David Rowley wrote:\n>> 1. We cannot change Node enums in minor versions\n>> 2. We're *unlikely* to add fields to Node types in minor versions, and\n>> if we did we'd likely be leaving them out of the jumble calc, plus it\n>> seems highly unlikely any new field we wedged into the padding would\n>> relate at all to the parsed query.\n\n> Since 16 these new fields would be added by default unless the node\n> attribute query_jumble_ignore is appended to it.\n\nThey'd also be written/read by outfuncs/readfuncs, thereby breaking\nstored views/rules if the Node is one that can appear in a parsetree.\nSo the bar to making such a change in a stable branch would be very\nhigh.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 14 Apr 2024 21:19:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Mon, 15 Apr 2024 at 12:04, Michael Paquier <[email protected]> wrote:\n> Since 16 these new fields would be added by default unless the node\n> attribute query_jumble_ignore is appended to it. I agree that this\n> may not be entirely intuitive when it comes to force compatibility\n> across the same major version. Could there be cases where it is worth\n> breaking compatibility and include something more in the jumbling,\n> though? I've not seen the case in recent years even in stable\n> branches.\n\nI think that's a valid possible situation which could result in a\nchange. however, I think it's much less likely than it used to be\nbecause we'd have to accidentally have used query_jumble_ignore,\nwhereas before all the struct parsing stuff went in, we could have\njust forgotten to jumble the field when adding a new field to a parse\nstruct.\n\nDavid\n\n\n", "msg_date": "Mon, 15 Apr 2024 13:27:09 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Mon, 15 Apr 2024 at 13:19, Tom Lane <[email protected]> wrote:\n>\n> Michael Paquier <[email protected]> writes:\n> > On Mon, Apr 15, 2024 at 11:20:16AM +1200, David Rowley wrote:\n> >> 1. We cannot change Node enums in minor versions\n> >> 2. 
We're *unlikely* to add fields to Node types in minor versions, and\n> >> if we did we'd likely be leaving them out of the jumble calc, plus it\n> >> seems highly unlikely any new field we wedged into the padding would\n> >> relate at all to the parsed query.\n>\n> > Since 16 these new fields would be added by default unless the node\n> > attribute query_jumble_ignore is appended to it.\n>\n> They'd also be written/read by outfuncs/readfuncs, thereby breaking\n> stored views/rules if the Node is one that can appear in a parsetree.\n> So the bar to making such a change in a stable branch would be very\n> high.\n\nI think a soft guarantee in the docs for it being stable in minor\nversions would be ok then.\n\nI'm unsure if \"Rule of thumb\" is the correct way to convey that. We\ncan't really write \"We endeavour to\", as who is \"We\". Maybe something\nlike \"Generally, it can be assumed that queryid is stable between all\nminor versions of a major version of ..., providing that <other\nreasons>\".\n\nDavid\n\n\n", "msg_date": "Mon, 15 Apr 2024 13:31:47 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Sun, Apr 14, 2024 at 4:20 PM David Rowley <[email protected]> wrote:\n\n>\n> I've drafted a patch which I think improves things, but it probably\n> needs more work and opinions.\n>\n>\nSeems we can improve things by simply removing the \"rule of thumb\" sentence\naltogether. The prior paragraph states the things the queryid depends upon\nat the level of detail the reader needs.\n\nThe sentence \"Two servers participating in replication based on physical\nWAL replay can be expected to have identical queryid values for the same\nquery.\" apparently assumes that to participate both servers must share the\nsame machine architecture. I am under the impression that this is only an\nadvisory, not a requirement. Rather, two servers participating in physical\nreplication will be ensured that the catalog metadata and major versions\nare identical. This is not the case for servers related via logical\nreplication.\n\nDavid J.\n\nOn Sun, Apr 14, 2024 at 4:20 PM David Rowley <[email protected]> wrote:\nI've drafted a patch which I think improves things, but it probably\nneeds more work and opinions.Seems we can improve things by simply removing the \"rule of thumb\" sentence altogether.  The prior paragraph states the things the queryid depends upon at the level of detail the reader needs.The sentence \"Two servers participating in replication based on physical WAL replay can be expected to have identical queryid values for the same query.\" apparently assumes that to participate both servers must share the same machine architecture.  I am under the impression that this is only an advisory, not a requirement.  Rather, two servers participating in physical replication will be ensured that the catalog metadata and major versions are identical.  This is not the case for servers related via logical replication.David J.", "msg_date": "Sun, 14 Apr 2024 18:36:19 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Sun, Apr 14, 2024 at 6:32 PM David Rowley <[email protected]> wrote:\n\n> On Mon, 15 Apr 2024 at 13:19, Tom Lane <[email protected]> wrote:\n> >\n> > Michael Paquier <[email protected]> writes:\n> > > On Mon, Apr 15, 2024 at 11:20:16AM +1200, David Rowley wrote:\n> > >> 1. 
We cannot change Node enums in minor versions\n> > >> 2. We're *unlikely* to add fields to Node types in minor versions, and\n> > >> if we did we'd likely be leaving them out of the jumble calc, plus it\n> > >> seems highly unlikely any new field we wedged into the padding would\n> > >> relate at all to the parsed query.\n> >\n> > > Since 16 these new fields would be added by default unless the node\n> > > attribute query_jumble_ignore is appended to it.\n> >\n> > They'd also be written/read by outfuncs/readfuncs, thereby breaking\n> > stored views/rules if the Node is one that can appear in a parsetree.\n> > So the bar to making such a change in a stable branch would be very\n> > high.\n>\n> I think a soft guarantee in the docs for it being stable in minor\n> versions would be ok then.\n>\n> I'm unsure if \"Rule of thumb\" is the correct way to convey that. We\n> can't really write \"We endeavour to\", as who is \"We\". Maybe something\n> like \"Generally, it can be assumed that queryid is stable between all\n> minor versions of a major version of ..., providing that <other\n> reasons>\".\n>\n>\nSo, there are three kinds of dependencies:\n\nProduct\nMachine\nUser Data\n\nThe user data dependencies are noted as being OID among other things\nThe machine dependencies are the architecture and other facets\nThe product dependencies are not enumerated but can be simply stated to be\ninternals stable throughout a major version.\n\nA minimal rewording of the last sentence in the prior paragraph could be:\n\nLastly, the queryid depends upon aspects of PostgreSQL internals that can\nonly change with each major version release.\n\nI'm disinclined to note minor releases here given the above wording. Sure,\nlike with lots of things, circumstances may require us to break a policy,\nbut we don't seem to make that point everywhere we conceive it could happen.\n\nDavid J.\n\nOn Sun, Apr 14, 2024 at 6:32 PM David Rowley <[email protected]> wrote:On Mon, 15 Apr 2024 at 13:19, Tom Lane <[email protected]> wrote:\n>\n> Michael Paquier <[email protected]> writes:\n> > On Mon, Apr 15, 2024 at 11:20:16AM +1200, David Rowley wrote:\n> >> 1. We cannot change Node enums in minor versions\n> >> 2. We're *unlikely* to add fields to Node types in minor versions, and\n> >> if we did we'd likely be leaving them out of the jumble calc, plus it\n> >> seems highly unlikely any new field we wedged into the padding would\n> >> relate at all to the parsed query.\n>\n> > Since 16 these new fields would be added by default unless the node\n> > attribute query_jumble_ignore is appended to it.\n>\n> They'd also be written/read by outfuncs/readfuncs, thereby breaking\n> stored views/rules if the Node is one that can appear in a parsetree.\n> So the bar to making such a change in a stable branch would be very\n> high.\n\nI think a soft guarantee in the docs for it being stable in minor\nversions would be ok then.\n\nI'm unsure if \"Rule of thumb\" is the correct way to convey that. We\ncan't really write \"We endeavour to\", as who is \"We\".  
Maybe something\nlike \"Generally, it can be assumed that queryid is stable between all\nminor versions of a major version of ..., providing that <other\nreasons>\".So, there are three kinds of dependencies:ProductMachineUser DataThe user data dependencies are noted as being OID among other thingsThe machine dependencies are the architecture and other facetsThe product dependencies are not enumerated but can be simply stated to be internals stable throughout a major version.A minimal rewording of the last sentence in the prior paragraph could be:Lastly, the queryid depends upon aspects of PostgreSQL internals that can only change with each major version release.I'm disinclined to note minor releases here given the above wording.  Sure, like with lots of things, circumstances may require us to break a policy, but we don't seem to make that point everywhere we conceive it could happen.David J.", "msg_date": "Sun, 14 Apr 2024 18:58:35 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Mon, 15 Apr 2024 at 13:37, David G. Johnston\n<[email protected]> wrote:\n> Seems we can improve things by simply removing the \"rule of thumb\" sentence altogether. The prior paragraph states the things the queryid depends upon at the level of detail the reader needs.\n\nI don't think that addresses the following, which I mentioned earlier:\n\n> but not stable across *major* versions does *not* mean stable across\n> *minor* versions. The reader is just left guessing if that's true.\n\nDavid\n\n\n", "msg_date": "Mon, 15 Apr 2024 14:03:02 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Mon, Apr 15, 2024 at 01:31:47PM +1200, David Rowley wrote:\n> I think a soft guarantee in the docs for it being stable in minor\n> versions would be ok then.\n> \n> I'm unsure if \"Rule of thumb\" is the correct way to convey that. We\n> can't really write \"We endeavour to\", as who is \"We\". Maybe something\n> like \"Generally, it can be assumed that queryid is stable between all\n> minor versions of a major version of ..., providing that <other\n> reasons>\".\n\nIt sounds to me that the term \"best-effort\" is adapted here? Like in\n\"The compatibility of query IDs is preserved across minor versions on\na best-effort basis. It is possible that the post-parse-analysis tree\nchanges across minor releases, impacting the value of queryid for the\nsame query run across two different minor versions.\".\n--\nMichael", "msg_date": "Mon, 15 Apr 2024 11:09:13 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Sun, Apr 14, 2024 at 7:03 PM David Rowley <[email protected]> wrote:\n\n> On Mon, 15 Apr 2024 at 13:37, David G. Johnston\n> <[email protected]> wrote:\n> > Seems we can improve things by simply removing the \"rule of thumb\"\n> sentence altogether. The prior paragraph states the things the queryid\n> depends upon at the level of detail the reader needs.\n>\n> I don't think that addresses the following, which I mentioned earlier:\n>\n> > but not stable across *major* versions does *not* mean stable across\n> > *minor* versions. The reader is just left guessing if that's true.\n>\n>\nThe base assumption here is that changes in the things we don't mention do\nnot influence the queryid. 
We didn't mention minor versions, changing them\ndoesn't influence the queryid.\n\nNow, reading that entire paragraph is a bit of a challenge IMO, and agree,\nas I subsequently noted, that the sentence you pointed out could be\nreworked. I stand by my statement that removing the sentence about \"rule\nof thumb\" altogether is a win. The prior paragraph should be sufficient -\nit is technically at the moment but am not opposed to rewording.\n\nDavid J.\n\nOn Sun, Apr 14, 2024 at 7:03 PM David Rowley <[email protected]> wrote:On Mon, 15 Apr 2024 at 13:37, David G. Johnston\n<[email protected]> wrote:\n> Seems we can improve things by simply removing the \"rule of thumb\" sentence altogether.  The prior paragraph states the things the queryid depends upon at the level of detail the reader needs.\n\nI don't think that addresses the following, which I mentioned earlier:\n\n> but not stable across *major* versions does *not* mean stable across\n> *minor* versions. The reader is just left guessing if that's true.\nThe base assumption here is that changes in the things we don't mention do not influence the queryid.  We didn't mention minor versions, changing them doesn't influence the queryid.Now, reading that entire paragraph is a bit of a challenge IMO, and agree, as I subsequently noted, that the sentence you pointed out could be reworked.  I stand by my statement that removing the sentence about \"rule of thumb\" altogether is a win.  The prior paragraph should be sufficient - it is technically at the moment but am not opposed to rewording.David J.", "msg_date": "Sun, 14 Apr 2024 19:23:33 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Mon, 15 Apr 2024 at 14:09, Michael Paquier <[email protected]> wrote:\n>\n> On Mon, Apr 15, 2024 at 01:31:47PM +1200, David Rowley wrote:\n> > I'm unsure if \"Rule of thumb\" is the correct way to convey that. We\n> > can't really write \"We endeavour to\", as who is \"We\". Maybe something\n> > like \"Generally, it can be assumed that queryid is stable between all\n> > minor versions of a major version of ..., providing that <other\n> > reasons>\".\n>\n> It sounds to me that the term \"best-effort\" is adapted here? Like in\n> \"The compatibility of query IDs is preserved across minor versions on\n> a best-effort basis. It is possible that the post-parse-analysis tree\n> changes across minor releases, impacting the value of queryid for the\n> same query run across two different minor versions.\".\n\nI had another try and ended up pushing the logical / physical replica\ndetails up to the paragraph above. It seems more relevant to mention\nthis in the section which details reasons why the queryid can be\nunstable due to metadata variations. I think keeping the 2nd\nparagraph for reasons it's stable is a good separation of\nresponsibility. 
I didn't include the \"best-effort\" word, but here's\nwhat I did come up with.\n\nDavid", "msg_date": "Mon, 15 Apr 2024 14:54:52 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Mon, Apr 15, 2024 at 02:54:52PM +1200, David Rowley wrote:\n> <filename>pg_stat_statements</filename> will consider two apparently-identical\n> queries to be distinct, if they reference a table that was dropped\n> and recreated between the executions of the two queries.\n> + Two servers participating in replication based on physical WAL replay can\n> + be expected to have identical <structfield>queryid</structfield> values for\n> + the same query. However, logical replication schemes do not promise to\n> + keep replicas identical in all relevant details, so\n> + <structfield>queryid</structfield> will not be a useful identifier for\n> + accumulating costs across a set of logical replicas.\n> The hashing process is also sensitive to differences in\n> machine architecture and other facets of the platform.\n> Furthermore, it is not safe to assume that <structfield>queryid</structfield>\n> will be stable across major versions of <productname>PostgreSQL</productname>.\n> + If in doubt, direct testing is recommended.\n> </para>\n\nNot sure that this is an improvement in clarity. There are a few\nbullet points that treat about the instability of the query ID, and\nyour patch is now mixing the query ID being different for two\nmostly-identical queries on the same host with larger conditions like\nthe environment involved. Perhaps it would be better to move the last\nsentence of the first <para> (\"Furthermore, it is not safe..\") with\nthe part you are adding about replication in this paragraph.\n\n> <para>\n> - As a rule of thumb, <structfield>queryid</structfield> values can be assumed to be\n> - stable and comparable only so long as the underlying server version and\n> - catalog metadata details stay exactly the same. Two servers\n> - participating in replication based on physical WAL replay can be expected\n> - to have identical <structfield>queryid</structfield> values for the same query.\n> - However, logical replication schemes do not promise to keep replicas\n> - identical in all relevant details, so <structfield>queryid</structfield> will\n> - not be a useful identifier for accumulating costs across a set of logical\n> - replicas. If in doubt, direct testing is recommended.\n> + Generally, it can be assumed that <structfield>queryid</structfield> values\n> + are stable between minor version releases of <productname>PostgreSQL</productname>,\n> + providing that instances are running on the same machine architecture and\n> + the catalog metadata details match. Compatibility will only be broken\n> + between minor versions as a last resort.\n\nThis split is cleaner.\n--\nMichael", "msg_date": "Tue, 16 Apr 2024 09:10:19 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Tue, 16 Apr 2024 at 12:10, Michael Paquier <[email protected]> wrote:\n> Not sure that this is an improvement in clarity. There are a few\n> bullet points that treat about the instability of the query ID, and\n> your patch is now mixing the query ID being different for two\n> mostly-identical queries on the same host with larger conditions like\n> the environment involved. 
Perhaps it would be better to move the last\n> sentence of the first <para> (\"Furthermore, it is not safe..\") with\n> the part you are adding about replication in this paragraph.\n\nYeah, I think this is better. I think the attached is what you mean.\n\nIt makes sense to talk about the hashing variations closer to the\nobject identifier part.\n\nDavid", "msg_date": "Tue, 16 Apr 2024 14:04:22 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Tue, Apr 16, 2024 at 02:04:22PM +1200, David Rowley wrote:\n> It makes sense to talk about the hashing variations closer to the\n> object identifier part.\n\nMostly what I had in mind. A separate <para> for the new part you are\nadding at the end of the first part feels a bit more natural split\nhere. Feel free to discard my comment if you think that's not worth\nit.\n--\nMichael", "msg_date": "Tue, 16 Apr 2024 12:16:42 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Tue, 16 Apr 2024 at 15:16, Michael Paquier <[email protected]> wrote:\n>\n> On Tue, Apr 16, 2024 at 02:04:22PM +1200, David Rowley wrote:\n> > It makes sense to talk about the hashing variations closer to the\n> > object identifier part.\n>\n> Mostly what I had in mind. A separate <para> for the new part you are\n> adding at the end of the first part feels a bit more natural split\n> here. Feel free to discard my comment if you think that's not worth\n> it.\n\nThanks for the review. I've now pushed this, backpatching to 12.\n\nDavid\n\n\n", "msg_date": "Sat, 20 Apr 2024 13:56:48 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Stability of queryid in minor versions" }, { "msg_contents": "On Sat, Apr 20, 2024 at 01:56:48PM +1200, David Rowley wrote:\n> Thanks for the review. I've now pushed this, backpatching to 12.\n\nYou've split that into two separate paragraphs with 2d3389c28c5c.\nThanks for the commit.\n--\nMichael", "msg_date": "Sat, 20 Apr 2024 11:59:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stability of queryid in minor versions" } ]
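The thread above comes down to what queryid is actually derived from: a hash ("jumble") of the post-parse-analysis tree, which mixes in internal node identifiers and the OIDs of the objects the query references. The toy program below is not PostgreSQL's jumbling code — the type names, tag values and hash are all invented — but it is a minimal sketch of why such a hash stays stable only while those inputs stay stable: identical internals and OIDs always give the same value, a dropped-and-recreated table (new OID) changes it, and because raw bytes are hashed the result is also endianness-dependent, mirroring the machine-architecture caveat discussed above.

#include <stdint.h>
#include <stdio.h>

/* Invented stand-ins; these are not PostgreSQL's NodeTag values or its jumble code. */
typedef enum ToyNodeTag { TOY_SELECT = 1, TOY_RANGETBLENTRY = 2 } ToyNodeTag;

typedef struct ToyQueryNode
{
    ToyNodeTag  tag;     /* internal enum value: fixed within a major version */
    uint32_t    relid;   /* catalog OID: changes on drop/recreate, differs on logical replicas */
} ToyQueryNode;

/* FNV-1a over the two inputs that matter for the point being made.
 * Hashing raw bytes also makes the result endian-dependent. */
static uint64_t
jumble(const ToyQueryNode *nodes, int n)
{
    uint64_t    h = 0xcbf29ce484222325ULL;

    for (int i = 0; i < n; i++)
    {
        uint32_t    vals[2] = {(uint32_t) nodes[i].tag, nodes[i].relid};
        const unsigned char *p = (const unsigned char *) vals;

        for (size_t b = 0; b < sizeof(vals); b++)
        {
            h ^= p[b];
            h *= 0x100000001b3ULL;
        }
    }
    return h;
}

int
main(void)
{
    ToyQueryNode same_a[] = {{TOY_SELECT, 16384}, {TOY_RANGETBLENTRY, 16384}};
    ToyQueryNode same_b[] = {{TOY_SELECT, 16384}, {TOY_RANGETBLENTRY, 16384}};
    ToyQueryNode recreated[] = {{TOY_SELECT, 24576}, {TOY_RANGETBLENTRY, 24576}};

    printf("same internals, same OIDs (values match):    %016llx %016llx\n",
           (unsigned long long) jumble(same_a, 2),
           (unsigned long long) jumble(same_b, 2));
    printf("same query, table recreated (values differ): %016llx %016llx\n",
           (unsigned long long) jumble(same_a, 2),
           (unsigned long long) jumble(recreated, 2));
    return 0;
}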
[ { "msg_contents": "Hi all,\n\nWhile bumping on a problem with an extension that relied on pg_config\n--libs, I've been recalled about the fact that pg_config produces no\noutput for --libs/--ldflags_ex when building with meson,\nsrc/include/meson.build including the following on HEAD since meson\nhas been introduced in e6927270cd18:\nvar_ldflags_ex = '' # FIXME\n# FIXME - some extensions might directly use symbols from one of libs. If\n# that symbol isn't used by postgres, and statically linked, it'll cause an\n# undefined symbol at runtime. And obviously it'll cause problems for\n# executables, although those are probably less common.\nvar_libs = ''\n\nA equivalent of pgxs.mk is generated when building with meson, and\nit uses LIBS.\n\nAnyway, looking at https://mesonbuild.com/Dependencies.html and our\nmeson.build, we rely heavily on declare_dependency() to add a library\nto the internal list used in the build after some find_library()\nlookups, but there is no easy way to extract that later on.\n\nMaintaining a list in a variable built in the main meson.build may be\nOK knowing that new dependencies are not added that often, but I\ncannot stop thinking that there is some magic I am missing from meson\nto get an access to this information once all the options are\nprocessed.\n\nThoughts or comments?\n--\nMichael", "msg_date": "Mon, 15 Apr 2024 08:41:38 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": true, "msg_subject": "meson and pg_config --libs/--ldflags_ex" } ]
[ { "msg_contents": "hi.\none minor issue.\n\nwithin ATPostAlterTypeCleanup, we call ATPostAlterTypeParse:\nATPostAlterTypeParse(oldId, relid, InvalidOid,\n(char *) lfirst(def_item),\nwqueue, lockmode, tab->rewrite);\n\n\nfunction ATPostAlterTypeParse is:\nstatic void\nATPostAlterTypeParse(Oid oldId, Oid oldRelId, Oid refRelId, char *cmd,\nList **wqueue, LOCKMODE lockmode, bool rewrite)\n\nbut tab->rewrite is an int.\nso within ATPostAlterTypeCleanup we should call it like:\n\nATPostAlterTypeParse(oldId, relid, InvalidOid,\n(char *) lfirst(def_item),\nwqueue, lockmode, tab->rewrite != 0);\n\nor\n\nATPostAlterTypeParse(oldId, relid, InvalidOid,\n(char *) lfirst(def_item),\nwqueue, lockmode, tab->rewrite > 0);\n\n\n", "msg_date": "Mon, 15 Apr 2024 08:00:00 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "call ATPostAlterTypeParse inconsistency" } ]
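For context on why the report above suggests spelling the last argument as tab->rewrite != 0: the field is an int bitmask but the parameter is declared bool. With a C99 _Bool the implicit conversion collapses any nonzero value to true, so in practice this is mostly a consistency issue; but where bool is a plain char typedef (as older builds define it), the conversion is an ordinary integer truncation, and a flag bit that lives above the low byte would silently become false. The standalone sketch below uses invented flag names — it is not the ALTER TABLE code — and only demonstrates that hazard and why the explicit comparison is the unambiguous form.

#include <stdio.h>

/* Invented flag bit standing in for an int bitmask such as tab->rewrite;
 * the point is only that it is nonzero while its low byte is zero. */
#define REWRITE_FLAG_HI  0x0100

/* Old-style boolean: a plain char typedef, the way pre-C99 builds define bool. */
typedef char oldbool;

static void
takes_oldbool(oldbool rewrite)
{
    printf("  char-typedef bool sees: %s\n", rewrite ? "true" : "false");
}

static void
takes_c99_bool(_Bool rewrite)
{
    printf("  C99 _Bool sees:         %s\n", rewrite ? "true" : "false");
}

int
main(void)
{
    int         rewrite = REWRITE_FLAG_HI;

    printf("passing the int directly (value %#x):\n", (unsigned) rewrite);
    takes_oldbool((oldbool) rewrite);   /* same truncation the implicit conversion does: 0x100 -> 0 -> false */
    takes_c99_bool(rewrite);            /* _Bool collapses any nonzero value to true */

    printf("passing an explicit comparison (rewrite != 0):\n");
    takes_oldbool(rewrite != 0);
    takes_c99_bool(rewrite != 0);
    return 0;
}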
[ { "msg_contents": "Hi,\n\nI was grepping for iovec users and noticed that the shm_mq stuff\ndefines its own iovec struct. Is there any reason not to use the\nstandard one, now that we can? Will add to next commitfest.", "msg_date": "Mon, 15 Apr 2024 13:20:35 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "s/shm_mq_iovec/struct iovec/" }, { "msg_contents": "On 15/04/2024 04:20, Thomas Munro wrote:\n> Hi,\n> \n> I was grepping for iovec users and noticed that the shm_mq stuff\n> defines its own iovec struct. Is there any reason not to use the\n> standard one, now that we can? Will add to next commitfest.\n\nI think it's better to keep them separate. They serve a similar purpose, \nbut they belong to completely separate APIs; I think \"incidental \ndeduplication\" is the right term for that. shm_mq_iovec is only used by \nour shm queue implementation, while struct iovec is part of the POSIX \nAPI. We wouldn't want to leak IOV_MAX into how shm_mq_iovec is used, for \nexample. Or as a thought experiment, if our shm_mq implementation needed \nan extra flag in the struct or something, we would be free to just add \nit. But if it we reused struct iovec, then we couldn't, or we'd need to \ncreate a new struct again.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Thu, 4 Jul 2024 12:26:07 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: s/shm_mq_iovec/struct iovec/" }, { "msg_contents": "On Thu, Jul 4, 2024 at 9:26 PM Heikki Linnakangas <[email protected]> wrote:\n> On 15/04/2024 04:20, Thomas Munro wrote:\n> > I was grepping for iovec users and noticed that the shm_mq stuff\n> > defines its own iovec struct. Is there any reason not to use the\n> > standard one, now that we can? Will add to next commitfest.\n>\n> I think it's better to keep them separate. They serve a similar purpose,\n> but they belong to completely separate APIs; I think \"incidental\n> deduplication\" is the right term for that. shm_mq_iovec is only used by\n> our shm queue implementation, while struct iovec is part of the POSIX\n> API. We wouldn't want to leak IOV_MAX into how shm_mq_iovec is used, for\n> example. Or as a thought experiment, if our shm_mq implementation needed\n> an extra flag in the struct or something, we would be free to just add\n> it. But if it we reused struct iovec, then we couldn't, or we'd need to\n> create a new struct again.\n\nThanks for looking. I marked this \"returned with feedback\".\n\n(For future parallel query work, I have a scheme where queues can\nspill to disk instead of blocking, which unblocks a bunch of parallel\nexecution strategies that are currently impossible due to flow control\ndeadlock risk. Then the two APIs meet and it annoys me that they are\nnot the same, so maybe I'll talk about this again some day :-))\n\n\n", "msg_date": "Mon, 15 Jul 2024 15:03:06 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: s/shm_mq_iovec/struct iovec/" } ]
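As a concrete illustration of the "incidental deduplication" point in the thread above: struct iovec is the POSIX scatter/gather descriptor from <sys/uio.h>, while shm_mq_iovec is the shared-memory queue API's own pointer/length pair. The sketch below uses a local stand-in for the PostgreSQL type (so it stays self-contained) and simply translates one into the other for a writev() call — the information carried is the same, but keeping the types separate means queue-level rules never get coupled to POSIX details such as IOV_MAX, which is the argument made above.

#include <stdio.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

/* Local stand-in mirroring the shape of PostgreSQL's shm_mq_iovec
 * (a const char *data plus a length); defined here so the example is
 * self-contained rather than including the real header. */
typedef struct
{
    const char *data;
    size_t      len;
} my_mq_iovec;

int
main(void)
{
    my_mq_iovec  msg[2] = {
        {"header:", strlen("header:")},
        {" payload\n", strlen(" payload\n")}
    };
    struct iovec sys[2];

    /*
     * Same information, different API: translating is trivial, but keeping
     * the two types separate means queue-level rules (message framing, no
     * IOV_MAX limit, room for extra fields later) never leak into or out of
     * the POSIX definition.
     */
    for (int i = 0; i < 2; i++)
    {
        sys[i].iov_base = (void *) msg[i].data;
        sys[i].iov_len = msg[i].len;
    }

    if (writev(STDOUT_FILENO, sys, 2) < 0)
        perror("writev");
    return 0;
}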
[ { "msg_contents": "Hello, hackers.\n\nWhen the checkpointer process is busy, even if we reset synchronous_standby_names, the resumption of the backend processes waiting in SyncRep are made to wait until the checkpoint is completed.\nThis prevents the prompt resumption of application processing when a problem occurs on the standby server in a synchronous replication system.\nI confirmed this in PostgreSQL 12.18.\n\nThis issue has actually become a major problem for our customer. \nWhen a problem occurred in the replication network, even after resetting synchronous_standby_names, the backend processes did not respond, resulting in timeout errors in many client applications. \nThe customer has also set the checkpoint_completion_target parameter to 0.9, and it seems to have been working fine under normal conditions.\nHowever, there was a time when VACUUM was concentrated on a huge table. At that time, more than five times the max_wal_size of WAL output occurred during checkpoint processing. \nUnfortunately, communication with the synchronous standby was lost during that checkpoint processing, and despite resetting the synchronous_standby_names, multiple client applications could not return a response while waiting for SyncRep.\n\n\nI wrote a script(reset-synchronous_standby_names-during-checkpoint.sh) to illustrate the issue. \nThe script stops the synchronous standby during a transaction, and then resets synchronous_standby_names during checkpoint.\nWhen I run this on my 1-core RHEL7 machine, I see that COMMIT does wait until the CHECKPOINT finishes, even though synchronous_standby_names has been reset.\n\nI am attaching a patch (REL_12_STABLE) for the simplest seeming solution. \nThis moves the handling of SIGHUP reception by the checkpointer outside of the sleep process. \nHowever, I am concerned that this change could affect the performance of checkpoint execution when there is a delay in the checkpoint schedule.\nCan PostgreSQL tolerate this overhead?\n\nRegards, \nYusuke Egashira.", "msg_date": "Mon, 15 Apr 2024 01:52:33 +0000", "msg_from": "\"Yusuke Egashira (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Resetting synchronous_standby_names can wait for CHECKPOINT to finish" }, { "msg_contents": "Hello,\n\n> When the checkpointer process is busy, even if we reset synchronous_standby_names, the resumption of the backend processes waiting in SyncRep are made to wait until the checkpoint is completed.\n> This prevents the prompt resumption of application processing when a problem occurs on the standby server in a synchronous replication system.\n> I confirmed this in PostgreSQL 12.18.\n\nI have tested this issue on Postgres built from the master branch (17devel) and observed the same behavior where the backend SyncRep release is blocked until CHECKPOINT completion.\n\nIn situations where a synchronous standby instance encounters an error and needs to be detached, I believe that the current behavior of waiting for SyncRep is inappropriate as it delays the backend.\nI don't think changing the position of SIGHUP processing in the Checkpointer process carries much risk. Is there any oversight in my perception?\n\n\nRegards, \nYusuke Egashira.\n\n\n\n", "msg_date": "Tue, 14 May 2024 00:12:42 +0000", "msg_from": "\"Yusuke Egashira (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Resetting synchronous_standby_names can wait for CHECKPOINT to\n finish" } ]
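The essence of the report and patch above is a control-flow change: check for a pending SIGHUP/reload inside the checkpoint's write-and-delay loop, not only while the checkpointer is idle, so that resetting synchronous_standby_names can release backends waiting in SyncRep while a long checkpoint is still in progress. The loop below is not the checkpointer code or the submitted patch — every name in it is invented — just a self-contained simulation of where the reload check has to sit for waiters to be released promptly rather than only after the checkpoint finishes.

#include <stdbool.h>
#include <stdio.h>

/* Invented stand-ins: a pending-reload flag and the action that, in the real
 * server, would re-read synchronous_standby_names and let backends waiting
 * in SyncRep continue. */
static bool reload_pending = false;

static void
process_reload(void)
{
    reload_pending = false;
    printf("  reload processed: sync standby config updated, waiters can resume\n");
}

static void
write_one_buffer(int i)
{
    printf("  wrote buffer %d\n", i);
    if (i == 2)
        reload_pending = true;   /* simulate SIGHUP arriving mid-checkpoint */
}

int
main(void)
{
    printf("reload handled only outside the write loop (current behaviour):\n");
    for (int i = 0; i < 5; i++)
        write_one_buffer(i);
    if (reload_pending)
        process_reload();        /* waiters stay blocked until the checkpoint ends */

    reload_pending = false;

    printf("reload handled inside the write/delay loop (proposed behaviour):\n");
    for (int i = 0; i < 5; i++)
    {
        write_one_buffer(i);
        if (reload_pending)
            process_reload();    /* waiters released as soon as the reload is seen */
    }
    return 0;
}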
[ { "msg_contents": "Hackers,\n\nSince incremental backup is using INCREMENTAL as a keyword (rather than \nstoring incremental info in the manifest) it is vulnerable to any file \nin PGDATA with the pattern INCREMENTAL.*.\n\nFor example:\n\n$ pg_basebackup -c fast -D test/backup/full -F plain\n$ touch test/data/INCREMENTAL.CONFIG\n$ /home/dev/test/pg/bin/pg_basebackup -c fast -D test/backup/incr1 -F \nplain -i /home/dev/test/backup/full/backup_manifest\n\n$ /home/dev/test/pg/bin/pg_combinebackup test/backup/full \ntest/backup/incr1 -o test/backup/combine\npg_combinebackup: error: could not read file \n\"test/backup/incr1/INCREMENTAL.CONFIG\": read only 0 of 4 bytes\npg_combinebackup: removing output directory \"test/backup/combine\"\n\nThis works anywhere in PGDATA as far as I can see, e.g.\n\n$ touch test/data/base/1/INCREMENTAL.1\n\nOr just by dropping a new file into the incremental backup:\n\n$ touch test/backup/incr1/INCREMENTAL.x\n$ /home/dev/test/pg/bin/pg_combinebackup test/backup/full \ntest/backup/incr1 -o test/backup/combine\npg_combinebackup: error: could not read file \n\"test/backup/incr1/INCREMENTAL.x\": read only 0 of 4 bytes\npg_combinebackup: removing output directory \"test/backup/combine\"\n\nWe could fix the issue by forbidding this file pattern in PGDATA, i.e. \nerror when it is detected during pg_basebackup, but in my view it would \nbe better (at least eventually) to add incremental info to the manifest. \nThat would also allow us to skip storing zero-length files and \nincremental stubs (with no changes) as files.\n\nRegards,\n-David\n\n\n", "msg_date": "Mon, 15 Apr 2024 13:53:51 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "pg_combinebackup fails on file named INCREMENTAL.*" }, { "msg_contents": "On Sun, Apr 14, 2024 at 11:53 PM David Steele <[email protected]> wrote:\n> Since incremental backup is using INCREMENTAL as a keyword (rather than\n> storing incremental info in the manifest) it is vulnerable to any file\n> in PGDATA with the pattern INCREMENTAL.*.\n\nYeah, that's true. I'm not greatly concerned about this, because I\nthink anyone who creates a file called INCREMENTAL.CONFIG is just\nbeing perverse. However, we could ignore those files just in subtrees\nwhere they're not expected to occur i.e. only pay attention to them\nunder base, global, and pg_tblspc. I didn't do this because I thought\nsomeone might eventually want to do something like incremental backup\nof SLRU files, and then it would be annoying if there were a bunch of\nrandom pathname restrictions. But if we're really concerned about\nthis, I think it would be a reasonable fix; you're not really supposed\nto decide to store your configuration files in directories that exist\nfor the purpose of storing relation data files.\n\n> We could fix the issue by forbidding this file pattern in PGDATA, i.e.\n> error when it is detected during pg_basebackup, but in my view it would\n> be better (at least eventually) to add incremental info to the manifest.\n> That would also allow us to skip storing zero-length files and\n> incremental stubs (with no changes) as files.\n\nI did think about this, and I do like the idea of being able to elide\nincremental stubs. 
If you have a tremendous number of relations and\nvery few of them have changed at all, the incremental stubs might take\nup a significant percentage of the space consumed by the incremental\nbackup, which kind of sucks, even if the incremental backup is still\nquite small compared to the original database. And, like you, I had\nthe idea of trying to use the backup_manifest to do it.\n\nBut ... I didn't really end up feeling very comfortable with it. Right\nnow, the backup manifest is something we only use to verify the\nintegrity of the backup. If we were to do this, it would become a\ncritical part of the backup. I don't think I like the idea that\nremoving the backup_manifest should be allowed to, in effect, corrupt\nthe backup. But I think I dislike even more the idea that the data\nthat is used to verify the backup gets mushed together with the backup\ndata itself. Maybe in practice it's fine, but it doesn't seem very\nconceptually clean.\n\nThere are also some practical considerations that made me not want to\ngo in this direction: we'd need to make sure that the relevant\ninformation from the backup manifest was available to the\nreconstruction process at the right part of the code. When iterating\nover a directory, it would need to consider all of the files actually\npresent in that directory plus any \"hallucinated\" incremental stubs\nfrom the manifest. I didn't feel confident of my ability to implement\nthat in the available time without messing anything up.\n\nI think we should consider other possible designs here. For example,\ninstead of including this in the manifest, we could ship one\nINCREMENTAL.STUBS file per directory that contains a list of all of\nthe incremental stubs that should be created in that directory. My\nguess is that something like this will turn out to be way simpler to\ncode.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Apr 2024 16:33:24 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup fails on file named INCREMENTAL.*" }, { "msg_contents": "On 4/16/24 06:33, Robert Haas wrote:\n> On Sun, Apr 14, 2024 at 11:53 PM David Steele <[email protected]> wrote:\n>> Since incremental backup is using INCREMENTAL as a keyword (rather than\n>> storing incremental info in the manifest) it is vulnerable to any file\n>> in PGDATA with the pattern INCREMENTAL.*.\n> \n> Yeah, that's true. I'm not greatly concerned about this, because I\n> think anyone who creates a file called INCREMENTAL.CONFIG is just\n> being perverse. \n\nWell, it's INCREMENTAL.* and you might be surprised (though I doubt it) \nat what users will do. We've been caught out by wacky file names (and \ndatabase names) a few times.\n\n> However, we could ignore those files just in subtrees\n> where they're not expected to occur i.e. only pay attention to them\n> under base, global, and pg_tblspc. I didn't do this because I thought\n> someone might eventually want to do something like incremental backup\n> of SLRU files, and then it would be annoying if there were a bunch of\n> random pathname restrictions. 
But if we're really concerned about\n> this, I think it would be a reasonable fix; you're not really supposed\n> to decide to store your configuration files in directories that exist\n> for the purpose of storing relation data files.\n\nI think it would be reasonable to restrict what can be put in base/ and \nglobal/ but users generally feel free to create whatever they want in \nthe root of PGDATA, despite being strongly encouraged not to.\n\nAnyway, I think it should be fixed or documented as a caveat since it \ncauses a hard failure on restore.\n\n>> We could fix the issue by forbidding this file pattern in PGDATA, i.e.\n>> error when it is detected during pg_basebackup, but in my view it would\n>> be better (at least eventually) to add incremental info to the manifest.\n>> That would also allow us to skip storing zero-length files and\n>> incremental stubs (with no changes) as files.\n> \n> I did think about this, and I do like the idea of being able to elide\n> incremental stubs. If you have a tremendous number of relations and\n> very few of them have changed at all, the incremental stubs might take\n> up a significant percentage of the space consumed by the incremental\n> backup, which kind of sucks, even if the incremental backup is still\n> quite small compared to the original database. And, like you, I had\n> the idea of trying to use the backup_manifest to do it.\n> \n> But ... I didn't really end up feeling very comfortable with it. Right\n> now, the backup manifest is something we only use to verify the\n> integrity of the backup. If we were to do this, it would become a\n> critical part of the backup. \n\nFor my 2c the manifest is absolutely a critical part of the backup. I'm \nhaving a hard time even seeing why we have the --no-manifest option for \npg_combinebackup at all. If the manifest is missing who knows what else \nmight be missing as well? If present, why wouldn't we use it?\n\nI know Tomas added some optimizations that work best with --no-manifest \nbut if we can eventually read compressed tars (which I expect to be the \ngeneral case) then those optimizations are not very useful.\n\n> I don't think I like the idea that\n> removing the backup_manifest should be allowed to, in effect, corrupt\n> the backup. But I think I dislike even more the idea that the data\n> that is used to verify the backup gets mushed together with the backup\n> data itself. Maybe in practice it's fine, but it doesn't seem very\n> conceptually clean.\n\nI don't think this is a problem. The manifest can do more than store \nverification info, IMO.\n\n> There are also some practical considerations that made me not want to\n> go in this direction: we'd need to make sure that the relevant\n> information from the backup manifest was available to the\n> reconstruction process at the right part of the code. When iterating\n> over a directory, it would need to consider all of the files actually\n> present in that directory plus any \"hallucinated\" incremental stubs\n> from the manifest. I didn't feel confident of my ability to implement\n> that in the available time without messing anything up.\n\nI think it would be better to iterate over the manifest than files in a \ndirectory. In any case we still need to fix [1], which presents more or \nless the same problem.\n\n> I think we should consider other possible designs here. 
For example,\n> instead of including this in the manifest, we could ship one\n> INCREMENTAL.STUBS file per directory that contains a list of all of\n> the incremental stubs that should be created in that directory. My\n> guess is that something like this will turn out to be way simpler to\n> code.\n\nSo we'd store a mini manifest per directory rather than just put the \ninfo in the main manifest? Sounds like unnecessary complexity to me.\n\nRegards,\n-David\n\n[1] \nhttps://www.postgresql.org/message-id/flat/9badd24d-5bd9-4c35-ba85-4c38a2feb73e%40pgmasters.net\n\n\n", "msg_date": "Tue, 16 Apr 2024 12:12:10 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_combinebackup fails on file named INCREMENTAL.*" }, { "msg_contents": "Hi,\n\n> I think it would be reasonable to restrict what can be put in base/ and\n> global/ but users generally feel free to create whatever they want in\n> the root of PGDATA, despite being strongly encouraged not to.\n> \n> Anyway, I think it should be fixed or documented as a caveat since it\n> causes a hard failure on restore.\n\n+1. IMHO, no matter how you'd further decide to reference the incremental stubs, the earlier we can mention the failure to the user, the better.\nTbh, I'm not very confortable seeing pg_combinebackup fail when the user would need to restore (whatever the reason). I mean, failing to take the backup is less of a problem (compared to failing to restore) because then the user might have the time to investigate and fix the issue, not being in the hurry of a down production to restore...\n\n> > But ... I didn't really end up feeling very comfortable with it. Right\n> > now, the backup manifest is something we only use to verify the\n> > integrity of the backup. If we were to do this, it would become a\n> > critical part of the backup.\n\nIsn't it already the case? I mean, you need the manifest of the previous backup to take an incremental one, right?\nAnd shouldn't we encourage to verify the backup sets before (at least) trying to combine them?\nIt's not because a file was only use for one specific purpose until now that we can't improve it later.\nSplitting the meaningful information across multiple places would be more error-prone (for both devs and users) imo.\n\n> > I don't think I like the idea that\n> > removing the backup_manifest should be allowed to, in effect, corrupt\n> > the backup. But I think I dislike even more the idea that the data\n> > that is used to verify the backup gets mushed together with the backup\n> > data itself. Maybe in practice it's fine, but it doesn't seem very\n> > conceptually clean.\n\nRemoving pretty much any file in the backup set would probably corrupt it. I mean, why would people remove the backup manifest when they already can't remove backup_label?\nA doc stating \"dont touch anything\" is IMHO easier than \"this part is critical, not that one\".\n\n> I don't think this is a problem. The manifest can do more than store\n> verification info, IMO.\n\n+1. Adding more info to the backup manifest can be handy for other use-cases too (i.e. 
like avoiding to store empty files, or storing the checksums state of the cluster).\n\nKind Regards,\n--\nStefan FERCOT\nData Egret (https://dataegret.com)\n\n\n", "msg_date": "Tue, 16 Apr 2024 07:09:55 +0000", "msg_from": "Stefan Fercot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup fails on file named INCREMENTAL.*" }, { "msg_contents": "On Tue, Apr 16, 2024 at 3:10 AM Stefan Fercot\n<[email protected]> wrote:\n> > > But ... I didn't really end up feeling very comfortable with it. Right\n> > > now, the backup manifest is something we only use to verify the\n> > > integrity of the backup. If we were to do this, it would become a\n> > > critical part of the backup.\n>\n> Isn't it already the case? I mean, you need the manifest of the previous backup to take an incremental one, right?\n> And shouldn't we encourage to verify the backup sets before (at least) trying to combine them?\n> It's not because a file was only use for one specific purpose until now that we can't improve it later.\n> Splitting the meaningful information across multiple places would be more error-prone (for both devs and users) imo.\n\nWell, right now, if you just take a full backup, and you throw away\nthe backup manifest because you don't care, you have a working full\nbackup. Furthermore, if you took any incremental backups based on that\nfull backup before discarding the manifest, you can still restore\nthem. Now, it is possible that nobody in the world cares about those\nproperties other than me; I have been known to (a) care about weird\nstuff and (b) be a pedant. However, we've already shipped a couple of\nreleases where backup manifests were thoroughly non-critical: you\nneeded them to run pg_verifybackup, and for nothing else. I think it's\nquite likely that there are users out there who are used to things\nworking in that way, and I'm not sure that those users will adjust\ntheir expectations when a new feature comes out. I also feel that if I\nwere a user, I would think of something called a \"manifest\" as just a\ntable of contents for whatever the thing was. I still remember\ndownloading tar files from the Internet in the 1990s and there'd be a\nfile in the tarball sometimes called MANIFEST which was, you know, a\nlist of what was in the tarball. You didn't need that file for\nanything functional; it was just so you could check if anything was\nmissing.\n\nWhat I fear is that this will turn into another situation like we had\nwith pg_xlog, where people saw \"log\" in the name and just blew it\naway. Matter of fact, I recently encountered one of my few recent\nexamples of someone doing that thing since the pg_wal renaming\nhappened. Some users don't take much convincing to remove anything\nthat looks inessential. And what I'm particularly worried about with\nthis feature is tar-format backups. If you have a directory format\nbackup and you do an \"ls\", you're going to see a whole bunch of files\nin there of which backup_manifest will be one. How you treat that file\nis just going to depend on what you know about its purpose. But if you\nhave a tar-format backup, possibly compressed, the backup_manifest\nfile stands out a lot more. You may have something like this:\n\nbackup_manifest root.tar.gz 16384.tar.gz\n\nWell, at this point, it becomes much more likely that you're going to\nthink that there are special rules for the backup_manifest file.\n\nThe kicker for me is that I can't see any reason to do any of this\nstuff. 
Including the information that we need to elide incremental\nstubs in some other way, say with one stub-list per directory, will be\neasier to implement and probably perform better. Like, I'm not saying\nwe can't find a way to jam this into the manifest. But I'm fairly sure\nit's just making life difficult for ourselves.\n\nI may ultimately lose this argument, as I did the one about whether\nthe backup_manifest should be JSON or some bespoke format. And that's\nfine. I respect your opinion, and David's. But I also reserve the\nright to feel differently, and I do. And I would also just gently\npoint out that my level of motivation to work on a particular feature\ncan depend quite a bit on whether I'm going to be forced to implement\nit in a way that I disagree with.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Apr 2024 09:22:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup fails on file named INCREMENTAL.*" }, { "msg_contents": "On Mon, Apr 15, 2024 at 10:12 PM David Steele <[email protected]> wrote:\n> Anyway, I think it should be fixed or documented as a caveat since it\n> causes a hard failure on restore.\n\nAlright, I'll look into this.\n\n> I know Tomas added some optimizations that work best with --no-manifest\n> but if we can eventually read compressed tars (which I expect to be the\n> general case) then those optimizations are not very useful.\n\nMy belief is that those optimizations work fine with or without\nmanifests; you only start to lose the benefit in cases where you use\ndifferent checksum types for different backups that you then try to\ncombine. Which should hopefully be a rare case.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Apr 2024 09:25:47 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup fails on file named INCREMENTAL.*" }, { "msg_contents": "On Tuesday, April 16th, 2024 at 3:22 PM, Robert Haas wrote:\n> What I fear is that this will turn into another situation like we had\n> with pg_xlog, where people saw \"log\" in the name and just blew it\n> away. Matter of fact, I recently encountered one of my few recent\n> examples of someone doing that thing since the pg_wal renaming\n> happened. Some users don't take much convincing to remove anything\n> that looks inessential. And what I'm particularly worried about with\n> this feature is tar-format backups. If you have a directory format\n> backup and you do an \"ls\", you're going to see a whole bunch of files\n> in there of which backup_manifest will be one. How you treat that file\n> is just going to depend on what you know about its purpose. But if you\n> have a tar-format backup, possibly compressed, the backup_manifest\n> file stands out a lot more. You may have something like this:\n> \n> backup_manifest root.tar.gz 16384.tar.gz\n\nSure, I can see your point here and how people could be tempted to through away that backup_manifest if they don't know how important it is to keep it.\nProbably in this case we'd need the list to be inside the tar, just like backup_label and tablespace_map then.\n\n> The kicker for me is that I can't see any reason to do any of this\n> stuff. Including the information that we need to elide incremental\n> stubs in some other way, say with one stub-list per directory, will be\n> easier to implement and probably perform better. 
Like, I'm not saying\n> we can't find a way to jam this into the manifest. But I'm fairly sure\n> it's just making life difficult for ourselves.\n> \n> I may ultimately lose this argument, as I did the one about whether\n> the backup_manifest should be JSON or some bespoke format. And that's\n> fine. I respect your opinion, and David's. But I also reserve the\n> right to feel differently, and I do.\n\nDo you mean 1 stub-list per pgdata + 1 per tablespaces?\n\nSure, it is important to respect and value each other feelings, I never said otherwise.\n\nI don't really see how it would be faster to recursively go through each sub-directories of the pgdata and tablespaces to gather all the pieces together compared to reading 1 main file.\nBut I guess, choosing one option or the other, we will only find out how well it works once people will use it on the field and possibly give some feedback.\n\nAs you mentioned in [1], we're not going to start rewriting the implementation a week after feature freeze nor probably already start building big new things now anyway.\nSo maybe let's start with documenting the possible gotchas/corner cases to make our support life easier in the future.\n\nKind Regards,\n--\nStefan FERCOT\nData Egret (https://dataegret.com)\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoaVxr_o3mrDBrBcXm3gowr9Qc4ABW-c73NR_201KkDavw%40mail.gmail.com\n\n\n", "msg_date": "Tue, 16 Apr 2024 16:06:06 +0000", "msg_from": "Stefan Fercot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup fails on file named INCREMENTAL.*" }, { "msg_contents": "On Tue, Apr 16, 2024 at 12:06 PM Stefan Fercot\n<[email protected]> wrote:\n> Sure, I can see your point here and how people could be tempted to through away that backup_manifest if they don't know how important it is to keep it.\n> Probably in this case we'd need the list to be inside the tar, just like backup_label and tablespace_map then.\n\nYeah, I think anywhere inside the tar is better than anywhere outside\nthe tar, by a mile. I'm happy to leave the specific question of where\ninside the tar as something TBD at time of implementation by fiat of\nthe person doing the work.\n\nBut that said ...\n\n> Do you mean 1 stub-list per pgdata + 1 per tablespaces?\n>\n> I don't really see how it would be faster to recursively go through each sub-directories of the pgdata and tablespaces to gather all the pieces together compared to reading 1 main file.\n> But I guess, choosing one option or the other, we will only find out how well it works once people will use it on the field and possibly give some feedback.\n\nThe reason why I was suggesting one stub-list per directory is that we\nrecurse over the directory tree. We reach each directory in turn,\nprocess it, and then move on to the next one. What I imagine that we\nwant to do is - first iterate over all of the files actually present\nin a directory. Then, iterate over the list of stubs for that\ndirectory and do whatever we would have done if there had been a stub\nfile present for each of those. So, we don't really want a list of\nevery stub in the whole backup, or even every stub in the whole\ntablespace. What we want is to be able to easily get a list of stubs\nfor a single directory. Which is very easily done if each directory\ncontains its own stub-list file.\n\nIf we instead have a centralized stub-list for the whole tablespace,\nor the whole backup, it's still quite possible to make it work. 
We\njust read that centralized stub list and we build an in-memory data\nstructure that is indexed by containing directory, like a hash table\nwhere the key is the directory name and the value is a list of\nfilenames within that directory. But, a slight disadvantage of this\nmodel is that you have to keep that whole data structure in memory for\nthe whole time you're reconstructing, and you have to pass around a\npointer to it everywhere so that the code that handles individual\ndirectories can access it. I'm sure this isn't the end of the world.\nIt's probably unlikely that someone has so many stub files that the\nmemory used for such a data structure is painfully high, and even if\nthey did, it's unlikely that they are spread out across multiple\ndatabases and/or tablespaces in such a way that only needing the data\nfor one directory at a time would save you. But, it's not impossible\nthat such a scenario could exist.\n\nSomebody might say - well, don't go directory by directory. Just\nhandle all of the stubs at the end. But I don't think that really\nfixes anything. I want to be able to verify that none of the stubs\nlisted in the stub-list are also present in the backup as real files,\nfor sanity checking purposes. It's quite easy to see how to do that in\nthe design I proposed above: keep a list of the files for each\ndirectory as you read it, and then when you read the stub-list for\nthat directory, check those lists against each other for duplicates.\nDoing this on the level of a whole tablespace or the whole backup is\nclearly also possible, but once again it potentially uses more memory,\nand there's no functional gain.\n\nPlus, this kind of approach would also make the reconstruction process\n\"jump around\" more. It might pull a bunch of mostly-unchanged files\nfrom the full backup while handling the non-stub files, and then come\nback to that directory a second time, much later, when it's processing\nthe stub-list. Perhaps that would lead to a less-optimal I/O pattern,\nor perhaps it would make it harder for the user to understand how much\nprogress reconstruction had made. Or perhaps it would make no\ndifference at all; I don't know. Maybe there's even some advantage in\na two-pass approach like this. I don't see one. But it might prove\notherwise on closer examination.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Apr 2024 12:49:40 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup fails on file named INCREMENTAL.*" }, { "msg_contents": "On Tue, Apr 16, 2024 at 9:25 AM Robert Haas <[email protected]> wrote:\n> On Mon, Apr 15, 2024 at 10:12 PM David Steele <[email protected]> wrote:\n> > Anyway, I think it should be fixed or documented as a caveat since it\n> > causes a hard failure on restore.\n>\n> Alright, I'll look into this.\n\nHere's a patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 17 Apr 2024 10:14:28 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup fails on file named INCREMENTAL.*" }, { "msg_contents": "On 4/18/24 00:14, Robert Haas wrote:\n> On Tue, Apr 16, 2024 at 9:25 AM Robert Haas <[email protected]> wrote:\n>> On Mon, Apr 15, 2024 at 10:12 PM David Steele <[email protected]> wrote:\n>>> Anyway, I think it should be fixed or documented as a caveat since it\n>>> causes a hard failure on restore.\n>>\n>> Alright, I'll look into this.\n> \n> Here's a patch.\n\nThanks! 
I've tested this and it works as advertised.\n\nIdeally I'd want an error on backup if there is a similar file in any \ndata directories that would cause an error on combine, but I admit that \nit is vanishingly rare for users to put files anywhere but the root of \nPGDATA, so this approach seems OK to me.\n\nRegards,\n-David\n\n\n", "msg_date": "Thu, 18 Apr 2024 09:56:02 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_combinebackup fails on file named INCREMENTAL.*" }, { "msg_contents": "On Wed, Apr 17, 2024 at 7:56 PM David Steele <[email protected]> wrote:\n> Thanks! I've tested this and it works as advertised.\n>\n> Ideally I'd want an error on backup if there is a similar file in any\n> data directories that would cause an error on combine, but I admit that\n> it is vanishingly rare for users to put files anywhere but the root of\n> PGDATA, so this approach seems OK to me.\n\nOK, committed. Thanks for the review.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 18 Apr 2024 11:10:08 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_combinebackup fails on file named INCREMENTAL.*" }, { "msg_contents": "On 4/19/24 01:10, Robert Haas wrote:\n> On Wed, Apr 17, 2024 at 7:56 PM David Steele <[email protected]> wrote:\n>> Thanks! I've tested this and it works as advertised.\n>>\n>> Ideally I'd want an error on backup if there is a similar file in any\n>> data directories that would cause an error on combine, but I admit that\n>> it is vanishingly rare for users to put files anywhere but the root of\n>> PGDATA, so this approach seems OK to me.\n> \n> OK, committed. Thanks for the review.\n\nExcellent, thank you!\n\nRegards,\n-David\n\n\n", "msg_date": "Fri, 19 Apr 2024 09:22:42 +1000", "msg_from": "David Steele <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_combinebackup fails on file named INCREMENTAL.*" } ]
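To make the per-directory design discussed above a bit more concrete: the idea is that reconstruction visits one directory at a time, first handling the files physically present, then reading that directory's stub list and treating each listed name as an unchanged file, while sanity-checking that no listed stub also exists as a real file. The program below is a standalone sketch of that pass over hard-coded sample names — it is not pg_combinebackup, and the per-directory INCREMENTAL.STUBS list it assumes is only the hypothetical format floated in the thread.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAXN 8

/* One directory's worth of sample data: names physically present in the
 * incremental backup, and names listed in a hypothetical per-directory
 * stub list (the INCREMENTAL.STUBS idea from the thread). */
static const char *present[MAXN] = {"16385", "16388_fsm", "INCREMENTAL.16390"};
static const char *stublist[MAXN] = {"16391", "16392", "16385"};  /* last entry is deliberately bogus */

static bool
in_present(const char *name)
{
    for (int i = 0; i < MAXN && present[i] != NULL; i++)
        if (strcmp(present[i], name) == 0)
            return true;
    return false;
}

int
main(void)
{
    /* Pass 1: handle the files that are actually on disk in this directory. */
    for (int i = 0; i < MAXN && present[i] != NULL; i++)
        printf("processing real file: %s\n", present[i]);

    /* Pass 2: handle the stubs listed for this directory, checking that a
     * listed stub does not also exist as a real file. */
    for (int i = 0; i < MAXN && stublist[i] != NULL; i++)
    {
        if (in_present(stublist[i]))
            fprintf(stderr, "error: \"%s\" is both a stub and a real file\n",
                    stublist[i]);
        else
            printf("treating as unchanged (stub): %s\n", stublist[i]);
    }
    return 0;
}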
[ { "msg_contents": "hi.\n\npg_log_backend_memory_contexts\nwe have\n`\nif (proc == NULL)\n{\n/*\n* This is just a warning so a loop-through-resultset will not abort\n* if one backend terminated on its own during the run.\n*/\nereport(WARNING,\n(errmsg(\"PID %d is not a PostgreSQL server process\", pid)));\nPG_RETURN_BOOL(false);\n}\n`\n\n\npg_signal_backend\n`\nif (proc == NULL)\n{\n/*\n* This is just a warning so a loop-through-resultset will not abort\n* if one backend terminated on its own during the run.\n*/\nereport(WARNING,\n(errmsg(\"PID %d is not a PostgreSQL backend process\", pid)));\n\nreturn SIGNAL_BACKEND_ERROR;\n}\n`\n\n\"is not a PostgreSQL server process\" is the same thing as \"not a\nPostgreSQL backend process\"?\nshould we unify it?\n\n\n", "msg_date": "Mon, 15 Apr 2024 17:01:18 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "\"backend process\" confused with \"server process\"" }, { "msg_contents": "\n\n> On 15 Apr 2024, at 14:01, jian he <[email protected]> wrote:\n> \n> \"is not a PostgreSQL server process\" is the same thing as \"not a\n> PostgreSQL backend process”?\n\nAs far as I understand, backend is something attached to frontend.\nThere might be infrastructure processes like syslogger or stats collector, client can signal it too.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Mon, 15 Apr 2024 14:07:27 +0500", "msg_from": "\"Andrey M. Borodin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"backend process\" confused with \"server process\"" }, { "msg_contents": "> On 15 Apr 2024, at 11:07, Andrey M. Borodin <[email protected]> wrote:\n>> On 15 Apr 2024, at 14:01, jian he <[email protected]> wrote:\n\n>> \"is not a PostgreSQL server process\" is the same thing as \"not a\n>> PostgreSQL backend process”?\n> \n> As far as I understand, backend is something attached to frontend.\n> There might be infrastructure processes like syslogger or stats collector, client can signal it too.\n\nI think that's a good summary, in line with the glossary in our docs where we\ndefine \"Auxiliary process\" and \"Backend (process)\" (but not PostgreSQL server\nprocess which maybe we should?). \"PostgreSQL server process\" is used in copy\nto/from error messaging as well in what seems to be a user-freindly way to say\n\"backend\". In the docs we mostly use \"server process\" to mean a process\nrunning in the operating system on the server. In a few places we use\n\"database server process\" too when talking about backends.\n\nIn the case of pg_log_backend_memory_contexts(), the function comment states\nthe following:\n\n\t\"Signal a backend or an auxiliary process to log its memory contexts\"\n\nSo in this case, \"PostgreSQL server process\" seems like the correct term.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 15 Apr 2024 11:27:36 +0200", "msg_from": "Daniel Gustafsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"backend process\" confused with \"server process\"" }, { "msg_contents": "On Mon, Apr 15, 2024 at 5:27 PM Daniel Gustafsson <[email protected]> wrote:\n>\n> > On 15 Apr 2024, at 11:07, Andrey M. 
Borodin <[email protected]> wrote:\n> >> On 15 Apr 2024, at 14:01, jian he <[email protected]> wrote:\n>\n> >> \"is not a PostgreSQL server process\" is the same thing as \"not a\n> >> PostgreSQL backend process”?\n> >\n> > As far as I understand, backend is something attached to frontend.\n> > There might be infrastructure processes like syslogger or stats collector, client can signal it too.\n>\n> I think that's a good summary, in line with the glossary in our docs where we\n> define \"Auxiliary process\" and \"Backend (process)\" (but not PostgreSQL server\n> process which maybe we should?). \"PostgreSQL server process\" is used in copy\n> to/from error messaging as well in what seems to be a user-freindly way to say\n> \"backend\". In the docs we mostly use \"server process\" to mean a process\n> running in the operating system on the server. In a few places we use\n> \"database server process\" too when talking about backends.\n>\n> In the case of pg_log_backend_memory_contexts(), the function comment states\n> the following:\n>\n> \"Signal a backend or an auxiliary process to log its memory contexts\"\n>\n> So in this case, \"PostgreSQL server process\" seems like the correct term.\n>\n\nin [1] (pg_stat_activity)\ndoes \"PostgreSQL server process\" means all backend_type:\nautovacuum launcher, autovacuum worker, logical replication launcher,\nlogical replication worker, parallel worker, background writer, client\nbackend, checkpointer, archiver, standalone backend, startup,\nwalreceiver, walsender and walwriter.\n?\n\nand\n`PostgreSQL backend process`\nmeans only `client backend`?\n\n\nI aslo found an minor inconsistency.\n`Autovacuum (process)`\nis not the same thing as\n`autovacuum launcher`\n?\n\nbut there is a link from `autovacuum launcher` to `Autovacuum (process)`.\n\n\n[1] https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW\n[2] https://www.postgresql.org/docs/current/glossary.html\n\n\n", "msg_date": "Tue, 16 Apr 2024 11:42:17 +0800", "msg_from": "jian he <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"backend process\" confused with \"server process\"" } ]
[ { "msg_contents": "\nHi,\n\nI'm unable to build the latest snapshot on my Fedora 39 box. I think\nthis problem appeared before the weekend (not sure, though). This is\nlibxml 2.10.4:\n\n===============================================================\n'/usr/bin/perl' ../../../src/backend/utils/activity/generate-wait_event_types.pl --docs ../../../src/backend/utils/activity/wait_event_names.txt\n/usr/bin/xmllint --nonet --path . --path . --output postgres-full.xml --noent --valid postgres.sgml\nI/O error : Attempt to load network entity http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\npostgres.sgml:21: warning: failed to load external entity \"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\"\n]>\n ^\npostgres.sgml:23: element book: validity error : No declaration for attribute id of element book\n<book id=\"postgres\">\n ^\npostgres.sgml:24: element title: validity error : No declaration for element title\n <title>PostgreSQL &version; Documentation</title>\n ^\npostgres.sgml:27: element corpauthor: validity error : No declaration for element corpauthor\n <corpauthor>The PostgreSQL Global Development Group</corpauthor>\n ^\npostgres.sgml:28: element productname: validity error : No declaration for element productname\n <productname>PostgreSQL</productname>\n ^\npostgres.sgml:29: element productnumber: validity error : No declaration for element productnumber\n <productnumber>&version;</productnumber>\n ^\npostgres.sgml:3: element date: validity error : No declaration for element date\nlegal.sgml:6: parser error : Entity 'ndash' not defined\n <year>1996&ndash;2024</year>\n ^\nlegal.sgml:14: parser error : Entity 'copy' not defined\n <productname>PostgreSQL</productname> is Copyright &copy; 1996&ndash;2024\n ^\nlegal.sgml:14: parser error : Entity 'ndash' not defined\n <productname>PostgreSQL</productname> is Copyright &copy; 1996&ndash;2024\n ^\nlegal.sgml:19: parser error : Entity 'copy' not defined\n <productname>Postgres95</productname> is Copyright &copy; 1994&ndash;5\n ^\nlegal.sgml:19: parser error : Entity 'ndash' not defined\n <productname>Postgres95</productname> is Copyright &copy; 1994&ndash;5\n ^\nlegal.sgml:49: parser error : chunk is not well balanced\n\n^\npostgres.sgml:30: parser error : Entity 'legal' failed to parse\n &legal;\n ^\nmake[3]: *** [Makefile:72: postgres-full.xml] Error 1\n\n====================================================\n\n\nAny hints?\n\nRegards\n\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n\n\n", "msg_date": "Mon, 15 Apr 2024 10:59:40 +0100", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <[email protected]>", "msg_from_op": true, "msg_subject": "HEAD build error on Fedora 39" }, { "msg_contents": "On 2024-04-15 Mo 05:59, Devrim Gündüz wrote:\n> Hi,\n>\n> I'm unable to build the latest snapshot on my Fedora 39 box. I think\n> this problem appeared before the weekend (not sure, though). This is\n> libxml 2.10.4:\n>\n> ===============================================================\n> '/usr/bin/perl' ../../../src/backend/utils/activity/generate-wait_event_types.pl --docs ../../../src/backend/utils/activity/wait_event_names.txt\n> /usr/bin/xmllint --nonet --path . --path . 
--output postgres-full.xml --noent --valid postgres.sgml\n> I/O error : Attempt to load network entityhttp://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\n> postgres.sgml:21: warning: failed to load external entity\"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\"\n> ]>\n\n\nIt's working on my Fedora 39. This error suggests to me that you don't \nhave docbook-dtds installed. If you do, then I don't know :-)\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-04-15 Mo 05:59, Devrim Gündüz\n wrote:\n\n\n\nHi,\n\nI'm unable to build the latest snapshot on my Fedora 39 box. I think\nthis problem appeared before the weekend (not sure, though). This is\nlibxml 2.10.4:\n\n===============================================================\n'/usr/bin/perl' ../../../src/backend/utils/activity/generate-wait_event_types.pl --docs ../../../src/backend/utils/activity/wait_event_names.txt\n/usr/bin/xmllint --nonet --path . --path . --output postgres-full.xml --noent --valid postgres.sgml\nI/O error : Attempt to load network entity http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\npostgres.sgml:21: warning: failed to load external entity \"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\"\n]>\n\n\n\nIt's working on my Fedora 39. This error suggests to me that you\n don't have docbook-dtds installed. If you do, then I don't know\n :-)\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 15 Apr 2024 06:35:46 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HEAD build error on Fedora 39" }, { "msg_contents": "Hi Andrew,\n\nOn Mon, 2024-04-15 at 06:35 -0400, Andrew Dunstan wrote:\n> It's working on my Fedora 39. This error suggests to me that you don't\n> have docbook-dtds installed. If you do, then I don't know :-)\n\nSorry for the noise. I got a new laptop, and apparently some of the\npackages (like this) were missing.\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, PostgreSQL Major Contributor\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n\n\n", "msg_date": "Fri, 03 May 2024 00:37:31 +0100", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: HEAD build error on Fedora 39" }, { "msg_contents": "Hi Andrew, Devrim,\n\nI'm seeing these errors on MacOS:\n--\n/opt/local/Current_v15/bin/xsltproc --nonet --path . --path . --stringparam\npg.version '17beta1' stylesheet.xsl postgres-full.xml\nI/O error : Attempt to load network entity\nhttp://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\nwarning: failed to load external entity \"\nhttp://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\"\ncompilation error: file stylesheet.xsl line 6 element import\nxsl:import : unable to load\nhttp://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\nI/O error : Attempt to load network entity\nhttp://docbook.sourceforge.net/release/xsl/current/common/entities.ent\nstylesheet-html-common.xsl:4: warning: failed to load external entity \"\nhttp://docbook.sourceforge.net/release/xsl/current/common/entities.ent\"\n%common.entities;\n--\n\nI've set SGML_CATALOG_FILES to point to the catalog from docbook-xsl. Am\nI missing something?\n\n\nOn Mon, Apr 15, 2024 at 4:06 PM Andrew Dunstan <[email protected]> wrote:\n\n>\n> On 2024-04-15 Mo 05:59, Devrim Gündüz wrote:\n>\n> Hi,\n>\n> I'm unable to build the latest snapshot on my Fedora 39 box. 
I think\n> this problem appeared before the weekend (not sure, though). This is\n> libxml 2.10.4:\n>\n> ===============================================================\n> '/usr/bin/perl' ../../../src/backend/utils/activity/generate-wait_event_types.pl --docs ../../../src/backend/utils/activity/wait_event_names.txt\n> /usr/bin/xmllint --nonet --path . --path . --output postgres-full.xml --noent --valid postgres.sgml\n> I/O error : Attempt to load network entity http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\n> postgres.sgml:21: warning: failed to load external entity \"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\" <http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd>\n> ]>\n>\n>\n> It's working on my Fedora 39. This error suggests to me that you don't\n> have docbook-dtds installed. If you do, then I don't know :-)\n>\n> cheers\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>\n\n-- \nSandeep Thakkar\n\nHi Andrew, Devrim,I'm seeing these errors on MacOS:--/opt/local/Current_v15/bin/xsltproc --nonet --path . --path . --stringparam pg.version '17beta1'  stylesheet.xsl postgres-full.xmlI/O error : Attempt to load network entity http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xslwarning: failed to load external entity \"http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\"compilation error: file stylesheet.xsl line 6 element importxsl:import : unable to load http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xslI/O error : Attempt to load network entity http://docbook.sourceforge.net/release/xsl/current/common/entities.entstylesheet-html-common.xsl:4: warning: failed to load external entity \"http://docbook.sourceforge.net/release/xsl/current/common/entities.ent\"%common.entities;--I've set SGML_CATALOG_FILES to point to the catalog from docbook-xsl. Am I missing something?On Mon, Apr 15, 2024 at 4:06 PM Andrew Dunstan <[email protected]> wrote:\n\n\n\nOn 2024-04-15 Mo 05:59, Devrim Gündüz\n wrote:\n\n\nHi,\n\nI'm unable to build the latest snapshot on my Fedora 39 box. I think\nthis problem appeared before the weekend (not sure, though). This is\nlibxml 2.10.4:\n\n===============================================================\n'/usr/bin/perl' ../../../src/backend/utils/activity/generate-wait_event_types.pl --docs ../../../src/backend/utils/activity/wait_event_names.txt\n/usr/bin/xmllint --nonet --path . --path . --output postgres-full.xml --noent --valid postgres.sgml\nI/O error : Attempt to load network entity http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\npostgres.sgml:21: warning: failed to load external entity \"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd\"\n]>\n\n\n\nIt's working on my Fedora 39. This error suggests to me that you\n don't have docbook-dtds installed. If you do, then I don't know\n :-)\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n-- Sandeep Thakkar", "msg_date": "Thu, 23 May 2024 13:38:22 +0530", "msg_from": "Sandeep Thakkar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HEAD build error on Fedora 39" }, { "msg_contents": "On 2024-05-23 Th 04:08, Sandeep Thakkar wrote:\n> Hi Andrew, Devrim,\n>\n> I'm seeing these errors on MacOS:\n> --\n> /opt/local/Current_v15/bin/xsltproc --nonet --path . --path . 
\n> --stringparam pg.version '17beta1'  stylesheet.xsl postgres-full.xml\n> I/O error : Attempt to load network entity \n> http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\n> warning: failed to load external entity \n> \"http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\"\n> compilation error: file stylesheet.xsl line 6 element import\n> xsl:import : unable to load \n> http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\n> I/O error : Attempt to load network entity \n> http://docbook.sourceforge.net/release/xsl/current/common/entities.ent\n> stylesheet-html-common.xsl:4: warning: failed to load external entity \n> \"http://docbook.sourceforge.net/release/xsl/current/common/entities.ent\"\n> %common.entities;\n> --\n>\n> I've set SGML_CATALOG_FILES to point to the catalog from docbook-xsl. \n> Am I missing something?\n\n\nSandeep,\n\n\nHmm, it's working for me, via homebrews' libxslt, docbook and \ndocbook-xsl ... I'm not setting anything special in the environment \n(although  maybe meson/ninja does).\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-05-23 Th 04:08, Sandeep Thakkar\n wrote:\n\n\n\n\nHi\n Andrew, Devrim,\n\n I'm seeing these errors on MacOS:\n --\n /opt/local/Current_v15/bin/xsltproc --nonet --path . --path .\n --stringparam pg.version '17beta1'  stylesheet.xsl\n postgres-full.xml\n I/O error : Attempt to load network entity http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\n warning: failed to load external entity \"http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\"\n compilation error: file stylesheet.xsl line 6 element import\n xsl:import : unable to load http://docbook.sourceforge.net/release/xsl/current/xhtml/chunk.xsl\n I/O error : Attempt to load network entity http://docbook.sourceforge.net/release/xsl/current/common/entities.ent\n stylesheet-html-common.xsl:4: warning: failed to load external\n entity \"http://docbook.sourceforge.net/release/xsl/current/common/entities.ent\"\n %common.entities;\n --\n\n I've set SGML_CATALOG_FILES to point to the catalog from\n docbook-xsl. Am I missing something?\n\n\n\n\nSandeep, \n\n\n\nHmm, it's working for me, via homebrews' libxslt, docbook and\n docbook-xsl ... I'm not setting anything special in the\n environment (although  maybe meson/ninja does).\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Thu, 23 May 2024 16:57:57 -0400", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: HEAD build error on Fedora 39" } ]
[ { "msg_contents": "Forking: <[email protected]>\nSubject: Re: Strange presentaion related to inheritance in \\d+\n\nOn Tue, Aug 29, 2023 at 07:28:28PM +0200, Alvaro Herrera wrote:\n> On 2023-Aug-29, Kyotaro Horiguchi wrote:\n> \n> > Attached is the initial version of the patch. It prevents \"CREATE\n> > TABLE\" from executing if there is an inconsisntent not-null\n> > constraint. Also I noticed that \"ALTER TABLE t ADD NOT NULL c NO\n> > INHERIT\" silently ignores the \"NO INHERIT\" part and fixed it.\n> \n> Great, thank you. I pushed it after modifying it a bit -- instead of\n> throwing the error in MergeAttributes, I did it in\n> AddRelationNotNullConstraints(). It seems cleaner this way, mostly\n> because we already have to match these two constraints there. (I guess\n> you could argue that we waste catalog-insertion work before the error is\n> reported and the whole thing is aborted; but I don't think this is a\n> serious problem in practice.)\n\n9b581c5341 can break dump/restore from old versions, including\npgupgrade.\n\npostgres=# CREATE TABLE iparent(id serial PRIMARY KEY); CREATE TABLE child (id int) INHERITS (iparent); ALTER TABLE child ALTER id DROP NOT NULL; ALTER TABLE child ADD CONSTRAINT p PRIMARY KEY (id);\n\n$ pg_dump -h /tmp -p 5678 postgres -Fc |pg_restore -1 -h /tmp -p 5679 -d postgres\nERROR: cannot change NO INHERIT status of inherited NOT NULL constraint \"pgdump_throwaway_notnull_0\" on relation \"child\"\nSTATEMENT: ALTER TABLE ONLY public.iparent\n ADD CONSTRAINT iparent_pkey PRIMARY KEY (id);\n ALTER TABLE ONLY public.iparent DROP CONSTRAINT pgdump_throwaway_notnull_0;\n\nStrangely, if I name the table \"parent\", it seems to work, which might\nindicate an ordering/dependency issue.\n\nI think there are other issues related to b0e96f3119 (Catalog not-null\nconstraints) - if I dump a v16 server using v17 tools, the backup can't\nbe restored into the v16 server. I'm okay ignoring a line or two like\n'unrecognized configuration parameter \"transaction_timeout\", but not\n'syntax error at or near \"NO\"'.\n\npostgres=# CREATE TABLE a(i int not null primary key);\n\n$ pg_dump -h /tmp -p 5678 postgres |psql -h /tmp -p 5678 -d new\n\n2024-04-13 21:26:14.510 CDT [475995] ERROR: syntax error at or near \"NO\" at character 86\n2024-04-13 21:26:14.510 CDT [475995] STATEMENT: CREATE TABLE public.a (\n i integer CONSTRAINT pgdump_throwaway_notnull_0 NOT NULL NO INHERIT\n );\nERROR: syntax error at or near \"NO\"\nLINE 2: ...er CONSTRAINT pgdump_throwaway_notnull_0 NOT NULL NO INHERIT\n\nThe other version checks in pg_dump.c are used to construct sql for\nquerying the source db, but this is used to create the sql to restore\nthe target, using syntax that didn't exist until v17.\n\n if (print_notnull) \n {\n if (tbinfo->notnull_constrs[j][0] == '\\0')\n appendPQExpBufferStr(q, \" NOT NULL\");\n else\n appendPQExpBuffer(q, \" CONSTRAINT %s NOT NULL\",\n fmtId(tbinfo->notnull_constrs[j]));\n\n if (tbinfo->notnull_noinh[j])\n appendPQExpBufferStr(q, \" NO INHERIT\");\n }\n\nThis other thread is 6 years old and forgotten again, but still seems\nrelevant.\nhttps://www.postgresql.org/message-id/flat/b8794d6a-38f0-9d7c-ad4b-e85adf860fc9%40enterprisedb.com\n\nBTW, these comments are out of date:\n\n+ * In versions 16 and up, we need pg_constraint for explicit NOT NULL\n+ if (fout->remoteVersion >= 170000)\n\n+ * that we needn't specify that again for the child. 
(Versions >= 16 no\n+ if (fout->remoteVersion < 170000)\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 15 Apr 2024 07:13:52 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "pg17 issues with not-null contraints" }, { "msg_contents": "On 2024-Apr-15, Justin Pryzby wrote:\n\n> 9b581c5341 can break dump/restore from old versions, including\n> pgupgrade.\n> \n> postgres=# CREATE TABLE iparent(id serial PRIMARY KEY); CREATE TABLE child (id int) INHERITS (iparent); ALTER TABLE child ALTER id DROP NOT NULL; ALTER TABLE child ADD CONSTRAINT p PRIMARY KEY (id);\n> \n> $ pg_dump -h /tmp -p 5678 postgres -Fc |pg_restore -1 -h /tmp -p 5679 -d postgres\n> ERROR: cannot change NO INHERIT status of inherited NOT NULL constraint \"pgdump_throwaway_notnull_0\" on relation \"child\"\n> STATEMENT: ALTER TABLE ONLY public.iparent\n> ADD CONSTRAINT iparent_pkey PRIMARY KEY (id);\n> \n> Strangely, if I name the table \"parent\", it seems to work, which might\n> indicate an ordering/dependency issue.\n\nHmm, apparently if the table is \"iparent\", the primary key is created in\nthe child first; if the table is \"parent\", then the PK is created first\nthere. I think the problem is that the ADD CONSTRAINT for the PK should\nnot be recursing at all in this case ... seeing in particular that the\ncommand specifies ONLY. Should be a simple fix, looking now.\n\n> I think there are other issues related to b0e96f3119 (Catalog not-null\n> constraints) - if I dump a v16 server using v17 tools, the backup can't\n> be restored into the v16 server. I'm okay ignoring a line or two like\n> 'unrecognized configuration parameter \"transaction_timeout\", but not\n> 'syntax error at or near \"NO\"'.\n\nThis doesn't look something that we can handle at all. The assumption\nis that pg_dump's output is going to be fed to a server that's at least\nthe same version. Running on older versions is just not supported.\n\n> The other version checks in pg_dump.c are used to construct sql for\n> querying the source db, but this is used to create the sql to restore\n> the target, using syntax that didn't exist until v17.\n> \n> if (print_notnull) \n> {\n> if (tbinfo->notnull_constrs[j][0] == '\\0')\n> appendPQExpBufferStr(q, \" NOT NULL\");\n> else\n> appendPQExpBuffer(q, \" CONSTRAINT %s NOT NULL\",\n> fmtId(tbinfo->notnull_constrs[j]));\n> \n> if (tbinfo->notnull_noinh[j])\n> appendPQExpBufferStr(q, \" NO INHERIT\");\n> }\n\nIf you have ideas on what to do about this, I'm all ears, but keep in\nmind that pg_dump doesn't necessarily know what the target version is.\n\n> \n> This other thread is 6 years old and forgotten again, but still seems\n> relevant.\n> https://www.postgresql.org/message-id/flat/b8794d6a-38f0-9d7c-ad4b-e85adf860fc9%40enterprisedb.com\n\nI only skimmed very briefly, but it looks related to commit c3709100be73\nthat I pushed earlier today. Or if you have some specific case that\nfails to be handled please let me know. (Maybe we should have the\nregress tests leave some tables behind to ensure the pg_upgrade behavior\nis what we want, if we continue to break it.)\n\n> BTW, these comments are out of date:\n> \n> + * In versions 16 and up, we need pg_constraint for explicit NOT NULL\n> + if (fout->remoteVersion >= 170000)\n> \n> + * that we needn't specify that again for the child. (Versions >= 16 no\n> + if (fout->remoteVersion < 170000)\n\nThanks, will fix. 
But I'm probably touching this code in the fix for\nAndrew Bille's problem, so I might not do so immediately.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Estoy de acuerdo contigo en que la verdad absoluta no existe...\nEl problema es que la mentira sí existe y tu estás mintiendo\" (G. Lama)\n\n\n", "msg_date": "Mon, 15 Apr 2024 15:47:38 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On 2024-Apr-15, Alvaro Herrera wrote:\n\n> On 2024-Apr-15, Justin Pryzby wrote:\n\n> > postgres=# CREATE TABLE iparent(id serial PRIMARY KEY); CREATE TABLE child (id int) INHERITS (iparent); ALTER TABLE child ALTER id DROP NOT NULL; ALTER TABLE child ADD CONSTRAINT p PRIMARY KEY (id);\n> > \n> > $ pg_dump -h /tmp -p 5678 postgres -Fc |pg_restore -1 -h /tmp -p 5679 -d postgres\n> > ERROR: cannot change NO INHERIT status of inherited NOT NULL constraint \"pgdump_throwaway_notnull_0\" on relation \"child\"\n> > STATEMENT: ALTER TABLE ONLY public.iparent\n> > ADD CONSTRAINT iparent_pkey PRIMARY KEY (id);\n\n> Hmm, apparently if the table is \"iparent\", the primary key is created in\n> the child first; if the table is \"parent\", then the PK is created first\n> there. I think the problem is that the ADD CONSTRAINT for the PK should\n> not be recursing at all in this case ... seeing in particular that the\n> command specifies ONLY. Should be a simple fix, looking now.\n\nSo the problem is that the ADD CONSTRAINT PRIMARY KEY in the parent\ntable wants to recurse to the child, so that a NOT NULL constraint is\ncreated on each column. If the child is created first, there's already\na NOT NULL NO INHERIT constraint in it which was created for its own\nprimary key, so the internal recursion in the parent's ADD PK fails.\n\nA fix doesn't look all that simple:\n\n- As I said in my earlier reply, my first thought was to have ALTER\nTABLE ADD PRIMARY KEY not recurse if the command is ALTER TABLE ONLY.\nThis doesn't work, because the point of that recursion is precisely to\nhandle this case, so if we do that, we break the other stuff that this\nwas added to solve.\n\n- Second thought was to add a bespoke dependency in pg_dump.c so that\nthe child PK is dumped after the parent PK. I looked at the code,\ndidn't like the idea of adding such a hack, went looking for other\nideas.\n\n- Third thought was to hack AdjustNotNullInheritance1() so that it\nchanges the conisnoinherit flag in this particular case. 
Works great,\nexcept that once we mark this constraint as inherited, we cannot drop\nit; and since it's a constraint marked \"throwaway\", pg_dump expects to\nbe able to drop it, which means the ALTER TABLE DROP CONSTRAINT throws\nan error, and a constraint named pgdump_throwaway_notnull_0 remains in\nplace.\n\n- Fourth thought: we do as in the third thought, except we also allow\nDROP CONSTRAINT a constraint that's marked \"local, inherited\" to be\nsimply an inherited constraint (remove its \"local\" marker).\n\nI'm going to try to implement this fourth idea, which seems promising.\nI think if we do that, the end result will be identical to the case\nwhere the child is created after the parent.\n\nHowever, we'll also need that constraint to have a name better than\npgdump_throwaway_notnull_NN.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 15 Apr 2024 18:30:29 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On Mon, Apr 15, 2024 at 03:47:38PM +0200, Alvaro Herrera wrote:\n> On 2024-Apr-15, Justin Pryzby wrote:\n> \n> > I think there are other issues related to b0e96f3119 (Catalog not-null\n> > constraints) - if I dump a v16 server using v17 tools, the backup can't\n> > be restored into the v16 server. I'm okay ignoring a line or two like\n> > 'unrecognized configuration parameter \"transaction_timeout\", but not\n> > 'syntax error at or near \"NO\"'.\n> \n> This doesn't look something that we can handle at all. The assumption\n> is that pg_dump's output is going to be fed to a server that's at least\n> the same version. Running on older versions is just not supported.\n\nYou're right - the docs say:\n\n|Also, it is not guaranteed that pg_dump's output can be loaded into a\n|server of an older major version — not even if the dump was taken from a\n|server of that version\n\nHere's a couple more issues affecting upgrades from v16 to v17.\n\npostgres=# CREATE TABLE a(i int NOT NULL); CREATE TABLE b(i int PRIMARY KEY) INHERITS (a);\npg_restore: error: could not execute query: ERROR: constraint \"pgdump_throwaway_notnull_0\" of relation \"b\" does not exist\n\npostgres=# CREATE TABLE a(i int CONSTRAINT a NOT NULL PRIMARY KEY); CREATE TABLE b()INHERITS(a); ALTER TABLE b ADD CONSTRAINT pkb PRIMARY KEY (i);\npg_restore: error: could not execute query: ERROR: cannot drop inherited constraint \"pgdump_throwaway_notnull_0\" of relation \"b\"\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 15 Apr 2024 19:02:03 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On 2024-Apr-15, Alvaro Herrera wrote:\n\n> - Fourth thought: we do as in the third thought, except we also allow\n> DROP CONSTRAINT a constraint that's marked \"local, inherited\" to be\n> simply an inherited constraint (remove its \"local\" marker).\n\nHere is an initial implementation of what I was thinking. Can you\nplease give it a try and see if it fixes this problem? 
At least in my\nrun of your original test case, it seems to work as expected.\n\nThis is still missing some cleanup and additional tests, of course.\nSpeaking of which, I wonder if I should modify pg16's tests so that they\nleave behind tables set up in this way, to immortalize pg_upgrade\ntesting.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/", "msg_date": "Tue, 16 Apr 2024 20:11:49 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On 2024-Apr-15, Justin Pryzby wrote:\n\n> Here's a couple more issues affecting upgrades from v16 to v17.\n> \n> postgres=# CREATE TABLE a(i int NOT NULL); CREATE TABLE b(i int PRIMARY KEY) INHERITS (a);\n> pg_restore: error: could not execute query: ERROR: constraint \"pgdump_throwaway_notnull_0\" of relation \"b\" does not exist\n\nThis one requires a separate pg_dump fix, which should --I hope-- be\npretty simple.\n\n> postgres=# CREATE TABLE a(i int CONSTRAINT a NOT NULL PRIMARY KEY); CREATE TABLE b()INHERITS(a); ALTER TABLE b ADD CONSTRAINT pkb PRIMARY KEY (i);\n> pg_restore: error: could not execute query: ERROR: cannot drop inherited constraint \"pgdump_throwaway_notnull_0\" of relation \"b\"\n\nThis one seems to be fixed with the patch I just posted.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"I am amazed at [the pgsql-sql] mailing list for the wonderful support, and\nlack of hesitasion in answering a lost soul's question, I just wished the rest\nof the mailing list could be like this.\" (Fotis)\n https://postgr.es/m/[email protected]\n\n\n", "msg_date": "Tue, 16 Apr 2024 20:25:53 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On Tue, Apr 16, 2024 at 08:11:49PM +0200, Alvaro Herrera wrote:\n> On 2024-Apr-15, Alvaro Herrera wrote:\n> \n> > - Fourth thought: we do as in the third thought, except we also allow\n> > DROP CONSTRAINT a constraint that's marked \"local, inherited\" to be\n> > simply an inherited constraint (remove its \"local\" marker).\n> \n> Here is an initial implementation of what I was thinking. Can you\n> please give it a try and see if it fixes this problem? At least in my\n> run of your original test case, it seems to work as expected.\n\nYes, this fixes the issue I reported.\n\nBTW, that seems to be the same issue Andrew reported in January.\nhttps://www.postgresql.org/message-id/CAJnzarwkfRu76_yi3dqVF_WL-MpvT54zMwAxFwJceXdHB76bOA%40mail.gmail.com\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 16 Apr 2024 17:36:21 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On 2024-Apr-16, Justin Pryzby wrote:\n\n> Yes, this fixes the issue I reported.\n\nExcellent, thanks for confirming.\n\n> BTW, that seems to be the same issue Andrew reported in January.\n> https://www.postgresql.org/message-id/CAJnzarwkfRu76_yi3dqVF_WL-MpvT54zMwAxFwJceXdHB76bOA%40mail.gmail.com\n\nThat's really good news -- I was worried it would require much more\ninvasive changes. I tested his case and noticed two additional issues,\nfirst that we fail to acquire locks down the hierarchy, so recursing\ndown like ATPrepAddPrimaryKey does fails to pin down the children\nproperly; and second, that the constraint left behind by restoring the\ndump preserves the \"throaway\" name. 
I made pg_dump use a different name\nwhen the table has a parent, just in case we end up not dropping the\nconstraint.\n\nI'm going to push this early tomorrow. CI run:\nhttps://cirrus-ci.com/build/5754149453692928\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/", "msg_date": "Wed, 17 Apr 2024 19:45:13 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On 2024-Apr-15, Justin Pryzby wrote:\n\n> Here's a couple more issues affecting upgrades from v16 to v17.\n> \n> postgres=# CREATE TABLE a(i int NOT NULL); CREATE TABLE b(i int PRIMARY KEY) INHERITS (a);\n> pg_restore: error: could not execute query: ERROR: constraint \"pgdump_throwaway_notnull_0\" of relation \"b\" does not exist\n> \n> postgres=# CREATE TABLE a(i int CONSTRAINT a NOT NULL PRIMARY KEY); CREATE TABLE b()INHERITS(a); ALTER TABLE b ADD CONSTRAINT pkb PRIMARY KEY (i);\n> pg_restore: error: could not execute query: ERROR: cannot drop inherited constraint \"pgdump_throwaway_notnull_0\" of relation \"b\"\n\nI pushed a fix now, and it should also cover these two issues, which\nrequired only minor changes over what I posted yesterday. Also, thank\nyou for pointing out that the patch also fixed Andrew's problem. It\ndid, except there was a locking problem which required an additional\ntweak.\n\nThanks for reporting these.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"At least to kernel hackers, who really are human, despite occasional\nrumors to the contrary\" (LWN.net)\n\n\n", "msg_date": "Thu, 18 Apr 2024 15:41:28 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On Tue, Apr 16, 2024 at 08:11:49PM +0200, Alvaro Herrera wrote:\n> This is still missing some cleanup and additional tests, of course.\n> Speaking of which, I wonder if I should modify pg16's tests so that they\n> leave behind tables set up in this way, to immortalize pg_upgrade\n> testing.\n\nThat seems like it could be important. I considered but never actually\ntest your patch by pg_upgrading across major versions.\n\nBTW, this works up to v16 (although maybe it should not):\n\n| CREATE TABLE ip(id int PRIMARY KEY); CREATE TABLE ic(id int) INHERITS (ip); ALTER TABLE ic ALTER id DROP NOT NULL;\n\nUnder v17, this fails. Maybe that's okay, but it should probably be\ncalled out in the release notes.\n| ERROR: cannot drop inherited constraint \"ic_id_not_null\" of relation \"ic\"\n\nThat's the issue that I mentioned in the 6 year old thread. In the\nfuture (upgrading *from* v17) it won't be possible anymore, right? It'd\nstill be nice to detect the issue in advance rather than failing halfway\nthrough the upgrade. I have a rebased patch while I'll send on that\nthread. I guess it's mostly unrelated to your patch but it'd be nice if\nyou could take a look.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 18 Apr 2024 10:07:34 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On 2024-Apr-18, Justin Pryzby wrote:\n\n> That seems like it could be important. I considered but never actually\n> test your patch by pg_upgrading across major versions.\n\nIt would be a welcome contribution for sure. 
I've been doing it rather\nhaphazardly, which is not great.\n\n> BTW, this works up to v16 (although maybe it should not):\n> \n> | CREATE TABLE ip(id int PRIMARY KEY); CREATE TABLE ic(id int) INHERITS (ip); ALTER TABLE ic ALTER id DROP NOT NULL;\n> \n> Under v17, this fails. Maybe that's okay, but it should probably be\n> called out in the release notes.\n\nSure, we should mention that.\n\n> | ERROR: cannot drop inherited constraint \"ic_id_not_null\" of relation \"ic\"\n> \n> That's the issue that I mentioned in the 6 year old thread. In the\n> future (upgrading *from* v17) it won't be possible anymore, right?\n\nYeah, trying to drop the constraint in 17 fails as it should; it was one\nof the goals of this whole thing in fact.\n\n> It'd still be nice to detect the issue in advance rather than failing\n> halfway through the upgrade.\n\nMaybe we can have pg_upgrade --check look for cases we might have\ntrouble upgrading. (I mean: such cases would fail if you have rows with\nnulls in the affected columns, but the schema upgrade should be\nsuccessful. Is that what you have in mind?)\n\n> I have a rebased patch while I'll send on that thread. I guess it's\n> mostly unrelated to your patch but it'd be nice if you could take a\n> look.\n\nOkay.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Industry suffers from the managerial dogma that for the sake of stability\nand continuity, the company should be independent of the competence of\nindividual employees.\" (E. Dijkstra)\n\n\n", "msg_date": "Thu, 18 Apr 2024 18:23:30 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On Thu, Apr 18, 2024 at 06:23:30PM +0200, Alvaro Herrera wrote:\n> On 2024-Apr-18, Justin Pryzby wrote:\n> \n> > BTW, this works up to v16 (although maybe it should not):\n> > \n> > | CREATE TABLE ip(id int PRIMARY KEY); CREATE TABLE ic(id int) INHERITS (ip); ALTER TABLE ic ALTER id DROP NOT NULL;\n> \n> > It'd still be nice to detect the issue in advance rather than failing\n> > halfway through the upgrade.\n> \n> Maybe we can have pg_upgrade --check look for cases we might have\n> trouble upgrading. (I mean: such cases would fail if you have rows with\n> nulls in the affected columns, but the schema upgrade should be\n> successful. Is that what you have in mind?)\n\nBefore v16, pg_upgrade failed in the middle of restoring the schema,\nwithout being caught during --check. The patch to implement that was\nforgotten and never progressed.\n\nI'm not totally clear on what's intended in v17 - maybe it'd be dead\ncode, and maybe it shouldn't even be applied to master branch. But I do\nthink it's worth patching earlier versions (even though it'll be less\nuseful than having done so 5 years ago).\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 18 Apr 2024 11:52:02 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On Thu, Apr 18, 2024 at 12:52 PM Justin Pryzby <[email protected]> wrote:\n> I'm not totally clear on what's intended in v17 - maybe it'd be dead\n> code, and maybe it shouldn't even be applied to master branch. But I do\n> think it's worth patching earlier versions (even though it'll be less\n> useful than having done so 5 years ago).\n\nThis thread is still on the open items list, but I'm not sure whether\nthere's still stuff here that needs to be fixed for the current\nrelease. 
If not, this thread should be moved to the \"resolved before\n17beta1\" section. If so, we should try to reach consensus on what the\nremaining issues are and what we're going to do about them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Apr 2024 13:52:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On Tue, Apr 30, 2024 at 01:52:02PM -0400, Robert Haas wrote:\n> On Thu, Apr 18, 2024 at 12:52 PM Justin Pryzby <[email protected]> wrote:\n> > I'm not totally clear on what's intended in v17 - maybe it'd be dead\n> > code, and maybe it shouldn't even be applied to master branch. But I do\n> > think it's worth patching earlier versions (even though it'll be less\n> > useful than having done so 5 years ago).\n> \n> This thread is still on the open items list, but I'm not sure whether\n> there's still stuff here that needs to be fixed for the current\n> release. If not, this thread should be moved to the \"resolved before\n> 17beta1\" section. If so, we should try to reach consensus on what the\n> remaining issues are and what we're going to do about them.\n\nI think the only thing that's relevant for v17 is this:\n\nOn Tue, Apr 16, 2024 at 08:11:49PM +0200, Alvaro Herrera wrote:\n> Speaking of which, I wonder if I should modify pg16's tests so that they\n> leave behind tables set up in this way, to immortalize pg_upgrade testing.\n\nThe patch on the other thread for pg_upgrade --check is an old issue\naffecting all stable releases.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 30 Apr 2024 13:52:58 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On another thread [0], Alexander Lakhin pointed out, indirectly, that\npartitions created using LIKE+ATTACH now have different not-null constraints\nfrom partitions created using PARTITION OF.\n\npostgres=# CREATE TABLE t (i int PRIMARY KEY) PARTITION BY RANGE (i);\npostgres=# CREATE TABLE t1 PARTITION OF t DEFAULT ;\npostgres=# \\d+ t1\n...\nPartition of: t DEFAULT\nNo partition constraint\nIndexes:\n \"t1_pkey\" PRIMARY KEY, btree (i)\nAccess method: heap\n\nBut if it's created with LIKE:\npostgres=# CREATE TABLE t1 (LIKE t);\npostgres=# ALTER TABLE t ATTACH PARTITION t1 DEFAULT ;\n\n..one also sees:\n\nNot-null constraints:\n \"t1_i_not_null\" NOT NULL \"i\"\n\nIt looks like ATTACH may have an issue with constraints implied by pkey.\n\n[0] https://www.postgresql.org/message-id/8034d1c6-5f0e-e858-9af9-45d5e246515e%40gmail.com\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 3 May 2024 09:05:19 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On 2024-May-03, Justin Pryzby wrote:\n\n> But if it's created with LIKE:\n> postgres=# CREATE TABLE t1 (LIKE t);\n> postgres=# ALTER TABLE t ATTACH PARTITION t1 DEFAULT ;\n> \n> ..one also sees:\n> \n> Not-null constraints:\n> \"t1_i_not_null\" NOT NULL \"i\"\n\nHmm, I think the problem here is not ATTACH; the not-null constraint is\nthere immediately after CREATE. I think this is all right actually,\nbecause we derive a not-null constraint from the primary key and this is\ndefinitely intentional. But I also think that if you do CREATE TABLE t1\n(LIKE t INCLUDING CONSTRAINTS) then you should get only the primary key\nand no separate not-null constraint. 
That will make the table more\nsimilar to the one being copied.\n\nDoes that make sense to you?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Estoy de acuerdo contigo en que la verdad absoluta no existe...\nEl problema es que la mentira sí existe y tu estás mintiendo\" (G. Lama)\n\n\n", "msg_date": "Sat, 4 May 2024 11:20:32 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On 2024-May-04, Alvaro Herrera wrote:\n\n> On 2024-May-03, Justin Pryzby wrote:\n> \n> > But if it's created with LIKE:\n> > postgres=# CREATE TABLE t1 (LIKE t);\n> > postgres=# ALTER TABLE t ATTACH PARTITION t1 DEFAULT ;\n> > \n> > ..one also sees:\n> > \n> > Not-null constraints:\n> > \"t1_i_not_null\" NOT NULL \"i\"\n> \n> Hmm, I think the problem here is not ATTACH; the not-null constraint is\n> there immediately after CREATE. I think this is all right actually,\n> because we derive a not-null constraint from the primary key and this is\n> definitely intentional. But I also think that if you do CREATE TABLE t1\n> (LIKE t INCLUDING CONSTRAINTS) then you should get only the primary key\n> and no separate not-null constraint. That will make the table more\n> similar to the one being copied.\n\nI misspoke -- it's INCLUDING INDEXES that we need here, not INCLUDING\nCONSTRAINTS ... and it turns out we already do it that way, so with this\nscript\n\nCREATE TABLE t (i int PRIMARY KEY) PARTITION BY RANGE (i);\nCREATE TABLE t1 (LIKE t INCLUDING INDEXES);\nALTER TABLE t ATTACH PARTITION t1 DEFAULT ;\n\nyou end up with this\n\n55432 17devel 71313=# \\d+ t\n Partitioned table \"public.t\"\n Column │ Type │ Collation │ Nullable │ Default │ Storage │ Compression │ Stats target │ Description \n────────┼─────────┼───────────┼──────────┼─────────┼─────────┼─────────────┼──────────────┼─────────────\n i │ integer │ │ not null │ │ plain │ │ │ \nPartition key: RANGE (i)\nIndexes:\n \"t_pkey\" PRIMARY KEY, btree (i)\nPartitions: t1 DEFAULT\n\n55432 17devel 71313=# \\d+ t1\n Table \"public.t1\"\n Column │ Type │ Collation │ Nullable │ Default │ Storage │ Compression │ Stats target │ Description \n────────┼─────────┼───────────┼──────────┼─────────┼─────────┼─────────────┼──────────────┼─────────────\n i │ integer │ │ not null │ │ plain │ │ │ \nPartition of: t DEFAULT\nNo partition constraint\nIndexes:\n \"t1_pkey\" PRIMARY KEY, btree (i)\nAccess method: heap\n\nwhich I think is what you want. (Do you really want the partition to be\ncreated without the primary key already there?)\n\n\nNow maybe in https://www.postgresql.org/docs/devel/sql-createtable.html\nwe need some explanation for this. Right now we have\n\n INCLUDING INDEXES \n Indexes, PRIMARY KEY, UNIQUE, and EXCLUDE constraints on the original table\n will be created on the new table. Names for the new indexes and constraints\n are chosen according to the default rules, regardless of how the originals were\n named. (This behavior avoids possible duplicate-name failures for the new\n indexes.)\n\nMaybe something like this before the naming considerations:\n When creating a table like another that has a primary key and indexes\n are excluded, a not-null constraint will be added to every column of\n the primary key.\n\nresulting in\n\n\n INCLUDING INDEXES \n Indexes, PRIMARY KEY, UNIQUE, and EXCLUDE constraints on the original table\n will be created on the new table. [When/If ?] 
indexes are excluded while\n creating a table like another that has a primary key, a not-null\n constraint will be added to every column of the primary key.\n\n Names for the new indexes and constraints are chosen according to\n the default rules, regardless of how the originals were named. (This\n behavior avoids possible duplicate-name failures for the new\n indexes.)\n\nWhat do you think?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 6 May 2024 17:56:54 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On Mon, May 06, 2024 at 05:56:54PM +0200, Alvaro Herrera wrote:\n> On 2024-May-04, Alvaro Herrera wrote:\n> > On 2024-May-03, Justin Pryzby wrote:\n> > \n> > > But if it's created with LIKE:\n> > > postgres=# CREATE TABLE t1 (LIKE t);\n> > > postgres=# ALTER TABLE t ATTACH PARTITION t1 DEFAULT ;\n> > > \n> > > ..one also sees:\n> > > \n> > > Not-null constraints:\n> > > \"t1_i_not_null\" NOT NULL \"i\"\n> > \n> > Hmm, I think the problem here is not ATTACH; the not-null constraint is\n> > there immediately after CREATE. I think this is all right actually,\n> > because we derive a not-null constraint from the primary key and this is\n> > definitely intentional. But I also think that if you do CREATE TABLE t1\n> > (LIKE t INCLUDING CONSTRAINTS) then you should get only the primary key\n> > and no separate not-null constraint. That will make the table more\n> > similar to the one being copied.\n> \n> I misspoke -- it's INCLUDING INDEXES that we need here, not INCLUDING\n> CONSTRAINTS ... and it turns out we already do it that way, so with this\n> script\n> \n> CREATE TABLE t (i int PRIMARY KEY) PARTITION BY RANGE (i);\n> CREATE TABLE t1 (LIKE t INCLUDING INDEXES);\n> ALTER TABLE t ATTACH PARTITION t1 DEFAULT ;\n> \n> you end up with this\n> \n> 55432 17devel 71313=# \\d+ t\n> Partitioned table \"public.t\"\n> Column │ Type │ Collation │ Nullable │ Default │ Storage │ Compression │ Stats target │ Description \n> ────────┼─────────┼───────────┼──────────┼─────────┼─────────┼─────────────┼──────────────┼─────────────\n> i │ integer │ │ not null │ │ plain │ │ │ \n> Partition key: RANGE (i)\n> Indexes:\n> \"t_pkey\" PRIMARY KEY, btree (i)\n> Partitions: t1 DEFAULT\n> \n> 55432 17devel 71313=# \\d+ t1\n> Table \"public.t1\"\n> Column │ Type │ Collation │ Nullable │ Default │ Storage │ Compression │ Stats target │ Description \n> ────────┼─────────┼───────────┼──────────┼─────────┼─────────┼─────────────┼──────────────┼─────────────\n> i │ integer │ │ not null │ │ plain │ │ │ \n> Partition of: t DEFAULT\n> No partition constraint\n> Indexes:\n> \"t1_pkey\" PRIMARY KEY, btree (i)\n> Access method: heap\n> \n> which I think is what you want. (Do you really want the partition to be\n> created without the primary key already there?)\n\nWhy not ? 
The PK will be added when I attach it one moment later.\n\nCREATE TABLE part (LIKE parent);\nALTER TABLE parent ATTACH PARTITION part ...\n\nDo you really think that after ATTACH, the constraints should be\ndifferent depending on whether the child was created INCLUDING INDEXES ?\nI'll continue to think about this, but I still find that surprising.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 6 May 2024 11:12:07 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On 2024-May-06, Justin Pryzby wrote:\n\n> > (Do you really want the partition to be\n> > created without the primary key already there?)\n> \n> Why not ? The PK will be added when I attach it one moment later.\n> \n> CREATE TABLE part (LIKE parent);\n> ALTER TABLE parent ATTACH PARTITION part ...\n\nWell, if you load data in the meantime, you'll spend time during `ALTER\nTABLE parent` for the index to be created. (On the other hand, you may\nwant to first create the table, then load data, then create the\nindexes.)\n\n> Do you really think that after ATTACH, the constraints should be\n> different depending on whether the child was created INCLUDING INDEXES ?\n> I'll continue to think about this, but I still find that surprising.\n\nI don't think I have a choice about this, because the standard says that\nthe resulting table must have NOT NULL on all columns which have a\nnullability characteristic is known not nullable; and the primary key\nforces that to be the case.\n\nThinking again, maybe this is wrong in the opposite direction: perhaps\nwe should have not-null constraints on those columns even if INCLUDING\nCONSTRAINTS is given, because failing to do that (which is the current\nbehavior) is non-conformant. In turn, this suggests that in order to\nmake the partitioning behavior consistent, we should _in addition_ make\nCREATE TABLE PARTITION OF add explicit not-null constraints to the\ncolumns of the primary key of the partitioned table.\n\nThis would also solve your complaint, because then the table would have\nthe not-null constraint in all cases.\n\nThis might be taking the whole affair too far, though; not sure.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"We’ve narrowed the problem down to the customer’s pants being in a situation\n of vigorous combustion\" (Robert Haas, Postgres expert extraordinaire)\n\n\n", "msg_date": "Mon, 6 May 2024 18:34:16 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg17 issues with not-null contraints" }, { "msg_contents": "On Mon, May 06, 2024 at 06:34:16PM +0200, Alvaro Herrera wrote:\n> On 2024-May-06, Justin Pryzby wrote:\n> \n> > > (Do you really want the partition to be\n> > > created without the primary key already there?)\n> > \n> > Why not ? The PK will be added when I attach it one moment later.\n> > \n> > CREATE TABLE part (LIKE parent);\n> > ALTER TABLE parent ATTACH PARTITION part ...\n> \n> Well, if you load data in the meantime, you'll spend time during `ALTER\n> TABLE parent` for the index to be created. (On the other hand, you may\n> want to first create the table, then load data, then create the\n> indexes.)\n\nTo be clear, I'm referring to the case of CREATE+ATTACH to avoid a\nstrong lock while creating a partition in advance of loading data. 
See:\[email protected]\nf170b572d2b4cc232c5b6d391b4ecf3e368594b7\n898e5e3290a72d288923260143930fb32036c00c\n\n> This would also solve your complaint, because then the table would have\n> the not-null constraint in all cases.\n\nI agree that it would solve my complaint, but at this time I've no\nfurther opinion.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 6 May 2024 11:41:22 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg17 issues with not-null contraints" } ]
[ { "msg_contents": "Hi,\n\nGlobalVisTestNonRemovableHorizon()/GlobalVisTestNonRemovableFullHorizon() only\nexisted for snapshot_too_old - but that was removed in f691f5b80a8. I'm\ninclined to think we should remove those functions for 17. No new code should\nuse them.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Apr 2024 11:57:20 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Removing GlobalVisTestNonRemovableHorizon" }, { "msg_contents": "On Mon, Apr 15, 2024 at 2:57 PM Andres Freund <[email protected]> wrote:\n> GlobalVisTestNonRemovableHorizon()/GlobalVisTestNonRemovableFullHorizon() only\n> existed for snapshot_too_old - but that was removed in f691f5b80a8. I'm\n> inclined to think we should remove those functions for 17. No new code should\n> use them.\n\n+1 for removing whatever people shouldn't be using. I recently spent a\nlot of time looking at this code and it's quite complicated and hard\nto understand. It would of course have been nice to have done this\nsooner, but I don't think waiting for next release cycle will make\nanything better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Apr 2024 15:13:51 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Removing GlobalVisTestNonRemovableHorizon" }, { "msg_contents": "On Mon, Apr 15, 2024 at 11:57:20AM -0700, Andres Freund wrote:\n> GlobalVisTestNonRemovableHorizon()/GlobalVisTestNonRemovableFullHorizon() only\n> existed for snapshot_too_old - but that was removed in f691f5b80a8. I'm\n> inclined to think we should remove those functions for 17. No new code should\n> use them.\n\nRMT hat on. Feel free to go ahead and clean up that now. No\nobjections from here as we don't want to take the risk of this stuff\ngetting more used in the wild.\n--\nMichael", "msg_date": "Tue, 16 Apr 2024 07:32:55 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Removing GlobalVisTestNonRemovableHorizon" }, { "msg_contents": "Hi,\n\nOn 2024-04-15 15:13:51 -0400, Robert Haas wrote:\n> It would of course have been nice to have done this sooner, but I don't\n> think waiting for next release cycle will make anything better.\n\nI don't really know how it could have been discovered sooner. We don't have\nany infrastructure for finding code that's not used anymore. And even if we\nhad something finding symbols that aren't referenced within the backend, we\nhave lots of functions that are just used by extensions, which would thus\nappear unused.\n\nIn my local build we have several hundred functions that are not used within\nthe backend, according to -Wl,--gc-sections,--print-gc-sections. A lot of that\nis entirely expected stuff, like RegisterXactCallback(). But there are also\nlong-unused things like TransactionIdIsActive().\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Apr 2024 12:41:29 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Removing GlobalVisTestNonRemovableHorizon" }, { "msg_contents": "On 2024-04-16 07:32:55 +0900, Michael Paquier wrote:\n> On Mon, Apr 15, 2024 at 11:57:20AM -0700, Andres Freund wrote:\n> > GlobalVisTestNonRemovableHorizon()/GlobalVisTestNonRemovableFullHorizon() only\n> > existed for snapshot_too_old - but that was removed in f691f5b80a8. I'm\n> > inclined to think we should remove those functions for 17. No new code should\n> > use them.\n> \n> RMT hat on. 
Feel free to go ahead and clean up that now. No\n> objections from here as we don't want to take the risk of this stuff\n> getting more used in the wild.\n\nCool. Pushed the removal..\n\n\n", "msg_date": "Wed, 17 Apr 2024 12:41:59 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Removing GlobalVisTestNonRemovableHorizon" } ]
[ { "msg_contents": "Hi,\n\nThis patch introduces the Karatsuba algorithm to speed up multiplication\noperations in numeric.c, where the operands have many digits.\n\nIt is implemented via a new conditional in mul_var() that determines whether\nthe sizes of the factors are sufficiently large to justify its use. This decision\nis non-trivial due to its recursive nature, depending on the size and ratio of\nthe factors. Moreover, the optimal threshold varies across different\narchitectures.\n\nThis benefits all users of mul_var() in numeric.c, such as:\nnumeric_mul()\nnumeric_lcm()\nnumeric_fac()\nint64_div_fast_to_numeric()\nmod_var()\ndiv_mod_var()\nsqrt_var()\nexp_var()\nln_var()\npower_var()\n\nThe macro KARATSUBA_CONDITION(var1ndigits, var2ndigits) is responsible of this\ndecision. It is deliberately conservative to prevent performance regressions on\nthe tested architectures while maximizing potential gains and maintaining\nsimplicity.\n\nPatches:\n\n1. mul_var-karatsuba.patch\nModifies mul_var() to use the Karatsuba-functions\nfor multiplying larger numerical factors.\n\n2. mul_var-karatsuba-benchmark.patch\nIntroduces numeric_mul_karatsuba() and mul_var_karatsuba()\nalongside the existing numeric_mul() and mul_var() functions.\nThis enables benchmark comparisons between the original multiplication method\nand the Karatsuba-optimized version.\n\nSome benchmark numbers, tested on Intel Core i9-14900K:\n\nHelper-function to generate numeric of given ndigits,\nusing the new random(min numeric, max numeric):\n\nCREATE OR REPLACE FUNCTION random_ndigits(ndigits INT) RETURNS NUMERIC AS $$\nSELECT random(\n ('1000'||repeat('0000',ndigits-1))::numeric,\n (repeat('9999',ndigits))::numeric\n)\n$$ LANGUAGE sql;\n\nBenchmark equal factor sizes, 16384 x 16384 ndigits:\n\nSELECT random_ndigits(16384) * random_ndigits(16384) > 0;\nTime: 33.990 ms\nTime: 33.961 ms\nTime: 34.183 ms\n\nSELECT numeric_mul_karatsuba(random_ndigits(16384), random_ndigits(16384)) > 0;\nTime: 17.621 ms\nTime: 17.209 ms\nTime: 16.444 ms\n\nBenchmark equal factor sizes, 8192 x 8192 ndigits:\n\nSELECT random_ndigits(8192) * random_ndigits(8192) > 0;\nTime: 12.568 ms\nTime: 12.563 ms\nTime: 12.701 ms\n\nSELECT numeric_mul_karatsuba(random_ndigits(8192), random_ndigits(8192)) > 0;\nTime: 9.919 ms\nTime: 9.929 ms\nTime: 9.659 ms\n\nTo measure smaller factor sizes, \\timing doesn't provide enough precision.\nBelow measurements are made using my pg-timeit extension:\n\nBenchmark equal factor sizes, 1024 x 1024 ndigits:\n\nSELECT timeit.h('numeric_mul',ARRAY[random_ndigits(1024)::TEXT,random_ndigits(1024)::TEXT],significant_figures:=2);\n 100 µs\nSELECT timeit.h('numeric_mul_karatsuba',ARRAY[random_ndigits(1024)::TEXT,random_ndigits(1024)::TEXT],significant_figures:=2);\n 73 µs\n\nBenchmark equal factor sizes, 512 x 512 ndigits:\n\nSELECT timeit.h('numeric_mul',ARRAY[random_ndigits(512)::TEXT,random_ndigits(512)::TEXT],significant_figures:=2);\n 27 µs\nSELECT timeit.h('numeric_mul_karatsuba',ARRAY[random_ndigits(512)::TEXT,random_ndigits(512)::TEXT],significant_figures:=2);\n 23 µs\n\nBenchmark unequal factor sizes, 2048 x 16384 ndigits:\n\nSELECT timeit.h('numeric_mul',ARRAY[random_ndigits(2048)::TEXT,random_ndigits(16384)::TEXT],significant_figures:=2);\n3.6 ms\nSELECT timeit.h('numeric_mul_karatsuba',ARRAY[random_ndigits(2048)::TEXT,random_ndigits(16384)::TEXT],significant_figures:=2);\n2.7 ms\n\nThe KARATSUBA_CONDITION was determined through benchmarking on the following architectures:\n\n- Intel Core i9-14900K (desktop)\n- AMD 
Ryzen 9 7950X3D (desktop)\n- Apple M1Max (laptop)\n- AWS EC2 m7g.4xlarge (cloud server, AWS Graviton3 CPU)\n- AWS EC2 m7i.4xlarge (cloud server, Intel Xeon 4th Gen, Sapphire Rapids)\n\nThe images depicting the benchmark plots are rather large, so I've refrained\nfrom including them as attachments. Instead, I've provided URLs to\nthe benchmarks for direct access:\n\nhttps://gist.githubusercontent.com/joelonsql/e9d06cdbcdf56cd8ffa673f499880b0d/raw/69df06e95bc254090f8397765079e1a8145eb5ac/derive_threshold_function_using_dynamic_programming.png\nThis image shows the best possible performance ratio per architecture,\nderived using Dynamic Programming. The black line segment shows the manually crafted\nthreshold function, which aims to avoid performance regressions, while capturing\nthe beneficial regions, as a relatively simple threshold function,\nwhich has been implemented in both patches as the KARATSUBA_CONDITION macro.\n\nhttps://gist.githubusercontent.com/joelonsql/e9d06cdbcdf56cd8ffa673f499880b0d/raw/69df06e95bc254090f8397765079e1a8145eb5ac/benchmark.png\nThis plot displays the actual performance ratio per architecture,\nmeasured after applying the mul_var-karatsuba-benchmark.patch.\n\nThe performance_ratio scale in both plots uses a rainbow scale,\nwhere blue is at 1.0 and means no change. The maximum at 4.0\nmeans that the Karatsuba version was four times faster\nthan the traditional mul_var() at that architecture.\n\nTo make it easier to distinguish performance regressions from,\na magenta color scale that goes from pure magenta just below 1.0,\nto dark at 0.0. I picked magenta for this purpose since it's\nnot part of the rainbow colors.\n\n/Joel", "msg_date": "Mon, 15 Apr 2024 23:33:01 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "Hi Joel, thanks for posting this.  Although I have only a cursory \nfamiliarity with fast multiplication algorithms, I'd like to try and \ngive it a review.  To start with, can you help me understand the choice \nof this algorithm versus others like Toom?  If this looks correct on a \ncloser view I'll propose it for inclusion. Along the way though I'd like \nto have it explicitly called out whether this is superior in general to \nother choices, better for more realistic use cases, simpler, clearer to \nlicense or something similar.  It would be nice for future dicussions to \nhave some context around whether it would make sense to have conditions \nto choose other algorithms as well, or if this one is generally the best \nfor what Postgres users are usually doing.\n\nContinuing with code review in any case.  Interested to hear more.\n\nRegards,\n\nAaron Altman\n\n\n\n", "msg_date": "Tue, 11 Jun 2024 17:16:38 +0000", "msg_from": "Aaron Altman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "On Tue, Jun 11, 2024, at 19:16, Aaron Altman wrote:\n> Hi Joel, thanks for posting this.  Although I have only a cursory \n> familiarity with fast multiplication algorithms, I'd like to try and \n> give it a review.  To start with, can you help me understand the choice \n> of this algorithm versus others like Toom?  If this looks correct on a \n> closer view I'll propose it for inclusion. 
Along the way though I'd like \n> to have it explicitly called out whether this is superior in general to \n> other choices, better for more realistic use cases, simpler, clearer to \n> license or something similar.  It would be nice for future dicussions to \n> have some context around whether it would make sense to have conditions \n> to choose other algorithms as well, or if this one is generally the best \n> for what Postgres users are usually doing.\n>\n> Continuing with code review in any case.  Interested to hear more.\n\nHi Aaron, thanks for looking at this!\n\nThe choice of best algorithm depends on the factor sizes.\n\nThe larger factor sizes, the more complicated algorithm can be \"afforded\".\n\nList of fast multiplication algorithms,\nordered by factor sizes they are suitable for:\n\n- Long multiplication, aka \"schoolbook\" multiplication.\n- Karatsuba\n- Toom-3\n- Schönhage–Strassen algorithm (Fast Fourier Transform)\n\nThe Toom-3 algorithm can be modified to split the smaller and larger factors\ninto different number of parts. The notation used at Wikipedia is e.g. Toom-2.5\nwhich I think means splitting the larger into three parts, and the smaller into\ntwo parts, while GMP uses Toom32 to mean the same thing.\nPersonally, I think GMPs notation is easier to understand as the number of parts\ncan be directly derived from the name.\n\nI experimented with implementing Toom-3 as well, but there was only a marginal\nwin, at very large factor sizes, and since numeric's max ndigits\n(number of base-digits) is capped at 32768, I didn't think it was worth it,\nsince it adds quite a lot of complexity.\n\nThe Karatsuba algorithm is the next step in the hierarchy of fast multiplication\nalgorithms, and all other bigint libs I've looked at implement Karatsuba,\neven if they also implement Toom-3, since Karatsuba is faster than Toom-3 for\nsufficiently small factors, but that are at the same time sufficiently large for\nKaratsuba to be faster than schoolbook.\n\nI was initially surprised by the quite large threshold, where Karatsuba started\nto be a win over schoolbook.\n\nI think the explanation why mul_var() stays fast up to quite remarkably high\nfactor sizes, could be a combination of several things, such as:\n\n- mul_var() is already heavily optimized, with clever tricks,\n such as deferred carry propagation.\n\n- numeric uses NBASE=10000, while other bigint libs usually use a power of two.\n\nIn the Karatsuba implementation, I tried to keep the KARATSUBA_CONDITION()\nquite simple, but it's way more complex than what most bigint libs use,\nthat usually just check if the smaller factor is smaller than some threshold,\nand if so, use schoolbook. 
For instance, this is what Rust's num-bigint does:\n\n if x.len() <= 32 {\n // Long multiplication\n } else if x.len() * 2 <= y.len() {\n // Half-Karatsuba, for factors with significant length disparity\n } else if x.len() <= 256 {\n // Karatsuba multiplication\n } else {\n // Toom-3 multiplication\n }\n\nSource: https://github.com/rust-num/num-bigint/blob/master/src/biguint/multiplication.rs#L101\n\nSide note: When working on Karatsuba in mul_var(), I looked at some other bigint\nimplementations, to try to understand their threshold functions.\nI noticed that Rust's num-bigint didn't optimise for factors with significant\nlength disparity, so I contributed a patch based on my \"Half-Karatsuba\" idea,\nthat I got when working with mul_var(), which has now been merged:\nhttps://github.com/rust-num/num-bigint/commit/06b61c8138ad8a9959ac54d9773d0a9ebe25b346\n\nIn mul_var(), if we don't like the complexity of KARATSUBA_CONDITION(),\nwe could go for a more traditional threshold approach, i.e. just checking\nthe smaller factor size. However, I believe that would be at the expense\nof missing out of some performance gains.\n\nI've tried quite hard to find the best KARATSUBA_CONDITION(), but I found it to\nbe a really hard problem, the differences between different CPU architectures,\nin combination with wanting a simple expression, means there is no obvious\nperfect threshold function, there will always be a trade-off.\n\nI eventually stopped trying to improve it, and just settled on the version in\nthe patch, and thought that I'll leave it up to the community to give feedback\non what complexity for the threshold function is motivated. If we absolutely\njust want to check the smallest factor size, like Rust, then it's super simple,\nthen the threshold can easily be found just by testing different values.\nIt's when both factor sizes are input to the threshold function that makes it\ncomplicated.\n\n/Joel\n\n\n", "msg_date": "Thu, 13 Jun 2024 11:08:25 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nThis applies cleanly to master, builds and passes regression tests in my Windows/Cygwin environment. \r\n\r\nI also read through comments and confirmed for myself that the assumption about the caller ensuring var1 is shorter is done already in unchanged code from mul_var. Frees correspond to inits. The \"Karatsuba condition\" reasoning for deciding whether a number is big enough to use this algorithm appears to match what Joel has stated in this thread.\r\n\r\nThe arithmetic appears to match what's been described in the comments. I have *not* confirmed that with any detailed review of the Karatsuba algorithm from outside sources, other implementations like the Rust one referenced here, or anything similar. I'm hoping that the regression tests give sufficient coverage that if the arithmetic was incorrect there would be obvious failures. If additional coverage was needed, cases falling immediately on either side of the limiting conditions used in the patch would probably be useful. From the limited precedent I've exposed myself to, that doesn't seem to be required here, but I'm open to contrary input from other reviewers. In the meantime, I'm marking this approved. 
\r\n\r\nThanks for the detailed background and comments, Joel!\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Fri, 14 Jun 2024 01:07:21 +0000", "msg_from": "Aaron Altman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "On Fri, Jun 14, 2024, at 03:07, Aaron Altman wrote:\n> Thanks for the detailed background and comments, Joel!\n>\n> The new status of this patch is: Ready for Committer\n\nThanks for reviewing.\n\nAttached, rebased version of the patch that implements the Karatsuba algorithm in numeric.c's mul_var().\n\n/Joel", "msg_date": "Sun, 23 Jun 2024 09:00:29 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "On Sun, Jun 23, 2024 at 09:00:29AM +0200, Joel Jacobson wrote:\n> Attached, rebased version of the patch that implements the Karatsuba algorithm in numeric.c's mul_var().\n\nIt's one of these areas where Dean Rasheed would be a good match for a\nreview, so adding him in CC. He has been doing a lot of stuff in this\narea over the years.\n\n+#define KARATSUBA_BASE_LIMIT 384\n+#define KARATSUBA_VAR1_MIN1 128\n+#define KARATSUBA_VAR1_MIN2 2000\n+#define KARATSUBA_VAR2_MIN1 2500\n+#define KARATSUBA_VAR2_MIN2 9000\n+#define KARATSUBA_SLOPE 0.764\n+#define KARATSUBA_INTERCEPT 90.737\n\nThese numbers feel magic, and there are no explanations behind these\nchoices so it is hard to know whether these are good or not, or if\nthere are potentially \"better\" choices. I'd suggest to explain why\nthese variables are here as well as the basics of the method in this\narea of the code, with the function doing the operation pointing at\nthat so as all the explanations are in a single place. Okay, these\nare thresholds based on the number of digits to decide if the normal\nor Karatsuba's method should be used, but grouping all the\nexplanations in a single place is simpler.\n\nI may have missed something, but did you do some benchmark when the\nthresholds are at their limit where we would fallback to the\ncalculation method of HEAD? I guess that the answer to my question of\n\"Is HEAD performing better across these thresholds?\" is clearly \"no\"\nbased on what I read at [1] and the threshold numbers chosen, still\nasking.\n\n[1]: https://en.wikipedia.org/wiki/Karatsuba_algorithm\n--\nMichael", "msg_date": "Mon, 24 Jun 2024 09:10:56 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "On Sun, Jun 23, 2024 at 09:00:29AM +0200, Joel Jacobson wrote:\n> Attached, rebased version of the patch that implements the Karatsuba algorithm in numeric.c's mul_var().\n>\n\nSomething to watch out for is that not all callers of mul_var() want\nan exact result. Several internal callers request an approximate\nresult by passing it an rscale value less than the sum of the input\ndscales. The schoolbook algorithm handles that by computing up to\nrscale plus MUL_GUARD_DIGITS extra digits and then rounding, whereas\nthe new Karatsuba code always computes the full result and then\nrounds. 
That impacts the performance of various functions, for\nexample:\n\nselect sum(exp(x)) from generate_series(5999.9, 5950.0, -0.1) x;\n\nTime: 1790.825 ms (00:01.791) [HEAD]\nTime: 2161.244 ms (00:02.161) [with patch]\n\nLooking at mul_var_karatsuba_half(), I don't really like the approach\nit takes. The whole correctness proof using the Karatsuba formula\nseems to somewhat miss the point that this function isn't actually\nimplementing the Karatsuba algorithm, it is implementing the\nschoolbook algorithm in two steps, by splitting the longer input into\ntwo pieces. But why just split it into two pieces? That will just lead\nto a lot of unnecessary recursion for very unbalanced inputs. Instead,\nwhy not split the longer input into N roughly equal sized pieces, each\naround the same length as the shorter input, multiplying and adding\nthem at the appropriate offsets? As an example, given inputs with\nvar1ndigits = 1000 and var2ndigits = 10000, mul_var() will invoke\nmul_var_karatsuba_half(), which will then recursively invoke mul_var()\ntwice with var1ndigits = 1000 and var2ndigits = 5000, which no longer\nsatisfies KARATSUBA_CONDITION(), so it will just invoke the schoolbook\nalgorithm on each half, which stands no chance of being any faster. On\nthe other hand, if it divided var2 into 10 chunks of length 1000, it\nwould invoke the Karatsuba algorithm on each chunk, which would at\nleast stand a chance of being faster.\n\nRelated to that, KARATSUBA_HIGH_RANGE_CONDITION() doesn't appear to\nmake a lot of sense. For inputs with var1ndigits between 128 and 2000,\nand var2ndigits > 9000, this condition will pass and it will\nrecursively break up the longer input into smaller and smaller pieces\nuntil eventually that condition no longer passes, but none of the\nother conditions in KARATSUBA_CONDITION() will pass either, so it'll\njust invoke the schoolbook algorithm on each piece, which is bound to\nbe slower once all the overheads are taken into account. For example,\ngiven var1ndigits = 200 and var2ndigits = 30000, KARATSUBA_CONDITION()\nwill pass due to KARATSUBA_HIGH_RANGE_CONDITION(), and it will recurse\nwith var1ndigits = 200 and var2ndigits = 15000, and then again with\nvar1ndigits = 200 and var2ndigits = 7500, at which point\nKARATSUBA_CONDITION() no longer passes. With mul_var_karatsuba_half()\nimplemented as it is, that is bound to happen, because each half will\nend up having var2ndigits between 4500 and 9000, which fails\nKARATSUBA_CONDITION() if var1ndigits < 2000. If\nmul_var_karatsuba_half() was replaced by something that recursed with\nmore balanced chunks, then it might make more sense, though allowing\nvalues of var1ndigits down to 128 doesn't make sense, since the\nKaratsuba algorithm will never be invoked for inputs shorter than 384.\n\nLooking at KARATSUBA_MIDDLE_RANGE_CONDITION(), the test that\nvar2ndigits > 2500 seems to be redundant. If var1ndigits > 2000 and\nvar2ndigits < 2500, then KARATSUBA_LOW_RANGE_CONDITION() is satisfied,\nso these tests could be simplified, eliminating some of those magic\nconstants.\n\nHowever, I really don't like having these magic constants at all,\nbecause in practice the threshold above which the Karatsuba algorithm\nis a win can vary depending on a number of factors, such as whether\nit's running on 32-bit or 64-bit, whether or not SIMD instructions are\navailable, the relative timings of CPU instructions, the compiler\noptions used, and probably a bunch of other things. 
The last time I\nlooked at the Java source code, for example, they had separate\nthresholds for 32-bit and 64-bit platforms, and even that's probably\ntoo crude. Some numeric libraries tune the thresholds for a large\nnumber of different platforms, but that takes a lot of effort. I think\na better approach would be to have a configurable threshold. Ideally,\nthis would be just one number, with all other numbers being derived\nfrom it, possibly using some simple heuristic to reduce the effective\nthreshold for more balanced inputs, for which the Karatsuba algorithm\nis more efficient.\n\nHaving a configurable threshold would allow people to tune for best\nperformance on their own platforms, and also it would make it easier\nto write tests that hit the new code. As it stands, it's not obvious\nhow much of the new code is being hit by the existing tests.\n\nDoing a quick test on my machine, using random equal-length inputs of\nvarious sizes, I got the following performance results:\n\n digits | rate (HEAD) | rate (patch) | change\n--------+---------------+---------------+--------\n 10 | 6.060014e+06 | 6.0189365e+06 | -0.7%\n 100 | 2.7038752e+06 | 2.7287925e+06 | +0.9%\n 1000 | 88640.37 | 90504.82 | +2.1%\n 1500 | 39885.23 | 41041.504 | +2.9%\n 1600 | 36355.24 | 33368.28 | -8.2%\n 2000 | 23308.582 | 23105.932 | -0.9%\n 3000 | 10765.185 | 11360.11 | +5.5%\n 4000 | 6118.2554 | 6645.4116 | +8.6%\n 5000 | 3928.4985 | 4639.914 | +18.1%\n 10000 | 1003.80164 | 1431.9335 | +42.7%\n 20000 | 255.46135 | 456.23462 | +78.6%\n 30000 | 110.69313 | 226.53398 | +104.7%\n 40000 | 62.29333 | 148.12916 | +137.8%\n 50000 | 39.867493 | 95.16788 | +138.7%\n 60000 | 27.7672 | 74.01282 | +166.5%\n\nThe Karatsuba algorithm kicks in at 384*4 = 1536 decimal digits, so\npresumably the variations below that are just noise, but this does\nseem to suggest that KARATSUBA_BASE_LIMIT = 384 is too low for me, and\nI'd probably want it to be something like 500-700.\n\nThere's another complication though (if the threshold is made\nconfigurable): the various numeric functions that use mul_var() are\nimmutable, which means that the results from the Karatsuba algorithm\nmust match those from the schoolbook algorithm exactly, for all\ninputs. That's not currently the case when computing approximate\nresults with a reduced rscale. That's fixable, but it's not obvious\nwhether or not the Karatsuba algorithm can actually be made beneficial\nwhen computing such approximate results.\n\nThere's a wider question as to how many people use such big numeric\nvalues -- i.e., how many people are actually going to benefit from\nthis? I don't have a good feel for that.\n\nRegards,\nDean\n\n\n", "msg_date": "Sat, 29 Jun 2024 13:22:09 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n> There's another complication though (if the threshold is made\n> configurable): the various numeric functions that use mul_var() are\n> immutable, which means that the results from the Karatsuba algorithm\n> must match those from the schoolbook algorithm exactly, for all\n> inputs.\n\nThat seems like an impossible standard to meet. What we'd probably\nhave to do is enable Karatsuba only when mul_var is being asked\nfor an exact (full-precision) result. 
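\n\n[Illustrative aside, not part of the original message: in mul_var() terms, being asked for an exact result means the caller passed an rscale equal to the sum of the two input dscales, which is what numeric_mul() passes. The guard being suggested would therefore look roughly like the sketch below -- paraphrased for illustration only, not code from the patch; the argument list of KARATSUBA_CONDITION() is assumed from the discussion, and the function names in the comments are the patch's.\n\n\tif (rscale == var1->dscale + var2->dscale &&\n\t\tKARATSUBA_CONDITION(var1ndigits, var2ndigits))\n\t{\n\t\t/* full-precision product requested: take the Karatsuba path,\n\t\t * i.e. mul_var_karatsuba_full() or mul_var_karatsuba_half() */\n\t}\n\telse\n\t{\n\t\t/* reduced-rscale (approximate) product: keep the schoolbook\n\t\t * path, which only computes rscale plus MUL_GUARD_DIGITS digits */\n\t}\n]\n\n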
This'd complicate the check\ncondition further, possibly reaching the point where there is a\nvisible drag on performance in the non-Karatsuba case.\n\nAnother possible source of drag: if mul_var is now recursive,\ndoes it need a stack depth check? If we can prove that the\nnumber of recursion levels is no more than O(log(N)) in the\nnumber of input digits, it's probably safe to skip that ...\nbut I see no such proof here. (In general I find this patch\nseriously undercommented.)\n\n> There's a wider question as to how many people use such big numeric\n> values -- i.e., how many people are actually going to benefit from\n> this? I don't have a good feel for that.\n\nI have heard of people doing calculations on bignum integers in\nPostgres, but they are very few and far between --- usually that\nsort of task is better done in another language. (No matter how\nfast we make mul_var, the general overhead of SQL expressions in\ngeneral and type numeric in particular means it's probably not\nthe right tool for heavy-duty bignum arithmetic.)\n\nThere is definitely an argument to be made that this proposal is\nnot worth the development effort and ongoing maintenance effort\nwe'd have to sink into it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 29 Jun 2024 11:25:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "On Sat, 29 Jun 2024 at 16:25, Tom Lane <[email protected]> wrote:\n>\n> Dean Rasheed <[email protected]> writes:\n> > There's another complication though (if the threshold is made\n> > configurable): the various numeric functions that use mul_var() are\n> > immutable, which means that the results from the Karatsuba algorithm\n> > must match those from the schoolbook algorithm exactly, for all\n> > inputs.\n>\n> That seems like an impossible standard to meet. What we'd probably\n> have to do is enable Karatsuba only when mul_var is being asked\n> for an exact (full-precision) result.\n\nYeah, using Karatsuba for approximate products is probably a bit too\nambitious. I think it'd be reasonably straightforward to have it\nproduce the same results for all rscale values. You'd just have to\ndecide on the required rscale for each sub-product, based on where\nit's being added, truncating at various points, and being sure to only\nround once at the very end. The problem is that it'd end up computing\na larger fraction of the full product than the schoolbook algorithm\nwould have done, so the threshold for using Karatsuba would have to be\nhigher (probably quite a lot higher) and figuring out how that varied\nwith the requested rscale would be hard.\n\nSo using Karatsuba only in the full-precision case seems like a\nreasonable restriction. That'd still benefit some other functions like\nsqrt() and therefore ln() and pow() to some extent. 
However,...\n\n> There is definitely an argument to be made that this proposal is\n> not worth the development effort and ongoing maintenance effort\n> we'd have to sink into it.\n\nI'm leaning more towards this opinion, especially since I think the\npatch needs a lot more work to be committable.\n\nRegards,\nDean\n\n\n", "msg_date": "Sun, 30 Jun 2024 09:44:16 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "On Sat, Jun 29, 2024, at 14:22, Dean Rasheed wrote:\n> On Sun, Jun 23, 2024 at 09:00:29AM +0200, Joel Jacobson wrote:\n>> Attached, rebased version of the patch that implements the Karatsuba algorithm in numeric.c's mul_var().\n>>\n>\n> Something to watch out for is that not all callers of mul_var() want\n> an exact result. Several internal callers request an approximate\n> result by passing it an rscale value less than the sum of the input\n> dscales. The schoolbook algorithm handles that by computing up to\n> rscale plus MUL_GUARD_DIGITS extra digits and then rounding, whereas\n> the new Karatsuba code always computes the full result and then\n> rounds. That impacts the performance of various functions, for\n> example:\n>\n> select sum(exp(x)) from generate_series(5999.9, 5950.0, -0.1) x;\n>\n> Time: 1790.825 ms (00:01.791) [HEAD]\n> Time: 2161.244 ms (00:02.161) [with patch]\n\nOps. Thanks for spotting this and clarifying.\n\nI read Tom's reply and note that we should only do Karatsuba\nan exact (full-precision) result is requested.\n\n> Looking at mul_var_karatsuba_half(), I don't really like the approach\n> it takes. The whole correctness proof using the Karatsuba formula\n> seems to somewhat miss the point that this function isn't actually\n> implementing the Karatsuba algorithm, it is implementing the\n> schoolbook algorithm in two steps, by splitting the longer input into\n> two pieces.\n\nThe surprising realization here is that there are actually (var1ndigits, var2ndigits)\ncombinations where *only* doing mul_var_karatsuba_half() recursively\nall the way down to schoolbook *is* a performance win,\neven though we don't do any mul_var_karatsuba_full().\n\nmul_var_karatsuba_half() *is* actually implementing the exact\nKaratsuba formula, it's just taking a shortcut exploiting the pre-known\nthat splitting `var1` at `m2` would result in `high1` being zero.\nThis allows the provably correct substitutions to be made,\nwhich avoids the meaningless computations.\n\n> But why just split it into two pieces? That will just lead\n> to a lot of unnecessary recursion for very unbalanced inputs. Instead,\n> why not split the longer input into N roughly equal sized pieces, each\n> around the same length as the shorter input, multiplying and adding\n> them at the appropriate offsets?\n\nThe approach you're describing is implemented by e.g. CPython\nand is called \"lopsided\" in their code base. 
It has some different\nperformance characteristics, compared to the recursive Half-Karatsuba\napproach.\n\nWhat I didn't like about lopsided is the degenerate case where the\nlast chunk is much shorter than the var1, for example, if we pretend\nwe would be doing Karatsuba all the way down to ndigits 2,\nand think about the example var1ndigits = 3 and var2ndigits = 10,\nthen lopsided would do\nvar1ndigits=3 var2ndigits=3\nvar1ndigits=3 var2ndigits=3\nvar1ndigits=3 var2ndigits=3\nvar1ndigits=3 var2ndigits=1\n\nwhereas Half-Karatsuba would do\nvar1ndigits=3 var2ndigits=5\nvar1ndigits=3 var2ndigits=5\n\nYou can find contrary examples too of course where lopsided\nis better than Half-Karatsuba, none of them seem substantially better\nthan the other.\n\nMy measurements indicated that overall, Half-Karatsuba seemed like\nthe overall marginal winner, on the architectures I tested, but they were all\nvery similar, i.e. almost the same number of \"wins\" and \"losses\",\nfor different (var1ndigits, var2ndigits) combinations.\n\nNote that even with lopsided, there will still be recursion, since Karatsuba\nis a recursive algorithm, so to satisfy Tom Lane's request about\nproving the recursion is limited, we will still need to prove the same\nthing for lopsided+Karatsuba.\n\nHere is some old code from my experiments, if we want to evaluate lopsided:\n\n```\nstatic void slice_var(const NumericVar *var, int start, int length,\n\t\t\t\t\t NumericVar *slice);\n\nstatic void mul_var_lopsided(const NumericVar *var1, const NumericVar *var2,\n\t\t\t\t\t\t\t NumericVar *result);\n\n/*\n * slice_var() -\n *\n * Extract a slice of a NumericVar starting at a specified position\n * and with a specified length.\n */\nstatic void\nslice_var(const NumericVar *var, int start, int length,\n\t\t NumericVar *slice)\n{\n\tAssert(start >= 0);\n\tAssert(start + length <= var->ndigits);\n\n\tinit_var(slice);\n\n\tslice->ndigits = length;\n\tslice->digits = var->digits + start;\n\tslice->buf = NULL;\n\tslice->weight = var->weight - var->ndigits + length;\n\tslice->sign = var->sign;\n\tslice->dscale = (var->ndigits - var->weight - 1) * DEC_DIGITS;\n}\n\n/*\n * mul_var_lopsided() -\n *\n * Lopsided Multiplication for unequal-length factors.\n *\n * This function handles the case where var1 has significantly fewer digits\n * than var2. In such a scenario, splitting var1 for a balanced multiplication\n * algorithm would be inefficient, as the high part would be zero.\n *\n * To overcome this inefficiency, the function divides factor2 into a series of\n * slices, each containing the same number of digits as var1, and multiplies\n * var1 with each slice one at a time. 
As a result, the recursive call to\n * mul_var() will have balanced inputs, which improves the performance of\n * divide-and-conquer algorithm, such as the Karatsuba.\n */\nstatic void\nmul_var_lopsided(const NumericVar *var1, const NumericVar *var2,\n\t\t\t\t NumericVar *result)\n{\n\tint\t\t\tvar1ndigits = var1->ndigits;\n\tint\t\t\tvar2ndigits = var2->ndigits;\n\tint\t\t\tprocessed = 0;\n\tint\t\t\tremaining = var2ndigits;\n\tint\t\t\tlength;\n\tNumericVar\tslice;\n\tNumericVar\tproduct;\n\tNumericVar\tsum;\n\n\tAssert(var1ndigits <= var2ndigits);\n\tAssert(var1ndigits > MUL_SMALL);\n\tAssert(var1ndigits * 2 <= var2ndigits);\n\n\tinit_var(&slice);\n\tinit_var(&product);\n\tinit_var(&sum);\n\n\twhile (remaining > 0)\n\t{\n\t\tlength = Min(remaining, var1ndigits);\n\t\tslice_var(var2, var2ndigits - processed - length, length, &slice);\n\t\tmul_var(var1, &slice, &product, var1->dscale + slice.dscale);\n\t\tproduct.weight += processed;\n\t\tadd_var(&sum, &product, &sum);\n\t\tremaining -= length;\n\t\tprocessed += length;\n\t}\n\n\tset_var_from_var(&sum, result);\n\n\tfree_var(&slice);\n\tfree_var(&product);\n\tfree_var(&sum);\n}\n```\n\n> As an example, given inputs with\n> var1ndigits = 1000 and var2ndigits = 10000, mul_var() will invoke\n> mul_var_karatsuba_half(), which will then recursively invoke mul_var()\n> twice with var1ndigits = 1000 and var2ndigits = 5000, which no longer\n> satisfies KARATSUBA_CONDITION(), so it will just invoke the schoolbook\n> algorithm on each half, which stands no chance of being any faster. On\n> the other hand, if it divided var2 into 10 chunks of length 1000, it\n> would invoke the Karatsuba algorithm on each chunk, which would at\n> least stand a chance of being faster.\n\nInteresting example!\n\nIndeed only mul_var_karatsuba_half() will be called with the inputs:\nvar1ndigits=1000 var2ndigits=10000\nvar1ndigits=1000 var2ndigits=5000\nIt will never call mul_var_karatsuba_full().\n\nSurprisingly, this still gives a 13% speed-up on a Intel Core i9-14900K.\n\nThis performance gain comes from the splitting of the larger factor.\n\nHere is how I benchmarked using pg-timeit [1] and the\nmul_var-karatsuba-benchmark.patch [2] from my original post:\n\nTo test the patch, you have to edit pg_proc.dat for\nnumeric_mul_karatsuba and give it a new unique oid.\n\n```\nSELECT\n timeit.pretty_time(total_time_a / 1e6 / executions,3) AS execution_time_a,\n timeit.pretty_time(total_time_b / 1e6 / executions,3) AS execution_time_b,\n total_time_a::numeric/total_time_b - 1 AS execution_time_difference\nFROM timeit.cmp(\n 'numeric_mul',\n 'numeric_mul_karatsuba',\n input_values := ARRAY[\n random_ndigits(1000)::TEXT,\n random_ndigits(10000)::TEXT\n ],\n min_time := 1000000,\n timeout := '10 s'\n);\n-[ RECORD 1 ]-------------+-------------------\nexecution_time_a | 976 µs\nexecution_time_b | 864 µs\nexecution_time_difference | 0.1294294200936600\n```\n\nThe KARATSUBA_CONDITION tries to capture the interesting region of performance gains, as observed from measurements [3], in an expression that is not too complex.\n\nIn the image [3], the purple to black area is performance regressions,\nand the rainbow colors are performance gains.\n\nThe black colored line segment is the KARATSUBA_CONDITION,\nwhich tries to capture the three performance gain regions,\nas defined by:\nKARATSUBA_LOW_RANGE_CONDITION\nKARATSUBA_MIDDLE_RANGE_CONDITION\nKARATSUBA_HIGH_RANGE_CONDITION \n\n> Related to that, KARATSUBA_HIGH_RANGE_CONDITION() doesn't appear to\n> make a lot of sense. 
For inputs with var1ndigits between 128 and 2000,\n> and var2ndigits > 9000, this condition will pass and it will\n> recursively break up the longer input into smaller and smaller pieces\n> until eventually that condition no longer passes, but none of the\n> other conditions in KARATSUBA_CONDITION() will pass either, so it'll\n> just invoke the schoolbook algorithm on each piece, which is bound to\n> be slower once all the overheads are taken into account. For example,\n> given var1ndigits = 200 and var2ndigits = 30000, KARATSUBA_CONDITION()\n> will pass due to KARATSUBA_HIGH_RANGE_CONDITION(), and it will recurse\n> with var1ndigits = 200 and var2ndigits = 15000, and then again with\n> var1ndigits = 200 and var2ndigits = 7500, at which point\n> KARATSUBA_CONDITION() no longer passes. With mul_var_karatsuba_half()\n> implemented as it is, that is bound to happen, because each half will\n> end up having var2ndigits between 4500 and 9000, which fails\n> KARATSUBA_CONDITION() if var1ndigits < 2000. If\n> mul_var_karatsuba_half() was replaced by something that recursed with\n> more balanced chunks, then it might make more sense,though allowing\n> values of var1ndigits down to 128 doesn't make sense, since the\n> Karatsuba algorithm will never be invoked for inputs shorter than 384.\n\nLike explained above, the mul_var_karatsuba_half() is not meaningless,\neven for cases where we never reach mul_var_karatsuba_full().\n\nRegarding the last comment on 128 and 384, I think you're reading the\nconditions wrong, note that 128 is for var1ndigits while 384 is for var2ndigits:\n\n+#define KARATSUBA_BASE_LIMIT 384\n+#define KARATSUBA_VAR1_MIN1 128\n...\n+\t((var2ndigits) >= KARATSUBA_BASE_LIMIT && \\\n...\n+\t (var1ndigits) > KARATSUBA_VAR1_MIN1)\n\n> Looking at KARATSUBA_MIDDLE_RANGE_CONDITION(), the test that\n> var2ndigits > 2500 seems to be redundant. If var1ndigits > 2000 and\n> var2ndigits < 2500, then KARATSUBA_LOW_RANGE_CONDITION() is satisfied,\n> so these tests could be simplified, eliminating some of those magic\n> constants.\n\nYes, I realized that myself too, but chose to keep the start and end\nboundaries for each of the three ranges. Since it's just boolean logic,\nwith constants, I think the compiler should be smart enough to optimize\naway the redundancy, but maybe better to keep the redundant condition\nas a comment instead of actual code, I have no strong opinion what's best.\n\n> However, I really don't like having these magic constants at all,\n> because in practice the threshold above which the Karatsuba algorithm\n> is a win can vary depending on a number of factors, such as whether\n> it's running on 32-bit or 64-bit, whether or not SIMD instructions are\n> available, the relative timings of CPU instructions, the compiler\n> options used, and probably a bunch of other things. The last time I\n> looked at the Java source code, for example, they had separate\n> thresholds for 32-bit and 64-bit platforms, and even that's probably\n> too crude. Some numeric libraries tune the thresholds for a large\n> number of different platforms, but that takes a lot of effort. I think\n> a better approach would be to have a configurable threshold. 
Ideally,\n> this would be just one number, with all other numbers being derived\n> from it, possibly using some simple heuristic to reduce the effective\n> threshold for more balanced inputs, for which the Karatsuba algorithm\n> is more efficient.\n>\n> Having a configurable threshold would allow people to tune for best\n> performance on their own platforms, and also it would make it easier\n> to write tests that hit the new code. As it stands, it's not obvious\n> how much of the new code is being hit by the existing tests.\n>\n> Doing a quick test on my machine, using random equal-length inputs of\n> various sizes, I got the following performance results:\n>\n> digits | rate (HEAD) | rate (patch) | change\n> --------+---------------+---------------+--------\n> 10 | 6.060014e+06 | 6.0189365e+06 | -0.7%\n> 100 | 2.7038752e+06 | 2.7287925e+06 | +0.9%\n> 1000 | 88640.37 | 90504.82 | +2.1%\n> 1500 | 39885.23 | 41041.504 | +2.9%\n> 1600 | 36355.24 | 33368.28 | -8.2%\n> 2000 | 23308.582 | 23105.932 | -0.9%\n> 3000 | 10765.185 | 11360.11 | +5.5%\n> 4000 | 6118.2554 | 6645.4116 | +8.6%\n> 5000 | 3928.4985 | 4639.914 | +18.1%\n> 10000 | 1003.80164 | 1431.9335 | +42.7%\n> 20000 | 255.46135 | 456.23462 | +78.6%\n> 30000 | 110.69313 | 226.53398 | +104.7%\n> 40000 | 62.29333 | 148.12916 | +137.8%\n> 50000 | 39.867493 | 95.16788 | +138.7%\n> 60000 | 27.7672 | 74.01282 | +166.5%\n>\n> The Karatsuba algorithm kicks in at 384*4 = 1536 decimal digits, so\n> presumably the variations below that are just noise, but this does\n> seem to suggest that KARATSUBA_BASE_LIMIT = 384 is too low for me, and\n> I'd probably want it to be something like 500-700.\n\nI've tried hard to reduce the magic part of these constants,\nby benchmarking on numerous architectures, and picking them\nmanually by making a balanced judgement about what complexity\ncould possibly be acceptable for the threshold function,\nand what performance gains that are important to try to capture.\n\nI think this approach is actually less magical than the hard-coded\nsingle value constants I've seen in many other numeric libraries,\nwhere it's not clear at all what the full two dimensional performance\nimage looks like.\n\nI considered if my initial post on this should propose a patch with a\nsimple threshold function, that just checks if var1ndigits is larger\nthan some constant, like many other numeric libraries do.\nHowever, I decided I should at least try to do something smarter,\nsince it seemed possible.\n\n> There's another complication though (if the threshold is made\n> configurable): the various numeric functions that use mul_var() are\n> immutable, which means that the results from the Karatsuba algorithm\n> must match those from the schoolbook algorithm exactly, for all\n> inputs. That's not currently the case when computing approximate\n> results with a reduced rscale. That's fixable, but it's not obvious\n> whether or not the Karatsuba algorithm can actually be made beneficial\n> when computing such approximate results.\n\nI read Tom's reply on this part, and understand we can only do Karatsuba\nif full rscale is desired.\n\n> There's a wider question as to how many people use such big numeric\n> values -- i.e., how many people are actually going to benefit from\n> this? 
I don't have a good feel for that.\n\nPersonally, I started working on this because I wanted a to use numeric\nto implement the ECDSA verify algorithm in PL/pgSQL, to avoid\ndependency on a C extension that I discovered could segfault.\n\nUnfortunately, Karatsuba didn't help much for this particular case,\nsince the ECDSA factors are not big enough.\nI ended up implementing ECDSA verify as a pgrx extension instead.\n\nHowever, I can imagine other crypto algorithms might require larger factors,\nas well as other scientific research use cases, such as astronomy and physics,\nthat could desire storage of numeric values of very high precision.\n\nToom-3 is probably overkill, since we \"only\" support up to 32768 base digits,\nbut I think we should at least consider optimizing for the range of numeric values\nthat are supported by numeric.\n\nRegards,\nJoel\n\n[1] https://github.com/joelonsql/pg-timeit\n[2] https://www.postgresql.org/message-id/attachment/159528/mul_var-karatsuba-benchmark.patch\n[3] https://gist.githubusercontent.com/joelonsql/e9d06cdbcdf56cd8ffa673f499880b0d/raw/69df06e95bc254090f8397765079e1a8145eb5ac/derive_threshold_function_using_dynamic_programming.png\n\n\n", "msg_date": "Sun, 30 Jun 2024 12:22:28 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "On 2024-Jun-30, Joel Jacobson wrote:\n\n> However, I can imagine other crypto algorithms might require larger factors,\n> as well as other scientific research use cases, such as astronomy and physics,\n> that could desire storage of numeric values of very high precision.\n\nFWIW I was talking to some people with astronomical databases, and for\nreasons that seem pretty relatable they prefer to do all numerical\nanalysis outside of the database, and keep the database server only for\nthe data storage and retrieval parts. It seems a really hard sell that\nthey would try to do that analysis in the database, because the\nparalellizability aspect is completely different for the analysis part\nthan for the database part -- I mean, they can easily add more servers\nto do photography analysis, but trying to add database servers (to cope\nwith additional load from doing the analysis there) is a much harder\nsell.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nOfficer Krupke, what are we to do?\nGee, officer Krupke, Krup you! (West Side Story, \"Gee, Officer Krupke\")\n\n\n", "msg_date": "Sun, 30 Jun 2024 12:41:46 +0200", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "On Sat, Jun 29, 2024, at 17:25, Tom Lane wrote:\n> Dean Rasheed <[email protected]> writes:\n>> There's another complication though (if the threshold is made\n>> configurable): the various numeric functions that use mul_var() are\n>> immutable, which means that the results from the Karatsuba algorithm\n>> must match those from the schoolbook algorithm exactly, for all\n>> inputs.\n>\n> That seems like an impossible standard to meet. What we'd probably\n> have to do is enable Karatsuba only when mul_var is being asked\n> for an exact (full-precision) result. 
This'd complicate the check\n> condition further, possibly reaching the point where there is a\n> visible drag on performance in the non-Karatsuba case.\n\nOK, noted that we should only do Karatsuba when mul_var is\nbeing asked for an exact (full-precision) result.\n\n> Another possible source of drag: if mul_var is now recursive,\n> does it need a stack depth check? If we can prove that the\n> number of recursion levels is no more than O(log(N)) in the\n> number of input digits, it's probably safe to skip that ...\n> but I see no such proof here.\n\n> (In general I find this patch seriously undercommented.)\n\nYes, the #define's around KARATSUBA_CONDITION needs\nto be documented, but I felt this region of the patch might evolve\nand need some discussion first, if this level of complexity is\nacceptable.\n\nHowever, I think the comments above split_var_at(),\nmul_var_karatsuba_full() and mul_var_karatsuba_half()\nare quite good already, what do you think?\n\n>> There's a wider question as to how many people use such big numeric\n>> values -- i.e., how many people are actually going to benefit from\n>> this? I don't have a good feel for that.\n>\n> I have heard of people doing calculations on bignum integers in\n> Postgres, but they are very few and far between --- usually that\n> sort of task is better done in another language. (No matter how\n> fast we make mul_var, the general overhead of SQL expressions in\n> general and type numeric in particular means it's probably not\n> the right tool for heavy-duty bignum arithmetic.)\n>\n> There is definitely an argument to be made that this proposal is\n> not worth the development effort and ongoing maintenance effort\n> we'd have to sink into it.\n\nIt's a good question, I'm not sure what I think, maybe status quo is the best,\nbut it feels like something could be done at least.\n\nThe patch basically consists of three parts:\n\n- Threshold function KARATSUBA_CONDITION\nThis is a bit unorthodox, since it's more ambitious than many popular numeric\nlibraries. 
I've only found GMP to be even more complex.\n\n- mul_var_karatsuba_half()\nSince it's provably correct using simple school-grade substitutions,\nI don't think this function is unorthodox, and shouldn't need to change,\nunless we change the definition of NumericVar in the future.\n\n- mul_var_karatsuba()\nThis follows the canonical pseudo-code at Wikipedia for the Karatsuba\nalgorithm precisely, so nothing unorthodox here either.\n\nSo, I think the KARATSUBA_CONDITION require more development and maintenance\neffort than the rest of the patch, since it's based on measurements\non numerous architectures, which will be different in the future.\n\nThe rest of the patch, split_var_at(), mul_var_karatsuba_full(),\nmul_var_karatsuba_half(), should require much less effort to maintain,\nsince they are should remain the same,\neven when we need to support new architectures.\n\nI'm eager to hear your thoughts after you've also read my other reply moments ago to Dean.\n\nRegards,\nJoel\n\n\n", "msg_date": "Sun, 30 Jun 2024 13:24:15 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "On Sat, Jun 29, 2024, at 14:22, Dean Rasheed wrote:\n> However, I really don't like having these magic constants at all,\n> because in practice the threshold above which the Karatsuba algorithm\n> is a win can vary depending on a number of factors, such as whether\n> it's running on 32-bit or 64-bit, whether or not SIMD instructions are\n> available, the relative timings of CPU instructions, the compiler\n> options used, and probably a bunch of other things. \n...\n> Doing a quick test on my machine, using random equal-length inputs of\n> various sizes, I got the following performance results:\n>\n> digits | rate (HEAD) | rate (patch) | change\n> --------+---------------+---------------+--------\n> 10 | 6.060014e+06 | 6.0189365e+06 | -0.7%\n> 100 | 2.7038752e+06 | 2.7287925e+06 | +0.9%\n\nDoes the PostgreSQL community these days have access to some kind\nof performance farm, covering some/all of the supported hardware architectures?\n\nPersonally, I have three machines:\nMacBook Pro M3 Max\nIntel Core i9-14900K\nAMD Ryzen 9 7950X3D\n\nIn addition I usually spin up a few AWS instances of different types,\nbut this is scary, because one time I forgot to turn them off for a week,\nwhich was quite costly.\n\nWould be much nicer with a performance farm!\n\nIf one exists, please let me know and no need to read the rest of this email.\nOtherwise:\n\nImagine if we could send a patch to a separate mailing list,\nand the system would auto-detect what catalog functions are affected,\nand automatically generate a performance report, showing the delta per platform.\n\nBinary functions, like numeric_mul(), should generate an image where the two\naxes would be the size of the inputs, and the color of each pixel should show\nthe performance gain/loss, whereas unary functions like sqrt() should have\nthe size of the input as the x-axis and performance gain/loss as the y-axis.\n\nHow to test each catalog function would of course need to be designed\nmanually, but maybe the detection of affected function would be\nautomated, if accepting some false positives/negatives, i.e. 
benchmarking\ntoo many or too few catalog functions, given a certain patch.\n\nCatalog functions are just a tiny part of PostgreSQL, so there should\nof course be other tests as well to cover other things, but since they are\nsimple to test predictably, maybe it could be a good start for the project,\neven if it's far from the most important thing to benchmark.\n\nI found an old performance farm topic from 2012 but it seems the discussion\njust stopped for some reason not clear to me.\n\nRegards,\nJoel\n\n\n", "msg_date": "Sun, 30 Jun 2024 17:17:44 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "\"Joel Jacobson\" <[email protected]> writes:\n> On Sat, Jun 29, 2024, at 17:25, Tom Lane wrote:\n>> (In general I find this patch seriously undercommented.)\n\n> However, I think the comments above split_var_at(),\n> mul_var_karatsuba_full() and mul_var_karatsuba_half()\n> are quite good already, what do you think?\n\nNot remarkably so. For starters, heaven help the reader who has\nno idea what \"the Karatsuba algorithm\" refers to. Nor is there\nany mention of why (or when) it's better than the traditional\nalgorithm. You could at least do people the courtesy of providing\na link to the wikipedia article that you're assuming they've\nmemorized.\n\nThere's also a discussion to be had about why Karatsuba is\na better choice than other divide-and-conquer multiplication\nmethods. Why not Toom-Cook, for example, which the aforesaid\nwikipedia page says is faster yet? I suppose you concluded\nthat the extra complexity is unwarranted, but this is the\nsort of thing I'd expect to see explained in the comments.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 30 Jun 2024 11:44:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "On Sun, Jun 30, 2024, at 17:44, Tom Lane wrote:\n> \"Joel Jacobson\" <[email protected]> writes:\n>> On Sat, Jun 29, 2024, at 17:25, Tom Lane wrote:\n>>> (In general I find this patch seriously undercommented.)\n>\n>> However, I think the comments above split_var_at(),\n>> mul_var_karatsuba_full() and mul_var_karatsuba_half()\n>> are quite good already, what do you think?\n>\n> Not remarkably so. For starters, heaven help the reader who has\n> no idea what \"the Karatsuba algorithm\" refers to. Nor is there\n> any mention of why (or when) it's better than the traditional\n> algorithm. You could at least do people the courtesy of providing\n> a link to the wikipedia article that you're assuming they've\n> memorized.\n\nThanks for guidance. New patch attached.\n\nI've added as an introduction to Karatsuba:\n\n+/*\n+ * mul_var_karatsuba_full() -\n+ *\n+ *\tMultiplication using the Karatsuba algorithm.\n+ *\n+ *\tThe Karatsuba algorithm is a divide-and-conquer algorithm that reduces\n+ *\tthe complexity of large number multiplication. It splits each number\n+ *\tinto two halves and performs three multiplications on the parts,\n+ *\trather than four as in the traditional method. 
This results in\n+ *\ta significant performance improvement for sufficiently large numbers.\n...\n+ *\tFor more details on the Karatsuba algorithm, see the Wikipedia article:\n+ *\thttps://en.wikipedia.org/wiki/Karatsuba_algorithm\n\n> There's also a discussion to be had about why Karatsuba is\n> a better choice than other divide-and-conquer multiplication\n> methods. Why not Toom-Cook, for example, which the aforesaid\n> wikipedia page says is faster yet? I suppose you concluded\n> that the extra complexity is unwarranted, but this is the\n> sort of thing I'd expect to see explained in the comments.\n\nI've added this to the end of the comment on mul_var_karatsuba_full():\n\n+ *\tThe Karatsuba algorithm is preferred over other divide-and-conquer methods\n+ *\tlike Toom-Cook for this implementation due to its balance of complexity and\n+ *\tperformance gains given Numeric's constraints.\n+ *\n+ *\tFor Toom-Cook to be worth the added complexity, the factors would need to\n+ *\tbe much larger than supported by Numeric, making Karatsuba a more\n+ *\tappropriate choice.\n\nAlso, I added this comment on the #define's at the beginning:\n\n+/*\n+ * Constants used to determine when the Karatsuba algorithm should be used\n+ * for multiplication. These thresholds were determined empirically through\n+ * benchmarking across various architectures, aiming to avoid performance\n+ * regressions while capturing potential gains. The choice of these values\n+ * involves trade-offs and balances simplicity and performance.\n+ */\n\nAs well as this comment on KARATSUBA_CONDITION:\n\n+/*\n+ * KARATSUBA_CONDITION() -\n+ *\n+ * This macro determines if the Karatsuba algorithm should be applied\n+ * based on the number of digits in the multiplicands. It checks if\n+ * the number of digits in the larger multiplicand exceeds a base limit\n+ * and if the sizes of the multiplicands fall within specific ranges\n+ * where Karatsuba multiplication is usually beneficial.\n+ *\n+ * The conditions encapsulated by KARATSUBA_CONDITION are:\n+ * 1. The larger multiplicand has more digits than the base limit.\n+ * 2. The sizes of the multiplicands fall within low, middle, or high range\n+ * conditions which were identified as performance beneficial regions during\n+ * benchmarks.\n+ *\n+ * The macro ensures that the algorithm is applied only when it is likely\n+ * to provide performance benefits, considering the size and ratio of the\n+ * factors.\n+ */\n\nWhat do you think?\n\nRegards,\nJoel", "msg_date": "Sun, 30 Jun 2024 20:30:59 +0200", "msg_from": "\"Joel Jacobson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "On Sun, 30 Jun 2024 at 11:22, Joel Jacobson <[email protected]> wrote:\n>\n> The surprising realization here is that there are actually (var1ndigits, var2ndigits)\n> combinations where *only* doing mul_var_karatsuba_half() recursively\n> all the way down to schoolbook *is* a performance win,\n> even though we don't do any mul_var_karatsuba_full().\n>\n> Indeed only mul_var_karatsuba_half() will be called with the inputs:\n> var1ndigits=1000 var2ndigits=10000\n> var1ndigits=1000 var2ndigits=5000\n> It will never call mul_var_karatsuba_full().\n>\n> Surprisingly, this still gives a 13% speed-up on a Intel Core i9-14900K.\n>\n\nHmm, I don't see any gains in that example. 
Logically, it is doing\nslightly more work splitting up var2 and adding partial products.\nHowever, it's possible that what you're seeing is a memory access\nproblem where, if var2 is too large, it won't fit in cache close to\nthe CPU, and since the schoolbook algorithm traverses var2 multiple\ntimes, it ends up being quicker to split up var2. Since I have a\nslower CPU, I'm more likely to be CPU-limited, while you might be\nmemory-limited.\n\nThis makes me think that this is always going to be very\nhardware-dependent, and we shouldn't presume what will work best on\nthe user's hardware.\n\nHowever, if a 1000x1000 ndigit product is known to be faster using\nKaratsuba on some particular hardware (possibly nearly all hardware),\nthen why wouldn't it make sense to do the above as 10 invocations of\nKaratsuba?\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 2 Jul 2024 10:34:00 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" }, { "msg_contents": "On Sun, 30 Jun 2024 at 11:22, Joel Jacobson <[email protected]> wrote:\n>\n> On Sat, Jun 29, 2024, at 14:22, Dean Rasheed wrote:\n>\n> > But why just split it into two pieces? That will just lead\n> > to a lot of unnecessary recursion for very unbalanced inputs. Instead,\n> > why not split the longer input into N roughly equal sized pieces, each\n> > around the same length as the shorter input, multiplying and adding\n> > them at the appropriate offsets?\n>\n> The approach you're describing is implemented by e.g. CPython\n> and is called \"lopsided\" in their code base. It has some different\n> performance characteristics, compared to the recursive Half-Karatsuba\n> approach.\n>\n> What I didn't like about lopsided is the degenerate case where the\n> last chunk is much shorter than the var1, for example, if we pretend\n> we would be doing Karatsuba all the way down to ndigits 2,\n> and think about the example var1ndigits = 3 and var2ndigits = 10,\n> then lopsided would do\n> var1ndigits=3 var2ndigits=3\n> var1ndigits=3 var2ndigits=3\n> var1ndigits=3 var2ndigits=3\n> var1ndigits=3 var2ndigits=1\n>\n> whereas Half-Karatsuba would do\n> var1ndigits=3 var2ndigits=5\n> var1ndigits=3 var2ndigits=5\n>\n> You can find contrary examples too of course where lopsided\n> is better than Half-Karatsuba, none of them seem substantially better\n> than the other.\n\nActually, that wasn't quite what I was thinking of. The approach I've\nseen (I can't remember where) is to split var2 into roughly\nequal-sized chunks as follows:\n\nIf var2ndigits >= 2 * var1ndigits, split it into a number chunks where\n\n nchunks = var2ndigits / var1ndigits\n\nusing integer division with truncation. Then call mul_var_chunks()\n(say), passing it nchunks, to perform the multiplication using that\nmany chunks.\n\nmul_var_chunks() would use nchunks to decides on the chunk size according to\n\n chunk_size = var2ndigits / nchunks\n\nagain using integer division with truncation. That has a remainder in\nthe range [0, nchunks), which is divided up between the chunks so that\nsome of them end up being chunk_size + 1 digits. 
I.e., something like:\n\n chunk_remainder = var2ndigits - chunk_size * nchunks\n\n for (int start = 0, end = chunk_size;\n start < var2ndigits;\n start = end, end = start + chunk_size)\n {\n /* Distribute remainder over the first few chunks */\n if (chunk_remainder > 0)\n {\n end++;\n chunk_remainder--;\n }\n\n /* Process chunk between \"start\" and \"end\" */\n }\n\nWhat this does is adapt the shape of the chunks to the region where\nthe Karatsuba algorithm is most efficient. For example, suppose\nvar1ndigits = 1000. Then:\n\nFor 2000x1000 to 2999x1000 digit products, nchunks will be 2 and\nchunk_size will be at most (2999/2)+1 = 1500.\n\nFor 3000x1000 to 3999x1000 digit products, nchunks will be 3 and\nchunk_size will be at most (3999/3)+1 = 1334.\n\nAnd so on.\n\nThe result is that all the chunks will fall into that region where\nvar2ndigits / var1ndigits is between 1 and 1.5, for which\nKARATSUBA_LOW_RANGE_CONDITION() will almost certainly pass, and\nKaratsuba will operate efficiently all the way down to\nKARATSUBA_BASE_LIMIT.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 2 Jul 2024 10:42:06 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimize numeric.c mul_var() using the Karatsuba algorithm" } ]
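(Editorial addition, not part of the thread above: for readers who have not seen the identity under discussion, a single Karatsuba step trades one of the four schoolbook sub-products for a few extra additions and subtractions. The stand-alone C program below only illustrates that identity on machine integers -- it is not the patch's code, which operates on NumericVar digit arrays in base NBASE=10000 and recurses -- and the split base 10000 is chosen here just to echo NBASE.)

```
#include <stdio.h>
#include <stdint.h>
#include <assert.h>

int
main(void)
{
	const uint64_t B = 10000;	/* split base, echoing numeric's NBASE */
	uint64_t	x = 12345678;	/* arbitrary example inputs */
	uint64_t	y = 87654321;

	/* split each input into a high and a low part around B */
	uint64_t	x1 = x / B, x0 = x % B;
	uint64_t	y1 = y / B, y0 = y % B;

	/* three multiplications instead of the schoolbook four */
	uint64_t	z2 = x1 * y1;
	uint64_t	z0 = x0 * y0;
	uint64_t	z1 = (x1 + x0) * (y1 + y0) - z2 - z0;	/* = x1*y0 + x0*y1 */

	/* recombine: x*y = z2*B^2 + z1*B + z0 */
	uint64_t	karatsuba = z2 * B * B + z1 * B + z0;

	assert(karatsuba == x * y);
	printf("%llu * %llu = %llu\n",
		   (unsigned long long) x, (unsigned long long) y,
		   (unsigned long long) karatsuba);
	return 0;
}
```

(It prints 12345678 * 87654321 = 1082152022374638; applied recursively to the three sub-products, this is what drives the O(n^1.585) cost the thread weighs against the schoolbook O(n*m).)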
[ { "msg_contents": "I started looking into the ideas discussed at [1] about reimplementing\necpg's string handling. Before I could make any progress I needed\nto understand the existing input code, part of which is the macro\nexpansion mechanism ... and the more I looked at that the more bugs\nI found, not to mention that it uses misleading field names and is\nnext door to uncommented. I found two ways to crash ecpg outright\nand several more cases in which it'd produce surprising behavior.\nAs an example,\n\n$ cd .../src/interfaces/ecpg/test/preproc/\n$ ../../preproc/ecpg --regression -I./../../include -I. -DNAMELEN=99 -o define.c define.pgc\nmunmap_chunk(): invalid pointer\nAborted\n\nAttached is a patch that cleans all that up and attempts to add a\nlittle documentation about how things work. One thing it's missing\nis any test of the behavior when command-line macro definitions are\ncarried from one file to the next one. To test that, we'd need to\ncompile more than one ecpg input file at a time. I can see how\nto kluge the Makefiles to make that happen, basically this'd do:\n\n define.c: define.pgc $(ECPG_TEST_DEPENDENCIES)\n-\t$(ECPG) -DCMDLINESYM=123 -o $@ $<\n+\t$(ECPG) -DCMDLINESYM=123 -o $@ $< $<\n\nBut I have no idea about making it work in meson. Any suggestions?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/3897526.1712710536%40sss.pgh.pa.us", "msg_date": "Mon, 15 Apr 2024 17:48:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Bugs in ecpg's macro mechanism" }, { "msg_contents": "Hi,\n\nOn 2024-04-15 17:48:32 -0400, Tom Lane wrote:\n> I started looking into the ideas discussed at [1] about reimplementing\n> ecpg's string handling. Before I could make any progress I needed\n> to understand the existing input code, part of which is the macro\n> expansion mechanism ... and the more I looked at that the more bugs\n> I found, not to mention that it uses misleading field names and is\n> next door to uncommented.\n\nAs part of the discussion leading to [1] I had looked at parse.pl and found it\nfairly impressively obfuscated and devoid of helpful comments.\n\n\n> I found two ways to crash ecpg outright and several more cases in which it'd\n> produce surprising behavior.\n\n:/\n\n\n> One thing it's missing is any test of the behavior when command-line macro\n> definitions are carried from one file to the next one. To test that, we'd\n> need to compile more than one ecpg input file at a time. I can see how to\n> kluge the Makefiles to make that happen, basically this'd do:\n> \n> define.c: define.pgc $(ECPG_TEST_DEPENDENCIES)\n> -\t$(ECPG) -DCMDLINESYM=123 -o $@ $<\n> +\t$(ECPG) -DCMDLINESYM=123 -o $@ $< $<\n> \n> But I have no idea about making it work in meson. Any suggestions?\n\nSo you just want to compile define.c twice? The below should suffice:\n\ndiff --git i/src/interfaces/ecpg/test/sql/meson.build w/src/interfaces/ecpg/test/sql/meson.build\nindex e04684065b0..202dc69c6ea 100644\n--- i/src/interfaces/ecpg/test/sql/meson.build\n+++ w/src/interfaces/ecpg/test/sql/meson.build\n@@ -31,7 +31,7 @@ pgc_files = [\n ]\n \n pgc_extra_flags = {\n- 'define': ['-DCMDLINESYM=123'],\n+ 'define': ['-DCMDLINESYM=123', files('define.pgc')],\n 'oldexec': ['-r', 'questionmarks'],\n }\n \n\nI assume that was just an test hack, because it leads to the build failing\nbecause of main being duplicated. 
But it'd work the same with another, \"non\noverlapping\", file.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Apr 2024 16:10:44 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bugs in ecpg's macro mechanism" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-04-15 17:48:32 -0400, Tom Lane wrote:\n>> But I have no idea about making it work in meson. Any suggestions?\n\n> So you just want to compile define.c twice? The below should suffice:\n\n> - 'define': ['-DCMDLINESYM=123'],\n> + 'define': ['-DCMDLINESYM=123', files('define.pgc')],\n\nAh, thanks. I guess this depends on getopt_long reordering arguments\n(since the \"-o outfile\" bit will come later). That is safe enough\nin HEAD since 411b72034, but it might fail on weird platforms in v16.\nHow much do we care about that? (We can avoid that hazard in the\nmakefile build easily enough.)\n\n> I assume that was just an test hack, because it leads to the build failing\n> because of main being duplicated. But it'd work the same with another, \"non\n> overlapping\", file.\n\nYeah, I hadn't actually worked through what to do in detail.\nHere's a v2 that adds that testing. I also added some more\nuser-facing doco, and fixed a small memory leak that I noted\nfrom valgrind testing. (It's hardly the only one in ecpg,\nbut it was easy to fix as part of this patch.)\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 15 Apr 2024 20:47:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bugs in ecpg's macro mechanism" }, { "msg_contents": "Hi,\n\nOn 2024-04-15 20:47:16 -0400, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > On 2024-04-15 17:48:32 -0400, Tom Lane wrote:\n> >> But I have no idea about making it work in meson. Any suggestions?\n> \n> > So you just want to compile define.c twice? The below should suffice:\n> \n> > - 'define': ['-DCMDLINESYM=123'],\n> > + 'define': ['-DCMDLINESYM=123', files('define.pgc')],\n> \n> Ah, thanks. I guess this depends on getopt_long reordering arguments\n> (since the \"-o outfile\" bit will come later). That is safe enough\n> in HEAD since 411b72034, but it might fail on weird platforms in v16.\n> How much do we care about that? (We can avoid that hazard in the\n> makefile build easily enough.)\n\nOh, I didn't even think of that. If we do care, we can just move the -o to\nearlier. Or just officially add it as another input, that'd just be a bit of\nnotational overhead.\n\nAs moving the arguments around would just be the following, I see no reason to\njust do so.\n\ndiff --git i/src/interfaces/ecpg/test/meson.build w/src/interfaces/ecpg/test/meson.build\nindex c1e508ccc82..d7c0e9de7d6 100644\n--- i/src/interfaces/ecpg/test/meson.build\n+++ w/src/interfaces/ecpg/test/meson.build\n@@ -45,9 +45,10 @@ ecpg_preproc_test_command_start = [\n '--regression',\n '-I@CURRENT_SOURCE_DIR@',\n '-I@SOURCE_ROOT@' + '/src/interfaces/ecpg/include/',\n+ '-o', '@OUTPUT@',\n ]\n ecpg_preproc_test_command_end = [\n- '-o', '@OUTPUT@', '@INPUT@'\n+ '@INPUT@',\n ]\n \n ecpg_test_dependencies = []\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Apr 2024 18:05:44 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bugs in ecpg's macro mechanism" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On 2024-04-15 20:47:16 -0400, Tom Lane wrote:\n>> Ah, thanks. 
I guess this depends on getopt_long reordering arguments\n>> (since the \"-o outfile\" bit will come later). That is safe enough\n>> in HEAD since 411b72034, but it might fail on weird platforms in v16.\n>> How much do we care about that? (We can avoid that hazard in the\n>> makefile build easily enough.)\n\n> As moving the arguments around would just be the following, I see no reason to\n> just do so.\n\nFair enough. I'm inclined to include that change only in v16, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Apr 2024 21:33:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bugs in ecpg's macro mechanism" } ]
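(Editorial addition, not part of the thread above: for readers unfamiliar with the mechanism being debugged here, ecpg has its own preprocessor macros, defined either in the .pgc source with EXEC SQL DEFINE or on the ecpg command line with -D, and the regression test discussed above mixes the two. The fragment below is only a sketch of the kind of construct involved -- it is not the actual define.pgc regression test, the identifiers merely echo names mentioned in the thread, and the details are assumptions.)

```
/* foo.pgc -- preprocess with: ecpg -DCMDLINESYM=123 -o foo.c foo.pgc */

EXEC SQL DEFINE NAMELEN 14;		/* in-file macro definition */

EXEC SQL BEGIN DECLARE SECTION;
char	name[NAMELEN];			/* NAMELEN is expanded by the ecpg preprocessor */
EXEC SQL END DECLARE SECTION;

EXEC SQL ifdef CMDLINESYM;		/* true only when ecpg was run with -DCMDLINESYM=... */
/* statements that should only be preprocessed when the symbol is set */
EXEC SQL endif;
```

(Naming two .pgc inputs on a single ecpg invocation, which is what the Makefile and meson tweaks above arrange by passing the same file twice, is what exercises command-line definitions being carried over from one input file to the next.)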
[ { "msg_contents": "I just noticed that my animal indri has been failing in the\nback branches since I updated its MacPorts packages a few\ndays ago:\n\nccache gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Werror=unguarded-availability-new -Wendif-labels -Wmissing-format-attribute -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument -Wno-compound-token-split-by-macro -g -O2 -fno-common -Werror -fvisibility=hidden -I. -I. -I../../src/include -I/opt/local/include -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.4.sdk -I/opt/local/include/libxml2 -I/opt/local/include -I/opt/local/include -I/opt/local/include -I/opt/local/include -c -o mbuf.o mbuf.c\nxpath.c:77:2: error: 'xmlSubstituteEntitiesDefault' is deprecated [-Werror,-Wdeprecated-declarations]\n xmlSubstituteEntitiesDefault(1);\n ^\n/opt/local/include/libxml2/libxml/parser.h:952:1: note: 'xmlSubstituteEntitiesDefault' has been explicitly marked deprecated here\nXML_DEPRECATED XMLPUBFUN int\n^\n/opt/local/include/libxml2/libxml/xmlversion.h:447:43: note: expanded from macro 'XML_DEPRECATED'\n# define XML_DEPRECATED __attribute__((deprecated))\n ^\n1 error generated.\nmake[1]: *** [xpath.o] Error 1\n\nI could switch the animal to use -Wno-deprecated-declarations in the\nback branches, but I'd rather not. I think the right answer is to\nback-patch Michael's 65c5864d7 (xml2: Replace deprecated routines with\nrecommended ones). We speculated about that at the time (see e.g.,\n400928b83) but didn't pull the trigger. I think 65c5864d7 has now\nbaked long enough that it'd be safe to back-patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Apr 2024 19:14:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Time to back-patch libxml deprecation fixes?" }, { "msg_contents": "Hi,\n\nOn 2024-04-15 19:14:22 -0400, Tom Lane wrote:\n> I think the right answer is to\n> back-patch Michael's 65c5864d7 (xml2: Replace deprecated routines with\n> recommended ones). We speculated about that at the time (see e.g.,\n> 400928b83) but didn't pull the trigger. I think 65c5864d7 has now\n> baked long enough that it'd be safe to back-patch.\n\nLooks like a reasonable plan to me.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Apr 2024 16:20:43 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to back-patch libxml deprecation fixes?" }, { "msg_contents": "On Mon, Apr 15, 2024 at 07:14:22PM -0400, Tom Lane wrote:\n> I could switch the animal to use -Wno-deprecated-declarations in the\n> back branches, but I'd rather not. I think the right answer is to\n> back-patch Michael's 65c5864d7 (xml2: Replace deprecated routines with\n> recommended ones). We speculated about that at the time (see e.g.,\n> 400928b83) but didn't pull the trigger. I think 65c5864d7 has now\n> baked long enough that it'd be safe to back-patch.\n\nYeah, I saw the failure with indri this morning while screening the\nbuildfarm, and was going to send a message about that. 
Backpatching\n65c5864d7 would be the right answer to that, agreed, and that should\nbe rather straight-forward.\n\nNote however the presence of xml_is_well_formed in the back-branches,\nwhere there is an extra xmlParseMemory that needs to be switched to\nxmlReadMemory but that's a simple switch.\n\nWould you prefer if I do it?\n--\nMichael", "msg_date": "Tue, 16 Apr 2024 08:30:46 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to back-patch libxml deprecation fixes?" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Mon, Apr 15, 2024 at 07:14:22PM -0400, Tom Lane wrote:\n>> I could switch the animal to use -Wno-deprecated-declarations in the\n>> back branches, but I'd rather not. I think the right answer is to\n>> back-patch Michael's 65c5864d7 (xml2: Replace deprecated routines with\n>> recommended ones). We speculated about that at the time (see e.g.,\n>> 400928b83) but didn't pull the trigger. I think 65c5864d7 has now\n>> baked long enough that it'd be safe to back-patch.\n\n> Would you prefer if I do it?\n\nPlease, if you have the time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Apr 2024 19:42:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Time to back-patch libxml deprecation fixes?" }, { "msg_contents": "On Mon, Apr 15, 2024 at 07:42:38PM -0400, Tom Lane wrote:\n> Please, if you have the time.\n\nOkay, done that in the 12~16 range then, removing all traces of\nxmlParseMemory() including for xml_is_well_formed() in 12~14. That\nshould calm down indri.\n--\nMichael", "msg_date": "Tue, 16 Apr 2024 12:34:45 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to back-patch libxml deprecation fixes?" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Mon, Apr 15, 2024 at 07:42:38PM -0400, Tom Lane wrote:\n>> Please, if you have the time.\n\n> Okay, done that in the 12~16 range then, removing all traces of\n> xmlParseMemory() including for xml_is_well_formed() in 12~14. That\n> should calm down indri.\n\nThanks!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Apr 2024 23:56:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Time to back-patch libxml deprecation fixes?" }, { "msg_contents": "On Mon, Apr 15, 2024 at 11:56:40PM -0400, Tom Lane wrote:\n> Thanks!\n\nindri has reported on some branches, and is now green for\nREL_12_STABLE and REL_13_STABLE. The rest should be OK.\n--\nMichael", "msg_date": "Tue, 16 Apr 2024 13:44:23 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to back-patch libxml deprecation fixes?" } ]
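For context, the libxml2 entry points involved are xmlSubstituteEntitiesDefault() and xmlParseMemory(), both marked deprecated in recent libxml2 releases; the recommended replacement passes parser options per call instead of setting process-global state. A rough sketch of the shape of the change discussed above (not the actual back-patched diff, and the exact parse options used in the tree may differ):

    #include <libxml/parser.h>

    /* old, deprecated pattern: process-global flag plus xmlParseMemory() */
    static xmlDocPtr
    parse_doc_old(const char *buf, int len)
    {
        xmlSubstituteEntitiesDefault(1);
        return xmlParseMemory(buf, len);
    }

    /* recommended pattern: xmlReadMemory() with explicit per-call options */
    static xmlDocPtr
    parse_doc_new(const char *buf, int len)
    {
        /* XML_PARSE_NOENT is the per-call equivalent of the old global entity setting */
        return xmlReadMemory(buf, len, NULL, NULL, XML_PARSE_NOENT);
    }

The newer entry points long predate the deprecation warnings, which should be why the change can be back-patched without moving the minimum supported libxml2 version in the old branches.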
[ { "msg_contents": "I'm not sure I understand all of the history behind pg_wchar, but it\nseems to be some blend of:\n\n (a) Postgres's own internal representation of a decoded character\n (b) libc's wchar_t\n (c) Unicode code point\n\nFor example, Postgres has its own encoding/decoding routines, so (a) is\nthe most obvious definition. When the server encoding is UTF-8, the\ninternal representation is a Unicode code point, which is convenient\nfor the builtin and ICU providers, as well as some (most? all?) libc\nimplementations. Other encodings have different represenations which\nseem to favor the libc provider.\n\npg_wchar is also passed directly to libc routines like iswalpha_l()\n(see pg_wc_isalpha()), which is depending on definition (b). We guard\nit with:\n\n if (sizeof(wchar_t) >= 4 || c <= (pg_wchar) 0xFFFF)\n\nto ensure that the pg_wchar is representable in the libc's wchar_t\ntype. As far as I can tell this is still no guarantee of correctness;\nit's just a sanity check. I didn't find an obviously better way of\ndoing it, however.\n\nWhen using ICU, we also pass a pg_wchar directly to ICU routines, which\ndepends on definition (c), and can lead to problems like:\n\nhttps://www.postgresql.org/message-id/[email protected]\n\n\nThe comment at the top of pg_regc_locale.c explains some of the above,\nbut not all. I'd like to organize this a bit better:\n\n * a new typedef for a Unicode code point (\"codepoint\"? \"uchar\"?)\n * a no-op conversion routine from pg_wchar to a codepoint that would\nassert that the server encoding is UTF-8 (#ifndef FRONTEND, of course)\n * a no-op conversion routine from pg_wchar to wchar_t that would be a\ngood place for a comment describing that it's a \"best effort\" and may\nnot be correct in all cases\n\nWe could even go so far as to make the pg_wchar type not implicitly-\ncastable, so that callers would be forced to convert it to either a\nwchar_t or a code point.\n\nTom also suggested here:\n\nhttps://www.postgresql.org/message-id/360857.1701302164%40sss.pgh.pa.us\n\nthat we don't necessarily need to use libc at all, and I like that\nidea. Perhaps the suggestions above are a step in that direction, or\nperhaps we can skip ahead?\n\nI intend to submit a patch for the July CF. Thoughts?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 15 Apr 2024 16:40:23 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": true, "msg_subject": "[18] clarify the difference between pg_wchar, wchar_t, and Unicode\n code points" }, { "msg_contents": "On 16.04.24 01:40, Jeff Davis wrote:\n> I'm not sure I understand all of the history behind pg_wchar, but it\n> seems to be some blend of:\n> \n> (a) Postgres's own internal representation of a decoded character\n> (b) libc's wchar_t\n> (c) Unicode code point\n> \n> For example, Postgres has its own encoding/decoding routines, so (a) is\n> the most obvious definition.\n\n(a) is the correct definition, I think. The other ones are just \noccasional conveniences, and occasionally wrong.\n\n> When using ICU, we also pass a pg_wchar directly to ICU routines, which\n> depends on definition (c), and can lead to problems like:\n> \n> https://www.postgresql.org/message-id/[email protected]\n\nThat's just a plain bug, I think. It's missing the encoding check that \nfor example pg_strncoll_icu() does.\n\n> The comment at the top of pg_regc_locale.c explains some of the above,\n> but not all. I'd like to organize this a bit better:\n> \n> * a new typedef for a Unicode code point (\"codepoint\"? 
\"uchar\"?)\n> * a no-op conversion routine from pg_wchar to a codepoint that would\n> assert that the server encoding is UTF-8 (#ifndef FRONTEND, of course)\n> * a no-op conversion routine from pg_wchar to wchar_t that would be a\n> good place for a comment describing that it's a \"best effort\" and may\n> not be correct in all cases\n\nI guess sometimes you really want to just store an array of Unicode code \npoints. But I'm not sure how this would actually address coding \nmistakes like the one above. You still need to check the server \nencoding and do encoding conversion when necessary.\n\n\n\n", "msg_date": "Thu, 18 Apr 2024 21:18:33 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [18] clarify the difference between pg_wchar, wchar_t, and\n Unicode code points" } ]
[ { "msg_contents": "installation.sgml says our minimum meson version is 0.54, but it's\nsilent on what the minimum ninja version is. What RHEL8 supplies\ndoesn't work for me:\n\n$ meson setup build\n...\nFound ninja-1.8.2 at /usr/bin/ninja\nninja: error: build.ninja:7140: multiple outputs aren't (yet?) supported by depslog; bring this up on the mailing list if it affects you\n\nWARNING: Could not create compilation database.\n\nThat's not a huge problem in itself, but I think we should figure\nout what's the minimum version that works, and document that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Apr 2024 20:26:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "What's our minimum ninja version?" }, { "msg_contents": "Hi,\n\nOn 2024-04-15 20:26:49 -0400, Tom Lane wrote:\n> installation.sgml says our minimum meson version is 0.54, but it's\n> silent on what the minimum ninja version is. What RHEL8 supplies\n> doesn't work for me:\n\nYea. There was some thread where we'd noticed that, think you were on that\ntoo.\n\nWe could probably work around the issue, if we needed to, possibly at the\nprice of a bit more awkward notation. But I'm inclined to just document it.\n\n\n> $ meson setup build\n> ...\n> Found ninja-1.8.2 at /usr/bin/ninja\n> ninja: error: build.ninja:7140: multiple outputs aren't (yet?) supported by depslog; bring this up on the mailing list if it affects you\n> \n> WARNING: Could not create compilation database.\n> \n> That's not a huge problem in itself, but I think we should figure\n> out what's the minimum version that works, and document that.\n\nLooks like it's 1.10, released 2020-01-27.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 15 Apr 2024 17:56:07 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's our minimum ninja version?" } ]
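For anyone hitting the same wall: the failing feature is ninja's handling of multiple outputs in the deps log, which the thread above pins at ninja 1.10, so RHEL 8's 1.8.2 falls just short. One way out that does not touch the system packages (paths and version output here are illustrative):

    $ ninja --version
    1.8.2
    $ python3 -m pip install --user ninja     # the PyPI package ships a current ninja binary
    $ export PATH=$HOME/.local/bin:$PATH
    $ ninja --version                         # should now report 1.10 or newer
    $ meson setup build                       # re-run so meson picks up the newer ninja

Anything 1.10 or newer handles the build files meson generates here.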
[ { "msg_contents": "In [1] Andres mentioned that there's no way to determine the memory\ncontext type in pg_backend_memory_contexts. This is a bit annoying as\nI'd like to add a test to exercise BumpStats().\n\nHaving the context type in the test's expected output helps ensure we\nare exercising BumpStats() and any future changes to the choice of\ncontext type in tuplesort.c gets flagged up by the test breaking.\n\nIt's probably too late for PG17, but I'll leave this here for the July CF.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/[email protected]", "msg_date": "Tue, 16 Apr 2024 13:30:01 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Add memory context type to pg_backend_memory_contexts view" }, { "msg_contents": "On Tue, 16 Apr 2024 at 13:30, David Rowley <[email protected]> wrote:\n> In [1] Andres mentioned that there's no way to determine the memory\n> context type in pg_backend_memory_contexts. This is a bit annoying as\n> I'd like to add a test to exercise BumpStats().\n>\n> Having the context type in the test's expected output helps ensure we\n> are exercising BumpStats() and any future changes to the choice of\n> context type in tuplesort.c gets flagged up by the test breaking.\n\nbea97cd02 added a new regression test in sysviews.sql to call\npg_backend_memory_contexts to test the BumpStats() function.\n\nThe attached updates the v1 patch to add the new type column to the\nnew call to pg_backend_memory_contexts() to ensure the type = \"Bump\"\n\nNo other changes.\n\nDavid", "msg_date": "Wed, 24 Apr 2024 17:47:51 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory context type to pg_backend_memory_contexts view" }, { "msg_contents": "Hi David,\n\nGiving this a once-through, this seems straightforward and useful. I\nhave a slight preference for keeping \"name\" the first field in the\nview and moving \"type\" to the second, but that's minor.\n\nJust confirming that the allocator types are not extensible without a\nrecompile, since it's using a specific node tag to switch on, so there\nare no concerns with not properly displaying the output of something\nelse.\n\nThe \"????\" text placeholder might be more appropriate as \"<unknown>\",\nor perhaps stronger, include a WARNING in the logs, since an unknown\ntag at this point would be an indication of some sort of memory\ncorruption.\n\nSince there are only four possible values, I think there would be\nutility in including them in the docs for this field. I also think it\nwould be useful to have some sort of comments at least in mmgr/README\nto indicate that if a new type of allocator is introduced that you\nwill also need to add the node to the function for this type, since\nit's not an automatic conversion. (For that matter, instead of\nswitching on node type and outputting a given string, is there a\ngeneric function that could just give us the string value for node\ntype so we don't need to teach anything else about it anyway?)\n\nThanks,\n\nDavid\n\n\n", "msg_date": "Thu, 30 May 2024 12:21:13 -0700", "msg_from": "David Christensen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory context type to pg_backend_memory_contexts view" }, { "msg_contents": "On Fri, 31 May 2024 at 07:21, David Christensen <[email protected]> wrote:\n> Giving this a once-through, this seems straightforward and useful. 
I\n> have a slight preference for keeping \"name\" the first field in the\n> view and moving \"type\" to the second, but that's minor.\n\nNot having it first make sense, but I don't think putting it between\nname and ident is a good idea. I think name and ident belong next to\neach other. parent likely should come after those since that also\nrelates to the name.\n\nHow about putting it after \"parent\"?\n\n> Just confirming that the allocator types are not extensible without a\n> recompile, since it's using a specific node tag to switch on, so there\n> are no concerns with not properly displaying the output of something\n> else.\n\nThey're not extensible.\n\n> The \"????\" text placeholder might be more appropriate as \"<unknown>\",\n> or perhaps stronger, include a WARNING in the logs, since an unknown\n> tag at this point would be an indication of some sort of memory\n> corruption.\n\nThis follows what we do in other places. If you look at explain.c,\nyou'll see lots of \"???\"s.\n\nI think if you're worried about corrupted memory, then garbled output\nin pg_get_backend_memory_contexts wouldn't be that high on the list of\nconcerns.\n\n> Since there are only four possible values, I think there would be\n> utility in including them in the docs for this field.\n\nI'm not sure about this. We do try not to expose too much internal\ndetail in the docs. I don't know all the reasons for that, but at\nleast one reason is that it's easy for things to get outdated as code\nevolves. I'm also unsure how much value there is in listing 4 possible\nvalues unless we were to document the meaning of each of those values,\nand doing that puts us even further down the path of detailing\nPostgres internals in the documents. I don't think that's a\nmaintenance burden that's often worth the trouble.\n\n> I also think it\n> would be useful to have some sort of comments at least in mmgr/README\n> to indicate that if a new type of allocator is introduced that you\n> will also need to add the node to the function for this type, since\n> it's not an automatic conversion.\n\nI don't think it's sustainable to do this. If we have to maintain\ndocumentation that lists everything you must do in order to add some\nnew node types then I feel it's just going to get outdated as soon as\nsomeone adds something new that needs to be done. I'm only one\ndeveloper, but for me, I'd not even bother looking there if I was\nplanning to add a new memory context type.\n\nWhat I would be doing is searching the entire code base for where\nspecial handling is done for the existing types and ensuring I\nconsider if I need to include a case for the new node type. In this\ncase, I'd probably choose to search for \"T_AllocSetContext\", and I'd\nquickly land on PutMemoryContextsStatsTupleStore() and update it. This\nmethod doesn't get outdated, provided you do \"git pull\" occasionally.\n\n> (For that matter, instead of\n> switching on node type and outputting a given string, is there a\n> generic function that could just give us the string value for node\n> type so we don't need to teach anything else about it anyway?)\n\nThere isn't. nodeToString() does take some node types as inputs and\nserialise those to something JSON-like, but that includes serialising\neach field of the node type too. The memory context types are not\nhandled by those functions. I think it's fine to copy what's done in\nexplain.c. 
\"git grep \\\"???\\\" -- *.c | wc -l\" gives me 31 occurrences,\nso I'm not doing anything new.\n\nI've attached an updated patch which changes the position of the new\ncolumn in the view.\n\nThank you for the review.\n\nDavid", "msg_date": "Fri, 31 May 2024 12:35:58 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory context type to pg_backend_memory_contexts view" }, { "msg_contents": "\n> On May 30, 2024, at 5:36 PM, David Rowley <[email protected]> wrote:\n> \n> On Fri, 31 May 2024 at 07:21, David Christensen <[email protected]> wrote:\n>> Giving this a once-through, this seems straightforward and useful. I\n>> have a slight preference for keeping \"name\" the first field in the\n>> view and moving \"type\" to the second, but that's minor.\n> \n> Not having it first make sense, but I don't think putting it between\n> name and ident is a good idea. I think name and ident belong next to\n> each other. parent likely should come after those since that also\n> relates to the name.\n> \n> How about putting it after \"parent\"?\n\nThat works for me. I skimmed the new patch and it seems fine but on my phone so didn’t do any build tests. \n\n>> Just confirming that the allocator types are not extensible without a\n>> recompile, since it's using a specific node tag to switch on, so there\n>> are no concerns with not properly displaying the output of something\n>> else.\n> \n> They're not extensible.\n\nGood to confirm. \n\n>> The \"????\" text placeholder might be more appropriate as \"<unknown>\",\n>> or perhaps stronger, include a WARNING in the logs, since an unknown\n>> tag at this point would be an indication of some sort of memory\n>> corruption.\n> \n> This follows what we do in other places. If you look at explain.c,\n> you'll see lots of \"???\"s.\n> \n> I think if you're worried about corrupted memory, then garbled output\n> in pg_get_backend_memory_contexts wouldn't be that high on the list of\n> concerns.\n\nHeh, indeed. +1 for precedent. \n\n>> Since there are only four possible values, I think there would be\n>> utility in including them in the docs for this field.\n> \n> I'm not sure about this. We do try not to expose too much internal\n> detail in the docs. I don't know all the reasons for that, but at\n> least one reason is that it's easy for things to get outdated as code\n> evolves. I'm also unsure how much value there is in listing 4 possible\n> values unless we were to document the meaning of each of those values,\n> and doing that puts us even further down the path of detailing\n> Postgres internals in the documents. I don't think that's a\n> maintenance burden that's often worth the trouble.\n\nI can see that and it’s consistent with what we do, just was thinking as a user that that may be useful, but if you’re using this view you likely already know what it means.\n\n>> I also think it\n>> would be useful to have some sort of comments at least in mmgr/README\n>> to indicate that if a new type of allocator is introduced that you\n>> will also need to add the node to the function for this type, since\n>> it's not an automatic conversion.\n> \n> I don't think it's sustainable to do this. If we have to maintain\n> documentation that lists everything you must do in order to add some\n> new node types then I feel it's just going to get outdated as soon as\n> someone adds something new that needs to be done. 
I'm only one\n> developer, but for me, I'd not even bother looking there if I was\n> planning to add a new memory context type.\n> \n> What I would be doing is searching the entire code base for where\n> special handling is done for the existing types and ensuring I\n> consider if I need to include a case for the new node type. In this\n> case, I'd probably choose to search for \"T_AllocSetContext\", and I'd\n> quickly land on PutMemoryContextsStatsTupleStore() and update it. This\n> method doesn't get outdated, provided you do \"git pull\" occasionally.\n\nFair. \n\n>> (For that matter, instead of\n>> switching on node type and outputting a given string, is there a\n>> generic function that could just give us the string value for node\n>> type so we don't need to teach anything else about it anyway?)\n> \n> There isn't. nodeToString() does take some node types as inputs and\n> serialise those to something JSON-like, but that includes serialising\n> each field of the node type too. The memory context types are not\n> handled by those functions. I think it's fine to copy what's done in\n> explain.c. \"git grep \\\"???\\\" -- *.c | wc -l\" gives me 31 occurrences,\n> so I'm not doing anything new.\n> \n> I've attached an updated patch which changes the position of the new\n> column in the view.\n\n+1\n\n\n\n", "msg_date": "Fri, 31 May 2024 07:12:56 -0700", "msg_from": "David Christensen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory context type to pg_backend_memory_contexts view" }, { "msg_contents": "On Fri, May 31, 2024 at 12:35:58PM +1200, David Rowley wrote:\n> This follows what we do in other places. If you look at explain.c,\n> you'll see lots of \"???\"s.\n> \n> I think if you're worried about corrupted memory, then garbled output\n> in pg_get_backend_memory_contexts wouldn't be that high on the list of\n> concerns.\n\n+\tconst char *type;\n[...]\n+\tswitch (context->type)\n+\t{\n+\t\tcase T_AllocSetContext:\n+\t\t\ttype = \"AllocSet\";\n+\t\t\tbreak;\n+\t\tcase T_GenerationContext:\n+\t\t\ttype = \"Generation\";\n+\t\t\tbreak;\n+\t\tcase T_SlabContext:\n+\t\t\ttype = \"Slab\";\n+\t\t\tbreak;\n+\t\tcase T_BumpContext:\n+\t\t\ttype = \"Bump\";\n+\t\t\tbreak;\n+\t\tdefault:\n+\t\t\ttype = \"???\";\n+\t\t\tbreak;\n+\t}\n\nYeah, it's a common practice to use that as fallback. What you are\ndoing is OK, and it is not possible to remove the default case as\nthese are nodetags to generate warnings if a new value needs to be\nadded.\n\nThis patch looks like a good idea, so +1 from here. (PS: catversion\nbump). \n--\nMichael", "msg_date": "Fri, 31 May 2024 17:55:25 -0700", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Add memory context type to pg_backend_memory_contexts view" }, { "msg_contents": "On Sat, 1 Jun 2024 at 12:55, Michael Paquier <[email protected]> wrote:\n> This patch looks like a good idea, so +1 from here.\n\nThank you to both of you for reviewing this. I've now pushed the patch.\n\nDavid\n\n\n", "msg_date": "Mon, 1 Jul 2024 21:20:18 +1200", "msg_from": "David Rowley <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Add memory context type to pg_backend_memory_contexts view" } ]
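With the new column in place, the allocator behind each context is visible straight from SQL; for example, a query along these lines (output varies per backend) shows which contexts use which allocator, and makes it easy to confirm that a tuplesort's batch context really does show up as a Bump context:

    SELECT name, type, total_bytes
    FROM pg_backend_memory_contexts
    ORDER BY total_bytes DESC
    LIMIT 10;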
[ { "msg_contents": "\nHi,\n\nUpon reviewing the login event trigger, I noticed a potential typo about\nthe SetDatabaseHasLoginEventTriggers function name.\n\ndiff --git a/src/backend/commands/event_trigger.c b/src/backend/commands/event_trigger.c\nindex 0d3214df9c..7a5ed6b985 100644\n--- a/src/backend/commands/event_trigger.c\n+++ b/src/backend/commands/event_trigger.c\n@@ -111,7 +111,7 @@ static void validate_table_rewrite_tags(const char *filtervar, List *taglist);\n static void EventTriggerInvoke(List *fn_oid_list, EventTriggerData *trigdata);\n static const char *stringify_grant_objtype(ObjectType objtype);\n static const char *stringify_adefprivs_objtype(ObjectType objtype);\n-static void SetDatatabaseHasLoginEventTriggers(void);\n+static void SetDatabaseHasLoginEventTriggers(void);\n\n /*\n * Create an event trigger.\n@@ -315,7 +315,7 @@ insert_event_trigger_tuple(const char *trigname, const char *eventname, Oid evtO\n \t * faster lookups in hot codepaths. Set the flag unless already True.\n \t */\n \tif (strcmp(eventname, \"login\") == 0)\n-\t\tSetDatatabaseHasLoginEventTriggers();\n+\t\tSetDatabaseHasLoginEventTriggers();\n\n \t/* Depend on owner. */\n \trecordDependencyOnOwner(EventTriggerRelationId, trigoid, evtOwner);\n@@ -383,7 +383,7 @@ filter_list_to_array(List *filterlist)\n * current database has on login event triggers.\n */\n void\n-SetDatatabaseHasLoginEventTriggers(void)\n+SetDatabaseHasLoginEventTriggers(void)\n {\n \t/* Set dathasloginevt flag in pg_database */\n \tForm_pg_database db;\n@@ -453,7 +453,7 @@ AlterEventTrigger(AlterEventTrigStmt *stmt)\n \t */\n \tif (namestrcmp(&evtForm->evtevent, \"login\") == 0 &&\n \t\ttgenabled != TRIGGER_DISABLED)\n-\t\tSetDatatabaseHasLoginEventTriggers();\n+\t\tSetDatabaseHasLoginEventTriggers();\n\n \tInvokeObjectPostAlterHook(EventTriggerRelationId,\n \t\t\t\t\t\t\t trigoid, 0);\n@@ -925,7 +925,7 @@ EventTriggerOnLogin(void)\n \t/*\n \t * There is no active login event trigger, but our\n \t * pg_database.dathasloginevt is set. Try to unset this flag. We use the\n-\t * lock to prevent concurrent SetDatatabaseHasLoginEventTriggers(), but we\n+\t * lock to prevent concurrent SetDatabaseHasLoginEventTriggers(), but we\n \t * don't want to hang the connection waiting on the lock. Thus, we are\n \t * just trying to acquire the lock conditionally.\n \t */\n--\nRegards,\nJapin Li\n\n\n", "msg_date": "Tue, 16 Apr 2024 14:05:46 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Typo about the SetDatatabaseHasLoginEventTriggers?" }, { "msg_contents": "On Tue, Apr 16, 2024 at 02:05:46PM +0800, Japin Li wrote:\n> Upon reviewing the login event trigger, I noticed a potential typo about\n> the SetDatabaseHasLoginEventTriggers function name.\n\nIndeed, thanks! Will fix and double-check the surroundings.\n--\nMichael", "msg_date": "Tue, 16 Apr 2024 15:31:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typo about the SetDatatabaseHasLoginEventTriggers?" }, { "msg_contents": "On Tue, Apr 16, 2024 at 03:31:49PM +0900, Michael Paquier wrote:\n> Indeed, thanks! Will fix and double-check the surroundings.\n\nAnd fixed this one.\n--\nMichael", "msg_date": "Wed, 17 Apr 2024 15:10:04 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Typo about the SetDatatabaseHasLoginEventTriggers?" 
}, { "msg_contents": "\nOn Wed, 17 Apr 2024 at 14:10, Michael Paquier <[email protected]> wrote:\n> On Tue, Apr 16, 2024 at 03:31:49PM +0900, Michael Paquier wrote:\n>> Indeed, thanks! Will fix and double-check the surroundings.\n>\n> And fixed this one.\n\nThanks for the pushing!\n\n--\nRegards,\nJapin Li\n\n\n", "msg_date": "Wed, 17 Apr 2024 16:30:30 +0800", "msg_from": "Japin Li <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Typo about the SetDatatabaseHasLoginEventTriggers?" } ]
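For reference, the feature behind the flag being maintained here is the login event trigger added in PostgreSQL 17; a minimal sketch (names are arbitrary) of a trigger whose creation goes through the renamed SetDatabaseHasLoginEventTriggers() path and sets pg_database.dathasloginevt for the current database:

    CREATE FUNCTION notify_login() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    BEGIN
        RAISE NOTICE 'login event trigger fired for %', session_user;
    END;
    $$;

    CREATE EVENT TRIGGER on_login ON login
        EXECUTE FUNCTION notify_login();

    -- keep it firing regardless of session_replication_role
    ALTER EVENT TRIGGER on_login ENABLE ALWAYS;

Re-enabling a disabled login trigger takes the same path, as the diff quoted above shows for AlterEventTrigger().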
[ { "msg_contents": "Hi,\n\nKuroda-San reported an issue off-list that:\n\nIf user execute ALTER SUBSCRIPTION SET (failover) command inside a txn block\nand rollback, only the subscription option change can be rolled back, while the\nreplication slot's failover change is preserved.\n\nThis is because ALTER SUBSCRIPTION SET (failover) command internally executes\nthe replication command ALTER_REPLICATION_SLOT to change the replication slot's\nfailover property, but this replication command execution cannot be\nrollback.\n\nTo fix it, I think we can prevent user from executing ALTER SUBSCRIPTION set\n(failover) inside a txn block, which is also consistent to the ALTER\nSUBSCRIPTION REFRESH/DROP SUBSCRIPTION command. Attach a small\npatch to address this.\n\nBest Regards,\nHou Zhijie", "msg_date": "Tue, 16 Apr 2024 06:32:11 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Disallow changing slot's failover option in transaction block" }, { "msg_contents": "Hi,\n\nOn Tue, Apr 16, 2024 at 06:32:11AM +0000, Zhijie Hou (Fujitsu) wrote:\n> Hi,\n> \n> Kuroda-San reported an issue off-list that:\n> \n> If user execute ALTER SUBSCRIPTION SET (failover) command inside a txn block\n> and rollback, only the subscription option change can be rolled back, while the\n> replication slot's failover change is preserved.\n\nNice catch, thanks!\n\n> To fix it, I think we can prevent user from executing ALTER SUBSCRIPTION set\n> (failover) inside a txn block, which is also consistent to the ALTER\n> SUBSCRIPTION REFRESH/DROP SUBSCRIPTION command. Attach a small\n> patch to address this.\n\nAgree. The patch looks pretty straightforward to me. Worth to add this\ncase in the doc? (where we already mention that \"Commands ALTER SUBSCRIPTION ...\nREFRESH PUBLICATION and ALTER SUBSCRIPTION ... {SET|ADD|DROP} PUBLICATION ...\nwith refresh option as true cannot be executed inside a transaction block\"\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 16 Apr 2024 07:29:32 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "Dear Hou,\n\n> Kuroda-San reported an issue off-list that:\n> \n> If user execute ALTER SUBSCRIPTION SET (failover) command inside a txn block\n> and rollback, only the subscription option change can be rolled back, while the\n> replication slot's failover change is preserved.\n> \n> This is because ALTER SUBSCRIPTION SET (failover) command internally\n> executes\n> the replication command ALTER_REPLICATION_SLOT to change the replication\n> slot's\n> failover property, but this replication command execution cannot be\n> rollback.\n> \n> To fix it, I think we can prevent user from executing ALTER SUBSCRIPTION set\n> (failover) inside a txn block, which is also consistent to the ALTER\n> SUBSCRIPTION REFRESH/DROP SUBSCRIPTION command. Attach a small\n> patch to address this.\n\nThanks for posting the patch, the fix is same as my expectation.\nAlso, should we add the restriction to the doc? 
I feel [1] can be updated.\n\n\n[1]:https://www.postgresql.org/docs/devel/sql-altersubscription.html#SQL-ALTERSUBSCRIPTION-PARAMS-SET\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\nhttps://www.fujitsu.com/ \n\n\n\n", "msg_date": "Tue, 16 Apr 2024 08:15:33 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "On Tue, Apr 16, 2024 at 1:45 PM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Hou,\n>\n> > Kuroda-San reported an issue off-list that:\n> >\n> > If user execute ALTER SUBSCRIPTION SET (failover) command inside a txn block\n> > and rollback, only the subscription option change can be rolled back, while the\n> > replication slot's failover change is preserved.\n> >\n> > This is because ALTER SUBSCRIPTION SET (failover) command internally\n> > executes\n> > the replication command ALTER_REPLICATION_SLOT to change the replication\n> > slot's\n> > failover property, but this replication command execution cannot be\n> > rollback.\n> >\n> > To fix it, I think we can prevent user from executing ALTER SUBSCRIPTION set\n> > (failover) inside a txn block, which is also consistent to the ALTER\n> > SUBSCRIPTION REFRESH/DROP SUBSCRIPTION command. Attach a small\n> > patch to address this.\n>\n> Thanks for posting the patch, the fix is same as my expectation.\n> Also, should we add the restriction to the doc? I feel [1] can be updated.\n\n+1.\n\nSimilar to ALTER SUB, CREATE SUB also needs to be fixed. This is\nbecause we call alter_replication_slot in CREATE SUB as well, for the\ncase when slot_name is provided and create_slot=false. But the tricky\npart is we call alter_replication_slot() when creating subscription\nfor both failover=true and false. That means if we want to fix it on\nthe similar line of ALTER SUB, we have to disallow user from executing\nthe CREATE SUBSCRIPTION (slot_name = xx) in a txn block, which seems\nto break some existing use cases. (previously, user can execute such a\ncommand inside a txn block).\n\nSo, we need to think if there are better ways to fix it. After\ndiscussion with Hou-San offlist, here are some ideas:\n\n1. do not alter replication slot's failover option when CREATE\nSUBSCRIPTION WITH failover=false. This means we alter replication\nslot only when failover is set to true. And thus we can disallow\nCREATE SUB WITH (slot_name =xx, failover=true, create_slot=false)\ninside a txn block.\n\nThis option allows user to run CREATE-SUB(create_slot=false) with\nfailover=false in txn block like earlier. But on the downside, it\nmakes the behavior inconsistent for otherwise simpler option like\nfailover, i.e. with failover=true, CREATE SUB is not allowed in txn\nblock while with failover=false, it is allowed. It makes it slightly\ncomplex to be understood by user.\n\n2. let's not disallow CREATE SUB in txn block as earlier, just don't\nperform internal alter-failover during CREATE SUB for existing\nslots(create_slot=false, slot_name=xx) i.e. when create_slot is\nfalse, we will ignore failover parameter of CREATE SUB and it is\nuser's responsibility to set it appropriately using ALTER SUB cmd. For\ncreate_slot=true, the behaviour of CREATE-SUB remains same as earlier.\n\nThis option does not add new restriction for CREATE SUB wrt txn block.\nIn context of failover with create_slot=false, we already have a\nsimilar restriction (documented one) for ALTER SUB, i.e. 
with 'ALTER\nSUBSCRIPTION SET(slot_name = new)', user needs to alter the new slot's\nfailover by himself. CREAT SUB can also be documented in similar way.\nThis seems simpler to be understood considering existing ALTER SUB's\nbehavior as well. Plus, this will make CREATE-SUB code slightly\nsimpler and thus easily manageable.\n\n3. add a alter_slot option for CREATE SUBSCRIPTION, we can only alter\nthe slot's failover if alter_slot=true. And so we can disallow CREATE\nSUB WITH (slot_name =xx, alter_slot=true) inside a txn block.\n\nThis seems a clean way, as everything will be as per user's consent\nbased on alter_slot parameter. But on the downside, this will need\nintroducing additional parameter and also adding new restriction of\nrunning CREATE-sub in txn block for a specific case.\n\n4. Don't alter replication in subscription DDLs. Instead, try to alter\nreplication slot's failover in the apply worker. This means we need to\nexecute alter_replication_slot each time before starting streaming in\nthe apply worker.\n\nThis does not seem appealing to execute alter_replication_slot\neverytime the apply worker starts. But if others think it as a better\noption, it can be further analyzed.\n\n\nCurrently, our preference is option 2, as that looks a clean solution\nand also aligns with ALTER-SUB behavior which is already documented.\nThoughts?\n\n--------------------------------\nNote that we could not refer to the design of two_phase here, because\ntwo_phase can be considered as a streaming option, so it's fine to\nchange the two_phase along with START_REPLICATION command. (the\ntwo_phase is not changed in subscription DDLs, but get changed in\nSTART_REPLICATION command). But the failover is closely related to a\nreplication slot itself.\n--------------------------------\n\n\nThanks\nShveta\n\n\n", "msg_date": "Tue, 16 Apr 2024 17:06:28 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "On Tue, Apr 16, 2024 at 5:06 PM shveta malik <[email protected]> wrote:\n>\n> On Tue, Apr 16, 2024 at 1:45 PM Hayato Kuroda (Fujitsu)\n> <[email protected]> wrote:\n> >\n> > Dear Hou,\n> >\n> > > Kuroda-San reported an issue off-list that:\n> > >\n> > > If user execute ALTER SUBSCRIPTION SET (failover) command inside a txn block\n> > > and rollback, only the subscription option change can be rolled back, while the\n> > > replication slot's failover change is preserved.\n> > >\n> > > This is because ALTER SUBSCRIPTION SET (failover) command internally\n> > > executes\n> > > the replication command ALTER_REPLICATION_SLOT to change the replication\n> > > slot's\n> > > failover property, but this replication command execution cannot be\n> > > rollback.\n> > >\n> > > To fix it, I think we can prevent user from executing ALTER SUBSCRIPTION set\n> > > (failover) inside a txn block, which is also consistent to the ALTER\n> > > SUBSCRIPTION REFRESH/DROP SUBSCRIPTION command. Attach a small\n> > > patch to address this.\n> >\n> > Thanks for posting the patch, the fix is same as my expectation.\n> > Also, should we add the restriction to the doc? I feel [1] can be updated.\n>\n> +1.\n>\n> Similar to ALTER SUB, CREATE SUB also needs to be fixed. This is\n> because we call alter_replication_slot in CREATE SUB as well, for the\n> case when slot_name is provided and create_slot=false. 
But the tricky\n> part is we call alter_replication_slot() when creating subscription\n> for both failover=true and false. That means if we want to fix it on\n> the similar line of ALTER SUB, we have to disallow user from executing\n> the CREATE SUBSCRIPTION (slot_name = xx) in a txn block, which seems\n> to break some existing use cases. (previously, user can execute such a\n> command inside a txn block).\n>\n> So, we need to think if there are better ways to fix it. After\n> discussion with Hou-San offlist, here are some ideas:\n>\n> 1. do not alter replication slot's failover option when CREATE\n> SUBSCRIPTION WITH failover=false. This means we alter replication\n> slot only when failover is set to true. And thus we can disallow\n> CREATE SUB WITH (slot_name =xx, failover=true, create_slot=false)\n> inside a txn block.\n>\n> This option allows user to run CREATE-SUB(create_slot=false) with\n> failover=false in txn block like earlier. But on the downside, it\n> makes the behavior inconsistent for otherwise simpler option like\n> failover, i.e. with failover=true, CREATE SUB is not allowed in txn\n> block while with failover=false, it is allowed. It makes it slightly\n> complex to be understood by user.\n>\n> 2. let's not disallow CREATE SUB in txn block as earlier, just don't\n> perform internal alter-failover during CREATE SUB for existing\n> slots(create_slot=false, slot_name=xx) i.e. when create_slot is\n> false, we will ignore failover parameter of CREATE SUB and it is\n> user's responsibility to set it appropriately using ALTER SUB cmd. For\n> create_slot=true, the behaviour of CREATE-SUB remains same as earlier.\n>\n> This option does not add new restriction for CREATE SUB wrt txn block.\n> In context of failover with create_slot=false, we already have a\n> similar restriction (documented one) for ALTER SUB, i.e. with 'ALTER\n> SUBSCRIPTION SET(slot_name = new)', user needs to alter the new slot's\n> failover by himself. CREAT SUB can also be documented in similar way.\n> This seems simpler to be understood considering existing ALTER SUB's\n> behavior as well. Plus, this will make CREATE-SUB code slightly\n> simpler and thus easily manageable.\n>\n\n+1 for option 2 as it sounds logical to me and consistent with ALTER\nSUBSCRIPTION. BTW, IIUC, you are referring to: \"When altering the\nslot_name, the failover and two_phase property values of the named\nslot may differ from the counterpart failover and two_phase parameters\nspecified in the subscription. 
When creating the slot, ensure the slot\nproperties failover and two_phase match their counterpart parameters\nof the subscription.\" in docs [1], right?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 18 Apr 2024 11:22:24 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "On Thu, Apr 18, 2024 at 11:22 AM Amit Kapila <[email protected]> wrote:\n>\n> On Tue, Apr 16, 2024 at 5:06 PM shveta malik <[email protected]> wrote:\n> >\n> > On Tue, Apr 16, 2024 at 1:45 PM Hayato Kuroda (Fujitsu)\n> > <[email protected]> wrote:\n> > >\n> > > Dear Hou,\n> > >\n> > > > Kuroda-San reported an issue off-list that:\n> > > >\n> > > > If user execute ALTER SUBSCRIPTION SET (failover) command inside a txn block\n> > > > and rollback, only the subscription option change can be rolled back, while the\n> > > > replication slot's failover change is preserved.\n> > > >\n> > > > This is because ALTER SUBSCRIPTION SET (failover) command internally\n> > > > executes\n> > > > the replication command ALTER_REPLICATION_SLOT to change the replication\n> > > > slot's\n> > > > failover property, but this replication command execution cannot be\n> > > > rollback.\n> > > >\n> > > > To fix it, I think we can prevent user from executing ALTER SUBSCRIPTION set\n> > > > (failover) inside a txn block, which is also consistent to the ALTER\n> > > > SUBSCRIPTION REFRESH/DROP SUBSCRIPTION command. Attach a small\n> > > > patch to address this.\n> > >\n> > > Thanks for posting the patch, the fix is same as my expectation.\n> > > Also, should we add the restriction to the doc? I feel [1] can be updated.\n> >\n> > +1.\n> >\n> > Similar to ALTER SUB, CREATE SUB also needs to be fixed. This is\n> > because we call alter_replication_slot in CREATE SUB as well, for the\n> > case when slot_name is provided and create_slot=false. But the tricky\n> > part is we call alter_replication_slot() when creating subscription\n> > for both failover=true and false. That means if we want to fix it on\n> > the similar line of ALTER SUB, we have to disallow user from executing\n> > the CREATE SUBSCRIPTION (slot_name = xx) in a txn block, which seems\n> > to break some existing use cases. (previously, user can execute such a\n> > command inside a txn block).\n> >\n> > So, we need to think if there are better ways to fix it. After\n> > discussion with Hou-San offlist, here are some ideas:\n> >\n> > 1. do not alter replication slot's failover option when CREATE\n> > SUBSCRIPTION WITH failover=false. This means we alter replication\n> > slot only when failover is set to true. And thus we can disallow\n> > CREATE SUB WITH (slot_name =xx, failover=true, create_slot=false)\n> > inside a txn block.\n> >\n> > This option allows user to run CREATE-SUB(create_slot=false) with\n> > failover=false in txn block like earlier. But on the downside, it\n> > makes the behavior inconsistent for otherwise simpler option like\n> > failover, i.e. with failover=true, CREATE SUB is not allowed in txn\n> > block while with failover=false, it is allowed. It makes it slightly\n> > complex to be understood by user.\n> >\n> > 2. let's not disallow CREATE SUB in txn block as earlier, just don't\n> > perform internal alter-failover during CREATE SUB for existing\n> > slots(create_slot=false, slot_name=xx) i.e. 
when create_slot is\n> > false, we will ignore failover parameter of CREATE SUB and it is\n> > user's responsibility to set it appropriately using ALTER SUB cmd. For\n> > create_slot=true, the behaviour of CREATE-SUB remains same as earlier.\n> >\n> > This option does not add new restriction for CREATE SUB wrt txn block.\n> > In context of failover with create_slot=false, we already have a\n> > similar restriction (documented one) for ALTER SUB, i.e. with 'ALTER\n> > SUBSCRIPTION SET(slot_name = new)', user needs to alter the new slot's\n> > failover by himself. CREAT SUB can also be documented in similar way.\n> > This seems simpler to be understood considering existing ALTER SUB's\n> > behavior as well. Plus, this will make CREATE-SUB code slightly\n> > simpler and thus easily manageable.\n> >\n>\n> +1 for option 2 as it sounds logical to me and consistent with ALTER\n> SUBSCRIPTION. BTW, IIUC, you are referring to: \"When altering the\n> slot_name, the failover and two_phase property values of the named\n> slot may differ from the counterpart failover and two_phase parameters\n> specified in the subscription. When creating the slot, ensure the slot\n> properties failover and two_phase match their counterpart parameters\n> of the subscription.\" in docs [1], right?\n\nYes. Here:\nhttps://www.postgresql.org/docs/devel/sql-altersubscription.html#SQL-ALTERSUBSCRIPTION-PARAMS-SET\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 18 Apr 2024 11:39:41 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "Dear Shveta,\r\n\r\nSorry for delay response. I missed your post.\r\n\r\n> +1.\r\n> \r\n> Similar to ALTER SUB, CREATE SUB also needs to be fixed. This is\r\n> because we call alter_replication_slot in CREATE SUB as well, for the\r\n> case when slot_name is provided and create_slot=false. But the tricky\r\n> part is we call alter_replication_slot() when creating subscription\r\n> for both failover=true and false. That means if we want to fix it on\r\n> the similar line of ALTER SUB, we have to disallow user from executing\r\n> the CREATE SUBSCRIPTION (slot_name = xx) in a txn block, which seems\r\n> to break some existing use cases. (previously, user can execute such a\r\n> command inside a txn block).\r\n> \r\n> So, we need to think if there are better ways to fix it. After\r\n> discussion with Hou-San offlist, here are some ideas:\r\n> 1. do not alter replication slot's failover option when CREATE\r\n> SUBSCRIPTION WITH failover=false. This means we alter replication\r\n> slot only when failover is set to true. And thus we can disallow\r\n> CREATE SUB WITH (slot_name =xx, failover=true, create_slot=false)\r\n> inside a txn block.\r\n> \r\n> This option allows user to run CREATE-SUB(create_slot=false) with\r\n> failover=false in txn block like earlier. But on the downside, it\r\n> makes the behavior inconsistent for otherwise simpler option like\r\n> failover, i.e. with failover=true, CREATE SUB is not allowed in txn\r\n> block while with failover=false, it is allowed. It makes it slightly\r\n> complex to be understood by user.\r\n> \r\n> 2. let's not disallow CREATE SUB in txn block as earlier, just don't\r\n> perform internal alter-failover during CREATE SUB for existing\r\n> slots(create_slot=false, slot_name=xx) i.e. 
when create_slot is\r\n> false, we will ignore failover parameter of CREATE SUB and it is\r\n> user's responsibility to set it appropriately using ALTER SUB cmd. For\r\n> create_slot=true, the behaviour of CREATE-SUB remains same as earlier.\r\n> \r\n> This option does not add new restriction for CREATE SUB wrt txn block.\r\n> In context of failover with create_slot=false, we already have a\r\n> similar restriction (documented one) for ALTER SUB, i.e. with 'ALTER\r\n> SUBSCRIPTION SET(slot_name = new)', user needs to alter the new slot's\r\n> failover by himself. CREAT SUB can also be documented in similar way.\r\n> This seems simpler to be understood considering existing ALTER SUB's\r\n> behavior as well. Plus, this will make CREATE-SUB code slightly\r\n> simpler and thus easily manageable.\r\n> \r\n> 3. add a alter_slot option for CREATE SUBSCRIPTION, we can only alter\r\n> the slot's failover if alter_slot=true. And so we can disallow CREATE\r\n> SUB WITH (slot_name =xx, alter_slot=true) inside a txn block.\r\n> \r\n> This seems a clean way, as everything will be as per user's consent\r\n> based on alter_slot parameter. But on the downside, this will need\r\n> introducing additional parameter and also adding new restriction of\r\n> running CREATE-sub in txn block for a specific case.\r\n> \r\n> 4. Don't alter replication in subscription DDLs. Instead, try to alter\r\n> replication slot's failover in the apply worker. This means we need to\r\n> execute alter_replication_slot each time before starting streaming in\r\n> the apply worker.\r\n> \r\n> This does not seem appealing to execute alter_replication_slot\r\n> everytime the apply worker starts. But if others think it as a better\r\n> option, it can be further analyzed.\r\n\r\nThanks for describing, I also prefer 2, because it seems bit strange that\r\nCREATE statement leads ALTER.\r\n\r\n> Currently, our preference is option 2, as that looks a clean solution\r\n> and also aligns with ALTER-SUB behavior which is already documented.\r\n> Thoughts?\r\n> \r\n> --------------------------------\r\n> Note that we could not refer to the design of two_phase here, because\r\n> two_phase can be considered as a streaming option, so it's fine to\r\n> change the two_phase along with START_REPLICATION command. (the\r\n> two_phase is not changed in subscription DDLs, but get changed in\r\n> START_REPLICATION command). But the failover is closely related to a\r\n> replication slot itself.\r\n> --------------------------------\r\n\r\nSorry, I cannot find statements. Where did you refer?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n", "msg_date": "Thu, 18 Apr 2024 06:09:57 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "On Thu, Apr 18, 2024 at 11:40 AM Hayato Kuroda (Fujitsu)\n<[email protected]> wrote:\n>\n> Dear Shveta,\n>\n> Sorry for delay response. I missed your post.\n>\n> > +1.\n> >\n> > Similar to ALTER SUB, CREATE SUB also needs to be fixed. This is\n> > because we call alter_replication_slot in CREATE SUB as well, for the\n> > case when slot_name is provided and create_slot=false. But the tricky\n> > part is we call alter_replication_slot() when creating subscription\n> > for both failover=true and false. 
That means if we want to fix it on\n> > the similar line of ALTER SUB, we have to disallow user from executing\n> > the CREATE SUBSCRIPTION (slot_name = xx) in a txn block, which seems\n> > to break some existing use cases. (previously, user can execute such a\n> > command inside a txn block).\n> >\n> > So, we need to think if there are better ways to fix it. After\n> > discussion with Hou-San offlist, here are some ideas:\n> > 1. do not alter replication slot's failover option when CREATE\n> > SUBSCRIPTION WITH failover=false. This means we alter replication\n> > slot only when failover is set to true. And thus we can disallow\n> > CREATE SUB WITH (slot_name =xx, failover=true, create_slot=false)\n> > inside a txn block.\n> >\n> > This option allows user to run CREATE-SUB(create_slot=false) with\n> > failover=false in txn block like earlier. But on the downside, it\n> > makes the behavior inconsistent for otherwise simpler option like\n> > failover, i.e. with failover=true, CREATE SUB is not allowed in txn\n> > block while with failover=false, it is allowed. It makes it slightly\n> > complex to be understood by user.\n> >\n> > 2. let's not disallow CREATE SUB in txn block as earlier, just don't\n> > perform internal alter-failover during CREATE SUB for existing\n> > slots(create_slot=false, slot_name=xx) i.e. when create_slot is\n> > false, we will ignore failover parameter of CREATE SUB and it is\n> > user's responsibility to set it appropriately using ALTER SUB cmd. For\n> > create_slot=true, the behaviour of CREATE-SUB remains same as earlier.\n> >\n> > This option does not add new restriction for CREATE SUB wrt txn block.\n> > In context of failover with create_slot=false, we already have a\n> > similar restriction (documented one) for ALTER SUB, i.e. with 'ALTER\n> > SUBSCRIPTION SET(slot_name = new)', user needs to alter the new slot's\n> > failover by himself. CREAT SUB can also be documented in similar way.\n> > This seems simpler to be understood considering existing ALTER SUB's\n> > behavior as well. Plus, this will make CREATE-SUB code slightly\n> > simpler and thus easily manageable.\n> >\n> > 3. add a alter_slot option for CREATE SUBSCRIPTION, we can only alter\n> > the slot's failover if alter_slot=true. And so we can disallow CREATE\n> > SUB WITH (slot_name =xx, alter_slot=true) inside a txn block.\n> >\n> > This seems a clean way, as everything will be as per user's consent\n> > based on alter_slot parameter. But on the downside, this will need\n> > introducing additional parameter and also adding new restriction of\n> > running CREATE-sub in txn block for a specific case.\n> >\n> > 4. Don't alter replication in subscription DDLs. Instead, try to alter\n> > replication slot's failover in the apply worker. This means we need to\n> > execute alter_replication_slot each time before starting streaming in\n> > the apply worker.\n> >\n> > This does not seem appealing to execute alter_replication_slot\n> > everytime the apply worker starts. 
But if others think it as a better\n> > option, it can be further analyzed.\n>\n> Thanks for describing, I also prefer 2, because it seems bit strange that\n> CREATE statement leads ALTER.\n\nThanks for feedback.\n\n> > Currently, our preference is option 2, as that looks a clean solution\n> > and also aligns with ALTER-SUB behavior which is already documented.\n> > Thoughts?\n> >\n> > --------------------------------\n> > Note that we could not refer to the design of two_phase here, because\n> > two_phase can be considered as a streaming option, so it's fine to\n> > change the two_phase along with START_REPLICATION command. (the\n> > two_phase is not changed in subscription DDLs, but get changed in\n> > START_REPLICATION command). But the failover is closely related to a\n> > replication slot itself.\n> > --------------------------------\n\nSorry for causing confusion. This is not the statement which is\ndocumented one, this was an additional note to support our analysis.\n\n> Sorry, I cannot find statements. Where did you refer?\n\nWhen I said that option 2 aligns with ALTER-SUB documented behaviour,\nI meant the doc described in [1] stating \"When altering the slot_name,\nthe failover and two_phase property values of the named slot may\ndiffer from the counterpart failover and two_phase parameters\nspecified in the subscription....\"\n\n[1]: https://www.postgresql.org/docs/devel/sql-altersubscription.html#SQL-ALTERSUBSCRIPTION-PARAMS-SET\n\nthanks\nShveta\n\n\n", "msg_date": "Thu, 18 Apr 2024 15:06:18 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "On Thu, Apr 18, 2024 at 11:22:24AM +0530, Amit Kapila wrote:\n> +1 for option 2 as it sounds logical to me and consistent with ALTER\n> SUBSCRIPTION. BTW, IIUC, you are referring to: \"When altering the\n> slot_name, the failover and two_phase property values of the named\n> slot may differ from the counterpart failover and two_phase parameters\n> specified in the subscription. When creating the slot, ensure the slot\n> properties failover and two_phase match their counterpart parameters\n> of the subscription.\" in docs [1], right?\n\nFWIW, I'd also favor option 2, mostly on consistency ground as it\nwould offer a better user-experience. On top of that, you're saying\nthat may lead to some simplifications in the CREATE path. Without a\npatch, it's hard to tell, though.\n\nAs far as I can see, this is not tracked as an open item and it should\nbe. 
So I have added one.\n--\nMichael", "msg_date": "Fri, 19 Apr 2024 08:34:49 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "On Thursday, April 18, 2024 1:52 PM Amit Kapila <[email protected]> wrote:\r\n> \r\n> On Tue, Apr 16, 2024 at 5:06 PM shveta malik <[email protected]>\r\n> wrote:\r\n> >\r\n> > On Tue, Apr 16, 2024 at 1:45 PM Hayato Kuroda (Fujitsu)\r\n> > <[email protected]> wrote:\r\n> > >\r\n> > > Dear Hou,\r\n> > >\r\n> > > > Kuroda-San reported an issue off-list that:\r\n> > > >\r\n> > > > If user execute ALTER SUBSCRIPTION SET (failover) command inside a\r\n> > > > txn block and rollback, only the subscription option change can be\r\n> > > > rolled back, while the replication slot's failover change is preserved.\r\n> > > >\r\n> > > > This is because ALTER SUBSCRIPTION SET (failover) command\r\n> > > > internally executes the replication command ALTER_REPLICATION_SLOT\r\n> > > > to change the replication slot's failover property, but this\r\n> > > > replication command execution cannot be rollback.\r\n> > > >\r\n> > > > To fix it, I think we can prevent user from executing ALTER\r\n> > > > SUBSCRIPTION set\r\n> > > > (failover) inside a txn block, which is also consistent to the\r\n> > > > ALTER SUBSCRIPTION REFRESH/DROP SUBSCRIPTION command. Attach\r\n> a\r\n> > > > small patch to address this.\r\n> > >\r\n> > > Thanks for posting the patch, the fix is same as my expectation.\r\n> > > Also, should we add the restriction to the doc? I feel [1] can be updated.\r\n> >\r\n> > +1.\r\n> >\r\n> > Similar to ALTER SUB, CREATE SUB also needs to be fixed. This is\r\n> > because we call alter_replication_slot in CREATE SUB as well, for the\r\n> > case when slot_name is provided and create_slot=false. But the tricky\r\n> > part is we call alter_replication_slot() when creating subscription\r\n> > for both failover=true and false. That means if we want to fix it on\r\n> > the similar line of ALTER SUB, we have to disallow user from executing\r\n> > the CREATE SUBSCRIPTION (slot_name = xx) in a txn block, which seems\r\n> > to break some existing use cases. (previously, user can execute such a\r\n> > command inside a txn block).\r\n> >\r\n> > So, we need to think if there are better ways to fix it. After\r\n> > discussion with Hou-San offlist, here are some ideas:\r\n> >\r\n> > 1. do not alter replication slot's failover option when CREATE\r\n> > SUBSCRIPTION WITH failover=false. This means we alter replication\r\n> > slot only when failover is set to true. And thus we can disallow\r\n> > CREATE SUB WITH (slot_name =xx, failover=true, create_slot=false)\r\n> > inside a txn block.\r\n> >\r\n> > This option allows user to run CREATE-SUB(create_slot=false) with\r\n> > failover=false in txn block like earlier. But on the downside, it\r\n> > makes the behavior inconsistent for otherwise simpler option like\r\n> > failover, i.e. with failover=true, CREATE SUB is not allowed in txn\r\n> > block while with failover=false, it is allowed. It makes it slightly\r\n> > complex to be understood by user.\r\n> >\r\n> > 2. let's not disallow CREATE SUB in txn block as earlier, just don't\r\n> > perform internal alter-failover during CREATE SUB for existing\r\n> > slots(create_slot=false, slot_name=xx) i.e. when create_slot is\r\n> > false, we will ignore failover parameter of CREATE SUB and it is\r\n> > user's responsibility to set it appropriately using ALTER SUB cmd. 
For\r\n> > create_slot=true, the behaviour of CREATE-SUB remains same as earlier.\r\n> >\r\n> > This option does not add new restriction for CREATE SUB wrt txn block.\r\n> > In context of failover with create_slot=false, we already have a\r\n> > similar restriction (documented one) for ALTER SUB, i.e. with 'ALTER\r\n> > SUBSCRIPTION SET(slot_name = new)', user needs to alter the new slot's\r\n> > failover by himself. CREAT SUB can also be documented in similar way.\r\n> > This seems simpler to be understood considering existing ALTER SUB's\r\n> > behavior as well. Plus, this will make CREATE-SUB code slightly\r\n> > simpler and thus easily manageable.\r\n> >\r\n> \r\n> +1 for option 2 as it sounds logical to me and consistent with ALTER\r\n> SUBSCRIPTION.\r\n\r\n+1.\r\n\r\nHere is V2 patch which includes the changes for CREATE SUBSCRIPTION as\r\nsuggested. Since we don't connect pub to alter slot when (create_slot=false)\r\nanymore, the restriction that disallows failover=true when connect=false is\r\nalso removed.\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Fri, 19 Apr 2024 00:39:40 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "Dear Shveta,\r\n\r\n> When I said that option 2 aligns with ALTER-SUB documented behaviour,\r\n> I meant the doc described in [1] stating \"When altering the slot_name,\r\n> the failover and two_phase property values of the named slot may\r\n> differ from the counterpart failover and two_phase parameters\r\n> specified in the subscription....\"\r\n> \r\n> [1]:\r\n> https://www.postgresql.org/docs/devel/sql-altersubscription.html#SQL-ALTER\r\n> SUBSCRIPTION-PARAMS-SET\r\n\r\nI see, thanks for the clarification. Agreed that the description is not conflict with\r\noption 2.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n", "msg_date": "Fri, 19 Apr 2024 02:21:09 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "Dear Hou,\r\n\r\nThanks for updating the patch! Let me confirm the content.\r\n\r\nIn your patch, the pg_dump.c was updated. IIUC the main reason was that\r\npg_restore executes some queries as single transactions so that ALTER\r\nSUBSCRIPTION cannot be done, right?\r\nAlso, failover was synchronized only when we were in the upgrade mode, but\r\nyour patch seems to remove the condition. Can you clarify the reason?\r\n\r\nOther than that, the patch LGTM.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\nhttps://www.fujitsu.com/ \r\n\r\n", "msg_date": "Fri, 19 Apr 2024 02:54:17 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "Hi,\n\nOn Fri, Apr 19, 2024 at 12:39:40AM +0000, Zhijie Hou (Fujitsu) wrote:\n> Here is V2 patch which includes the changes for CREATE SUBSCRIPTION as\n> suggested. Since we don't connect pub to alter slot when (create_slot=false)\n> anymore, the restriction that disallows failover=true when connect=false is\n> also removed.\n\nThanks!\n\n+ specified in the subscription. 
When creating the slot, ensure the slot\n+ property <literal>failover</literal> matches the counterpart parameter\n+ of the subscription.\n\nThe slot could be created before or after the subscription is created, so I think\nit needs a bit of rewording (as here it sounds like the sub is already created),\n, something like?\n\n\"Always ensure the slot property <literal>failover</literal> matches the\ncounterpart parameter of the subscription and vice versa.\"\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 19 Apr 2024 08:56:25 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "On Fri, Apr 19, 2024 at 6:09 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> Here is V2 patch which includes the changes for CREATE SUBSCRIPTION as\n> suggested. Since we don't connect pub to alter slot when (create_slot=false)\n> anymore, the restriction that disallows failover=true when connect=false is\n> also removed.\n\nThanks for the patch. I feel getSubscription() also needs to get\n'subfailover' option independent of dopt->binary_upgrade i.e. it needs\nsimilar changes as that of dumpSubscription(). I tested pg_dump,\ncurrently it is not dumping failover parameter for failover-enabled\nsubscriptions, perhaps due to the same bug. Create-sub and Alter-sub\nchanges look good and work well.\n\nthanks\nShveta\n\n\n", "msg_date": "Fri, 19 Apr 2024 15:44:38 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "On Friday, April 19, 2024 10:54 AM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\r\n> In your patch, the pg_dump.c was updated. IIUC the main reason was that\r\n> pg_restore executes some queries as single transactions so that ALTER\r\n> SUBSCRIPTION cannot be done, right?\r\n\r\nYes, and please see below for other reasons.\r\n\r\n> Also, failover was synchronized only when we were in the upgrade mode, but\r\n> your patch seems to remove the condition. Can you clarify the reason?\r\n\r\nWe used ALTER SUBSCRIPTION in upgrade mode because it was not allowed to use\r\nconnect=false and failover=true together when CREATE SUBSCRIPTION. But since we\r\ndon't have this restriction anymore(we don't alter slot when creating sub\r\nanymore), we can directly specify failover in CREATE SUBSCRIPTION and do that\r\nin non-upgrade mode as well.\r\n\r\nAttach the V3 patch which also addressed Shveta[1] and Bertrand[2]'s comments.\r\n\r\n[1] https://www.postgresql.org/message-id/CAJpy0uD3YOeDg-tTCUi3EZ8vznRDfDqO_k6LepJpXUV1Z_%3DgkA%40mail.gmail.com\r\n[2] https://www.postgresql.org/message-id/ZiIxuaiINsuaWuDK%40ip-10-97-1-34.eu-west-3.compute.internal\r\n\r\nBest Regards,\r\nHou zj", "msg_date": "Mon, 22 Apr 2024 00:27:42 +0000", "msg_from": "\"Zhijie Hou (Fujitsu)\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "On Mon, Apr 22, 2024 at 5:57 AM Zhijie Hou (Fujitsu)\n<[email protected]> wrote:\n>\n> On Friday, April 19, 2024 10:54 AM Kuroda, Hayato/黒田 隼人 <[email protected]> wrote:\n> > In your patch, the pg_dump.c was updated. 
IIUC the main reason was that\n> > pg_restore executes some queries as single transactions so that ALTER\n> > SUBSCRIPTION cannot be done, right?\n>\n> Yes, and please see below for other reasons.\n>\n> > Also, failover was synchronized only when we were in the upgrade mode, but\n> > your patch seems to remove the condition. Can you clarify the reason?\n>\n> We used ALTER SUBSCRIPTION in upgrade mode because it was not allowed to use\n> connect=false and failover=true together when CREATE SUBSCRIPTION. But since we\n> don't have this restriction anymore(we don't alter slot when creating sub\n> anymore), we can directly specify failover in CREATE SUBSCRIPTION and do that\n> in non-upgrade mode as well.\n>\n> Attach the V3 patch which also addressed Shveta[1] and Bertrand[2]'s comments.\n\n Tested the patch, works well.\n\nthanks\nShveta\n\n\n", "msg_date": "Mon, 22 Apr 2024 11:32:14 +0530", "msg_from": "shveta malik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "Hi,\n\nOn Mon, Apr 22, 2024 at 11:32:14AM +0530, shveta malik wrote:\n> On Mon, Apr 22, 2024 at 5:57 AM Zhijie Hou (Fujitsu)\n> <[email protected]> wrote:\n> > Attach the V3 patch which also addressed Shveta[1] and Bertrand[2]'s comments.\n\nThanks!\n\n> Tested the patch, works well.\n\nSame here.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 22 Apr 2024 09:01:14 +0000", "msg_from": "Bertrand Drouvot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disallow changing slot's failover option in transaction block" }, { "msg_contents": "On Mon, Apr 22, 2024 at 2:31 PM Bertrand Drouvot\n<[email protected]> wrote:\n>\n> On Mon, Apr 22, 2024 at 11:32:14AM +0530, shveta malik wrote:\n> > On Mon, Apr 22, 2024 at 5:57 AM Zhijie Hou (Fujitsu)\n> > <[email protected]> wrote:\n> > > Attach the V3 patch which also addressed Shveta[1] and Bertrand[2]'s comments.\n>\n> Thanks!\n>\n> > Tested the patch, works well.\n>\n> Same here.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 24 Apr 2024 08:56:45 +0530", "msg_from": "Amit Kapila <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disallow changing slot's failover option in transaction block" } ]
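A minimal sketch of the guard discussed in the thread above, for readers following along. The approach agreed on is to refuse the failover change inside an explicit transaction block, because the ALTER_REPLICATION_SLOT replication command sent to the publisher cannot be rolled back together with the local catalog change. PreventInTransactionBlock() is the standard backend guard declared in access/xact.h; the surrounding names (opts.specified_opts, SUBOPT_FAILOVER, isTopLevel) are assumptions based on how other subscription options are handled in subscriptioncmds.c, so treat this as an illustrative fragment rather than the committed patch text.

    /*
     * Illustrative fragment (inside AlterSubscription(), before connecting to
     * the publisher): if the failover option was specified, refuse to run in
     * an explicit transaction block, since the slot property change on the
     * publisher is not transactional and could not be rolled back with the
     * rest of the command.  Names here are assumed, not copied from the patch.
     */
    if (IsSet(opts.specified_opts, SUBOPT_FAILOVER))
        PreventInTransactionBlock(isTopLevel,
                                  "ALTER SUBSCRIPTION ... SET (failover)");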
[ { "msg_contents": "Oversight of 0294df2f1f84 [1].\n\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=0294df2f1f84\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 16 Apr 2024 12:48:23 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "[PATCH] Replace magic constant 3 with NUM_MERGE_MATCH_KINDS" }, { "msg_contents": "On Tue, Apr 16, 2024 at 5:48 PM Aleksander Alekseev <\[email protected]> wrote:\n\n> Oversight of 0294df2f1f84 [1].\n>\n> [1]:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=0294df2f1f84\n\n\n+1. I think this change improves the code quality. I searched for\nother arrays indexed by merge match kind, but found none. So this patch\nseems thorough.\n\nThanks\nRichard\n\nOn Tue, Apr 16, 2024 at 5:48 PM Aleksander Alekseev <[email protected]> wrote:Oversight of 0294df2f1f84 [1].\n\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=0294df2f1f84+1.  I think this change improves the code quality.  I searched forother arrays indexed by merge match kind, but found none.  So this patchseems thorough.ThanksRichard", "msg_date": "Tue, 16 Apr 2024 18:35:19 +0800", "msg_from": "Richard Guo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Replace magic constant 3 with NUM_MERGE_MATCH_KINDS" }, { "msg_contents": "On Tue, 16 Apr 2024 at 11:35, Richard Guo <[email protected]> wrote:\n>\n> On Tue, Apr 16, 2024 at 5:48 PM Aleksander Alekseev <[email protected]> wrote:\n>>\n>> Oversight of 0294df2f1f84 [1].\n>>\n>> [1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=0294df2f1f84\n>\n> +1. I think this change improves the code quality. I searched for\n> other arrays indexed by merge match kind, but found none. So this patch\n> seems thorough.\n>\n\nYes this makes sense, though I note that some other similar code uses\na #define rather than inserting an enum element at the end (e.g.,\nNUM_ROWFILTER_PUBACTIONS).\n\nI guess the argument against inserting an enum element at the end is\nthat a switch statement on the enum value might generate a compiler\nwarning if it didn't have a default clause.\n\nLooking at how NUM_ROWFILTER_PUBACTIONS is defined as the last element\nplus one, it might seem to be barely any better than just defining it\nto be 3, since any new enum element would probably be added at the\nend, requiring it to be updated in any case. But if the number of\nelements were much larger, it would be much more obviously correct,\nmaking it a good general pattern to follow. So in the interests of\ncode consistency, I think we should do the same here.\n\nRegards,\nDean\n\n\n", "msg_date": "Tue, 16 Apr 2024 14:21:40 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Replace magic constant 3 with NUM_MERGE_MATCH_KINDS" }, { "msg_contents": "Hi,\n\n> I guess the argument against inserting an enum element at the end is\n> that a switch statement on the enum value might generate a compiler\n> warning if it didn't have a default clause.\n\nFair point. PFA the alternative version of the patch.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 18 Apr 2024 15:00:09 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Replace magic constant 3 with NUM_MERGE_MATCH_KINDS" }, { "msg_contents": "On Thu, 18 Apr 2024 at 13:00, Aleksander Alekseev\n<[email protected]> wrote:\n>\n> Fair point. 
PFA the alternative version of the patch.\n>\n\nThanks. Committed.\n\nRegards,\nDean\n\n\n", "msg_date": "Fri, 19 Apr 2024 10:02:42 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Replace magic constant 3 with NUM_MERGE_MATCH_KINDS" }, { "msg_contents": "Hi,\n\n> > Fair point. PFA the alternative version of the patch.\n> >\n>\n> Thanks. Committed.\n\nThanks. I see a few pieces of code that use special FOO_NUMBER enum\nvalues instead of a macro. Should we refactor these pieces\naccordingly? PFA another patch.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Fri, 19 Apr 2024 12:47:43 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Replace magic constant 3 with NUM_MERGE_MATCH_KINDS" }, { "msg_contents": "On Fri, Apr 19, 2024 at 12:47:43PM +0300, Aleksander Alekseev wrote:\n> Thanks. I see a few pieces of code that use special FOO_NUMBER enum\n> values instead of a macro. Should we refactor these pieces\n> accordingly? PFA another patch.\n\nI don't see why not for the places you are changing here, we can be\nmore consistent. Now, such changes are material for v18.\n--\nMichael", "msg_date": "Mon, 22 Apr 2024 14:04:04 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Replace magic constant 3 with NUM_MERGE_MATCH_KINDS" }, { "msg_contents": "On Mon, 22 Apr 2024 at 06:04, Michael Paquier <[email protected]> wrote:\n>\n> On Fri, Apr 19, 2024 at 12:47:43PM +0300, Aleksander Alekseev wrote:\n> > Thanks. I see a few pieces of code that use special FOO_NUMBER enum\n> > values instead of a macro. Should we refactor these pieces\n> > accordingly? PFA another patch.\n>\n> I don't see why not for the places you are changing here, we can be\n> more consistent.\n\n[Shrug] I do prefer using a macro. Adding a counter element to the end\nof the enum feels like a hack, because the counter isn't the same kind\nof thing as all the other enum elements, so it feels out of place in\nthe enum. On the other hand, I think it's a fairly common pattern that\nmost people will recognise, and for other enums that are more likely\nto grow over time, it might be less error-prone than a macro, which\npeople might overlook and fail to update.\n\n> Now, such changes are material for v18.\n\nAgreed. This has been added to the next commitfest, so let's see what\nothers think.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 22 Apr 2024 08:55:37 +0100", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Replace magic constant 3 with NUM_MERGE_MATCH_KINDS" }, { "msg_contents": "On 19.04.24 11:47, Aleksander Alekseev wrote:\n> Thanks. I see a few pieces of code that use special FOO_NUMBER enum\n> values instead of a macro. Should we refactor these pieces\n> accordingly? PFA another patch.\n\nI think this is a sensible improvement.\n\nBut please keep the trailing commas on the last enum items.\n\n\n\n", "msg_date": "Sun, 12 May 2024 13:57:43 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Replace magic constant 3 with NUM_MERGE_MATCH_KINDS" }, { "msg_contents": "Hi,\n\n> > Thanks. I see a few pieces of code that use special FOO_NUMBER enum\n> > values instead of a macro. Should we refactor these pieces\n> > accordingly? 
PFA another patch.\n>\n> I think this is a sensible improvement.\n>\n> But please keep the trailing commas on the last enum items.\n\nThanks, fixed.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 13 May 2024 13:22:22 +0300", "msg_from": "Aleksander Alekseev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Replace magic constant 3 with NUM_MERGE_MATCH_KINDS" } ]
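For context on the two conventions weighed in the thread above, here is what each looks like in isolation. The enums below are made-up examples chosen only to illustrate the pattern; they are not the actual MergeMatchKind definition or any real PostgreSQL enum.

    /*
     * Convention 1: trailing counter element.  New values must be appended
     * before the counter; a switch over the enum then typically needs a
     * default: case, because the counter is not a real value.
     */
    typedef enum DemoKind
    {
        DEMO_KIND_A,
        DEMO_KIND_B,
        DEMO_KIND_C,
        NUM_DEMO_KINDS,             /* must remain last */
    } DemoKind;

    /*
     * Convention 2: macro defined as the last element plus one (the
     * NUM_ROWFILTER_PUBACTIONS style mentioned above).  switch statements
     * stay exhaustive without a default:, but the macro must be updated by
     * hand if a value is ever added after the current last one.
     */
    typedef enum DemoKind2
    {
        DEMO2_KIND_A,
        DEMO2_KIND_B,
        DEMO2_KIND_C,
    } DemoKind2;

    #define NUM_DEMO2_KINDS (DEMO2_KIND_C + 1)

    /* Either constant can then size a per-kind array: */
    static const char *demo_kind_names[NUM_DEMO2_KINDS] = {"a", "b", "c"};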
[ { "msg_contents": "Hi,\n\nI am working on using read streams in the CREATE DATABASE command when the\nstrategy is wal_log. RelationCopyStorageUsingBuffer() function is used in\nthis context. This function reads source buffers then copies them to the\ndestination buffers. I used read streams only when reading source buffers\nbecause the destination buffers are read by 'RBM_ZERO_AND_LOCK' option, so\nit is not important.\n\nI created a ~6 GB table [1] and created a new database with the wal_log\nstrategy using the database that table was created in as a template [2]. My\nbenchmarking results are:\n\na. Timings:\n\npatched:\n12955.027 ms\n12917.475 ms\n13177.846 ms\n12971.308 ms\n13059.985 ms\n\nmaster:\n13156.375 ms\n13054.071 ms\n13151.607 ms\n13152.633 ms\n13160.538 ms\n\nThere is no difference in timings, the patched version is a tiny bit better\nbut it is negligible. I actually expected the patched version to be better\nbecause there was no prefetching before, but the read stream API detects\nsequential access and disables prefetching.\n\nb. strace:\n\npatched:\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 68.02 3.749359 2 1285782 pwrite64\n 18.54 1.021734 21 46730 preadv\n 9.49 0.522889 826 633 fdatasync\n 2.55 0.140339 59 2368 pwritev\n 1.14 0.062583 409 153 fsync\n\nmaster:\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 59.71 3.831542 2 1288365 pwrite64\n 29.84 1.914540 2 747936 pread64\n 7.90 0.506843 837 605 fdatasync\n 1.58 0.101575 54 1856 pwritev\n 0.75 0.048431 400 121 fsync\n\nThere are fewer (~1/16) read system calls in the patched version.\n\nc. perf:\n\npatched:\n\n- 97.83% 1.13% postgres postgres [.]\nRelationCopyStorageUsingBuffer\n\n - 97.83% RelationCopyStorageUsingBuffer\n\n - 44.28% ReadBufferWithoutRelcache\n\n + 42.20% GetVictimBuffer\n\n 0.81% ZeroBuffer\n\n + 31.86% log_newpage_buffer\n\n - 19.51% read_stream_next_buffer\n\n - 17.92% WaitReadBuffers\n\n + 17.61% mdreadv\n\n - 1.47% read_stream_start_pending_read\n\n + 1.46% StartReadBuffers\n\nmaster:\n\n- 97.68% 0.57% postgres postgres [.]\nRelationCopyStorageUsingBuffer\n\n - RelationCopyStorageUsingBuffer\n\n - 65.48% ReadBufferWithoutRelcache\n\n + 41.16% GetVictimBuffer\n\n - 20.42% WaitReadBuffers\n\n + 19.90% mdreadv\n\n + 1.85% StartReadBuffer\n\n 0.75% ZeroBuffer\n\n + 30.82% log_newpage_buffer\n\nPatched version spends less CPU time in read calls and more CPU time in\nother calls such as write.\n\nThere are three patch files attached. First two are optimization and adding\na way to create a read stream object by using SMgrRelation, these are\nalready proposed in the streaming I/O thread [3]. 
The third one is the\nactual patch file.\n\nAny kind of feedback would be appreciated.\n\n[1] CREATE TABLE t as select repeat('a', 100) || i || repeat('b', 500) as\nfiller from generate_series(1, 9000000) as i;\n[2] CREATE DATABASE test_1 STRATEGY 'wal_log' TEMPLATE test;\n[3]\nhttps://www.postgresql.org/message-id/CAN55FZ1yGvCzCW_aufu83VimdEYHbG_zuOY3J9JL-nBptyJyKA%40mail.gmail.com\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Tue, 16 Apr 2024 14:12:19 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Use read streams in CREATE DATABASE command when the strategy is\n wal_log" }, { "msg_contents": "On Tue, Apr 16, 2024 at 02:12:19PM +0300, Nazir Bilal Yavuz wrote:\n> I am working on using read streams in the CREATE DATABASE command when the\n> strategy is wal_log. RelationCopyStorageUsingBuffer() function is used in\n> this context. This function reads source buffers then copies them to the\n\nPlease rebase. I applied this to 40126ac for review purposes.\n\n> a. Timings:\n> b. strace:\n> c. perf:\n\nThanks for including those details. That's valuable confirmation.\n\n> Subject: [PATCH v1 1/3] Refactor PinBufferForBlock() to remove if checks about\n> persistence\n> \n> There are if checks in PinBufferForBlock() function to set persistence\n> of the relation and this function is called for the each block in the\n> relation. Instead of that, set persistence of the relation before\n> PinBufferForBlock() function.\n\nI tried with the following additional patch to see if PinBufferForBlock() ever\ngets invalid smgr_relpersistence:\n\n====\n--- a/src/backend/storage/buffer/bufmgr.c\n+++ b/src/backend/storage/buffer/bufmgr.c\n@@ -1098,6 +1098,11 @@ PinBufferForBlock(Relation rel,\n \n \tAssert(blockNum != P_NEW);\n \n+\tif (!(smgr_persistence == RELPERSISTENCE_TEMP ||\n+\t\t smgr_persistence == RELPERSISTENCE_PERMANENT ||\n+\t\t smgr_persistence == RELPERSISTENCE_UNLOGGED))\n+\t\telog(WARNING, \"unexpected relpersistence %d\", smgr_persistence);\n+\n \tif (smgr_persistence == RELPERSISTENCE_TEMP)\n \t{\n \t\tio_context = IOCONTEXT_NORMAL;\n====\n\nThat still gets relpersistence==0 in various src/test/regress cases. I think\nthe intent was to prevent that. If not, please add a comment about when\nrelpersistence==0 is still allowed.\n\n> --- a/src/backend/storage/aio/read_stream.c\n> +++ b/src/backend/storage/aio/read_stream.c\n> @@ -549,7 +549,7 @@ read_stream_begin_relation(int flags,\n> \t{\n> \t\tstream->ios[i].op.rel = rel;\n> \t\tstream->ios[i].op.smgr = RelationGetSmgr(rel);\n> -\t\tstream->ios[i].op.smgr_persistence = 0;\n> +\t\tstream->ios[i].op.smgr_persistence = rel->rd_rel->relpersistence;\n\nDoes the following comment in ReadBuffersOperation need an update?\n\n\t/*\n\t * The following members should be set by the caller. If only smgr is\n\t * provided without rel, then smgr_persistence can be set to override the\n\t * default assumption of RELPERSISTENCE_PERMANENT.\n\t */\n\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n\n> +/*\n> + * Helper struct for read stream object used in\n> + * RelationCopyStorageUsingBuffer() function.\n> + */\n> +struct copy_storage_using_buffer_read_stream_private\n> +{\n> +\tBlockNumber blocknum;\n> +\tint64\t\tlast_block;\n> +};\n\nWhy is last_block an int64, not a BlockNumber?\n\n> @@ -4667,19 +4698,31 @@ RelationCopyStorageUsingBuffer(RelFileLocator srclocator,\n\n> \t/* Iterate over each block of the source relation file. 
*/\n> \tfor (blkno = 0; blkno < nblocks; blkno++)\n> \t{\n> \t\tCHECK_FOR_INTERRUPTS();\n> \n> \t\t/* Read block from source relation. */\n> -\t\tsrcBuf = ReadBufferWithoutRelcache(srclocator, forkNum, blkno,\n> -\t\t\t\t\t\t\t\t\t\t RBM_NORMAL, bstrategy_src,\n> -\t\t\t\t\t\t\t\t\t\t permanent);\n> +\t\tsrcBuf = read_stream_next_buffer(src_stream, NULL);\n> \t\tLockBuffer(srcBuf, BUFFER_LOCK_SHARE);\n\nI think this should check for read_stream_next_buffer() returning\nInvalidBuffer. pg_prewarm doesn't, but the other callers do, and I think the\nother callers are a better model. LockBuffer() doesn't check the\nInvalidBuffer case, so let's avoid the style of using a\nread_stream_next_buffer() return value without checking.\n\nThanks,\nnm\n\n\n", "msg_date": "Thu, 11 Jul 2024 16:52:09 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in CREATE DATABASE command when the strategy is\n wal_log" }, { "msg_contents": "Hi,\n\nThank you for the review!\n\nOn Fri, 12 Jul 2024 at 02:52, Noah Misch <[email protected]> wrote:\n>\n> On Tue, Apr 16, 2024 at 02:12:19PM +0300, Nazir Bilal Yavuz wrote:\n> > I am working on using read streams in the CREATE DATABASE command when the\n> > strategy is wal_log. RelationCopyStorageUsingBuffer() function is used in\n> > this context. This function reads source buffers then copies them to the\n>\n> Please rebase. I applied this to 40126ac for review purposes.\n\nRebased.\n\n> > Subject: [PATCH v1 1/3] Refactor PinBufferForBlock() to remove if checks about\n> > persistence\n> >\n> > There are if checks in PinBufferForBlock() function to set persistence\n> > of the relation and this function is called for the each block in the\n> > relation. Instead of that, set persistence of the relation before\n> > PinBufferForBlock() function.\n>\n> I tried with the following additional patch to see if PinBufferForBlock() ever\n> gets invalid smgr_relpersistence:\n>\n> ====\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n> @@ -1098,6 +1098,11 @@ PinBufferForBlock(Relation rel,\n>\n> Assert(blockNum != P_NEW);\n>\n> + if (!(smgr_persistence == RELPERSISTENCE_TEMP ||\n> + smgr_persistence == RELPERSISTENCE_PERMANENT ||\n> + smgr_persistence == RELPERSISTENCE_UNLOGGED))\n> + elog(WARNING, \"unexpected relpersistence %d\", smgr_persistence);\n> +\n> if (smgr_persistence == RELPERSISTENCE_TEMP)\n> {\n> io_context = IOCONTEXT_NORMAL;\n> ====\n>\n> That still gets relpersistence==0 in various src/test/regress cases. I think\n> the intent was to prevent that. If not, please add a comment about when\n> relpersistence==0 is still allowed.\n\nI fixed it, it is caused by (mode == RBM_ZERO_AND_CLEANUP_LOCK | mode\n== RBM_ZERO_AND_LOCK) case in the ReadBuffer_common(). The persistence\nwas not updated for this path before. I also added an assert check for\nthis problem to PinBufferForBlock().\n\n> > --- a/src/backend/storage/aio/read_stream.c\n> > +++ b/src/backend/storage/aio/read_stream.c\n> > @@ -549,7 +549,7 @@ read_stream_begin_relation(int flags,\n> > {\n> > stream->ios[i].op.rel = rel;\n> > stream->ios[i].op.smgr = RelationGetSmgr(rel);\n> > - stream->ios[i].op.smgr_persistence = 0;\n> > + stream->ios[i].op.smgr_persistence = rel->rd_rel->relpersistence;\n>\n> Does the following comment in ReadBuffersOperation need an update?\n>\n> /*\n> * The following members should be set by the caller. 
If only smgr is\n> * provided without rel, then smgr_persistence can be set to override the\n> * default assumption of RELPERSISTENCE_PERMANENT.\n> */\n>\n\nI believe it does not need to be updated but I renamed\n'ReadBuffersOperation.smgr_persistence' as\n'ReadBuffersOperation.persistence'. So, this comment is updated as\nwell. I think that rename suits better because persistence does not\nneed to come from smgr, it could come from relation, too. Do you think\nit is a good idea? If it is, does it need a separate commit?\n\n> > --- a/src/backend/storage/buffer/bufmgr.c\n> > +++ b/src/backend/storage/buffer/bufmgr.c\n>\n> > +/*\n> > + * Helper struct for read stream object used in\n> > + * RelationCopyStorageUsingBuffer() function.\n> > + */\n> > +struct copy_storage_using_buffer_read_stream_private\n> > +{\n> > + BlockNumber blocknum;\n> > + int64 last_block;\n> > +};\n>\n> Why is last_block an int64, not a BlockNumber?\n\nYou are right, the type of last_block should be BlockNumber; done. I\ncopied it from pg_prewarm_read_stream_private struct and I guess the\nsame should be applied to it as well but it is not the topic of this\nthread, so I did not update it yet.\n\n> > @@ -4667,19 +4698,31 @@ RelationCopyStorageUsingBuffer(RelFileLocator srclocator,\n>\n> > /* Iterate over each block of the source relation file. */\n> > for (blkno = 0; blkno < nblocks; blkno++)\n> > {\n> > CHECK_FOR_INTERRUPTS();\n> >\n> > /* Read block from source relation. */\n> > - srcBuf = ReadBufferWithoutRelcache(srclocator, forkNum, blkno,\n> > - RBM_NORMAL, bstrategy_src,\n> > - permanent);\n> > + srcBuf = read_stream_next_buffer(src_stream, NULL);\n> > LockBuffer(srcBuf, BUFFER_LOCK_SHARE);\n>\n> I think this should check for read_stream_next_buffer() returning\n> InvalidBuffer. pg_prewarm doesn't, but the other callers do, and I think the\n> other callers are a better model. LockBuffer() doesn't check the\n> InvalidBuffer case, so let's avoid the style of using a\n> read_stream_next_buffer() return value without checking.\n\nThere is an assert in the LockBuffer which checks for the\nInvalidBuffer. If that is not enough, we may add an if check for\nInvalidBuffer but what should we do in this case? It should not\nhappen, so erroring out may be a good idea.\n\nUpdated patches are attached (without InvalidBuffer check for now).\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Tue, 16 Jul 2024 14:11:20 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use read streams in CREATE DATABASE command when the strategy is\n wal_log" }, { "msg_contents": "On Tue, Jul 16, 2024 at 02:11:20PM +0300, Nazir Bilal Yavuz wrote:\n> On Fri, 12 Jul 2024 at 02:52, Noah Misch <[email protected]> wrote:\n> > On Tue, Apr 16, 2024 at 02:12:19PM +0300, Nazir Bilal Yavuz wrote:\n> > > --- a/src/backend/storage/aio/read_stream.c\n> > > +++ b/src/backend/storage/aio/read_stream.c\n> > > @@ -549,7 +549,7 @@ read_stream_begin_relation(int flags,\n> > > {\n> > > stream->ios[i].op.rel = rel;\n> > > stream->ios[i].op.smgr = RelationGetSmgr(rel);\n> > > - stream->ios[i].op.smgr_persistence = 0;\n> > > + stream->ios[i].op.smgr_persistence = rel->rd_rel->relpersistence;\n> >\n> > Does the following comment in ReadBuffersOperation need an update?\n> >\n> > /*\n> > * The following members should be set by the caller. 
If only smgr is\n> > * provided without rel, then smgr_persistence can be set to override the\n> > * default assumption of RELPERSISTENCE_PERMANENT.\n> > */\n> \n> I believe it does not need to be updated but I renamed\n> 'ReadBuffersOperation.smgr_persistence' as\n> 'ReadBuffersOperation.persistence'. So, this comment is updated as\n> well. I think that rename suits better because persistence does not\n> need to come from smgr, it could come from relation, too. Do you think\n> it is a good idea? If it is, does it need a separate commit?\n\nThe rename is good. I think the comment implies \"persistence\" is unused when\nrel!=NULL. That implication is true before the patch but false after the\npatch.\n\n> > > @@ -4667,19 +4698,31 @@ RelationCopyStorageUsingBuffer(RelFileLocator srclocator,\n> >\n> > > /* Iterate over each block of the source relation file. */\n> > > for (blkno = 0; blkno < nblocks; blkno++)\n> > > {\n> > > CHECK_FOR_INTERRUPTS();\n> > >\n> > > /* Read block from source relation. */\n> > > - srcBuf = ReadBufferWithoutRelcache(srclocator, forkNum, blkno,\n> > > - RBM_NORMAL, bstrategy_src,\n> > > - permanent);\n> > > + srcBuf = read_stream_next_buffer(src_stream, NULL);\n> > > LockBuffer(srcBuf, BUFFER_LOCK_SHARE);\n> >\n> > I think this should check for read_stream_next_buffer() returning\n> > InvalidBuffer. pg_prewarm doesn't, but the other callers do, and I think the\n> > other callers are a better model. LockBuffer() doesn't check the\n> > InvalidBuffer case, so let's avoid the style of using a\n> > read_stream_next_buffer() return value without checking.\n> \n> There is an assert in the LockBuffer which checks for the\n> InvalidBuffer. If that is not enough, we may add an if check for\n> InvalidBuffer but what should we do in this case? It should not\n> happen, so erroring out may be a good idea.\n\nI like this style from read_stream_reset():\n\n\twhile ((buffer = read_stream_next_buffer(stream, NULL)) != InvalidBuffer)\n\t{\n\t\t...\n\t}\n\nThat is, don't iterate over block numbers. Drain the stream until empty. If\nthe stream returns a number of blocks higher or lower than we expected, we\nwon't detect that, and that's okay. It's not a strong preference, so I'm open\nto arguments against that from you or others. A counterargument could be that\nread_stream_reset() doesn't know the buffer count, so it has no choice. 
The\ncounterargument could say that callers knowing the block count should use the\npg_prewarm() style, and others should use the read_stream_reset() style.\n\n\n", "msg_date": "Tue, 16 Jul 2024 05:19:35 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in CREATE DATABASE command when the strategy is\n wal_log" }, { "msg_contents": "Hi,\n\nOn Tue, 16 Jul 2024 at 15:19, Noah Misch <[email protected]> wrote:\n>\n> On Tue, Jul 16, 2024 at 02:11:20PM +0300, Nazir Bilal Yavuz wrote:\n> > On Fri, 12 Jul 2024 at 02:52, Noah Misch <[email protected]> wrote:\n> > > On Tue, Apr 16, 2024 at 02:12:19PM +0300, Nazir Bilal Yavuz wrote:\n> > > > --- a/src/backend/storage/aio/read_stream.c\n> > > > +++ b/src/backend/storage/aio/read_stream.c\n> > > > @@ -549,7 +549,7 @@ read_stream_begin_relation(int flags,\n> > > > {\n> > > > stream->ios[i].op.rel = rel;\n> > > > stream->ios[i].op.smgr = RelationGetSmgr(rel);\n> > > > - stream->ios[i].op.smgr_persistence = 0;\n> > > > + stream->ios[i].op.smgr_persistence = rel->rd_rel->relpersistence;\n> > >\n> > > Does the following comment in ReadBuffersOperation need an update?\n> > >\n> > > /*\n> > > * The following members should be set by the caller. If only smgr is\n> > > * provided without rel, then smgr_persistence can be set to override the\n> > > * default assumption of RELPERSISTENCE_PERMANENT.\n> > > */\n> >\n> > I believe it does not need to be updated but I renamed\n> > 'ReadBuffersOperation.smgr_persistence' as\n> > 'ReadBuffersOperation.persistence'. So, this comment is updated as\n> > well. I think that rename suits better because persistence does not\n> > need to come from smgr, it could come from relation, too. Do you think\n> > it is a good idea? If it is, does it need a separate commit?\n>\n> The rename is good. I think the comment implies \"persistence\" is unused when\n> rel!=NULL. That implication is true before the patch but false after the\n> patch.\n\nWhat makes it false after the patch? I think the logic did not change.\nIf there is rel, the value of persistence is obtained from\n'rel->rd_rel->relpersistence'. If there is no rel, then smgr is used\nto obtain its value.\n\n> > > > @@ -4667,19 +4698,31 @@ RelationCopyStorageUsingBuffer(RelFileLocator srclocator,\n> > >\n> > > > /* Iterate over each block of the source relation file. */\n> > > > for (blkno = 0; blkno < nblocks; blkno++)\n> > > > {\n> > > > CHECK_FOR_INTERRUPTS();\n> > > >\n> > > > /* Read block from source relation. */\n> > > > - srcBuf = ReadBufferWithoutRelcache(srclocator, forkNum, blkno,\n> > > > - RBM_NORMAL, bstrategy_src,\n> > > > - permanent);\n> > > > + srcBuf = read_stream_next_buffer(src_stream, NULL);\n> > > > LockBuffer(srcBuf, BUFFER_LOCK_SHARE);\n> > >\n> > > I think this should check for read_stream_next_buffer() returning\n> > > InvalidBuffer. pg_prewarm doesn't, but the other callers do, and I think the\n> > > other callers are a better model. LockBuffer() doesn't check the\n> > > InvalidBuffer case, so let's avoid the style of using a\n> > > read_stream_next_buffer() return value without checking.\n> >\n> > There is an assert in the LockBuffer which checks for the\n> > InvalidBuffer. If that is not enough, we may add an if check for\n> > InvalidBuffer but what should we do in this case? 
It should not\n> > happen, so erroring out may be a good idea.\n>\n> I like this style from read_stream_reset():\n>\n> while ((buffer = read_stream_next_buffer(stream, NULL)) != InvalidBuffer)\n> {\n> ...\n> }\n>\n> That is, don't iterate over block numbers. Drain the stream until empty. If\n> the stream returns a number of blocks higher or lower than we expected, we\n> won't detect that, and that's okay. It's not a strong preference, so I'm open\n> to arguments against that from you or others. A counterargument could be that\n> read_stream_reset() doesn't know the buffer count, so it has no choice. The\n> counterargument could say that callers knowing the block count should use the\n> pg_prewarm() style, and others should use the read_stream_reset() style.\n\nI think what you said in the counter argument makes sense. Also, there\nis an 'Assert(read_stream_next_buffer(src_stream, NULL) ==\nInvalidBuffer);' after the loop. Which means all the blocks in the\nstream are read and there is no block left.\n\nv3 is attached. The only change is 'read_stream.c' changes in the 0003\nare moved to 0002.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Wed, 17 Jul 2024 12:22:49 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use read streams in CREATE DATABASE command when the strategy is\n wal_log" }, { "msg_contents": "On Wed, Jul 17, 2024 at 12:22:49PM +0300, Nazir Bilal Yavuz wrote:\n> On Tue, 16 Jul 2024 at 15:19, Noah Misch <[email protected]> wrote:\n> > On Tue, Jul 16, 2024 at 02:11:20PM +0300, Nazir Bilal Yavuz wrote:\n> > > On Fri, 12 Jul 2024 at 02:52, Noah Misch <[email protected]> wrote:\n> > > > On Tue, Apr 16, 2024 at 02:12:19PM +0300, Nazir Bilal Yavuz wrote:\n> > > > > --- a/src/backend/storage/aio/read_stream.c\n> > > > > +++ b/src/backend/storage/aio/read_stream.c\n> > > > > @@ -549,7 +549,7 @@ read_stream_begin_relation(int flags,\n> > > > > {\n> > > > > stream->ios[i].op.rel = rel;\n> > > > > stream->ios[i].op.smgr = RelationGetSmgr(rel);\n> > > > > - stream->ios[i].op.smgr_persistence = 0;\n> > > > > + stream->ios[i].op.smgr_persistence = rel->rd_rel->relpersistence;\n> > > >\n> > > > Does the following comment in ReadBuffersOperation need an update?\n> > > >\n> > > > /*\n> > > > * The following members should be set by the caller. If only smgr is\n> > > > * provided without rel, then smgr_persistence can be set to override the\n> > > > * default assumption of RELPERSISTENCE_PERMANENT.\n> > > > */\n> > >\n> > > I believe it does not need to be updated but I renamed\n> > > 'ReadBuffersOperation.smgr_persistence' as\n> > > 'ReadBuffersOperation.persistence'. So, this comment is updated as\n> > > well. I think that rename suits better because persistence does not\n> > > need to come from smgr, it could come from relation, too. Do you think\n> > > it is a good idea? If it is, does it need a separate commit?\n> >\n> > The rename is good. I think the comment implies \"persistence\" is unused when\n> > rel!=NULL. That implication is true before the patch but false after the\n> > patch.\n> \n> What makes it false after the patch? I think the logic did not change.\n> If there is rel, the value of persistence is obtained from\n> 'rel->rd_rel->relpersistence'. If there is no rel, then smgr is used\n> to obtain its value.\n\nFirst, the patch removes the \"default assumption of RELPERSISTENCE_PERMANENT\".\nIt's now an assertion failure.\n\nThe second point is about \"If only smgr is provided without rel\". 
Before the\npatch, the extern functions that take a ReadBuffersOperation argument examine\nsmgr_persistence if and only if rel==NULL. That's consistent with the\ncomment. After the patch, StartReadBuffersImpl() calling PinBufferForBlock()\nuses the field unconditionally.\n\nOn that note, does WaitReadBuffers() still have a reason to calculate its\npersistence as follows, or should this patch make it \"persistence =\noperation->persistence\"?\n\n\tpersistence = operation->rel\n\t\t? operation->rel->rd_rel->relpersistence\n\t\t: RELPERSISTENCE_PERMANENT;\n\n> I think what you said in the counter argument makes sense.\n\nOkay.\n\n\n", "msg_date": "Wed, 17 Jul 2024 13:41:47 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in CREATE DATABASE command when the strategy is\n wal_log" }, { "msg_contents": "Hi,\n\nOn Wed, 17 Jul 2024 at 23:41, Noah Misch <[email protected]> wrote:\n>\n> On Wed, Jul 17, 2024 at 12:22:49PM +0300, Nazir Bilal Yavuz wrote:\n> > On Tue, 16 Jul 2024 at 15:19, Noah Misch <[email protected]> wrote:\n> > > On Tue, Jul 16, 2024 at 02:11:20PM +0300, Nazir Bilal Yavuz wrote:\n> > > > On Fri, 12 Jul 2024 at 02:52, Noah Misch <[email protected]> wrote:\n> > > > > On Tue, Apr 16, 2024 at 02:12:19PM +0300, Nazir Bilal Yavuz wrote:\n> > > > > > --- a/src/backend/storage/aio/read_stream.c\n> > > > > > +++ b/src/backend/storage/aio/read_stream.c\n> > > > > > @@ -549,7 +549,7 @@ read_stream_begin_relation(int flags,\n> > > > > > {\n> > > > > > stream->ios[i].op.rel = rel;\n> > > > > > stream->ios[i].op.smgr = RelationGetSmgr(rel);\n> > > > > > - stream->ios[i].op.smgr_persistence = 0;\n> > > > > > + stream->ios[i].op.smgr_persistence = rel->rd_rel->relpersistence;\n> > > > >\n> > > > > Does the following comment in ReadBuffersOperation need an update?\n> > > > >\n> > > > > /*\n> > > > > * The following members should be set by the caller. If only smgr is\n> > > > > * provided without rel, then smgr_persistence can be set to override the\n> > > > > * default assumption of RELPERSISTENCE_PERMANENT.\n> > > > > */\n> > > >\n> > > > I believe it does not need to be updated but I renamed\n> > > > 'ReadBuffersOperation.smgr_persistence' as\n> > > > 'ReadBuffersOperation.persistence'. So, this comment is updated as\n> > > > well. I think that rename suits better because persistence does not\n> > > > need to come from smgr, it could come from relation, too. Do you think\n> > > > it is a good idea? If it is, does it need a separate commit?\n> > >\n> > > The rename is good. I think the comment implies \"persistence\" is unused when\n> > > rel!=NULL. That implication is true before the patch but false after the\n> > > patch.\n> >\n> > What makes it false after the patch? I think the logic did not change.\n> > If there is rel, the value of persistence is obtained from\n> > 'rel->rd_rel->relpersistence'. If there is no rel, then smgr is used\n> > to obtain its value.\n>\n> First, the patch removes the \"default assumption of RELPERSISTENCE_PERMANENT\".\n> It's now an assertion failure.\n>\n> The second point is about \"If only smgr is provided without rel\". Before the\n> patch, the extern functions that take a ReadBuffersOperation argument examine\n> smgr_persistence if and only if rel==NULL. That's consistent with the\n> comment. After the patch, StartReadBuffersImpl() calling PinBufferForBlock()\n> uses the field unconditionally.\n\nI see, thanks for the explanation. 
I removed that part of the comment.\n\n>\n> On that note, does WaitReadBuffers() still have a reason to calculate its\n> persistence as follows, or should this patch make it \"persistence =\n> operation->persistence\"?\n>\n> persistence = operation->rel\n> ? operation->rel->rd_rel->relpersistence\n> : RELPERSISTENCE_PERMANENT;\n\nNice catch, I do not think it is needed now. WaitReadBuffers() is\ncalled only from ReadBuffer_common() and read_stream_next_buffer().\nFor the ReadBuffer_common(), persistence is calculated before calling\nWaitReadBuffers(). And for the read_stream_next_buffer(), it is\ncalculated while creating a read stream object in the\nread_stream_begin_impl().\n\nv4 is attached.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Thu, 18 Jul 2024 14:11:13 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use read streams in CREATE DATABASE command when the strategy is\n wal_log" }, { "msg_contents": "On Thu, Jul 18, 2024 at 02:11:13PM +0300, Nazir Bilal Yavuz wrote:\n> v4 is attached.\n\nRemoval of the PinBufferForBlock() comment about the \"persistence =\nRELPERSISTENCE_PERMANENT\" fallback started to feel like a loss. I looked for\na way to re-add a comment about the fallback.\nhttps://coverage.postgresql.org/src/backend/storage/buffer/bufmgr.c.gcov.html\nshows no test coverage of that fallback, and I think the fallback is\nunreachable. Hence, I've removed the fallback in a separate commit. I've\npushed that and your three patches. Thanks.\n\n\n", "msg_date": "Sat, 20 Jul 2024 04:27:18 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in CREATE DATABASE command when the strategy is\n wal_log" }, { "msg_contents": "Hi,\n\nOn Sat, 20 Jul 2024 at 14:27, Noah Misch <[email protected]> wrote:\n>\n> On Thu, Jul 18, 2024 at 02:11:13PM +0300, Nazir Bilal Yavuz wrote:\n> > v4 is attached.\n>\n> Removal of the PinBufferForBlock() comment about the \"persistence =\n> RELPERSISTENCE_PERMANENT\" fallback started to feel like a loss. I looked for\n> a way to re-add a comment about the fallback.\n> https://coverage.postgresql.org/src/backend/storage/buffer/bufmgr.c.gcov.html\n> shows no test coverage of that fallback, and I think the fallback is\n> unreachable. Hence, I've removed the fallback in a separate commit. I've\n> pushed that and your three patches. Thanks.\n\nThanks for the separate commit and push!\n\nWith the separate commit (e00c45f685), does it make sense to rename\nthe smgr_persistence parameter of the ReadBuffer_common() to\npersistence? Because, ExtendBufferedRelTo() calls ReadBuffer_common()\nwith rel's persistence now, not with smgr's persistence.\n\n-- \nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n", "msg_date": "Sat, 20 Jul 2024 15:01:31 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use read streams in CREATE DATABASE command when the strategy is\n wal_log" }, { "msg_contents": "On Sat, Jul 20, 2024 at 03:01:31PM +0300, Nazir Bilal Yavuz wrote:\n> On Sat, 20 Jul 2024 at 14:27, Noah Misch <[email protected]> wrote:\n> >\n> > On Thu, Jul 18, 2024 at 02:11:13PM +0300, Nazir Bilal Yavuz wrote:\n> > > v4 is attached.\n> >\n> > Removal of the PinBufferForBlock() comment about the \"persistence =\n> > RELPERSISTENCE_PERMANENT\" fallback started to feel like a loss. 
I looked for\n> > a way to re-add a comment about the fallback.\n> > https://coverage.postgresql.org/src/backend/storage/buffer/bufmgr.c.gcov.html\n> > shows no test coverage of that fallback, and I think the fallback is\n> > unreachable. Hence, I've removed the fallback in a separate commit. I've\n> > pushed that and your three patches. Thanks.\n> \n> Thanks for the separate commit and push!\n> \n> With the separate commit (e00c45f685), does it make sense to rename\n> the smgr_persistence parameter of the ReadBuffer_common() to\n> persistence? Because, ExtendBufferedRelTo() calls ReadBuffer_common()\n> with rel's persistence now, not with smgr's persistence.\n\nBMR_REL() doesn't set relpersistence, so bmr.relpersistence is associated with\nbmr.smgr and is unset if bmr.rel is set. That is to say, bmr.relpersistence\nis an smgr_persistence. It could make sense to change ReadBuffer_common() to\ntake a BufferManagerRelation instead of the three distinct arguments.\n\nOn a different naming topic, my review missed that field name\ncopy_storage_using_buffer_read_stream_private.last_block doesn't fit how the\nfield is used. Code uses it like an nblocks. So let's either rename the\nfield or change the code to use it as a last_block (e.g. initialize it to\nnblocks-1, not nblocks).\n\n\n", "msg_date": "Sat, 20 Jul 2024 11:14:05 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in CREATE DATABASE command when the strategy is\n wal_log" }, { "msg_contents": "Hi,\n\nOn Sat, 20 Jul 2024 at 21:14, Noah Misch <[email protected]> wrote:\n>\n> On Sat, Jul 20, 2024 at 03:01:31PM +0300, Nazir Bilal Yavuz wrote:\n> >\n> > With the separate commit (e00c45f685), does it make sense to rename\n> > the smgr_persistence parameter of the ReadBuffer_common() to\n> > persistence? Because, ExtendBufferedRelTo() calls ReadBuffer_common()\n> > with rel's persistence now, not with smgr's persistence.\n>\n> BMR_REL() doesn't set relpersistence, so bmr.relpersistence is associated with\n> bmr.smgr and is unset if bmr.rel is set. That is to say, bmr.relpersistence\n> is an smgr_persistence. It could make sense to change ReadBuffer_common() to\n> take a BufferManagerRelation instead of the three distinct arguments.\n\nGot it.\n\n>\n> On a different naming topic, my review missed that field name\n> copy_storage_using_buffer_read_stream_private.last_block doesn't fit how the\n> field is used. Code uses it like an nblocks. So let's either rename the\n> field or change the code to use it as a last_block (e.g. initialize it to\n> nblocks-1, not nblocks).\n\nI prefered renaming it as nblocks, since that is how it was used in\nRelationCopyStorageUsingBuffer() before. Also, I realized that instead\nof setting p.blocknum = 0; initializing blkno as 0 and using\np.blocknum = blkno makes sense. Because, p.blocknum and blkno should\nalways start with the same block number. 
The relevant patch is\nattached.\n\n--\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Mon, 22 Jul 2024 12:00:45 +0300", "msg_from": "Nazir Bilal Yavuz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use read streams in CREATE DATABASE command when the strategy is\n wal_log" }, { "msg_contents": "On Mon, Jul 22, 2024 at 12:00:45PM +0300, Nazir Bilal Yavuz wrote:\n> On Sat, 20 Jul 2024 at 21:14, Noah Misch <[email protected]> wrote:\n> > On a different naming topic, my review missed that field name\n> > copy_storage_using_buffer_read_stream_private.last_block doesn't fit how the\n> > field is used. Code uses it like an nblocks. So let's either rename the\n> > field or change the code to use it as a last_block (e.g. initialize it to\n> > nblocks-1, not nblocks).\n> \n> I prefered renaming it as nblocks, since that is how it was used in\n> RelationCopyStorageUsingBuffer() before. Also, I realized that instead\n> of setting p.blocknum = 0; initializing blkno as 0 and using\n> p.blocknum = blkno makes sense. Because, p.blocknum and blkno should\n> always start with the same block number. The relevant patch is\n> attached.\n\nI felt the local variable change was not a clear improvement. It would have\nbeen fine for the original patch to do it in that style, but the style of the\noriginal patch was also fine. So I've pushed just the struct field rename.\n\n\n", "msg_date": "Tue, 23 Jul 2024 05:34:00 -0700", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use read streams in CREATE DATABASE command when the strategy is\n wal_log" } ]
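For readers new to the read stream API, the overall shape of the code under review in this thread is roughly the following. It is a simplified sketch modelled on the patch's private struct and block-number callback, not the committed function: src_rel, forkNum, nblocks and bstrategy_src stand in for the caller's state, and the committed code builds the stream from an SMgrRelation rather than a Relation, so the begin call shown here (read_stream_begin_relation) is only the nearest equivalent and its exact argument list should be checked against read_stream.h.

    /*
     * Per-stream state, mirroring the struct in the patch: the next block to
     * hand out and the total number of blocks in the source fork.
     */
    struct demo_copy_read_stream_private
    {
        BlockNumber blocknum;       /* next block number to return */
        BlockNumber nblocks;        /* blocks in the source relation fork */
    };

    /*
     * Block-number callback: the stream calls this to learn which block to
     * read (and possibly prefetch) next; InvalidBlockNumber ends the stream.
     */
    static BlockNumber
    demo_copy_read_stream_next_block(ReadStream *stream,
                                     void *callback_private_data,
                                     void *per_buffer_data)
    {
        struct demo_copy_read_stream_private *p = callback_private_data;

        if (p->blocknum < p->nblocks)
            return p->blocknum++;

        return InvalidBlockNumber;
    }

    /*
     * In the copying function (simplified): create the stream once, then pull
     * already-read, pinned buffers out of it instead of calling ReadBuffer
     * for every block.  src_rel, forkNum, nblocks and bstrategy_src are the
     * caller's state.
     */
    struct demo_copy_read_stream_private p = {.blocknum = 0, .nblocks = nblocks};
    ReadStream *src_stream = read_stream_begin_relation(READ_STREAM_FULL,
                                                        bstrategy_src,
                                                        src_rel,
                                                        forkNum,
                                                        demo_copy_read_stream_next_block,
                                                        &p,
                                                        0);

    for (BlockNumber blkno = 0; blkno < nblocks; blkno++)
    {
        Buffer      srcBuf;

        CHECK_FOR_INTERRUPTS();

        srcBuf = read_stream_next_buffer(src_stream, NULL);
        LockBuffer(srcBuf, BUFFER_LOCK_SHARE);
        /* ... copy the page into the destination buffer and WAL-log it ... */
        UnlockReleaseBuffer(srcBuf);
    }

    /* The stream must be exactly drained at this point. */
    Assert(read_stream_next_buffer(src_stream, NULL) == InvalidBuffer);
    read_stream_end(src_stream);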
[ { "msg_contents": "Hi,\n\nAt 2024.pgconf.dev, Andres and I will be hosting a patch review\nworkshop.[1] Part of the workshop will be a presentation, and part of\nit will be a practicum. That is, we're going to actually ask attendees\nto review some patches during the workshop. We'll also comment on\nthose reviews, and the patches themselves, with our own thoughts.\nWhile we could just pick some things from the CommitFest, I believe we\nboth felt a little uncomfortable with the idea of potentially turning\na spotlight on someone's patch where they might not have been\nexpecting it. So, instead, I'd like to invite you to email me, and/or\nAndres, if you have a patch that isn't committed yet and which you\nthink would be a good candidate for review during this workshop. If\nyour patch is selected to be reviewed during the workshop, then you\nwill very likely get some reviews for your patch posted on\npgsql-hackers. But, there are no guarantees about how positive or\nnegative those reviews will be, so you do need to be prepared to take\nthe bad with the good.\n\nNote that this is really an exercise in helping more people in the\ncommunity to get better at reviewing patches. So, if Andres and I\nthink that what your patch really needs is an opinion from Tom Lane\nspecifically, or even an opinion from Andres Freund or Robert Haas\nspecifically, we probably won't choose to include it in the workshop.\nBut there are lots of patches that just need attention from someone,\nat least for starters, and perhaps this workshop can help some of\nthose patches to make progress, in addition to (hopefully) being\neducational for the attendees.\n\nKey points:\n\n1. If you have a patch you think would be a good candidate for this\nevent, please email me and/or Andres.\n\n2. Please only volunteer a patch that you wrote, not one that somebody\nelse wrote.\n\n3. Please don't suggest a patch that's already committed.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n[1] https://www.pgevents.ca/events/pgconfdev2024/schedule/session/40-patch-review-workshop-registration-required/\n\n\n", "msg_date": "Tue, 16 Apr 2024 12:09:10 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "soliciting patches to review" }, { "msg_contents": "Hi,\n\nJust a quick update. We have so far had 8 suggested patches from 6\npeople, if I haven't missed anything. I'm fairly certain that not all\nof those patches are going to be good candidates for this session, so\nit would be great if a few more people wanted to volunteer their\npatches.\n\nThanks,\n\n...Robert\n\n\n", "msg_date": "Tue, 23 Apr 2024 13:27:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: soliciting patches to review" }, { "msg_contents": "On Tue, Apr 23, 2024 at 1:27 PM Robert Haas <[email protected]> wrote:\n>\n> Hi,\n>\n> Just a quick update. We have so far had 8 suggested patches from 6\n> people, if I haven't missed anything. I'm fairly certain that not all\n> of those patches are going to be good candidates for this session, so\n> it would be great if a few more people wanted to volunteer their\n> patches.\n\nSince you are going to share the patches anyway at the workshop, do\nyou mind giving an example of a patch that is a good fit for the\nworkshop? Alternatively, you could provide a hypothetical example. I,\nof course, have patches that I'd like reviewed. 
But, I'm unconvinced\nany of them would be particularly interesting in a workshop.\n\n- Melanie\n\n\n", "msg_date": "Tue, 23 Apr 2024 13:39:02 -0400", "msg_from": "Melanie Plageman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: soliciting patches to review" }, { "msg_contents": "On Tue, Apr 23, 2024 at 1:39 PM Melanie Plageman\n<[email protected]> wrote:\n> Since you are going to share the patches anyway at the workshop, do\n> you mind giving an example of a patch that is a good fit for the\n> workshop? Alternatively, you could provide a hypothetical example. I,\n> of course, have patches that I'd like reviewed. But, I'm unconvinced\n> any of them would be particularly interesting in a workshop.\n\nAndres and I haven't discussed our selection criteria yet, but my\nfeeling is that we're going to want patches that are somewhat\nmedium-sized. If your patch makes PostgreSQL capable of\nfaster-than-light travel, it's probably too big to be reviewed\nmeaningfully in the time we will have. If your patch changes corrects\na bunch of typos, it probably lacks enough substance to be worth\ndiscussing. I hesitate to propose more specific parameters. On the one\nhand, a patch that changes something user-visible that someone could\nreasonably like or dislike is probably easier to review, in some\nsense, than a patch that refactors code or tries to improve\nperformance. However, talking about how to review patches where it's\nless obvious what you should be trying to evaluate might be an\nimportant part of the workshop, so my feeling is that I would prefer\nit if more people would volunteer and then let Andres and I sort\nthrough what we think makes sense to include.\n\nI would also be happy to have people \"blanket submit\" without naming\npatches i.e. if anyone wants to email and say \"hey, feel free to\ninclude any of my stuff if you want\" that is great. Our concern was\nthat we didn't want to look like we were picking on anyone who wasn't\nup for it. I'm happy to keep getting emails from people with specific\npatches they want reviewed -- if we can hit a patch that someone wants\nreviewed that is better for everyone than if we just pick randomly --\nbut my number one concern is not offending anyone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Apr 2024 13:56:38 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: soliciting patches to review" }, { "msg_contents": "On Tue, Apr 23, 2024 at 1:27 PM Robert Haas <[email protected]> wrote:\n> Just a quick update. We have so far had 8 suggested patches from 6\n> people, if I haven't missed anything. I'm fairly certain that not all\n> of those patches are going to be good candidates for this session, so\n> it would be great if a few more people wanted to volunteer their\n> patches.\n\nWith approximately 1 week to go, I now have a list of 24 patches that\nI think are good candidates for this session and another 11 that were\nsuggested but which I think are not good candidates for various\nreasons, including (1) being trivial, (2) being so complicated that\nit's not reasonable to review them in the time we'll have, and (3)\npatching something other than the C code, which I consider too\nspecialized for this session.\n\nI think that's a long enough list that we probably won't get to all of\nthe patches in the session, so I don't necessarily *need* more things\nto put on the list at this point. 
However, I am still accepting\nfurther nominations until approximately this time on Friday. Hence, if\nyou have written a patch that you think would be a good candidate for\nsome folks to review in their effort to become better reviewers, let\nme (and Andres) know.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 20 May 2024 13:41:09 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: soliciting patches to review" } ]
[ { "msg_contents": "Hi,\n\nI noticed that margay (Solaris) has started running more of the tests\nlately, but is failing in pg_basebackup/010_pg_basebackup. It runs\nsuccessfully on wrasse (in older branches, Solaris 11.3 is desupported\nin 17/master), and also on pollock (illumos, forked from common\nancestor Solaris 10 while it was open source).\n\nHmm, wrasse is using \"/opt/csw/bin/gtar xf ...\" and pollock is using\n\"/usr/gnu/bin/tar xf ...\", while margay is using \"/usr/bin/tar xf\n...\". The tar command is indicating success (it's run by\nsystem_or_bail and it's not bailing), but the replica doesn't want to\ncome up:\n\npg_ctl: directory\n\"/home/marcel/build-farm-15/buildroot/HEAD/pgsql.build/src/bin/pg_basebackup/tmp_check/t_010_pg_basebackup_replica_data/pgdata\"\nis not a database cluster directory\"\n\nSo one idea would be that our tar format is incompatible with Sun tar\nin some way that corrupts the output, or there is still some\ndifference in the nesting of the directory structure it creates, or\nsomething like that. I wonder if this is already common knowledge in\nthe repressed memories of this list, but I couldn't find anything\nspecific. I'd be curious to know why exactly, if so (in terms of\nPOSIX conformance etc, who is doing something wrong).\n\n\n", "msg_date": "Wed, 17 Apr 2024 16:21:23 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Solaris tar issues,\n or other reason why margay fails 010_pg_basebackup?" }, { "msg_contents": "Hi\n\nIs there a way to configure which tar to use?\n\ngnu tar would be available.\n\n-bash-5.1$ ls -l /usr/gnu/bin/tar\n-r-xr-xr-x 1 root bin 1226248 Jul 1 2022 /usr/gnu/bin/tar\n\nWhich tar file is used?\nI could try to untar manually to see what happens.\n\nBest regards,\nMarcel\n\n\n\nAm 17.04.2024 um 06:21 schrieb Thomas Munro:\n> Hi,\n> \n> I noticed that margay (Solaris) has started running more of the tests\n> lately, but is failing in pg_basebackup/010_pg_basebackup. It runs\n> successfully on wrasse (in older branches, Solaris 11.3 is desupported\n> in 17/master), and also on pollock (illumos, forked from common\n> ancestor Solaris 10 while it was open source).\n> \n> Hmm, wrasse is using \"/opt/csw/bin/gtar xf ...\" and pollock is using\n> \"/usr/gnu/bin/tar xf ...\", while margay is using \"/usr/bin/tar xf\n> ...\". The tar command is indicating success (it's run by\n> system_or_bail and it's not bailing), but the replica doesn't want to\n> come up:\n> \n> pg_ctl: directory\n> \"/home/marcel/build-farm-15/buildroot/HEAD/pgsql.build/src/bin/pg_basebackup/tmp_check/t_010_pg_basebackup_replica_data/pgdata\"\n> is not a database cluster directory\"\n> \n> So one idea would be that our tar format is incompatible with Sun tar\n> in some way that corrupts the output, or there is still some\n> difference in the nesting of the directory structure it creates, or\n> something like that. I wonder if this is already common knowledge in\n> the repressed memories of this list, but I couldn't find anything\n> specific. I'd be curious to know why exactly, if so (in terms of\n> POSIX conformance etc, who is doing something wrong).\n\n\n\n", "msg_date": "Wed, 17 Apr 2024 09:17:17 +0200", "msg_from": "Marcel Hofstetter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris tar issues, or other reason why margay fails\n 010_pg_basebackup?"
}, { "msg_contents": "On Wed, Apr 17, 2024 at 7:17 PM Marcel Hofstetter\n<[email protected]> wrote:\n> Is there a way to configure which tar to use?\n>\n> gnu tar would be available.\n>\n> -bash-5.1$ ls -l /usr/gnu/bin/tar\n> -r-xr-xr-x 1 root bin 1226248 Jul 1 2022 /usr/gnu/bin/tar\n\nCool. I guess you could fix the test either by setting\nTAR=/usr/gnu/bin/tar or PATH=/usr/gnu/bin:$PATH.\n\nIf we want to understand *why* it doesn't work, someone would need to\ndig into that. It's possible that PostgreSQL is using some GNU\nextension (if so, apparently the BSDs' tar is OK with it too, and I\nguess AIX's and HP-UX's was too in the recent times before we dropped\nthose OSes). I vaguely recall (maybe 20 years ago, time flies) that\nSolaris tar wasn't able to extract some tarballs but I can't remember\nwhy... I'm also happy to leave it at \"Sun's tar doesn't work for us,\nwe don't know why\" if you are.\n\n\n", "msg_date": "Wed, 17 Apr 2024 20:52:30 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Solaris tar issues,\n or other reason why margay fails 010_pg_basebackup?" }, { "msg_contents": "Hi Thomas\n\nUsing gnu tar helps to make pg_basebackup work.\nIt fails now at a later step.\n\nBest regards,\nMarcel\n\n\n\nAm 17.04.2024 um 10:52 schrieb Thomas Munro:\n> On Wed, Apr 17, 2024 at 7:17 PM Marcel Hofstetter\n> <[email protected]> wrote:\n>> Is there a way to configure which tar to use?\n>>\n>> gnu tar would be available.\n>>\n>> -bash-5.1$ ls -l /usr/gnu/bin/tar\n>> -r-xr-xr-x 1 root bin 1226248 Jul 1 2022 /usr/gnu/bin/tar\n> \n> Cool. I guess you could fix the test either by setting\n> TAR=/usr/gnu/bin/tar or PATH=/usr/gnu/bin:$PATH.\n> \n> If we want to understand *why* it doesn't work, someone would need to\n> dig into that. It's possible that PostgreSQL is using some GNU\n> extension (if so, apparently the BSDs' tar is OK with it too, and I\n> guess AIX's and HP-UX's was too in the recent times before we dropped\n> those OSes). I vaguely recall (maybe 20 years ago, time flies) that\n> Solaris tar wasn't able to extract some tarballs but I can't remember\n> why... I'm also happy to leave it at \"Sun's tar doesn't work for us,\n> we don't know why\" if you are.\n\n\n\n", "msg_date": "Wed, 17 Apr 2024 15:40:47 +0200", "msg_from": "Marcel Hofstetter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris tar issues, or other reason why margay fails\n 010_pg_basebackup?" }, { "msg_contents": "On Thu, Apr 18, 2024 at 1:40 AM Marcel Hofstetter\n<[email protected]> wrote:\n> Using gnu tar helps to make pg_basebackup work.\n\nThanks! I guess that'll remain a mystery.\n\n> It fails now at a later step.\n\nOh, this rings a bell:\n\n[14:54:58] t/010_tab_completion.pl ..\nDubious, test returned 29 (wstat 7424, 0x1d00)\n\nWe had another thread[1] where we figured out that Solaris's termios\ndefaults include TABDLY=TAB3, meaning \"expand tabs to spaces on\noutput\", and that was upsetting our tab-completion test. Other Unixes\nused to vary on this point too, but they all converged on not doing\nthat, except Solaris, apparently. Perhaps IPC::Run could fix that by\ncalling ->set_raw() on the pseudo-terminal, but I'm not very sure\nabout that.\n\nThis test suite is passing on pollock because it doesn't have IO::Pty\ninstalled. Could you try uninstalling that perl package for now, so\nwe can see what breaks next?\n\n[06:34:40] t/010_tab_completion.pl ..
skipped: IO::Pty is needed to\nrun this test\n\n[1] https://www.postgresql.org/message-id/flat/MEYP282MB1669E2E11495A2DEAECE8736B6A7A%40MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM\n\n\n", "msg_date": "Thu, 18 Apr 2024 05:42:06 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Solaris tar issues,\n or other reason why margay fails 010_pg_basebackup?" }, { "msg_contents": "Thomas Munro <[email protected]> writes:\n> This test suite is passing on pollock because it doesn't have IO::Pty\n> installed. Could you try uninstalling that perl package for now, so\n> we can see what breaks next?\n\nIf that's inconvenient for some reason, you could also skip the\ntab-completion test by setting SKIP_READLINE_TESTS in the\nanimal's build_env options.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Apr 2024 15:12:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris tar issues,\n or other reason why margay fails 010_pg_basebackup?" }, { "msg_contents": "\nThank you tom.\n\nSKIP_READLINE_TESTS works. margay is now green again.\n\nBest regards,\nMarcel\n\n\nAm 17.04.2024 um 21:12 schrieb Tom Lane:\n> Thomas Munro <[email protected]> writes:\n>> This test suite is passing on pollock because it doesn't have IO::Pty\n>> installed. Could you try uninstalling that perl package for now, so\n>> we can see what breaks next?\n> \n> If that's inconvenient for some reason, you could also skip the\n> tab-completion test by setting SKIP_READLINE_TESTS in the\n> animal's build_env options.\n> \n> \t\t\tregards, tom lane\n\n\n\n", "msg_date": "Thu, 18 Apr 2024 14:57:14 +0200", "msg_from": "Marcel Hofstetter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris tar issues, or other reason why margay fails\n 010_pg_basebackup?" }, { "msg_contents": "On Fri, Apr 19, 2024 at 12:57 AM Marcel Hofstetter\n<[email protected]> wrote:\n> SKIP_READLINE_TESTS works. margay is now green again.\n\nGreat! FTR there was a third thing revealed by margay since you\nenabled the TAP tests: commit e2a23576.\n\nI would guess that the best chance of getting the readline stuff to\nactually work would be to interest someone who hacks on\nIPC::Run-and-related-stuff (*cough* Noah *cough*) and who has Solaris\naccess to look at that... I would guess it needs a one-line fix\nrelating to raw/cooked behaviour, but as the proverbial mechanic said,\nmost of the fee is for knowing where to hit it...\n\n\n", "msg_date": "Fri, 19 Apr 2024 10:24:26 +1200", "msg_from": "Thomas Munro <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Solaris tar issues,\n or other reason why margay fails 010_pg_basebackup?" } ]
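Putting the two workarounds from this thread together: a Solaris buildfarm animal can point the TAP tests at GNU tar and skip the readline test via the build_env block of its build-farm.conf. The sketch below is a hypothetical Perl excerpt, not margay's actual configuration. SKIP_READLINE_TESTS in build_env comes straight from Tom Lane's suggestion; setting TAR there (rather than adjusting PATH) and the /usr/gnu/bin path follow Thomas's and Marcel's messages, but whether margay really routed the tar override through build_env is an assumption.

    # Hypothetical build-farm.conf excerpt for a Solaris animal (assumed values).
    build_env => {
        # Use GNU tar instead of /usr/bin/tar, whose extraction of the
        # pg_basebackup tarballs left a data directory pg_ctl refused to start.
        TAR => '/usr/gnu/bin/tar',
        # Skip the tab-completion test until the Solaris termios TABDLY=TAB3
        # behaviour (tabs expanded to spaces on the pty) is sorted out.
        SKIP_READLINE_TESTS => 1,
    },

With settings along these lines the animal should run 010_pg_basebackup against GNU tar and report 010_tab_completion as skipped rather than failed, which is roughly how margay went green in the thread above.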